Dataset columns: id (string, 10 characters), title (string, 7–231 characters), abstract (string, 3–2.43k characters), authors (string, 5–21.5k characters), published_date (string, 20 characters), link (string, 33–34 characters), markdown (string, 133–1.92M characters).
2305.13297
Investigating the Role of Feed-Forward Networks in Transformers Using Parallel Attention and Feed-Forward Net Design
This paper investigates the key role of Feed-Forward Networks (FFNs) in transformer models by utilizing the Parallel Attention and Feed-Forward Net Design (PAF) architecture, and comparing it to their Series Attention and Feed-Forward Net Design (SAF) counterparts. Central to the effectiveness of PAF are two main assumptions regarding the FFN block and the attention block within a layer: 1) the primary function of the FFN block is to maintain isotropy among token embeddings and prevent their degeneration, and 2) the residual norm computed in the attention block is substantially smaller than the input token embedding norm. To empirically validate these assumptions, we train PAF variants of two large language models (RoBERTa-large and bert-large-uncased). Our results demonstrate that both assumptions hold true in the PAF design. This study contributes to a deeper understanding of the roles and interactions between FFNs and self-attention mechanisms in transformer architectures.
Shashank Sonkar, Richard G. Baraniuk
2023-05-22T17:56:09Z
http://arxiv.org/abs/2305.13297v2
# Investigating the Role of Feed-Forward Networks in Transformers using Parallel Attention and Feed-Forward Net Design ###### Abstract This paper investigates the key role of Feed-Forward Networks in transformer models by utilizing the Parallel Attention and Feed-Forward Net Design (PAF) architecture, and comparing it to their Series Attention and Feed-Forward Net Design (SAF) counterparts. Central to the effectiveness of PAF are two main assumptions regarding the FFN block and the attention block within a layer: 1) the primary function of the FFN block is to maintain isotropy among token embeddings and prevent their degeneration, and 2) the residual norm computed in the attention block is substantially smaller than the input token embedding norm. To empirically validate these assumptions, we train PAF variants of two large language models (RoBERTa-large and bert-large-uncased). Our results demonstrate that both assumptions hold true in the PAF design. This study contributes to a deeper understanding of the roles and interactions between FFNs and self-attention mechanisms in transformer architectures. ## 1 Introduction In recent years, the field of natural language processing (NLP) has witnessed substantial advancements due to the emergence of deep learning and the availability of vast amounts of data. One of the most significant breakthroughs is the transformer model, which has achieved state-of-the-art results in various NLP tasks, such as language translation (Edunov et al., 2018; Raganato and Tiedemann, 2018; Liu et al., 2020), text classification (Howard and Ruder, 2018; Chang et al., 2019; Sun et al., 2019; Chang et al., 2020), and question answering (Lukovnikov et al., 2019; Raffel et al., 2020; Cao et al., 2020). The transformer architecture, introduced by Vaswani et al. (2017) in the seminal paper 'Attention is All You Need', has revolutionized the NLP landscape and greatly enhanced the performance of numerous applications. The transformer architecture consists of several layers, each of which includes two main components: a self-attention block and a feed-forward neural network (FFN). The self-attention mechanism computes the attention weights between all pairs of positions in the input sequence and uses them to compute a weighted sum of the relevant information. The feed-forward network processes the output of the self-attention mechanism to generate a new representation for each position in the sequence. Both components use residual connections (He et al., 2016) and layer normalization (Ioffe and Szegedy, 2015) to improve performance and stability. Despite the significant success of transformer models, the precise roles of their components, particularly the Feed-Forward Network (FFN) blocks, are not yet fully comprehended. In this study, we strive to shed light on the functions of these components in transformer architectures by examining the Parallel Attention and Feed-Forward Net Design (PAF), initially proposed in Mesh-Transformers by Wang (2021) and subsequently employed by PaLM (Chowdhery et al., 2022). Contrary to the Series Attention and Feed-Forward Net Design (SAF), PAF facilitates parallelization by having the attention block and the FFN block within each layer of the transformer model run concurrently (figure 1). 
In our analysis, we make two critical assumptions based on the PAF architecture: 1) drawing upon the findings from (Dong et al., 2021; Gao et al., 2019), we posit that the principal function of the FFN block is to prevent the degeneration of token embeddings into a single embedding; and 2) the residual norm computed by the attention block is considerably smaller than the input token embedding norm. To empirically validate these assumptions, we train PAF variants of two prominent language models, RoBERTa-large (Liu et al., 2019) and bert-large-uncased (Devlin et al., 2018), and compare their performance to their SAF counterparts on the General Language Understanding Evaluation (GLUE) benchmark, covering textual entailment, sentiment analysis, and paraphrase detection. Our results confirm the validity of both assumptions for these PAF variants, reinforcing our understanding of the FFN's role in maintaining isotropy in token embeddings. The paper is structured as follows: section 2 outlines the PAF design, section 3 examines the assumptions and rationale behind the PAF design, and section 4 concludes. ## 2 Related work: Parallel Attention and Feed-Forward Net Design ### PAF In this section, we first introduce the PAF design for parallelization of attention and FFN blocks used in transformer models like PaLM Chowdhery et al. (2022) and Mesh-Transformers Wang (2021). #### 2.1.1 Design changes in PAF First, let us describe the computation in standard transformer models, which we call the Series Attention and Feed-Forward Net Design (SAF). Let the input to a standard transformer \(\mathcal{T}\) at layer \(l\) be \(\mathbf{X}_{l}\in\mathbb{R}^{n\times d}\). Let \(\mathcal{T}=\{\mathcal{A}_{i},\mathcal{F}_{i}\}\), where \(0\leq i\leq L\), \(L\) is the number of layers, and \(\mathcal{A},\mathcal{F}\) are attention and FFN blocks respectively. Then, \[\mathbf{X}_{l+1}=\text{LN}\big(\mathbf{Y}_{l}+\mathcal{F}_{l}(\mathbf{Y}_{l})\big),\quad\text{where} \tag{1}\] \[\mathbf{Y}_{l}=\text{LN}\big(\mathbf{X}_{l}+\mathcal{A}_{l}(\mathbf{X}_{l})\big), \tag{2}\] and LN denotes the layer norm operator. Figure 1: On the left is the standard Series Attention and Feed-Forward Net Design (SAF) for transformer models. On the right is the Parallel Attention and Feed-Forward Net Design (PAF) used in transformer models like PaLM (Chowdhery et al., 2022) and Mesh-Transformers (Wang, 2021). **PAF design:** Parallel Attention and Feed-Forward Net Design changes the operations of a standard transformer as follows: \[\mathbf{X}_{l+1}=\text{LN}\big(\mathbf{X}_{l}+\mathcal{A}_{l}(\mathbf{X}_{l})+\mathcal{F}_{l}(\mathbf{X}_{l})\big). \tag{3}\] Note that in the SAF design, the input to the FFN block \(\mathcal{F}_{l}\), which is \(\mathbf{Y}_{l}\) (equation 1), relies on the output of the attention block \(\mathcal{A}_{l}(\mathbf{X}_{l})\) (equation 2), thus making the SAF design impossible to parallelize.
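To make the SAF/PAF contrast concrete, the following is a minimal PyTorch-style sketch of a single layer in both designs. This is our own illustration rather than the authors' released code; the post-LayerNorm placement, the GELU activation, and the dimensions are assumptions chosen to mirror BERT/RoBERTa-style layers.

```python
import torch
import torch.nn as nn

class Layer(nn.Module):
    def __init__(self, d_model=1024, n_heads=16, d_ff=4096, parallel=False):
        super().__init__()
        self.parallel = parallel  # False -> SAF (eqs. 1-2), True -> PAF (eq. 3)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                 nn.Linear(d_ff, d_model))
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):                      # x: (batch, seq, d_model) = X_l
        a, _ = self.attn(x, x, x)              # attention residual A_l(X_l)
        if self.parallel:                      # PAF: the FFN also reads X_l (eq. 3)
            return self.ln1(x + a + self.ffn(x))
        y = self.ln1(x + a)                    # SAF: Y_l = LN(X_l + A_l(X_l))
        return self.ln2(y + self.ffn(y))       # X_{l+1} = LN(Y_l + F_l(Y_l))

x = torch.randn(2, 16, 1024)
print(Layer(parallel=False)(x).shape, Layer(parallel=True)(x).shape)
```

In the PAF branch the attention call and the FFN call have no data dependency, so the two sub-blocks can be evaluated concurrently, which is the whole point of the design.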
## 3 Underlying Assumptions of PAF Design In this section, we delve into the reasoning that might explain why the PAF design is as effective as its SAF counterpart. We believe that the PAF design operates on two primary assumptions: 1. The main function of an FFN block is to maintain isotropy within a layer, i.e., to spread out the token embeddings so that they do not converge to a single embedding and thereby lose individual token information. 2. The norm of the residual computed by an attention block, which gets added to the attention block's input token embeddings, is sufficiently small compared to the norm of those input token embeddings. Though the success of the PAF design itself validates these assumptions, we next provide more evidence to justify them. **Assumption 1: The role of the FFN block in transformers is to prevent degeneration of token embeddings.** Dong et al. (2021) show that the token embeddings in transformer models without skip connections and FFN blocks degenerate to a rank-1 matrix doubly exponentially with depth. The authors present a theoretical argument demonstrating the importance of skip connections in reducing degeneration and suggest that the FFN block can assist in slowing this process. However, it is important to note that the study does not provide definitive evidence that slowing down degeneration is the most critical or indispensable function of the FFN block. Further research is necessary to determine the full extent of the role played by the FFN block in transformer models. In this paper, we make a strong assumption that the main role of an FFN block is to counteract degeneration of token embeddings. The success or failure of our experiments will thus validate or undermine this assumption. Unlike Dong et al. (2021), we study degeneration through the lens of isotropy, as done by Gao et al. (2019). Isotropy measures the average cosine similarity between the embedding of each token and the embeddings of the other tokens. Isotropy \(I:\mathbb{R}^{n\times d}\rightarrow\mathbb{R}\) for an embedding matrix \(\mathbf{E}\in\mathbb{R}^{n\times d}\) is given by: \[I(\mathbf{E})=\sum_{0\leq i<n}\sum_{0\leq j<n}\frac{E_{i}^{T}E_{j}}{n^{2}\times ||E_{i}||\times||E_{j}||}. \tag{4}\] **Effectiveness of PAF in counteracting degeneration:** For a transformer without FFN blocks, the isotropy of token embeddings at layer \(l\), \(I(\mathbf{X}_{l})\), rapidly approaches 1 after a few layers of computation, as can be seen in figure 1(a). Figure 1(a) also shows that the effectiveness of the PAF design in maintaining isotropy is on par with the SAF design. **Assumption 2: The norm of the attention block's residual is substantially smaller than the norm of the input token embeddings to the attention block.** If the main role of FFN blocks is to maintain isotropy by spreading out token embeddings \(\mathbf{Y}_{l}\) at layer \(l\), and PAF feeds the input of the attention block \(\mathbf{X}_{l}\) to the FFN block rather than its output \(\mathbf{Y}_{l}\) (equations (2)-(3) and figure 1), it is imperative to show that \(\mathbf{X}_{l}\) and \(\mathbf{Y}_{l}\) are close in the high-dimensional space. In other words, the residual \(\mathcal{A}_{l}(\mathbf{X}_{l})\) added to \(\mathbf{X}_{l}\) by the attention block is small. If that were not the case, the FFN spreading out \(\mathbf{X}_{l}\) instead of \(\mathbf{Y}_{l}\) would not work. In figure 1(b), we plot the norms of \(\mathbf{X}_{l}\) and \(\mathcal{A}_{l}(\mathbf{X}_{l})\) for all layers of the RoBERTa-large model and find that this is indeed the case.
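Both quantities discussed above can be computed directly from hidden states. The following is a short sketch (ours, not the paper's released code) assuming an embedding matrix of shape (n, d) and per-token attention residuals:

```python
import torch

def isotropy(E: torch.Tensor) -> torch.Tensor:
    """Eq. (4): average pairwise cosine similarity of the n token embeddings in E (n, d)."""
    E = E / E.norm(dim=1, keepdim=True)   # normalize each token embedding
    return (E @ E.T).mean()               # the 1/n^2 factor is the mean over all (i, j)

def residual_ratio(x_l: torch.Tensor, a_l: torch.Tensor) -> torch.Tensor:
    """Assumption 2: ||A_l(X_l)|| / ||X_l||, averaged over token positions."""
    return (a_l.norm(dim=-1) / x_l.norm(dim=-1)).mean()

E = torch.randn(128, 1024)
print(isotropy(E))                        # small for spread-out (random) embeddings
print(isotropy(E[:1].expand(128, -1)))    # exactly 1 when all embeddings collapse to one
```

A collapsed (degenerate) set of embeddings gives an isotropy of 1, while well-separated embeddings give a value close to 0, which is why the paper tracks this quantity layer by layer.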
#### Pre-training of PAF models To fairly compare the SAF and PAF counterparts and test our assumptions, we pre-trained PAF variants of two large language models, RoBERTa-Large Liu et al. (2019) and Bert-Large-Uncased Devlin et al. (2018), on English Wikipedia and BooksCorpus (Zhu et al., 2015). Both are 24-layer models that are widely used in various NLP applications. We initialize the parameters for the PAF models using their SAF variants and follow the recommended guidelines for learning rate, optimizer, and loss functions1. Each model is trained on four NVIDIA RTX A6000 GPUs for a total of 72 hours. Footnote 1: [https://tinyurl.com/495vfeh9](https://tinyurl.com/495vfeh9) #### Fine-tuning details on the GLUE benchmark To test the effectiveness of the pre-trained PAF variants of RoBERTa-Large and Bert-Large-Uncased, we fine-tune both models on the General Language Understanding Evaluation (GLUE) benchmark Wang et al. (2018). The GLUE benchmark assesses various NLP tasks that range from textual entailment (MNLI, QNLI) and paraphrase detection (MRPC, QQP) to sentence similarity (STS-B) and sentiment analysis (SST-2). It is a widely recognized evaluation standard and provides a comprehensive evaluation of the performance of NLP models.

| | **MRPC** | **STS-B** | **SST-2** | **QNLI** | **QQP** | **MNLI** | **Avg.** |
|---|---|---|---|---|---|---|---|
| **RoBERTa-large** | 90.9 | 92.4 | 96.4 | 94.7 | 92.2 | 90.2 | 92.8 |
| **RoBERTa-large (w. PAF)** | 90.5 | 91.0 | 96.2 | 94.3 | 91.7 | 89.3 | 92.2 |
| **Bert-Large-Uncased** | 85.0 | 89.2 | 93.5 | 92.2 | 91.4 | 86.6 | 89.6 |
| **Bert-Large-Uncased (w. PAF)** | 86.8 | 88.8 | 93.5 | 91.4 | 91.2 | 85.5 | 89.5 |

Table 1: This table highlights the effectiveness of the Parallel Attention and Feed-Forward Net Design (PAF) variants of RoBERTa-large and Bert-Large-Uncased on the GLUE benchmark. For both models, PAF variants perform similarly to the standard SAF equivalents. Note that the gap in RoBERTa is slightly larger than Bert (\(0.6\%\) vs \(0.1\%\)), but the PAF variant of RoBERTa has been trained on 10 times less data than the SAF model. For Bert, both the SAF and PAF variants use the same size of training data.

Each task in GLUE is trained using the recommended2 hyperparameter choices, which include learning rate, batch size, warmup steps, and optimizer settings, on a single Quadro RTX 8000 GPU for five random seeds. We exclude the two smallest datasets of the GLUE benchmark - CoLA and RTE - because of the high instability and variance in their fine-tuning (Dodge et al., 2020). Footnote 2: [https://tinyurl.com/26x46js6](https://tinyurl.com/26x46js6) #### PAF evaluation on the GLUE benchmark As can be seen in table 1, the PAF variants of both RoBERTa-Large and Bert-Large-Uncased perform nearly identically to their SAF equivalents. The gap for RoBERTa-Large is slightly larger than for Bert-Large-Uncased (\(0.6\%\) vs \(0.1\%\)), which can be attributed to the roughly eight times smaller amount of data used to train the PAF variant of RoBERTa-Large: the original RoBERTa models were trained on a 160GB dataset, whereas we only use the 20GB Wikipedia and BooksCorpus datasets.
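For reference, a GLUE fine-tuning run of the kind described above can be set up with the Hugging Face `transformers` and `datasets` libraries. The sketch below is our own minimal illustration, not the authors' scripts; the checkpoint name and hyperparameters are placeholders rather than the recommended settings linked in footnote 2.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "roberta-large"          # a PAF-pretrained checkpoint would be swapped in here
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

sst2 = load_dataset("glue", "sst2")   # one of the six GLUE tasks reported in Table 1
enc = sst2.map(lambda b: tok(b["sentence"], truncation=True, max_length=128), batched=True)

args = TrainingArguments(output_dir="glue-sst2", learning_rate=1e-5,
                         per_device_train_batch_size=32, num_train_epochs=3, seed=0)
Trainer(model=model, args=args, train_dataset=enc["train"],
        eval_dataset=enc["validation"], tokenizer=tok).train()
```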
## 4 Conclusion In summary, this research offers valuable insights into the essential roles and interplay between Feed-Forward Networks (FFNs) and self-attention mechanisms in transformers by examining the Parallel Attention and Feed-Forward Net Design (PAF) architecture. The empirical validation conducted on two well-known language models, RoBERTa-large and bert-large-uncased, indicates that both main assumptions regarding the function of FFN blocks and the residual norm of the attention block hold true in the PAF design. Our findings enhance the understanding of FFNs' contributions to the overall performance of transformer models and open up new avenues for future research on improving and optimizing these architectures.
2310.18174
The differential bundles of the geometric tangent category of an operad
Affine schemes can be understood as objects of the opposite of the category of commutative and unital algebras. Similarly, $\mathscr{P}$-affine schemes can be defined as objects of the opposite of the category of algebras over an operad $\mathscr{P}$. An example is the opposite of the category of associative algebras. The category of operadic schemes of an operad carries a canonical tangent structure. This paper aims to initiate the study of the geometry of operadic affine schemes via this tangent category. For example, we expect the tangent structure over the opposite of the category of associative algebras to describe algebraic non-commutative geometry. In order to initiate such a program, the first step is to classify differential bundles, which are the analogs of vector bundles for differential geometry. In this paper, we prove that the tangent category of affine schemes of the enveloping operad $\mathscr{P}^{(A)}$ over a $\mathscr{P}$-affine scheme $A$ is precisely the slice tangent category over $A$ of $\mathscr{P}$-affine schemes. We are going to employ this result to show that differential bundles over a $\mathscr{P}$-affine scheme $A$ are precisely $A$-modules in the operadic sense.
Marcello Lanfranchi
2023-10-27T14:42:11Z
http://arxiv.org/abs/2310.18174v1
# The differential bundles of the geometric tangent category of an operad ###### Abstract Affine schemes can be understood as objects of the opposite of the category of commutative and unital algebras. Similarly, \(\mathcal{P}\)-affine schemes can be defined as objects of the opposite of the category of algebras over an operad \(\mathcal{P}\). An example is the opposite of the category of associative algebras. The category of operadic schemes of an operad carries a canonical tangent structure. This paper aims to initiate the study of the geometry of operadic affine schemes via this tangent category. For example, we expect the tangent structure over the opposite of the category of associative algebras to describe algebraic non-commutative geometry. In order to initiate such a program, the first step is to classify differential bundles, which are the analogs of vector bundles for differential geometry. In this paper, we prove that the tangent category of affine schemes of the enveloping operad \(\mathcal{P}^{(A)}\) over a \(\mathcal{P}\)-affine scheme \(A\) is precisely the slice tangent category over \(A\) of \(\mathcal{P}\)-affine schemes. We are going to employ this result to show that differential bundles over a \(\mathcal{P}\)-affine scheme \(A\) are precisely \(A\)-modules in the operadic sense. Acknowledgements. We want to thank Sacha Ikonicoff and Jean-Simon Lemay for the work done together in [13] which led to this paper, and for the informal discussions we had around this topic. We are also thankful to Dorette Pronk and Geoffrey Cruttwell (PhD supervisors) for the discussions, advice, support and precious help during the realization of this article. ###### Contents

* 1 Introduction
  * 1.1 Outline
  * 1.2 Background
  * 1.3 Notation and naming conventions
* 2 The geometry of affine schemes over an operad
  * 2.1 The functoriality of the algebraic and the geometric tangent categories
* 3 The slice tangent category as a right adjoint functor
  * 3.1 The universal property of slicing
* 4 The slice tangent categories of the affine schemes over an operad
  * 4.1 The geometric tangent category of the enveloping operad
  * 4.2 The differential bundles of affine schemes over an operad
* 5 Conclusion
  * 5.1 Future work

## 1 Introduction

Cruttwell and Lemay showed that some key geometrical features of affine schemes, in the sense of algebraic geometry, can be captured by defining a suitable tangent structure \(\mathbb{T}\) (cf. [9]). A tangent structure \(\mathbb{T}\) over a category \(\mathbb{X}\) provides a categorical axiomatization for the tangent bundle functor of differential geometry. Concretely, a tangent structure (cf. [5, Definition 2.3]) consists of an endofunctor \(\mathrm{T}\) of \(\mathbb{X}\) together with a projection \(p\colon\mathrm{T}\Rightarrow\mathrm{id}_{\mathbb{X}}\), a zero section \(z\colon\mathrm{id}_{\mathbb{X}}\Rightarrow\mathrm{T}\) of the projection, and a sum morphism \(s\colon\mathrm{T}_{2}\Rightarrow\mathrm{T}\), whose domain \(\mathrm{T}_{2}\) is the pullback of the projection along itself, so that for every object \(A\in\mathbb{X}\), \(p\colon\mathrm{T}A\to A\) becomes an additive bundle (cf. [5, Definition 2.1]), that is a commutative monoid in the slice category over \(A\). Moreover, a tangent structure carries two other structures: a vertical lift \(l\colon\mathrm{T}\Rightarrow\mathrm{T}^{2}\), where \(\mathrm{T}^{2}\) denotes the composition of \(\mathrm{T}\) with itself, and a canonical flip \(c\colon\mathrm{T}^{2}\Rightarrow\mathrm{T}^{2}\). 
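The motivating example to keep in mind (a standard illustration from the tangent-category literature, not specific to this paper) is the category of smooth manifolds equipped with the usual tangent bundle functor:
\[
\mathrm{T}M=TM,\qquad p_{M}\colon TM\to M,\qquad z_{M}\colon M\to TM,\qquad s_{M}\colon TM\times_{M}TM\to TM,
\]
where \(p\) is the bundle projection, \(z\) the zero section, and \(s\) fibrewise addition of tangent vectors, while \(l_{M}\colon TM\to T^{2}M\) is the vertical lift and \(c_{M}\colon T^{2}M\to T^{2}M\) the canonical flip; these data satisfy the axioms just listed and recover classical differential geometry.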
The vertical lift defines an abstract version of the Euler vector field and, by satisfying a key universal property (cf. [5, Section 2.5]), introduces a notion of linearity for morphisms of differential bundles (cf. [4]). Moreover, when the tangent category has negatives (cf. [5, Section 3.3]) this universal property is also used to equip the set of sections of the projection, i.e. the vector fields, with Lie brackets. Finally, the canonical flip encodes the symmetry of the Hessian matrix. Tangent categories (with negatives) were first introduced by Rosicky ([16]). Recently, the ideas of Rosicky were revisited and generalized by Cockett and Cruttwell ([5]) and expanded into a flourishing research program. In the tangent category of affine schemes described by Cruttwell and Lemay, the tangent bundle functor is the functor that maps a commutative algebra \(A\) into the symmetric algebra of the \(A\)-module of Kahler differentials \(\Omega A\) of \(A\), i.e. \(\mathrm{T}A\colon=\mathrm{Sym}_{A}\Omega A\) (cf. [9]). One striking result of their paper is the complete classification of differential bundles in this tangent category. Differential bundles, first introduced by Cockett and Cruttwell (cf. [4]), play the same role as vector bundles in the category of smooth finite-dimensional manifolds for an abstract tangent category (cf. [15]). Interestingly, Cruttwell and Lemay show that the category of differential bundles and linear morphisms over an affine scheme \(A\) is equivalent to the opposite of the category of modules over \(A\). The author of this paper together with Sacha Ikonicoff and Jean-Simon Lemay extended the idea of studying the algebraic geometry of affine schemes with tangent categories to a new plethora of contexts. In [13], they showed that the category of algebras \(\mathsf{Alg}_{\mathscr{P}}\) of a (symmetric) operad \(\mathscr{P}\) over the category of \(R\)-modules (for a commutative and unital ring \(R\)) comes equipped with a tangent structure. In the following, we refer to this as the **algebraic tangent structure** of the operad \(\mathscr{P}\) which will be denoted by \(\mathbb{L}^{(\mathscr{P})}\), or simply by \(\mathbb{L}\) when the operad \(\mathscr{P}\) is clear from the context. Moreover, the corresponding tangent category will be denoted as \(\mathsf{Alg}(\mathscr{P})\colon=(\mathsf{Alg}_{\mathscr{P}},\mathbb{L}^{( \mathscr{P})})\). In the aforementioned paper, it was proven that every operad comes with a coCartesian differential monad (cf. [13, Theorem 4.1.1]) and that this tangent category is precisely the tangent category of algebras of this monad. Crucially, \(\mathbb{L}^{(\mathscr{P})}\) admits an adjoint tangent structure (cf. [13, Theorem 4.4.4]) which makes the opposite of the category of operadic algebras into a tangent category. In the following, we refer to this tangent structure as the **geometric tangent structure** of the operad \(\mathscr{P}\) which will be denoted by \(\mathbb{T}^{(\mathscr{P})}\), or simply by \(\mathbb{T}\) when the operad \(\mathscr{P}\) is clear from the context. This tangent category can be interpreted as the tangent category of affine schemes over the operad \(\mathscr{P}\), and will be denoted by \(\mathsf{Geom}(\mathscr{P})\colon=(\mathsf{Alg}_{\mathscr{P}}^{\mathsf{op}}, \mathbb{T}^{(\mathscr{P})})\). 
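For instance, as a worked example of this construction in the classical commutative setting (our own illustration), take the polynomial algebra \(A=R[x_{1},\dots,x_{n}]\). Its module of Kähler differentials is the free \(A\)-module on the symbols \(\mathrm{d}x_{1},\dots,\mathrm{d}x_{n}\), so the Cruttwell–Lemay tangent bundle functor gives
\[
\Omega A\;\cong\;\bigoplus_{i=1}^{n}A\,\mathrm{d}x_{i},\qquad
\mathrm{T}A=\mathrm{Sym}_{A}\Omega A\;\cong\;R[x_{1},\dots,x_{n},\mathrm{d}x_{1},\dots,\mathrm{d}x_{n}],
\]
which is precisely the coordinate ring of the total space of the tangent bundle of affine \(n\)-space, matching the geometric intuition.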
To properly appreciate the relevance of this result, notice that before the article [13], the most relevant available examples of tangent categories were differential geometry, synthetic differential geometry, algebraic geometry, commutative rings, etc. In particular, there was no example of non-commutative geometry completely described by tangent category theory. The existence of the geometric tangent category \(\mathsf{Geom}(\mathscr{A}\mathit{ss})\) of the associative operad \(\mathscr{A}\mathit{ss}\), whose algebras are associative algebras, proves that tangent categories are suitable to describe a wider variety of geometries, including non-commutative geometry. In Example 2.15 we discuss this particular case in detail, with a comparison with the commutative one. In the same paper, differential objects (cf. [5, Definition 4.8]) of the geometric tangent category of an operad \(\mathcal{P}\) were classified and proved to be in bijective correspondence with left \(\mathcal{P}(1)\)-modules, where \(\mathcal{P}(1)\) denotes the unital and associative ring defined over the first entry of the operad \(\mathcal{P}\) and whose unit and multiplication are defined by the unit and the multiplication of the operad. In the same way as the tangent category described by Cruttwell and Lemay captures some key geometrical features of (commutative and unital) affine schemes, we expect the geometric tangent category of an operad \(\mathcal{P}\) to capture similar geometrical properties of the affine schemes over \(\mathcal{P}\). The goal of this paper is to investigate this assumption by exploring the intimate relationship between operads and their corresponding geometric tangent categories. One of the main results of the paper will be the complete classification of differential bundles over operadic affine schemes. We will reinterpret Cruttwell and Lemay's result as a special case of a larger phenomenon: the category of differential bundles and linear morphisms over an operadic affine scheme is equivalent to the opposite of the category of modules of the affine scheme. To prove this, we will first show another key result: the geometric tangent category of the enveloping operad over a \(\mathcal{P}\)-algebra \(A\) is equivalent to the slice tangent category over \(A\) of the geometric tangent category of \(\mathcal{P}\). The classification of differential bundles will follow directly from this insight: differential bundles are precisely differential objects in the slice tangent category. ### Outline The paper is organized as follows. In Section 2, we first recall the main result of [13] which establishes that every operad \(\mathcal{P}\) produces two tangent categories: the algebraic and the geometric tangent categories of \(\mathcal{P}\). Once this is established, we show that the operation which takes an operad to its associated tangent categories is functorial (Section 2.1). In particular, we provide four distinct functors from the category of operads to the category of tangent categories. In Section 3 we recall the notion of the slice tangent category of a tangent category over an object and we give a new characterization of this construction. In particular, in Section 3.1 we show that the operation which takes a tangent pair to its associated slice tangent category extends to a right adjoint of the functor \(\mathsf{Term}\), which sends a tangent category with terminal object to the tangent pair formed by the tangent category and its terminal object. The main result of the paper is proved in Section 4. 
We first recall the definition of the enveloping operad of an operadic pair and then prove that the geometric tangent category of the enveloping operad \(\mathcal{P}^{(A)}\) of the operadic pair \((\mathcal{P};A)\) is equivalent to the slice tangent category of the geometric tangent category of the operad \(\mathcal{P}\) over \(A\). In Section 4.2 we employ this result to classify the differential bundles over an operadic affine scheme as modules over the affine scheme. Finally, we dedicate Section 5 to exploring some ideas for future work. ### Background We assume the reader is comfortable with the theory of symmetric operads over a symmetric monoidal category (see [14] for reference), and with fundamental notions of category theory like functors, adjunctions, limits, colimits, pullbacks, pushouts, etc. We also assume the reader is knowledgeable about basic notions of tangent category theory (see [5] for reference). Even if we summarize in the first section the main results of the previous paper, we also recommend reading [13] to fully appreciate the whole story. ### Notation and naming conventions We denote by \(R\) a fixed commutative and unital ring and by \(\mathsf{Mod}_{R}\) the associated category of left \(R\)-modules. By an operad \(\mathcal{P}\) we mean a symmetric operad over the symmetric monoidal category \(\mathsf{Mod}_{R}\), where the symmetric monoidal structure is defined by the usual tensor product over \(R\), simply denoted by \(\otimes\). The symmetric group that acts over \(n\) distinct elements is denoted by \(\mathbb{S}_{n}\). The generators of the free \(\mathcal{P}\)-algebra over an \(R\)-module \(M\) are denoted by \((\mu;v_{1},\dots,v_{m})\), where \(\mu\in\mathcal{P}(m)\), \(v_{1},\dots,v_{m}\in M\). Given \(\mu\in\mathcal{P}(m)\) and \(\mu_{1}\in\mathcal{P}(k_{1}),\dots,\mu_{m}\in\mathcal{P}(k_{m})\), for positive integers \(m,k_{1},\dots,k_{m}\), the operadic composition of \(\mu\) with \(\mu_{1},\dots,\mu_{m}\) is denoted by \(\mu(\mu_{1},\dots,\mu_{m})\). The unit of the operad \(\mathcal{P}\) is denoted by \(1_{\mathcal{P}}\); the monad associated with \(\mathcal{P}\) is denoted by \(\mathbb{S}_{\mathcal{P}}\), with \(\gamma_{\mathcal{P}}\) for the composition. We denote by \(\mathsf{Operad}\) the category of symmetric operads over \(\mathsf{Mod}_{R}\) and their morphisms. The category of \(\mathcal{P}\)-algebras is denoted by \(\mathsf{Alg}_{\mathcal{P}}\). Given a \(\mathcal{P}\)-algebra \(A\), the action of the abstract \(m\)-ary operation \(\mu\in\mathcal{P}(m)\) over \(m\) elements \(a_{1},\dots,a_{m}\) of \(A\) induced by the structure map of \(A\) is denoted by \(\mu_{A}(a_{1},\dots,a_{m})\) and, when \(A\) is clear from the context, simply by \(\mu(a_{1},\dots,a_{m})\). The category of modules (in the operadic sense) over a \(\mathcal{P}\)-algebra \(A\) is denoted by \(\mathsf{Mod}_{A}^{(\mathcal{P})}\), or simply by \(\mathsf{Mod}_{A}\) when \(\mathcal{P}\) is clear from the context. We will write expressions like \(\sum_{k=1}^{m}\mu(a_{1},\dots,x_{k},\dots,a_{m})\) to denote the sum over the index \(k\) of \(\mu\cdot\sigma_{k}(a_{1},\dots,a_{k-1},a_{k+1},\dots,a_{m},x_{k})\), where \(\sigma_{k}\) denotes the cyclic permutation \((k\quad k+1\quad\dots\quad m)\), where \(x_{k}\in M\), \(a_{1},\dots,a_{k-1},a_{k+1},\dots,a_{m}\in A\) and \(M\) is an \(A\)-module. Given a tangent category \((\mathbb{K},\mathbb{T})\), we denote the tangent bundle functor \(\mathrm{T}\) by using the same letter as used for the tangent structure. 
For the projection, the zero morphism, the sum morphism, the lift, the canonical flip, and the negation (in case of a tangent category with negatives) we will use the letters \(p^{(\mathrm{T})},z^{(\mathrm{T})},s^{(\mathrm{T})},l^{(\mathrm{T})},c^{(\mathrm{T})}\) and \(n^{(\mathrm{T})}\), respectively. When the tangent structure is clear from the context, we will simplify the notation by omitting the superscript \({}^{(\mathrm{T})}\). Morphisms of tangent categories come in different flavours. We need to distinguish among them, so we introduce the following convention. Given two tangent categories \((\mathbb{K},\mathbb{T})\) and \((\mathbb{K}^{\prime},\mathbb{T}^{\prime})\), we refer to a **lax tangent morphism** \((F,\alpha)\colon(\mathbb{K},\mathbb{T})\to(\mathbb{K}^{\prime},\mathbb{T}^{\prime})\) as a functor \(F\colon\mathbb{K}\to\mathbb{K}^{\prime}\) together with a natural transformation \(\alpha\colon F\circ\mathrm{T}\Rightarrow\mathrm{T}^{\prime}\circ F\) compatible with the tangent structures (cf. [5, Definition 2.7]). We refer to \(\alpha\) as the **lax distributive law** of the morphism. By a **colax tangent morphism** \((G,\beta)\colon(\mathbb{K},\mathbb{T})\twoheadrightarrow(\mathbb{K}^{\prime},\mathbb{T}^{\prime})\) we mean a functor \(G\colon\mathbb{K}\to\mathbb{K}^{\prime}\) together with a natural transformation \(\beta\colon\mathrm{T}^{\prime}\circ G\Rightarrow G\circ\mathrm{T}\) compatible with the tangent structures (the compatibilities are similar to the ones of a lax tangent morphism, where the distributive law goes in the opposite direction). We refer to \(\beta\) as the **colax distributive law** of the morphism. We also adopt the notation \(\twoheadrightarrow\) to denote colax tangent morphisms. By a **strong tangent morphism** we mean a lax tangent morphism where the distributive law is an isomorphism. Notice that the underlying functor of a strong tangent morphism together with the inverse of the lax distributive law defines a colax tangent morphism. Finally, by a **strict tangent morphism** we refer to a strong tangent morphism whose distributive law is the identity. Since, in this case, the distributive law is trivial, we will omit it completely in the notation and simply refer to the functor as the strict tangent morphism. We denote by \(\mathsf{TngCat}\) the category of tangent categories and lax tangent morphisms. When required, we abuse notation and denote by \(\mathsf{TngCat}\) the \(2\)-category with the same objects and \(1\)-morphisms and whose \(2\)-morphisms are natural transformations compatible with the lax distributive laws. Similarly, we denote by \(\mathsf{TngCat}_{\cong}\) the category of tangent categories and strong tangent morphisms, and finally, by \(\mathsf{TngCat}_{=}\) the category of tangent categories and strict tangent morphisms. Adopting the same naming convention used in [13], a category \(\mathbb{K}\) is called **semi-additive** if \(\mathbb{K}\) has finite biproducts, which means that it admits finite products and finite coproducts and that the canonical morphism from finite coproducts to finite products is an isomorphism. We denote by \(\oplus\) the biproducts of \(\mathbb{K}\). In particular, in \(\mathsf{Mod}_{R}\), given two \(R\)-modules \(X\) and \(Y\), we denote the elements of \(X\oplus Y\) as pairs \((x,y)\) for each \(x\in X\) and \(y\in Y\). In such a category, the empty biproduct is denoted by \(0\) and is the zero object, which is an object that is both initial and terminal. 
Note that a category is semi-additive precisely when it is enriched over the category of commutative monoids. A semi-additive category \(\mathbb{X}\) comes equipped with a canonical tangent structure \(\mathbb{L}\) whose tangent bundle functor \(\mathrm{L}\) is the diagonal functor \(\mathrm{L}X=X\oplus X\); the projection is the projection on the first coordinate, i.e. \(p=\pi_{1}\colon X\oplus X\to X\); the zero morphism is the injection in the first coordinate, i.e. \(z=\iota_{1}\colon X\to X\oplus X\); the \(n\)-fold pullback of the projection along itself is (isomorphic to) the \((n+1)\)-fold biproduct \(\mathrm{L}_{n}X=X\oplus X\oplus\cdots\oplus X\); the sum morphism is the identity on the first coordinate and the sum on the second and the third, i.e. \(s\colon X\oplus X\oplus X\xrightarrow{\mathrm{id}_{X}\oplus+}X\oplus X\); the vertical lift maps the first coordinate to the first one and the second coordinate to the fourth one, i.e. \(l\colon X\oplus X\to X\oplus X\oplus X\oplus X\), \((x,y)\mapsto(x,0,0,y)\); the canonical flip swaps the two internal coordinates, i.e. \(c=\mathrm{id}_{X}\oplus\tau\oplus\mathrm{id}_{X}\colon X\oplus X\oplus X\oplus X\to X\oplus X\oplus X\oplus X\), where \(\tau=\langle\pi_{2},\pi_{1}\rangle\colon X\oplus Y\to Y\oplus X\); finally, if \(\mathbb{X}\) is **additive**, i.e. Ab-enriched, then the negation morphism is the identity on the first coordinate and the negation on the second, i.e. \(n=\mathrm{id}_{X}\oplus(-\mathrm{id}_{X})\colon X\oplus X\to X\oplus X\). In this paper, we refer to the tangent structure \(\mathbb{L}\) induced by additivity over \(\mathsf{Mod}_{R}\) as the **canonical tangent structure** and to \((\mathsf{Mod}_{R},\mathbb{L})\) as the **canonical tangent category**. For two composable morphisms \(f\colon A\to B\) and \(g\colon B\to C\) of a category \(\mathbb{X}\), we denote by \(g\circ f\) their composition. We will also often use the diagrammatic notation, i.e. \(fg\colon=g\circ f\). For functors, we adopt a similar notation with a single variation: when an object \(X\in\mathbb{X}\) is specified, we denote by \(GFX\) the object \((G\circ F)(X)\) and similarly for morphisms. An adjunction between two functors \(F\colon\mathbb{X}\to\mathbb{X}^{\prime}\) and \(G\colon\mathbb{X}^{\prime}\to\mathbb{X}\) with unit \(\eta\) and counit \(\varepsilon\) is denoted by \((\eta,\varepsilon)\colon F\dashv G\). A similar notation will be adopted for conjunctions in the context of double categories. ## 2 The geometry of affine schemes over an operad In [13], the author of this paper, Sacha Ikonicoff, and Jean-Simon Lemay showed that every operad provides a tangent structure over the category of operadic algebras ([13, Theorem 4.3.3]) as well as a tangent structure over the opposite of the same category ([13, Theorem 4.4.4]). Since this is the starting point for this paper, we dedicate this section to recall this construction. 
Concretely, the tangent structure \(\mathbb{L}^{(\mathcal{P})}\) over the category of \(\mathcal{P}\)-algebras is defined as follows: **tangent bundle functor**: The tangent bundle functor \(\mathrm{L}\colon\mathsf{Alg}_{\mathcal{P}}\to\mathsf{Alg}_{\mathcal{P}}\) maps every \(\mathcal{P}\)-algebra \(A\) to the semi-direct product \(A\ltimes A\), which is the \(\mathcal{P}\)-algebra over the \(R\)-module \(A\times A\) and with structure map \(\mu\left((a_{1},b_{1}),\ldots,(a_{m},b_{m})\right)\colon=\left(\mu(a_{1},\ldots,a_{m}),\sum_{k=1}^{m}\mu(a_{1},\ldots,b_{k},\ldots,a_{m})\right)\) **projection**: The projection \(p^{(\mathrm{L})}\colon\mathrm{L}A\to A\) projects along the first component, that is: \(p^{(\mathrm{L})}(a,b)\colon=a\) **\(n\)-fold pullbacks**: The \(n\)-fold pullback along the projection of the tangent bundle functor \(\mathrm{L}_{n}\colon\mathsf{Alg}_{\mathcal{P}}\to\mathsf{Alg}_{\mathcal{P}}\) maps every \(\mathcal{P}\)-algebra into the semi-direct product \(A\ltimes(A\times\cdots\times A)\). Moreover, the \(k\)-th projection \(\pi_{k}\colon\mathrm{L}_{n}A\to\mathrm{L}A\) is defined as follows: \(\pi_{k}(a;b_{1},\ldots,b_{n})\colon=(a,b_{k})\) **zero morphism**: The zero morphism \(z^{(\mathrm{L})}\colon A\to\mathrm{L}A\) injects into the first component, that is: \(z^{(\mathrm{L})}(a)\colon=(a,0)\) **sum morphism**: The sum morphism \(s^{(\mathrm{L})}\colon\mathrm{L}_{2}A\to\mathrm{L}A\) is defined by: \[s^{(\mathrm{L})}(a;b_{1},b_{2})\colon=(a;b_{1}+b_{2})\] **vertical lift**: The vertical lift \(l^{(\mathrm{L})}\colon\mathrm{L}A\to\mathrm{L}^{2}A\) is defined by: \[l^{(\mathrm{L})}(a,b)\colon=(a,0,0,b)\] **canonical flip**: The canonical flip \(c^{(\mathrm{L})}\colon\mathrm{L}^{2}A\to\mathrm{L}^{2}A\) is defined by: \[c^{(\mathrm{L})}(a_{1},b_{1},a_{2},b_{2})\colon=(a_{1},a_{2},b_{1},b_{2})\] **negation**: The negation morphism \(n^{(\mathrm{L})}\colon\mathrm{L}A\to\mathrm{L}A\) is defined by: \[n^{(\mathrm{L})}(a,b)\colon=(a,-b)\] On the other hand, the tangent structure \(\mathbb{T}^{(\mathcal{P})}\) over the opposite of the category of \(\mathcal{P}\)-algebras is defined as follows: **tangent bundle functor**: The tangent bundle functor \(\mathrm{T}\colon\mathsf{Alg}_{\mathcal{P}}^{\mathrm{op}}\to\mathsf{Alg}_{\mathcal{P}}^{\mathrm{op}}\) maps a \(\mathcal{P}\)-algebra \(A\) to the \(\mathcal{P}\)-algebra \(\mathsf{Free}_{A}\Omega A\), where \(\mathsf{Free}_{A}\colon\mathsf{Mod}_{A}\to\mathsf{Alg}_{\mathcal{P}}\) is the functor that maps an \(A\)-module \(M\) to the free \(\mathcal{P}\)-algebra under \(A\) (cf. [13, Proposition 4.4.1]) and \(\Omega A\) is the module of Kähler differentials of \(A\). Concretely, \(\mathrm{T}A\) is the \(\mathcal{P}\)-algebra generated by all elements \(a\) of \(A\) and symbols \(\mathrm{d}^{(\mathcal{P})}a\), for each \(a\in A\), such that the following relations are fulfilled: \[\mu_{\mathrm{T}A}(a_{1},\ldots,a_{m})=\mu_{A}(a_{1},\ldots,a_{m})\] \[\mathrm{d}^{(\mathcal{P})}(ra+sb)=r\,\mathrm{d}^{(\mathcal{P})}a+s\,\mathrm{d}^{(\mathcal{P})}b\] \[\mathrm{d}^{(\mathcal{P})}\left(\mu(a_{1},\ldots,a_{m})\right)=\sum_{k=1}^{m}\mu(a_{1},\ldots,\mathrm{d}^{(\mathcal{P})}a_{k},\ldots,a_{m})\] for every \(r,s\in R\) and \(a,b,a_{1},\ldots,a_{m}\in A\). In the following, we will omit the superscript \({}^{(\mathcal{P})}\) in \(\mathrm{d}^{(\mathcal{P})}\) whenever the operad \(\mathcal{P}\) is clear from the context. 
**projection**: The projection, regarded as an \(\mathsf{Alg}_{\mathcal{P}}\)-morphism, \(p^{(\mathrm{T})}\colon A\to\mathrm{T}A\) injects \(a\in A\) into \(a\in\mathrm{T}A\). **\(n\)-fold pullbacks**: The \(n\)-fold pushout (in \(\mathsf{Alg}_{\mathcal{P}}\)) along the projection of the tangent bundle functor \(\mathrm{T}_{n}:\mathsf{Alg}_{\mathcal{P}}^{\mathrm{op}}\to\mathsf{Alg}_{ \mathcal{P}}^{\mathrm{op}}\) is the \(\mathcal{P}\)-algebra generated by all the elements \(a\) of \(A\) and by symbols \(\mathrm{d}_{1}a\), \(\mathrm{d}_{2}a\), \(\ldots,\mathrm{d}_{n}a\), for each \(a\in A\), such that the following relations are fulfilled: \[\mu_{\mathrm{T}_{n}A}(a_{1},\ldots,a_{m})=\mu_{A}(a_{1},\ldots,a_{ m})\] \[\mathrm{d}_{i}(ra+sb)=r\mathrm{d}_{i}a+sd_{i}b\] \[\mathrm{d}_{i}\left(\mu(a_{1},\ldots,a_{m})\right)=\sum_{k=1}^{m }\mu(a_{1},\ldots,\mathrm{d}_{i}a_{k},\ldots,a_{m})\] for every \(r,s\in R\), \(a,b,a_{1},\ldots,a_{m}\in A\), and for every \(i=1,\ldots,n\). Moreover, the injections \(\iota_{k}\colon\mathrm{T}A\to\mathrm{T}_{n}A\) map each \(a\) to \(a\) and \(\mathrm{d}a\) to \(\mathrm{d}_{k}a\), for every \(k=1,\ldots,n\). **zero morphism**: The zero morphism, regarded as a \(\mathsf{Alg}_{\mathcal{P}}\)-morphism, \(z^{(\mathrm{T})}\colon\mathrm{T}A\to A\) projects each \(a\) to itself \(a\) and each \(\mathrm{d}a\) to \(0\). **sum morphism**: The sum morphism, regarded as a \(\mathsf{Alg}_{\mathcal{P}}\)-morphism, \(s^{(\mathsf{T})}\colon\mathrm{T}A\to\mathrm{T}_{2}A\) maps each \(a\) to \(a\) and each \(\mathsf{d}a\) into \(\mathsf{d}_{1}a+\mathsf{d}_{2}a\). **vertical lift**: The vertical lift, regarded as a \(\mathsf{Alg}_{\mathcal{P}}\)-morphism, \(l^{(\mathsf{T})}\colon\mathrm{T}^{2}A\to\mathrm{T}A\) maps each \(a\in A\) to \(a\), \(\mathsf{d}a\) and \(\mathsf{d}^{\prime}a\) to \(0\) and \(\mathsf{d}^{\prime}\mathsf{d}a\) to \(\mathsf{d}a\). **canonical flip**: The canonical flip, regarded as a \(\mathsf{Alg}_{\mathcal{P}}\)-morphism, \(c^{(\mathsf{T})}\colon\mathrm{T}^{2}A\to\mathrm{T}^{2}A\) maps each \(a\) to \(a\), \(\mathsf{d}a\) to \(\mathsf{d}^{\prime}a\), \(\mathsf{d}^{\prime}a\) to \(\mathsf{d}a\) and \(\mathsf{d}^{\prime}\mathsf{d}a\) to \(\mathsf{d}^{\prime}\mathsf{d}a\). **negation**: The negation morphism, regarded as a \(\mathsf{Alg}_{\mathcal{P}}\)-morphism, \(n^{(\mathsf{T})}\colon\mathrm{T}A\to\mathrm{T}A\) maps each \(a\) to \(a\) and each \(\mathsf{d}a\) to \(-\mathsf{d}a\). In the following, given an operad \(\mathcal{P}\), we refer to \(\mathsf{Alg}(\mathcal{P})\colon=(\mathsf{Alg}_{\mathcal{P}},\mathbb{L}^{( \mathcal{P})})\) as the **algebraic tangent category** of \(\mathcal{P}\) and to \(\mathsf{Geom}(\mathcal{P})\colon=(\mathsf{Alg}_{\mathcal{P}}^{\mathrm{op}}, \mathbb{T}^{(\mathcal{P})})\) as the **geometric tangent category** of \(\mathcal{P}\). ### The functoriality of the algebraic and the geometric tangent categories So far we recapped the main result of [13]: every operad produces two distinct tangent categories, \(\mathsf{Alg}(\mathcal{P})\) and \(\mathsf{Geom}(\mathcal{P})\). In this section, we explore the relationship between morphisms of operads and the corresponding morphisms of tangent categories; we will also show that this operation is functorial. 
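Before turning to functoriality, it may help to spell out the two structures above in a special case (our own illustration, not taken from the paper). For the associative operad, whose algebras are associative \(R\)-algebras, specializing the structure map of \(\mathrm{L}A\) to the binary multiplication gives
\[
\mathrm{L}A=A\ltimes A,\qquad (a,b)\cdot(a^{\prime},b^{\prime})=(aa^{\prime},\;ab^{\prime}+ba^{\prime}),
\]
the square-zero extension of \(A\) by itself, while \(\mathrm{T}A\) is the algebra generated by the elements of \(A\) together with symbols \(\mathrm{d}a\), subject to \(R\)-linearity of \(\mathrm{d}\) and the noncommutative Leibniz rule
\[
\mathrm{d}(ab)=(\mathrm{d}a)\,b+a\,(\mathrm{d}b),
\]
i.e. a presentation of the algebra built from the bimodule of noncommutative differentials of \(A\).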
First, we briefly recall that a morphism of operads \(\varphi\colon\mathcal{P}\to\mathcal{G}\) is a sequence of \(R\)-linear morphisms \(\{\varphi_{n}\colon\mathcal{P}(n)\to\mathcal{G}(n)\}_{n\in\mathbb{N}}\), compatible with the operadic structures, that is, given \(\mu\in\mathcal{P}(m),\mu_{1}\in\mathcal{P}(k_{1}),\dots,\mu_{m}\in\mathcal{P} (k_{m})\): \[\varphi_{1}(1_{\mathcal{P}})=1_{\mathcal{G}}\] \[\varphi_{M}(\mu(\mu_{1},\dots,\mu_{m}))=\varphi_{m}(\mu)(\varphi_ {k_{1}}(\mu_{1}),\dots,\varphi_{k_{m}}(\mu_{m}))\] where \(M\colon=k_{1}+\dots+k_{m}\). For the sake of simplicity, in the following, we will omit the index and simply denote by \(\varphi\) any of the morphisms in the sequence. A morphism of operads induces a forgetful functor \(\varphi^{*}\colon\mathsf{Alg}_{\mathcal{G}}\to\mathsf{Alg}_{\mathcal{P}}\), which sends a \(\mathcal{G}\)-algebra \(B\) into the \(\mathcal{P}\)-algebra \(\varphi^{*}B\) over the \(R\)-module underlying \(B\) and with structure map defined by: \[\mu_{\varphi^{*}B}(b_{1},\dots,b_{m})\colon=(\varphi(\mu))_{B}(b_{1},\dots,b_ {m})\] The functor \(\varphi^{*}\) admits a left adjoint \(\varphi_{!}\colon\mathsf{Alg}_{\mathcal{P}}\to\mathsf{Alg}_{\mathcal{G}}\), which sends each \(\mathcal{P}\)-algebra \(A\) to the \(\mathcal{G}\)-algebra \(\varphi_{!}A\) obtained by identifying the two structure maps induced by the operadic composition and by the structure map of \(A\) over the free \(\mathcal{G}\)-algebra over the underlying \(R\)-module of \(A\). Concretely, \(\varphi_{!}A\) can be understood as the coequalizer: where \(\theta\) is the structure map of \(A\). As already mentioned in the introduction, the existence of the algebraic tangent category \(\mathsf{Alg}(\mathcal{P})\) of an operad \(\mathcal{P}\) is a consequence of the fact that the monad \(\mathsf{S}_{\mathcal{P}}\) associated to \(\mathcal{P}\) carries a differential combinator \(\partial_{\mathcal{P}}\), so that \(\mathsf{S}_{\mathcal{P}}\) becomes a coCartesian differential monad over \(\mathsf{Mod}_{R}\) (see [13, Section 4.1] for details). As shown by [13, Corollary 3.2.6], over a semi-additive category \(\mathbb{X}\) there is a bijective correspondence between coCartesian differential monads over \(\mathbb{X}\) and tangent monads over the tangent category \((\mathbb{X},\mathbb{L})\), where \(\mathbb{L}\) is defined by the existence of biproducts in \(\mathbb{X}\) (cf. [13, Lemma 3.1.1]). We recall that a tangent monad, first introduced in [7, Definition 19], is a monad in the \(2\)-category \(\operatorname{TagCat}\) of tangent categories, lax tangent morphisms, and tangent natural transformations, which are natural transformations compatible in an obvious way with the distributive laws. We also recall that the distributive law associated to a tangent monad lifts the tangent structure over the base tangent category to the category of algebras of the monad. 
Concretely, the tangent bundle functor \(\operatorname{\mathbb{L}}^{(S)}\colon\operatorname{\mathsf{Alg}}_{S}\to \operatorname{\mathsf{Alg}}_{S}\) over the category of algebras of a tangent monad \((S,\alpha)\) over the canonical tangent category \((\operatorname{\mathsf{Mod}}_{R},\mathbb{L})\) sends an \(S\)-algebra \(A\) with structure map \(\theta\colon SA\to A\) into the \(S\)-algebra \(\operatorname{\mathbb{L}}\!A\) with structure map \(S\operatorname{\mathbb{L}}\!A\xrightarrow{\alpha}\operatorname{\mathbb{L}} \!SA\xrightarrow{\lambda\theta}\operatorname{\mathbb{L}}\!A\), where \(\alpha\colon S\circ\operatorname{\mathbb{L}}\Rightarrow\operatorname{ \mathbb{L}}\!\circ S\) is the lax distributive law of \(S\). This is precisely the origin of the tangent structure of \(\operatorname{\mathsf{Alg}}(\mathcal{P})\), which is lifted from the canonical tangent structure on \(\operatorname{\mathsf{Mod}}_{R}\). On the other hand, the tangent structure of \(\operatorname{\mathsf{Geom}}(\mathcal{P})\) is the adjoint tangent structure of the algebraic one (see [13, Section 4.4] for details). Concretely, this means that the tangent bundle functor \(\operatorname{\mathbb{T}}\), regarded as an endofunctor over \(\operatorname{\mathsf{Alg}}_{\mathcal{P}}\), is the left adjoint of \(\operatorname{\mathbb{L}}\) and that the projection, the zero morphism, the sum morphism, the vertical lift, the canonical flip and the negation of \(\operatorname{\mathbb{T}}\) are the mates of the corresponding natural transformations of \(\mathbb{L}\) along the adjunction \(\operatorname{\mathbb{T}}\dashv\operatorname{\mathbb{L}}\). The intimate connection between operads and tangent monads plays a crucial role in understanding the relationship between morphisms of operads and corresponding morphisms of tangent categories. It is not hard to see that a morphism of operads \(\varphi\colon\mathcal{P}\to\mathcal{G}\) induces a morphism of the corresponding tangent monads \(\varphi\colon(S_{\varphi},\alpha_{\mathcal{P}})\to(S_{\mathcal{G}},\alpha_{ \mathcal{G}})\), where we recall that the distributive law \(\alpha_{\mathcal{P}}\colon S_{\varphi}\circ\operatorname{\mathbb{L}}\Rightarrow \operatorname{\mathbb{L}}\circ S_{\varphi}\) associated to an operad \(\mathcal{P}\) is the natural transformation: \[\alpha_{\mathcal{P}}\left(\mu;(x_{1},y_{1}),\ldots,(x_{m},y_{m})\right)=\left( (\mu;x_{1},\ldots,x_{m}),\sum_{k=1}^{m}(\mu;x_{1},\ldots,y_{k},\ldots,x_{m})\right)\] In this context, a morphism of tangent monads \(\varphi\colon(S,\alpha)\to(W,\beta)\) over \((\operatorname{\mathsf{Mod}}_{R},\mathbb{L})\) consists of a natural transformation \(\varphi\colon S\Rightarrow W\), compatible with the lax distributive laws \(\alpha\) and \(\beta\), that is: (2.1) Moreover, since the tangent structure \(\mathbb{L}^{(S)}\) over the category of algebras \(\operatorname{\mathsf{Alg}}_{S}\) of a tangent monad \((S,\alpha)\) is lifted along the distributive law \(\alpha\) from the base tangent category \((\operatorname{\mathsf{Mod}}_{R},\mathbb{L})\), a morphism of tangent monads \(\varphi\colon(S,\alpha)\to(W,\beta)\) induces a strict tangent morphism \(\varphi^{*}\colon(\operatorname{\mathsf{Alg}}_{W},\mathbb{L}^{(W)})\to( \operatorname{\mathsf{Alg}}_{S},\mathbb{L}^{(S)})\), whose underlying functor is the forgetful functor which sends a \(W\)-algebra \(B\) with structure map \(\psi\colon WB\to B\) to the \(S\)-algebra \(B\) with structure map \(SB\xrightarrow{\varphi}WB\xrightarrow{\psi}B\). 
To see this, take a \(W\)-algebra \(B\) with structure map \(\psi\colon WB\to B\). So, \(\varphi^{*}\mathbb{L}^{(W)}B\) is the \(S\)-algebra \(B\) with structure map: \[S\mathbb{L}B\xrightarrow{\ \varphi_{\mathbb{L}B}\ }W\mathbb{L}B\xrightarrow{\ \beta\ }\mathbb{L}WB\xrightarrow{\ \mathbb{L}\psi\ }\mathbb{L}B\] On the other hand, \(\mathbb{L}^{(S)}\varphi^{*}B\) is the \(S\)-algebra \(B\) with structure map: \[S\mathbb{L}B\xrightarrow{\ \alpha\ }\mathbb{L}SB\xrightarrow{\ \mathbb{L}\varphi\ }\mathbb{L}WB\xrightarrow{\ \mathbb{L}\psi\ }\mathbb{L}B\] Thanks to Equation (2.1) and to the naturality of \(\varphi\), \(\varphi^{*}\mathbb{L}^{(W)}B\) is precisely \(\mathbb{L}^{(S)}\varphi^{*}B\). By putting together that morphisms of operads induce morphisms of tangent monads and that morphisms of tangent monads induce strict tangent morphisms of the corresponding tangent categories, we find that: **Proposition 2.1**.: _The operation which takes an operad to its algebraic tangent category extends to a functor \(\mathsf{Alg}^{\ast}\colon\mathsf{Operad}^{\mathsf{op}}\to\mathsf{TngCat}_{=}\) which sends a morphism of operads \(\varphi\colon\mathscr{P}\to\mathscr{E}\) to the strict tangent morphism \(\varphi^{\ast}\colon\mathsf{Alg}(\mathscr{E})\to\mathsf{Alg}(\mathscr{P})\)._ As previously recalled, a morphism of operads \(\varphi\colon\mathcal{P}\to\mathcal{E}\) induces a left adjoint \(\varphi_{!}\colon\mathsf{Alg}_{\mathcal{P}}\to\mathsf{Alg}_{\mathcal{E}}\). Given a tangent morphism \((G,\beta)\colon(\mathbb{X}^{\prime},\mathbb{T}^{\prime})\to(\mathbb{X},\mathbb{T})\) between two tangent categories whose underlying functor \(G\colon\mathbb{X}^{\prime}\to\mathbb{X}\) admits a left adjoint \(F\colon\mathbb{X}\to\mathbb{X}^{\prime}\), it is natural to ask whether or not the functor \(F\) inherits from \((G,\beta)\) a distributive law \(\alpha\) which makes \((F,\alpha)\) into a new tangent morphism. It turns out that this works only if \((G,\beta)\) is a colax tangent morphism. In that case, \(F\) becomes a lax tangent morphism. This interesting role played by colax tangent morphisms is better contextualized within the setting of double categories. Heuristically, a double category is a collection of objects together with two classes of morphisms, called horizontal and vertical morphisms, denoted by \(\to\) and \(\twoheadrightarrow\) respectively, and a collection of double cells, which are squares that can be composed horizontally and vertically. We invite the interested reader to consult [10] for more details on double categories. Notice that double categories can also be characterized as internal categories in the \(2\)-category of categories. **Proposition 2.2**.: _Tangent categories can be organized into a double category \(\mathsf{TngCat}\) whose horizontal morphisms are lax tangent morphisms, whose vertical morphisms are colax tangent morphisms, and whose double cells are **tangent double cells**, that is natural transformations \(\varphi\colon F_{2}\circ G\Rightarrow G^{\prime}\circ F_{1}\) fulfilling a suitable compatibility with the lax and colax distributive laws._ Proof.: The proof that \(\mathsf{TngCat}\) is a double category is straightforward but tedious, and is thus left to the reader. Proposition 2.2 shows that tangent categories can be organized into a double category. 
Conjunctions in this double category play a fundamental role in our story. Intuitively speaking, a conjunction in an arbitrary double category is the analog of an adjunction of \(1\)-morphisms in a \(2\)-category. Concretely, a conjunction consists of a vertical morphism \(G\colon\mathbb{X}^{\prime}\to\mathbb{X}\) together with a horizontal morphism \(F\colon\mathbb{X}\to\mathbb{X}^{\prime}\) and two double cells \(\eta\) and \(\varepsilon\) fulfilling the triangle identities. **Proposition 2.3**.: _If the underlying functor \(G\) of a colax tangent morphism \((G,\beta)\colon(\mathbb{X}^{\prime},\mathbb{T}^{\prime})\twoheadrightarrow( \mathbb{X},\mathbb{T})\) is the right adjoint in a functorial adjunction \((\eta,\varepsilon)\colon F\dashv G\), then the left adjoint \(F\) becomes a lax tangent morphism with the lax distributive law defined as the mate of \(\beta\) along the adjunction, that is:_ \[\alpha\colon F\circ\mathbb{T}\xrightarrow{F\intercal\eta}F\circ\mathbb{T} \circ G\circ F\xrightarrow{F\beta r}F\circ G\circ\mathbb{T}^{\prime}\circ F \xrightarrow{\varepsilon\gamma r}\mathbb{T}^{\prime}\circ F \tag{2.2}\] _In particular, \((\eta,\varepsilon)\colon(F,\alpha)\dashv(G,\beta)\) forms a conjunction in the double category \(\mathbb{T}\mathtt{ngCat}\). Finally, also the opposite holds: any conjunction in \(\mathbb{T}\mathtt{ngCat}\) is of the form \((\eta,\varepsilon)\colon(F,\alpha)\dashv(G,\beta)\) where \(\alpha\) is defined as in Equation (2.2) and \((\eta,\varepsilon)\colon F\dashv G\) is a functorial adjunction._ Proof.: Let's start by proving that \((F,\alpha)\) is a lax tangent morphism. The first step is to show that \(\alpha\) is compatible with the projections, i.e. \(\alpha p_{F}^{(\mathbb{T}^{\prime})}=Fp^{(\mathbb{T})}\), where \(p^{(\mathbb{T}^{\prime})}\) denotes the projection of the tangent structure \(\mathbb{T}^{\prime}\) and \(p^{(\mathbb{T})}\) the projection of \(\mathbb{T}\). We will adopt a similar notation for the other natural transformations of the tangent structures. This amounts to showing the commutativity of the following diagram: To express the commutativity of the diagrams that compose the whole diagram we adopted the following convention: with \(\mathtt{Nat}\) we denoted commutativity by naturality, by \((\beta;p^{(\mathbb{T})},p^{(\mathbb{T}^{\prime})})\) we denoted the compatibility between \(\beta\) and the projections, and \(\Delta\) indicates the triangle identities between the unit and the counit of the adjunction. In the following, we adopt a similar notation. The second step is to prove the compatibility with the zero morphisms. This amounts to showing that \(Fz^{(\mathbb{T})}\alpha=z_{F}^{(\mathbb{T}^{\prime})}\), i.e.: \(F\circ\mathbb{T}\xrightarrow{(F\circ\mathbb{T})\eta}F\circ\mathbb{T}\circ G\circ F \xrightarrow{F\beta r}F\circ G\circ\mathbb{T}^{\prime}\circ F\xrightarrow{ \varepsilon\gamma r\alpha r}\mathbb{T}^{\prime}\circ F\)\(\xrightarrow{F\alpha(\mathbb{T})\eta}F\circ G\circ F\)\(\xrightarrow{F\beta r}F\)\(\xrightarrow{F\alpha(\mathbb{T}) Let's show the compatibility with the sum morphism, which is \((\alpha)_{2}s_{F}^{(\mathrm{T})}=Fs^{(\mathrm{T})}\alpha\): Let's now show the compatibility with the vertical lifts, i.e. 
\(\alpha l_{F}^{(\mathbb{T}^{\prime})}=Fl^{(\mathbb{T})}\alpha_{\mathbb{T}}\mathbb{T}^{\prime}\alpha\): Finally, the compatibility with the canonical flips, i.e. \(\alpha_{\mathbb{T}}\mathbb{T}^{\prime}\alpha\,c_{F}^{(\mathbb{T}^{\prime})}=Fc^{(\mathbb{T})}\alpha_{\mathbb{T}}\mathbb{T}^{\prime}\alpha\): So far, we proved that \((F,\alpha)\) is a lax tangent morphism. The next step is to prove that the unit \(\eta\) and the counit \(\varepsilon\) of the adjunction \(F\dashv G\) define tangent double cells. This amounts to showing the commutativity of the corresponding diagrams. The converse is a straightforward computation we leave for the reader to spell out. Thanks to Proposition 2.3 we can extend \(\mathsf{Alg}\) to a covariant pseudofunctor which sends each morphism of operads \(\varphi\colon\mathcal{P}\to\mathcal{G}\) to a lax tangent morphism \((\varphi_{!},\beta_{!})\colon\mathsf{Alg}(\mathcal{P})\to\mathsf{Alg}(\mathcal{G})\). **Proposition 2.4**.: _The operation which takes an operad to its algebraic tangent category extends to a pseudofunctor \(\mathsf{Alg}_{!}\colon\mathsf{Operad}\to\mathsf{TngCat}\) which sends each morphism of operads \(\varphi\colon\mathcal{P}\to\mathcal{G}\) to the lax tangent morphism \((\varphi_{!},\beta_{!})\colon\mathsf{Alg}(\mathcal{P})\to\mathsf{Alg}(\mathcal{G})\), whose underlying functor is the left adjoint of \(\varphi^{*}\) and \(\beta_{!}\) is defined as follows:_ \[\beta_{!}\colon\varphi_{!}\circ\mathrm{L}^{(\mathcal{P})}\xrightarrow{\varphi_{!}\mathrm{L}^{(\mathcal{P})}\eta}\varphi_{!}\circ\mathrm{L}^{(\mathcal{P})}\circ\varphi^{*}\circ\varphi_{!}=\varphi_{!}\circ\varphi^{*}\circ\mathrm{L}^{(\mathcal{G})}\circ\varphi_{!}\xrightarrow{\varepsilon_{\mathrm{L}^{(\mathcal{G})}\varphi_{!}}}\mathrm{L}^{(\mathcal{G})}\circ\varphi_{!}\] **Remark 2.5**.: Notice that \(\mathsf{Alg}_{!}\) is only pseudofunctorial.
This comes from the fact that the left adjoint of a functor is only unique up to a unique natural isomorphism. Such a natural isomorphism equips \(\mathsf{Alg}_{!}\) with an associator and a left and a right unitor. In Proposition 2.4, we used that \(\varphi^{*}\) is a colax tangent morphism, which holds because \(\varphi^{*}\) is a strict tangent morphism. To unwrap the definition of \(\beta_{!}\) notice that, given a \(\mathcal{P}\)-algebra \(A\), \(\varphi_{!}(\mathrm{L}^{(\mathcal{P})}A)\) is the \(\mathcal{G}\)-algebra generated by pairs \((a,b)\) for \(a,b\in A\), satisfying some suitable relations defined by the coequalizer that defines \(\varphi_{!}\). Similarly, also \(\mathrm{L}^{(\mathcal{G})}(\varphi_{!}A)\) is generated by pairs \((a,b)\) for \(a,b\in A\). So, \(\beta_{!}\) sends each generator \((a,b)\) to the corresponding generator \((a,b)\). **Example 2.6**.: Consider the operads \(\mathcal{A}\)… **Example 2.7**.: Consider the operad \(\mathcal{L}ie\) of Lie algebras and the canonical morphism of operads \(\varphi\colon\mathcal{L}ie\to\mathcal{A}ss\) into the operad of associative algebras. The functor \(\mathsf{Alg}^{*}\) sends \(\varphi\) to the strict tangent morphism whose underlying functor \(\varphi^{*}\colon\mathsf{Alg}(\mathcal{A}ss)\to\mathsf{Alg}(\mathcal{L}ie)\) takes an associative algebra \(A\) to its underlying Lie algebra, with bracket \([a,b]=ab-ba\); the functor \(\mathsf{Alg}_{!}\) sends \(\varphi\) to the lax tangent morphism whose underlying functor \(\varphi_{!}\) is the universal enveloping algebra functor. **Remark 2.8**.: In general, the lax distributive law \(\beta_{!}\) produced by Proposition 2.4 is not invertible. We briefly recall that a tangent structure \(\mathbb{L}\) over a category \(\mathbb{X}\) is called **adjunctable** (in [5, Proposition 5.17] the authors introduced the "dual tangent structure", while in [13, Definition 2.2.1] the authors use the expression "having an adjoint tangent structure";
Here we use "adjunctable tangent structure") if for any positive integer \(n\), the functor \(\mathbb{L}_{n}:\mathbb{X}\to\mathbb{X}\), which sends each object \(A\in\mathbb{X}\) to the \(n\)-fold pullback \(\mathbb{L}_{m}A\) along the projection over \(A\), admits a left adjoint \(\mathbb{T}_{n}\). Cockett and Cruttwell proved in [5, Proposition 5.17] that if \(\mathbb{L}\) is adjunctable, then the opposite category \(\mathbb{X}^{\mathsf{op}}\) of \(\mathbb{X}\) admits a tangent structure \(\mathbb{T}\), called the **adjoint tangent structure** of \(\mathbb{L}\), whose tangent bundle functor is the left adjoint \(\mathbb{T}\) of \(\mathbb{L}\) and whose projection, zero morphism, sum morphism, vertical lift and canonical flip are mates of the corresponding natural transformations of \(\mathbb{L}\). Thanks to [13, Corollary 2.2.4], if \(\mathbb{X}\) has enough finite colimits, e.g. \(\mathbb{X}\) is cocomplete, then a tangent structure \(\mathbb{L}\) over \(\mathbb{X}\) is adjunctable if and only if the tangent bundle functor \(\mathbb{L}\) admits a left adjoint \(\mathbb{T}\). In the following we denote by \(\operatorname{adj}\mathsf{Tr}\mathsf{G}\mathsf{C}\mathsf{t}\) the \(2\)-category of adjunctable tangent categories, lax tangent morphisms and tangent natural transformations. **Proposition 2.9**.: _The operation which takes an adjunctable tangent category \((\mathbb{X},\mathbb{L})\) to its associated adjoint tangent category \((\mathbb{X}^{\mathsf{op}},\mathbb{T})\) extends to a pseudofunctor \((-)^{\mathsf{op}}:\operatorname{adj}\mathsf{Tr}\mathsf{G}\mathsf{C}\mathsf{t }\to\operatorname{adj}\mathsf{Tr}\mathsf{G}\mathsf{C}\mathsf{t}\), which equips the \(2\)-category \(\operatorname{adj}\mathsf{Tr}\mathsf{G}\mathsf{C}\mathsf{t}\) with a **pseudoinvolution**, that is an endofunctor together with a natural isomorphism \((-)^{\mathsf{op}}\circ(-)^{\mathsf{op}}\Rightarrow\operatorname{id}_{ \operatorname{adj}\mathsf{Tr}\mathsf{G}\mathsf{C}\mathsf{t}}\). In particular, given two adjunctable tangent categories \((\mathbb{X},\mathbb{L})\) and \((\mathbb{X}^{\prime},\mathbb{L}^{\prime})\) with adjoint tangent categories \((\mathbb{X}^{\mathsf{op}},\mathbb{T})\) and \((\mathbb{X}^{\prime\mathsf{op}},\mathbb{T}^{\prime})\),respectively, and a lax tangent morphism \((F,\alpha):(\mathbb{X},\mathbb{L})\to(\mathbb{X}^{\prime},\mathbb{L}^{\prime})\), \((F,\alpha)^{\mathsf{op}}:(\mathbb{X}^{\mathsf{op}},\mathbb{T})\to(\mathbb{X}^ {\prime\mathsf{op}},\mathbb{T}^{\prime})\) is the lax tangent morphism whose underlying functor is \(F^{\mathsf{op}}\) and whose lax distributive law \(\alpha^{\mathsf{op}}\), is the mate of \(\alpha\) along the adjunctions \((\theta,\tau)\colon\mathbb{T}+\mathbb{L}\) and \((\theta^{\prime},\tau^{\prime})\colon\mathbb{T}^{\prime}+\mathbb{L}^{\prime}\), that is:_ \[\alpha^{\mathsf{op}}:\mathbb{T}^{\prime}\circ F\xrightarrow{\mathbb{T}^{ \prime}F\theta}\mathbb{T}^{\prime}\circ F\circ\mathbb{L}\circ\mathbb{T} \xrightarrow{\mathbb{T}^{\prime}\alpha_{\mathbb{T}}}\mathbb{T}^{\prime}\circ L ^{\prime}\circ F\circ\mathbb{T}\xrightarrow{\mathbb{T}^{\prime}\tau_{ \mathbb{T}\tau_{\mathbb{T}\tau}}}F\circ\mathbb{T}\] _regarded as a morphism in \(\mathbb{X}^{\prime}\)._ Proof.: By definition, the natural transformations (i.e. 
projection etcetera) of the adjoint tangent structure \(\mathbb{T}\) of a tangent structure \(\mathbb{L}\) are mates along the adjunction \((\theta,\tau)\colon\mathbb{T}\dashv\mathbb{L}\) between the tangent bundle functors of the corresponding natural transformations of \(\mathbb{L}\). Thanks to [1, Proposition 2.2], the mate of a pasting diagram is the pasting diagram of the mates, as long as the mate of each morphism of the diagram is well-defined. Therefore, given a lax tangent morphism \((F,\alpha)\), the distributive law \(\alpha^{\mathsf{op}}\) is compatible with the tangent structures and thus \((F^{\mathsf{op}},\alpha^{\mathsf{op}})\) is a lax tangent morphism between the corresponding adjoint tangent categories. To prove that \((-)^{\mathsf{op}}\) is a pseudofunctor notice first that, given three adjunctable tangent categories \((\mathbb{X},\mathbb{L})\), \((\mathbb{X}^{\prime},\mathbb{L}^{\prime})\) and \((\mathbb{X}^{\prime\prime},\mathbb{L}^{\prime\prime})\) with adjoint tangent categories \((\mathbb{X}^{\mathsf{op}},\mathbb{T})\), \((\mathbb{X}^{\prime\mathsf{op}},\mathbb{T}^{\prime})\) and \((\mathbb{X}^{\prime\prime\mathsf{op}},\mathbb{T}^{\prime\prime})\), respectively, and two lax tangent morphisms \((F,\alpha)\colon(\mathbb{X},\mathbb{L})\to(\mathbb{X}^{\prime},\mathbb{L}^{\prime})\) and \((G,\beta)\colon(\mathbb{X}^{\prime},\mathbb{L}^{\prime})\to(\mathbb{X}^{\prime\prime},\mathbb{L}^{\prime\prime})\), the composition of \((F^{\mathsf{op}},\alpha^{\mathsf{op}})\) with \((G^{\mathsf{op}},\beta^{\mathsf{op}})\) is \((G^{\mathsf{op}}\circ F^{\mathsf{op}},G\alpha^{\mathsf{op}}\circ\beta_{F}^{\mathsf{op}})\). This must be compared with the opposite of the composition \((G\circ F,\beta_{F}\circ G\alpha)\). However, by the pasting diagram property of mates, these are the same lax tangent morphism. Similarly, we can argue that \((\operatorname{id}_{\mathbb{X}}^{\mathsf{op}},\operatorname{id}_{\mathbb{L}}^{\mathsf{op}})\) corresponds precisely to \((\operatorname{id}_{\mathbb{X}^{\mathsf{op}}},\operatorname{id}_{\mathbb{T}})\). Finally, notice that if \((\mathbb{X},\mathbb{L})\) is adjunctable, then so is its adjoint tangent category \((\mathbb{X}^{\mathsf{op}},\mathbb{T})\) and its adjoint is (isomorphic to) \((\mathbb{X},\mathbb{L})\). This makes \((-)^{\mathsf{op}}\) a pseudoinvolution over \(\mathsf{adjTngCat}\). **Remark 2.10**.: We point out that \((-)^{\mathsf{op}}\) defined by Proposition 2.9 is only a pseudofunctor and not a strict functor because the choice of a left adjoint for the tangent bundle functor \(\mathbb{L}\) is only unique up to a unique isomorphism. This implies that associativity and unitality are only defined up to a unique isomorphism, which defines the associator and the left and the right unitors of \((-)^{\mathsf{op}}\). **Remark 2.11**.: One could hope that a similar pseudoinvolution \((-)^{\mathsf{op}}\) could also occur in the \(2\)-category \(\mathsf{adjTngCat}_{\mathsf{co}}\) of adjunctable tangent categories, colax tangent morphisms, and corresponding tangent natural transformations. However, this is not the case. The reason is that mates of the colax distributive laws along the adjunctions of the tangent bundle functors are simply not well-defined. This breaking of symmetry plays a crucial role in understanding the differences between non-commutative algebraic geometry and the geometry of commutative affine schemes. We will come back to this point later in Example 2.15.
Before proving the functoriality of the operation which takes an operad to its geometric tangent category, we notice an interesting fact. **Lemma 2.12**.: _Consider a strong tangent morphism \((G,\alpha)\colon(\mathbb{X}^{\prime},\mathbb{L}^{\prime})\to(\mathbb{X},\mathbb{L})\) between two adjunctable tangent categories. Suppose also that the functor \(G\) has a left adjoint \(F\dashv G\) and denote by \(\beta:=\alpha^{-1}\colon\mathbb{L}\circ G\Rightarrow G\circ\mathbb{L}^{\prime}\) the inverse of \(\alpha\colon G\circ\mathbb{L}^{\prime}\Rightarrow\mathbb{L}\circ G\). Then the corresponding tangent morphism \((F^{\mathsf{op}},(\beta_{!})^{\mathsf{op}})\colon(\mathbb{X}^{\mathsf{op}},\mathbb{T})\to(\mathbb{X}^{\prime\mathsf{op}},\mathbb{T}^{\prime})\) over the left adjoint \(F\) and between the adjoint tangent categories is also strong._ Proof.: By Proposition 2.3, the mate of \(\beta\) along the adjunction \(F\dashv G\) defines a lax tangent morphism \((F,\beta_{!})\colon(\mathbb{X},\mathbb{L})\to(\mathbb{X}^{\prime},\mathbb{L}^{\prime})\), where \(\beta_{!}\colon F\circ\mathbb{L}\Rightarrow\mathbb{L}^{\prime}\circ F\). By Proposition 2.9, the mate of the distributive law \(\alpha\) along the adjunctions between the tangent bundle functors and their left adjoints defines a lax tangent morphism \((G^{\mathsf{op}},\alpha^{\mathsf{op}})\colon(\mathbb{X}^{\prime\mathsf{op}},\mathbb{T}^{\prime})\to(\mathbb{X}^{\mathsf{op}},\mathbb{T})\), so that, as an \(\mathbb{X}\)-morphism, \(\alpha^{\mathsf{op}}\colon\mathbb{T}\circ G\Rightarrow G\circ\mathbb{T}^{\prime}\). Similarly, \(\beta_{!}\) defines, again by mating, a lax tangent morphism \((F^{\mathsf{op}},(\beta_{!})^{\mathsf{op}})\colon(\mathbb{X}^{\mathsf{op}},\mathbb{T})\to(\mathbb{X}^{\prime\mathsf{op}},\mathbb{T}^{\prime})\), so that, as an \(\mathbb{X}^{\prime}\)-morphism, \((\beta_{!})^{\mathsf{op}}\colon\mathbb{T}^{\prime}\circ F\Rightarrow F\circ\mathbb{T}\). Interestingly, \(\alpha^{\mathsf{op}}\) admits a second mate along the adjunction \((\eta,\varepsilon)\colon F\dashv G\): \[(\alpha^{\mathsf{op}})_{!}\colon F\circ\mathbb{T}\xrightarrow{F\mathbb{T}\eta}F\circ\mathbb{T}\circ G\circ F\xrightarrow{F\alpha^{\mathsf{op}}_{F}}F\circ G\circ\mathbb{T}^{\prime}\circ F\xrightarrow{\varepsilon_{\mathbb{T}^{\prime}F}}\mathbb{T}^{\prime}\circ F\] regarded as a morphism in \(\mathbb{X}^{\prime}\). Thus, we also obtain a colax tangent morphism \((F^{\mathsf{op}},(\alpha^{\mathsf{op}})_{!})\colon(\mathbb{X}^{\mathsf{op}},\mathbb{T})\to(\mathbb{X}^{\prime\mathsf{op}},\mathbb{T}^{\prime})\). To prove that \((\alpha^{\mathsf{op}})_{!}\) is the inverse of \((\beta_{!})^{\mathsf{op}}\), consider the following diagram: where \((\eta,\varepsilon)\colon F\dashv G\), \((\theta,\tau)\colon\mathbb{T}\dashv\mathbb{L}\) and \((\theta^{\prime},\tau^{\prime})\colon\mathbb{T}^{\prime}\dashv\mathbb{L}^{\prime}\). This shows that the following diagram commutes: We just proved that \((\beta_{!})^{\mathsf{op}}\circ(\alpha^{\mathsf{op}})_{!}=\operatorname{id}_{F\circ\mathbb{T}}\). Similarly, one can also prove the converse and conclude that \((\alpha^{\mathsf{op}})_{!}\) is the inverse of \((\beta_{!})^{\mathsf{op}}\), as expected.
**Remark 2.13**.: Given a pair of conjoints \((F,\beta_{!})\dashv(G,\alpha)\) in the double category of tangent categories where \((G,\alpha)\) is a strong tangent morphism, Lemma 2.12 establishes that the pseudofunctor \((-)^{\mathsf{op}}\) maps \((F,\beta_{!})\dashv(G,\alpha)\) to another pair of conjoints \((G^{\mathsf{op}},\alpha^{\mathsf{op}})\dashv(F^{\mathsf{op}},(\beta_{!})^{\mathsf{op}})\) and that \((F^{\mathsf{op}},(\beta_{!})^{\mathsf{op}})\) is also a strong tangent morphism. However, if \((G,\alpha)\) is strict this does not imply that \((F^{\mathsf{op}},(\beta_{!})^{\mathsf{op}})\) is strict as well. In the following diagram, we represent the proof of Lemma 2.12. Starting from \(\beta\), which is the inverse of the strong distributive law \(\alpha\), by moving to the right, i.e. by mating along the adjunction \(F\dashv G\), we obtain a lax distributive law \(\beta_{!}\), which, as noticed in Remark 2.8, in general, is not invertible. By moving down from \(\beta_{!}\), by applying the pseudofunctor \((-)^{\mathsf{op}}\), we obtain a lax distributive law \((\beta_{!})^{\mathsf{op}}\). Similarly, by starting from \(\alpha\) and moving down, i.e. applying \((-)^{\mathsf{op}}\), we obtain a lax distributive law \(\alpha^{\mathsf{op}}\), which, as mentioned in Remark 2.10, in general, is not invertible. Finally, by moving from \(\alpha^{\mathsf{op}}\) to the right, i.e. by mating along the adjunction \(F\dashv G\), we obtain a colax distributive law \((\alpha^{\mathsf{op}})_{!}\), which turns out to be the inverse of \((\beta_{!})^{\mathsf{op}}\). We can now prove the functoriality of the operation which takes an operad to its associated geometric tangent category. Similarly to the algebraic counterpart of this construction, this operation extends to two functors, one mapping an operad morphism \(\varphi\) to a lax tangent morphism whose underlying functor is \((\varphi^{*})^{\mathsf{op}}\) and the second to a strong tangent morphism whose underlying functor is \(\varphi_{!}^{\mathsf{op}}\). **Proposition 2.14**.: _The operation which takes an operad \(\mathcal{P}\) to its associated geometric tangent category \(\mathsf{Geom}(\mathcal{P})\) extends to a contravariant pseudofunctor \(\mathsf{Geom}^{*}\colon\mathsf{Operad}^{\mathsf{op}}\to\mathsf{TngCat}\) which sends a morphism of operads \(\varphi\colon\mathcal{P}\to\mathcal{G}\) to the lax tangent morphism \((\varphi^{*},\alpha^{*})\colon\mathsf{Geom}(\mathcal{G})\to\mathsf{Geom}(\mathcal{P})\), where \(\alpha^{*}\) is defined as follows:_ \[\alpha^{*}\colon\mathrm{T}^{(\mathcal{P})}\circ\varphi^{*}\xrightarrow{\mathrm{T}^{(\mathcal{P})}\varphi^{*}\theta^{(\mathcal{G})}}\mathrm{T}^{(\mathcal{P})}\circ\varphi^{*}\circ\mathrm{L}^{(\mathcal{G})}\circ\mathrm{T}^{(\mathcal{G})}=\mathrm{T}^{(\mathcal{P})}\circ\mathrm{L}^{(\mathcal{P})}\circ\varphi^{*}\circ\mathrm{T}^{(\mathcal{G})}\xrightarrow{\tau^{(\mathcal{P})}_{\varphi^{*}\mathrm{T}^{(\mathcal{G})}}}\varphi^{*}\circ\mathrm{T}^{(\mathcal{G})}\] _where \((\theta^{(\mathcal{P})},\tau^{(\mathcal{P})})\colon\mathrm{T}^{(\mathcal{P})}\dashv\mathrm{L}^{(\mathcal{P})}\) and \((\theta^{(\mathcal{G})},\tau^{(\mathcal{G})})\colon\mathrm{T}^{(\mathcal{G})}\dashv\mathrm{L}^{(\mathcal{G})}\)._
_Moreover, the same operation extends also to a covariant pseudofunctor \(\mathsf{Geom}_{!}\colon\mathsf{Operad}\to\mathsf{TngCat}_{\cong}\) which sends a morphism of operads \(\varphi\colon\mathcal{P}\to\mathcal{G}\) to the strong tangent morphism \((\varphi_{!},\alpha_{!})\colon\mathsf{Geom}(\mathcal{P})\to\mathsf{Geom}(\mathcal{G})\), where \(\alpha_{!}\) is defined as follows:_ \[\alpha_{!}\colon\mathrm{T}^{(\mathcal{G})}\circ\varphi_{!}\xrightarrow{\mathrm{T}^{(\mathcal{G})}\varphi_{!}\theta^{(\mathcal{P})}}\mathrm{T}^{(\mathcal{G})}\circ\varphi_{!}\circ\mathrm{L}^{(\mathcal{P})}\circ\mathrm{T}^{(\mathcal{P})}\xrightarrow{\mathrm{T}^{(\mathcal{G})}(\beta_{!})_{\mathrm{T}^{(\mathcal{P})}}}\mathrm{T}^{(\mathcal{G})}\circ\mathrm{L}^{(\mathcal{G})}\circ\varphi_{!}\circ\mathrm{T}^{(\mathcal{P})}\xrightarrow{\tau^{(\mathcal{G})}_{\varphi_{!}\mathrm{T}^{(\mathcal{P})}}}\varphi_{!}\circ\mathrm{T}^{(\mathcal{P})}\] _where \(\beta_{!}\) is defined as in Proposition 2.4._ Concretely, given a morphism \(\varphi\colon\mathcal{P}\to\mathcal{G}\) and a \(\mathcal{G}\)-algebra \(B\), \(\varphi^{*}(\mathrm{T}^{(\mathcal{G})}B)\) is a \(\mathcal{P}\)-algebra generated by all \(b\in B\) and by symbols \(\mathrm{d}^{(\mathcal{G})}b\), for \(b\in B\), satisfying suitable relations. On the other hand, \(\mathrm{T}^{(\mathcal{P})}(\varphi^{*}B)\) is generated by all \(b\in B\) and by symbols \(\mathrm{d}^{(\mathcal{P})}b\), for \(b\in B\), satisfying suitable relations. Thus, the distributive law \(\alpha^{*}\colon\mathrm{T}^{(\mathcal{P})}(\varphi^{*}B)\to\varphi^{*}(\mathrm{T}^{(\mathcal{G})}B)\) associated with \(\varphi^{*}\) sends each \(b\) to \(b\) and each \(\mathrm{d}^{(\mathcal{P})}b\) to \(\mathrm{d}^{(\mathcal{G})}b\). Similarly, given a \(\mathcal{P}\)-algebra \(A\), \(\varphi_{!}(\mathrm{T}^{(\mathcal{P})}A)\) is generated by all \(a\in A\) and by \(\mathrm{d}^{(\mathcal{P})}a\) for \(a\in A\), satisfying suitable relations. On the other hand, \(\mathrm{T}^{(\mathcal{G})}(\varphi_{!}A)\) is generated by all \(a\in A\) and by symbols \(\mathrm{d}^{(\mathcal{G})}a\), for \(a\in A\), satisfying suitable relations. Thus, the distributive law \(\alpha_{!}\colon\varphi_{!}(\mathrm{T}^{(\mathcal{P})}A)\to\mathrm{T}^{(\mathcal{G})}(\varphi_{!}A)\) sends each \(a\) to \(a\) and each \(\mathrm{d}^{(\mathcal{P})}a\) to \(\mathrm{d}^{(\mathcal{G})}a\). **Example 2.15**.: In Example 2.7 we showed how the canonical morphism of operads \(\varphi\colon\mathcal{L}ie\to\mathcal{A}ss\) is mapped by the functors \(\mathsf{Alg}^{*}\) and \(\mathsf{Alg}_{!}\). The functor \(\mathsf{Geom}^{*}\) maps \(\varphi\) to the lax tangent morphism defined over the pullback functor \(\varphi^{*}\). Interestingly, this lax tangent morphism is not strong, i.e.
the distributive law \(\mathrm{T}^{(\mathcal{L}ie)}\circ\varphi^{*}\Rightarrow\varphi^{*}\circ\mathrm{T}^{(\mathcal{A}ss)}\) is not invertible. Concretely, the underlying functor of \(\mathsf{Geom}^{*}(\varphi)\) is (the opposite of) \(\varphi^{*}\). In order to understand the distributive law \(\mathrm{T}^{(\mathcal{L}ie)}\circ\varphi^{*}\Rightarrow\varphi^{*}\circ\mathrm{T}^{(\mathcal{A}ss)}\) (as a morphism of Lie algebras), let's first take a closer look at \(\mathrm{T}^{(\mathcal{L}ie)}(\varphi^{*}(A))\) and \(\varphi^{*}(\mathrm{T}^{(\mathcal{A}ss)}(A))\) for an associative algebra \(A\). The former one is the Lie algebra generated by \(a\in A\) and by symbols \(\mathrm{d}^{(\mathcal{L}ie)}a\) for each \(a\in A\), satisfying the following relations: \[[a,b]=ab-ba\] \[\mathrm{d}^{(\mathcal{L}ie)}([a,b])=[\mathrm{d}^{(\mathcal{L}ie)}a,b]+[a,\mathrm{d}^{(\mathcal{L}ie)}b]\] The second algebra is generated by \(a\in A\) and by symbols \(\mathrm{d}^{(\mathcal{A}ss)}a\) for each \(a\in A\), satisfying the following relations: \[[a,b]=ab-ba\] \[\mathrm{d}^{(\mathcal{A}ss)}(ab)=\mathrm{d}^{(\mathcal{A}ss)}a\cdot b+a\,\mathrm{d}^{(\mathcal{A}ss)}b\] \[[a,\mathrm{d}^{(\mathcal{A}ss)}b]=a\,\mathrm{d}^{(\mathcal{A}ss)}b-\mathrm{d}^{(\mathcal{A}ss)}b\cdot a\] \[[\mathrm{d}^{(\mathcal{A}ss)}a,\mathrm{d}^{(\mathcal{A}ss)}b]=\mathrm{d}^{(\mathcal{A}ss)}a\,\mathrm{d}^{(\mathcal{A}ss)}b-\mathrm{d}^{(\mathcal{A}ss)}b\,\mathrm{d}^{(\mathcal{A}ss)}a\] Note that the relations of the former one are implied by the relations of the latter. The canonical quotient map \(\mathrm{T}^{(\mathcal{L}ie)}(\varphi^{*}(A))\to\varphi^{*}(\mathrm{T}^{(\mathcal{A}ss)}(A))\) corresponds to the distributive law. Note that such a map is not an isomorphism. Finally, the functor \(\mathsf{Geom}_{!}\) maps \(\varphi\) to the lax tangent morphism whose underlying functor is the (opposite of the) universal enveloping algebra functor \(\varphi_{!}\). To understand the distributive law \(\mathrm{T}^{(\mathcal{A}ss)}\circ\varphi_{!}\Rightarrow\varphi_{!}\circ\mathrm{T}^{(\mathcal{L}ie)}\) (as an associative algebra morphism), we first take a closer look at \(\mathrm{T}^{(\mathcal{A}ss)}(\varphi_{!}(\mathfrak{g}))\) and \(\varphi_{!}(\mathrm{T}^{(\mathcal{L}ie)}(\mathfrak{g}))\) for a Lie algebra \(\mathfrak{g}\).
The former is the associative algebra generated by all \(g\in\mathfrak{g}\) and by symbols \(\mathrm{d}^{(\mathcal{A}ss)}g\) for each \(g\in\mathfrak{g}\), satisfying the relations: \[gh-hg=[g,h]\] \[\mathrm{d}^{(\mathcal{A}ss)}(gh)=\mathrm{d}^{(\mathcal{A}ss)}g\cdot h+g\,\mathrm{d}^{(\mathcal{A}ss)}h\] The latter is the associative algebra generated by \(g\in\mathfrak{g}\) and by symbols \(\mathrm{d}^{(\mathcal{L}ie)}g\) for each \(g\in\mathfrak{g}\), satisfying the relations: \[gh-hg=[g,h]\] \[\mathrm{d}^{(\mathcal{L}ie)}g\cdot h-h\,\mathrm{d}^{(\mathcal{L}ie)}g=[\mathrm{d}^{(\mathcal{L}ie)}g,h]\] \[g\,\mathrm{d}^{(\mathcal{L}ie)}h-\mathrm{d}^{(\mathcal{L}ie)}h\cdot g=[g,\mathrm{d}^{(\mathcal{L}ie)}h]\] \[\mathrm{d}^{(\mathcal{L}ie)}g\cdot\mathrm{d}^{(\mathcal{L}ie)}h-\mathrm{d}^{(\mathcal{L}ie)}h\cdot\mathrm{d}^{(\mathcal{L}ie)}g=[\mathrm{d}^{(\mathcal{L}ie)}g,\mathrm{d}^{(\mathcal{L}ie)}h]\] \[\mathrm{d}^{(\mathcal{L}ie)}[g,h]=[\mathrm{d}^{(\mathcal{L}ie)}g,h]+[g,\mathrm{d}^{(\mathcal{L}ie)}h]\] Because the first set of relations implies the second (for instance, applying \(\mathrm{d}^{(\mathcal{A}ss)}\) to the relation \(gh-hg=[g,h]\) and using the Leibniz rule yields \(\mathrm{d}^{(\mathcal{A}ss)}[g,h]=[\mathrm{d}^{(\mathcal{A}ss)}g,h]+[g,\mathrm{d}^{(\mathcal{A}ss)}h]\), where the brackets on the right denote commutators), this allows us to define a morphism of associative algebras \(\varphi_{!}(\mathrm{T}^{(\mathcal{L}ie)}(\mathfrak{g}))\to\mathrm{T}^{(\mathcal{A}ss)}(\varphi_{!}(\mathfrak{g}))\), which corresponds to the (inverse of the) distributive law. Thanks to Lemma 2.12, this morphism is an isomorphism.

## 3 The slice tangent category as a right adjoint functor

Rosicky proved that, under mild assumptions, the slice of a tangent category \((\mathbb{X},\mathbb{T})\) over an object \(A\in\mathbb{X}\) is still a tangent category (cf. [16]). Cockett and Cruttwell further investigated this construction and related it to the notion of tangent fibrations (cf. [4]). In this section, we prove an important result that shows the deep relationship between operads and tangent categories. In a nutshell, we show that the slice tangent category of the geometric tangent category of an operad \(\mathcal{P}\) over a \(\mathcal{P}\)-affine scheme \(A\in\mathsf{Geom}(\mathcal{P})\) is still the geometric tangent category \(\mathsf{Geom}(\mathcal{P}^{(A)})\) of an operad \(\mathcal{P}^{(A)}\). In particular, \(\mathcal{P}^{(A)}\) is the enveloping operad of the \(\mathcal{P}\)-algebra \(A\). To prove this result we are going to show that the functor which associates to each pair \(((\mathbb{X},\mathbb{T}),A)\), formed by a tangent category (sliceable over \(A\)) and an object \(A\in\mathbb{X}\), the corresponding slice tangent category \((\mathbb{X},\mathbb{T})/A\) fulfills the same universality condition as the functor that associates to each pair \((\mathcal{P},A)\), formed by an operad \(\mathcal{P}\) and a \(\mathcal{P}\)-algebra \(A\), the corresponding enveloping operad \(\mathcal{P}^{(A)}\). This is not just an important connection between the world of operads and the one of tangent categories: it also provides a new characterization of the construction of the slice tangent category in terms of a right adjoint functor, and therefore it also constitutes a new result in tangent category theory. The section is organized as follows: first, we recall the original definition of the slice tangent category of a tangent category over an object. Then, we introduce the new characterization of this construction in terms of a right adjoint functor. Let's start with the main definitions.
**Definition 3.1**.: _A tangent category \((\mathbb{X},\mathbb{T})\) is **sliceable over** an object \(A\in\mathbb{X}\) if for any \(E\in\mathbb{X}\) and \(f\colon E\to A\) in \(\mathbb{X}\), the \(\mathrm{T}\)-pullback of \(\mathrm{T}f\) along the zero morphism is well-defined, that is the following diagram:_ (3.1) _is a well-defined pullback diagram and for every positive integer \(m\) the functor \(\mathrm{T}^{m}\colon=\mathrm{T}\circ\mathrm{T}\circ\ldots\circ\mathrm{T}\) preserves its universality. We also say that a tangent category is **sliceable** if it is sliceable over all of its objects._ Given a sliceable tangent category \((\mathbb{X},\mathbb{T})\) over an object \(A\) we can define a tangent bundle functor: \[\mathrm{T}^{(A)}\colon\mathbb{X}/A\to\mathbb{X}/A\] which maps each morphism \(f\colon E\to A\) in \(\mathbb{X}\) to the unique morphism \(f^{*}\colon\mathrm{T}^{(A)}E\to A\). We adopt the following notation: we will write \(\mathrm{T}^{(A)}f\) for the tangent bundle over \(f\in\mathbb{X}/A\), regarded as an object in the slice category over \(A\). Abusing notation, we also denote by \(\mathrm{T}^{(A)}E\) the domain of \(\mathrm{T}^{(A)}f\colon\mathrm{T}^{(A)}E\to A\), regarded as a morphism of \(\mathbb{X}\). Notice that \(\mathrm{T}^{(A)}\) is functorial in the slice category but not in \(\mathbb{X}\). This characterization of the slice tangent bundle functor, also known as the vertical tangent bundle functor, is due to Cockett and Cruttwell in their article on differential bundles and tangent fibrations. The equivalent original characterization of \(\mathrm{T}^{(A)}\) is due to Rosicky. For our purposes the Rosicky version is more useful, therefore we recall briefly here this construction. First, notice that a tangent category \((\mathbb{X},\mathbb{T})\) is sliceable over \(A\in\mathbb{X}\) if and only if for any morphism \(f\colon E\to A\), the equalizer: is well-defined and is a \(\mathrm{T}\)-equalizer, which means that for every positive integer \(m\) the functor \(\mathrm{T}^{m}\) preserves its universality. In the following, we denote by \(v_{f}\) the equalizer map \(v_{f}\colon\mathrm{T}^{(A)}E\to\mathrm{T}E\). We can then give the following characterization: **tangent bundle** The tangent bundle functor \(\mathrm{T}^{(A)}\colon\mathbb{X}/A\to\mathbb{X}/A\) is defined as follows: **functor** \[\mathrm{T}^{(A)}(f\colon E\to A)\colon\mathrm{T}^{(A)}E\xrightarrow{v_{f}} \mathrm{T}E\xrightarrow{p}E\xrightarrow{f}A\] for any \(f\in\mathbb{X}/A\). Moreover, given a morphism \(g\colon(f\colon E\to A)\to(f^{\prime}\colon E^{\prime}\to A)\), i.e. \(g\colon E\to E^{\prime}\) such that \(gf^{\prime}=f\), we can define: In particular, \(\mathrm{T}^{(A)}\) is functorial. 
**projection**: The projection \(p^{(A)}\colon\mathrm{T}^{(A)}f\to f\) is defined using the universality of \(v_{f}\); **zero morphism**: The zero morphism \(z^{(A)}\colon f\to\mathrm{T}^{(A)}f\) is defined using the universality of \(v_{f}\); **sum morphism**: The sum morphism \(s^{(A)}\colon\mathrm{T}_{2}^{(A)}f\to\mathrm{T}^{(A)}f\) is defined using the universality of \(v_{f}\); **vertical lift**: The vertical lift \(l^{(A)}\colon\mathrm{T}^{(A)}f\to(\mathrm{T}^{(A)})^{2}f\) is defined using the universality of \(\mathrm{T}v_{f}\) and \(v_{\mathrm{T}^{(A)}f}\); **canonical flip**: The canonical flip \(c^{(A)}\colon(\mathrm{T}^{(A)})^{2}f\to(\mathrm{T}^{(A)})^{2}f\) is defined using the universality of \(\mathrm{T}v_{f}\) and \(v_{\mathrm{T}^{(A)}f}\). If \((\mathbb{X},\mathbb{T})\) has negatives with negation \(n\), then we can also lift the negation morphism to the slice tangent category as follows: **negation**: The negation morphism \(n^{(A)}\colon\mathrm{T}^{(A)}f\to\mathrm{T}^{(A)}f\) is defined using the universality of \(v_{f}\). We refer to this tangent category as the **slice tangent category** of \((\mathbb{X},\mathbb{T})\) over \(A\) and we denote it by \((\mathbb{X},\mathbb{T})/A\).
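To keep a concrete picture in mind, here is a standard illustration which is not specific to the operadic setting of this paper: in the tangent category of smooth manifolds, assuming \(f\colon E\to A\) is a submersion (so that the pullbacks required by Definition 3.1 exist), the equalizer \(v_{f}\) carves out of \(\mathrm{T}E\) exactly the vectors killed by \(\mathrm{T}f\): \[\mathrm{T}^{(A)}E\;\cong\;\ker\big(\mathrm{T}f\colon\mathrm{T}E\to\mathrm{T}A\big)\;=\;\{v\in\mathrm{T}E\;\mid\;\mathrm{T}f(v)=0\},\] that is, the vertical bundle of vectors tangent to the fibres of \(f\); the projection, zero, sum, lift, flip and negation described above are then the restrictions of the usual ones of \(\mathrm{T}E\).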
**Remark 3.2**.: Given a sliceable tangent category \((\mathbb{X},\mathbb{T})\) it is not hard to see that the operation \(A\mapsto(\mathbb{X},\mathbb{T})/A\) extends to a pseudofunctor \(\mathsf{Slice}^{(\mathbb{T})}\colon\mathbb{X}^{\mathsf{op}}\to\mathsf{TrgCat}_{ \mathbb{n}}\). Cockett and Cruttwell in [4, Theorem 5.3] proved that the fibres of a tangent fibration (cf. [4, Definition 5.2]) are tangent categories and that the substitution functors are strong tangent morphisms. This result extends to a correspondence between tangent fibrations and pseudofunctors like \(\mathbb{X}^{\mathsf{op}}\to\mathsf{TrgCat}_{\mathbb{n}}\). Interestingly, [4, Proposition 5.7] shows that \(\mathsf{Slice}^{(\mathbb{T})}\) is the pseudofunctor associated to a suitable tangent fibration. ### The universal property of slicing The goal of this subsection is to prove that the operation which takes a pair \((\mathbb{X},\mathbb{T};A)\) formed by a tangent category (sliceable over \(A\)) and an object \(A\in\mathbb{X}\) to its associated slice tangent category extends to a right adjoint of the functor that sends each tangent category \((\mathbb{X},\mathbb{T})\) with terminal object \(*\) to the pair \(((\mathbb{X},\mathbb{T}),*)\). Let's start by introducing some useful jargon. **Definition 3.3**.: _A **tangent pair** is a pair formed by a tangent category \((\mathbb{X},\mathbb{T})\) sliceable over an object \(A\) and the object \(A\) itself. We denote a tangent pair by \((\mathbb{X},\mathbb{T};A)\). Moreover, given two tangent pairs \((\mathbb{X},\mathbb{T};A)\) and \((\mathbb{X}^{\prime},\mathbb{T}^{\prime};B)\), a **morphism of tangent pairs**\((F,\alpha;\varphi)\colon(\mathbb{X},\mathbb{T};A)\to(\mathbb{X}^{\prime},\mathbb{T}^{\prime};B)\) is a lax tangent morphism \((F,\alpha)\colon(\mathbb{X},\mathbb{T})\to(\mathbb{X}^{\prime},\mathbb{T}^{ \prime})\) together with a morphism \(\varphi\colon FA\to B\) of \(\mathbb{X}^{\prime}\)._ Tangent pairs together with their morphisms form a category denoted by \(\mathsf{TrgPair}\). In particular, notice that the composition of two morphisms of tangent pairs \((F,\alpha;\varphi)\colon(\mathbb{X},\mathbb{T};A)\to(\mathbb{X}^{\prime}, \mathbb{T}^{\prime};B)\) and \((G,\beta;\psi)\colon(\mathbb{X}^{\prime},\mathbb{T}^{\prime};B)\to(\mathbb{X}^ {\prime\prime},\mathbb{T}^{\prime\prime};C)\) is the triple formed by \(G\circ F\colon\mathbb{X}\to\mathbb{X}^{\prime\prime}\), the associated lax distributive law \(G\circ F\circ\mathbb{T}\xrightarrow{G\alpha}G\circ\mathbb{T}^{\prime}\circ F \xrightarrow{\beta_{F}}\mathbb{T}^{\prime\prime}\circ G\circ F\), and the morphism \(G(F(A))\xrightarrow{G\varphi}GB\xrightarrow{\psi}\mathbb{C}\) of \(\mathbb{X}^{\prime\prime}\). **Remark 3.4**.: Consider the pseudofunctor \(\mathcal{S}\colon\mathsf{TngCat}\to\mathsf{Cat}\) which sends each tangent category \((\mathbb{X},\mathbb{T})\) to the category of objects \(A\) of \(\mathbb{X}\) such that \((\mathbb{X},\mathbb{T})\) is sliceable over \(A\). Via the Grothendieck construction, this produces a cofibration \(\int^{\mathsf{TngCat}}\mathcal{S}\to\mathsf{TngCat}\). The category of elements of this cofibration coincides with the category \(\mathsf{TngPair}\) of tangent pairs. **Proposition 3.5**.: _Consider two tangent pairs \((\mathbb{X},\mathbb{T};A)\) and \((\mathbb{X}^{\prime},\mathbb{T}^{\prime};B)\) and a morphism of tangent pairs \((F,\alpha;\varphi)\colon(\mathbb{X},\mathbb{T};A)\to(\mathbb{X}^{\prime}, \mathbb{T}^{\prime};B)\). Let \(f\colon E\to A\) a morphism in \(\mathbb{X}\). 
Finally, consider the morphism \(\theta_{f}\colon F\mathrm{T}^{(A)}E\to\mathrm{T}^{\prime(B)}FE\), as the unique morphism which makes commuting the following diagram:_ _Therefore, the functor:_ \[F\colon\mathbb{X}/A\to\mathbb{X}^{\prime}/B\] \[F(f\colon E\to A)\mapsto(FE\xrightarrow{Ff}FA\xrightarrow{ \varphi}B)\] \[F(g\colon(f\colon E\to A)\to(f^{\prime}\colon E^{\prime}\to A))\mapsto (Fg\colon(Ff\varphi)\to(Ff^{\prime}\varphi))\] _extends to a lax tangent morphism:_ \[(F,\alpha)/\varphi\colon(\mathbb{X},\mathbb{T})/A\to(\mathbb{X}^{\prime}, \mathbb{T}^{\prime})/B\] _whose distributive law is defined by the natural transformation \(\theta_{f}\colon F\mathrm{T}^{(A)}f\to\mathrm{T}^{\prime(B)}Ff\)._ Proof.: For starters, let's prove the compatibility between \(\theta\) and the projections: which corresponds to the diagram: Let's take into consideration the compatibility diagram between \(\theta\) and the zero morphisms: To show that, first, consider the diagram: Thus \(Fz^{(A)}\theta v_{F}=z^{(B)}_{F}v_{F}\) and from the universality of \(v_{F}\) we conclude that \(Fz^{(A)}\theta=z^{(B)}_{F}\), as expected. The next step is to prove the compatibility with the sum morphism: Thus, consider the following diagram: Thus, \(Fs^{(A)}\theta\upsilon_{F}=(\theta\times\theta)s^{(B)}_{F}v_{F}\) and from the universality of \(v_{F}\) we conclude that \(Fs^{(A)}\theta=(\theta\times\theta)s^{(B)}\), as expected. Let's prove the compatibility with the lift: As before, consider the following diagram: Therefore, \(\theta l^{(B)}_{F}\Upsilon^{\prime(B)}\upsilon_{\Upsilon^{\prime}F}=Fl^{(A)} \theta_{\Upsilon^{(A)}}\Upsilon^{\prime(B)}\theta\Upsilon^{\prime(B)} \upsilon_{\Upsilon^{\prime}F}\). By the universality of \(\Upsilon^{\prime(B)}\upsilon_{\Upsilon^{\prime}F}\) we conclude that \(\theta l^{(B)}_{F}=Fl^{(A)}\theta_{\Upsilon^{(A)}}\Upsilon^{\prime(B)}\theta\), as expected. Finally, let's prove the compatibility with the canonical flip: Thus: This proves that \(\theta_{\mathrm{T}^{(A)}}\mathrm{T}^{\prime(B)}\theta c_{F}^{(B)}\mathrm{T}^{ \prime(B)}v\mathrm{T}v_{\mathrm{T}^{\prime}F}=Fc^{(A)}\theta_{\mathrm{T}^{(A)}} \mathrm{T}^{\prime(B)}\theta\mathrm{T}^{\prime(B)}v\mathrm{T}v_{\mathrm{T}^{ \prime}F}\). Finally, using the universality of \(\mathrm{T}^{\prime(B)}v\mathrm{T}v_{\mathrm{T}^{\prime}F}\) we conclude that \(\theta_{\mathrm{T}^{(A)}}\mathrm{T}^{\prime(B)}\theta c_{F}^{(B)}=Fc^{(A)} \theta_{\mathrm{T}^{(A)}}\mathrm{T}^{\prime(B)}\theta\), as expected. Proposition 3.5 allows us to lift morphisms of tangent pairs to the corresponding slice tangent categories. The next step is to find sufficient conditions so that the corresponding tangent morphism over the slice categories is strong. This will play a key role in the next section. Let's introduce a definition. **Definition 3.6**.: _Given two tangent pairs \((\mathbb{X},\mathbb{T};A)\) and \((\mathbb{X}^{\prime},\mathbb{T}^{\prime};B)\), a morphism of tangent pairs \((F,\alpha;\varphi)\colon(\mathbb{X},\mathbb{T};A)\to(\mathbb{X}^{\prime}, \mathbb{T}^{\prime};B)\) is **Cartesian** if the following diagrams:_ _are pullback diagrams, and moreover the functor \(F\) preserves the pullbacks of Equation (3.1). 
Concretely, this last condition means that for every morphism \(f\colon E\to A\) of \(\mathbb{X}\), the diagram:_ _must be a pullback diagram._ **Lemma 3.7**.: _A Cartesian morphism of tangent pairs \((F,\alpha;\varphi)\colon(\mathbb{X},\mathbb{T};A)\to(\mathbb{X}^{\prime}, \mathbb{T}^{\prime};B)\) lifts to the slice tangent categories as a strong tangent morphism. Concretely, this means that the natural transformation \(\theta_{f}\colon\mathrm{FT}^{(A)}f\to\mathrm{T}^{\prime(B)}Ff\) is invertible._ Proof.: Consider the following diagram: where we used that \(Fz\alpha=z_{F}\). Thanks to the Cartesianity of \((F,\alpha;\varphi)\) this is a pullback diagram, since it is formed by pullback diagrams. However, by definition, \(\theta\) is defined by the diagram: However, the top and the right rectangular sides of this triangular diagram are pullbacks, so \(\theta\) must be an isomorphism. The next is a key concept for our discussion. **Definition 3.8**.: _A tangent category **with terminal object** is a tangent category \((\mathbb{X},\mathbb{T})\) equipped with a terminal object \(\star\) so that the unique morphism \(\mathrm{T}\ast\rightarrow\ast\) is an isomorphism. We also denote by \(\mathsf{trm}\mathsf{TngCat}\) the \(2\)-category of tangent categories with terminal objects, lax tangent morphisms and natural transformations compatible with the lax distributive laws._ In the following, we denote by \(\ast\) the (unique up to unique isomorphism) terminal object of a category. Moreover, for any object \(A\), the unique morphism from \(A\) to \(\ast\) is denoted by \(!\colon A\rightarrow\ast\). It is straightforward to see that a tangent category \((\mathbb{X},\mathbb{T})\) with terminal object is sliceable over \(\ast\) and that the slice tangent category \((\mathbb{X},\mathbb{T})/\ast\) is isomorphic to \((\mathbb{X},\mathbb{T})\) via \((!\colon A\rightarrow\ast)\mapsto A\). This observation allows us to define the following pseudofunctor: \[\mathsf{Term}\colon\mathsf{trm}\mathsf{TngCat}\rightarrow\mathsf{ TngPair}\] \[\mathsf{Term}(\mathbb{X},\mathbb{T})\colon=(\mathbb{X},\mathbb{ T};\ast)\] \[\mathsf{Term}((F,\alpha)\colon(\mathbb{X},\mathbb{T})\rightarrow( \mathbb{X}^{\prime},\mathbb{T}^{\prime}))\colon=(F,\alpha;F\ast\stackrel{{!}}{{\rightarrow}}\ast)\colon(\mathbb{X},\mathbb{T};\ast) \rightarrow(\mathbb{X}^{\prime},\mathbb{T}^{\prime};\ast)\] Thanks to Proposition 3.5, the operation which takes a tangent pair \((\mathbb{X},\mathbb{T};A)\) to its slice tangent category extends to a functor. Observe that the slice tangent category of a tangent pair \((\mathbb{X},\mathbb{T};A)\) is equipped with a terminal object, the terminal object being the identity over \(A\). With this in mind, we are able to define the following pseudofunctor: \[\mathsf{Slice}\colon\mathsf{TngPair}\to\mathsf{trmTngCat}\] \[\mathsf{Slice}(\mathbb{X},\mathbb{T};A)\mapsto(\mathbb{X}, \mathbb{T})/A\] \[\mathsf{Slice}((F,\alpha;\varphi)\colon(\mathbb{X},\mathbb{T};A) \to(\mathbb{X}^{\prime},\mathbb{T}^{\prime};B))\colon=(F,\alpha)/\varphi\colon( \mathbb{X},\mathbb{T})/A\to(\mathbb{X}^{\prime},\mathbb{T}^{\prime})/B\] **Remark 3.9**.: Term and Slice are not strict functors but rather pseudofunctors. This comes from the fact that terminal objects and slice tangent structures are defined only up to unique isomorphisms. Thus, the associators and unitors are defined by these unique isomorphisms. 
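Before stating the adjunction, it may help to record how the geometric tangent categories of Section 2 fit Definition 3.8; the following quick check is our reformulation of facts already recalled rather than a claim made at this point in the text. The terminal object of \(\mathsf{Geom}(\mathcal{P})=\mathsf{Alg}_{\mathcal{P}}^{\mathsf{op}}\) is the initial \(\mathcal{P}\)-algebra \(\mathcal{P}(0)\), and since \(\mathrm{T}^{(\mathcal{P})}\) is left adjoint to \(\mathrm{L}^{(\mathcal{P})}\) on \(\mathcal{P}\)-algebras, \[\mathsf{Alg}_{\mathcal{P}}\big(\mathrm{T}^{(\mathcal{P})}\mathcal{P}(0),\,B\big)\;\cong\;\mathsf{Alg}_{\mathcal{P}}\big(\mathcal{P}(0),\,\mathrm{L}^{(\mathcal{P})}B\big)\;\cong\;\{\ast\}\qquad\text{for every }\mathcal{P}\text{-algebra }B,\] so \(\mathrm{T}^{(\mathcal{P})}\mathcal{P}(0)\) is again initial and the unique morphism \(\mathrm{T}\ast\to\ast\) in \(\mathsf{Geom}(\mathcal{P})\) is invertible. Hence \((\mathsf{Geom}(\mathcal{P});\mathcal{P}(0))\) is a tangent category with terminal object, a fact used implicitly in Lemma 4.2 below.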
We can finally characterize the operation which takes a tangent pair to its slice tangent category as an adjunction between pseudofunctors. **Theorem 3.10**.: _The pseudofunctors \(\mathsf{Slice}\colon\mathsf{TngPair}\leftrightarrows\mathsf{trmTngCat}\colon \mathsf{Term}\) form an adjunction whose left adjoint is \(\mathsf{Term}\), the right adjoint is \(\mathsf{Slice}\), the unit \((U,\eta)\colon(\mathbb{X},\mathbb{T})\to\mathsf{Slice}(\mathsf{Term}(\mathbb{X}, \mathbb{T}))=(\mathbb{X},\mathbb{T})/\star\), as a lax tangent morphism between tangent categories with terminal objects, is the isomorphism:_ \[U\colon\mathbb{X}\to\mathbb{X}/\star\] \[U(A)\mapsto(!\colon A\to\star)\] \[U(f\colon A\to B)\mapsto(f(!\colon A\to\star)\to(!\colon B\to \star))\] \[\eta\colon(U(\mathrm{TA}))=(!\colon\mathrm{TA}\to\star)\stackrel{{ \mathrm{id}_{\mathrm{TA}}}}{{\longrightarrow}}(!\colon\mathrm{TA}\to\star)= \mathrm{T}(U(A))\] _and the counit \((C,\varepsilon;\varphi)\colon\mathsf{Term}(\mathsf{Slice}(\mathbb{X}, \mathbb{T};A))=((\mathbb{X},\mathbb{T})/A,\mathrm{id}_{A})\to(\mathbb{X}, \mathbb{T};A)\) is the morphism of tangent pairs:_ \[C\colon(\mathbb{X},\mathbb{T})/A\mapsto(\mathbb{X},\mathbb{T})\] \[C(f\colon E\to A)\mapsto E\] \[C(g\colon(f\colon E\to A)\to(f^{\prime}\colon E^{\prime}\to A)) \mapsto(g\colon E\to E^{\prime})\] \[\varepsilon\colon C(\mathrm{T}^{(A)}(f\colon E\to A))=\mathrm{T}^ {(A)}E\stackrel{{\upsilon}}{{\longrightarrow}}\mathrm{TE}=\mathrm{T} (C(f\colon E\to A))\] \[\varphi\colon C(\mathrm{id}_{A}\colon A\to A)=A\stackrel{{ \mathrm{id}_{A}}}{{\longrightarrow}}A\] Proof.: To prove the result we need to show that the unit \((U,\eta)\) and the counit \((C,\varepsilon;\varphi)\) fulfill the triangle identities. Let's start by considering the following diagram: for a tangent category \((\mathbb{X},\mathbb{T})\) with terminal object. However, it is straightforward to realize that the underlying tangent morphisms \((C,\varepsilon)\) and \((U,\eta)\) of \((C,\varepsilon;\varphi)_{\mathsf{Term}}\) and \(\mathsf{Term}(U,\eta)\) define the equivalence between \((\mathbb{X},\mathbb{T})\) and \((\mathbb{X},\mathbb{T})/\star\) and that, by the universality of the terminal object, that the composition of the comparison morphisms \(\varphi=\mathrm{id}\), and \(!\colon U\star\to\star\) is the identity over the terminal object. Similarly, by considering the diagram: for a tangent pair \((\mathbb{X},\mathbb{T};A)\), it is straightforward to show the underlying tangent morphisms of \(\mathsf{Slice}(C,\varepsilon;\varphi)\) and \((U,\eta)_{\mathsf{Slice}}\) define the equivalence between \((\mathbb{X},\mathbb{T})/A\) and \(((\mathbb{X},\mathbb{T})/A)/\mathrm{id}_{A}\) and that the composition of the comparison morphisms gives the identity. Finally, notice that the unit is always an isomorphism. ## 4 The slice tangent categories of the affine schemes over an operad The previous section was dedicated to characterizing the slicing of tangent categories via the adjunction between two pseudofunctors. A similar phenomenon happens in the realm of operads: given an operad \(\mathcal{P}\) and a \(\mathcal{P}\)-algebra \(A\) the enveloping operad \(\mathcal{P}^{(A)}\) of \(\mathcal{P}\) over \(A\) is the operad whose category of algebras is equivalent to the coslice category of \(\mathsf{Alg}_{\mathcal{P}}\) under \(A\). 
The goal of this section is to prove that these two phenomena are two faces of the same coin: the geometric tangent category of the enveloping operad of \(\mathcal{P}\) over \(A\) is equivalent to the slice tangent category of the geometric tangent category of \(\mathcal{P}\) over \(A\). Let's start by recalling the definition of the enveloping operad of a pair \((\mathcal{P};A)\). We advise the interested reader to consult [2], [14], or [11]. For this purpose, recall that since the category of algebras of an operad \(\mathcal{P}\) is cocomplete, each operad has an initial algebra, which corresponds to the \(R\)-module \(\mathcal{P}(0)\) together with structure map \(\mathcal{P}(m)\otimes\mathcal{P}(0)^{\otimes m}\to\mathcal{P}(0)\) defined by the operadic composition. This allows us to introduce an operation \(\mathcal{P}\mapsto(\mathcal{P};\mathcal{P}(0))\) between operads and operadic pairs. Notice that for a **operadic pair** we mean a pair \((\mathcal{P};A)\) formed by an operad \(\mathcal{P}\) and a \(\mathcal{P}\)-algebra \(A\). Moreover, given two operadic pairs \((\mathcal{P};A)\) and \((\mathcal{G};B)\) a **morphism of operadic pairs**\((f;\varphi)\colon(\mathcal{P};A)\to(\mathcal{G};B)\) is a morphism of operads \(f\colon\mathcal{P}\to\mathcal{G}\) together with a morphism of \(\mathcal{P}\)-algebras \(\varphi\colon A\to f^{*}B\), \(f^{*}\colon\mathsf{Alg}_{\mathcal{G}}\to\mathsf{Alg}_{\mathcal{P}}\) being the pullback functor induced by \(f\). Operadic pairs together with their morphisms form a category that we denote by \(\mathsf{OprPair}\). So, we have: \[\mathsf{Init}\colon\mathsf{Operad}\to\mathsf{OprPair}\] \[\mathsf{Init}(\mathcal{P})\colon=(\mathcal{P};\mathcal{P}(0))\] \[\mathsf{Init}(f\colon\mathcal{P}\mapsto\mathcal{G})\colon=(f;!\colon\mathcal{P}(0)\to f^{*}\mathcal{G}(0))\] ! being the unique morphism of \(\mathcal{P}\)-algebras induced by the universality of the initial algebra \(\mathcal{P}(0)\). Concretely,! sends an element \(u\in\mathcal{P}(0)\) to \(f_{0}(u)\). **Remark 4.1**.: Similarly as for tangent pairs (see Remark 3.4), also operadic pairs can be regarded as objects in the category of elements of a fibration. Consider the pseudofunctor \(\mathfrak{I}\colon\mathsf{Operad}^{\mathsf{op}}\to\mathsf{Cat}\) which sends each operad to the corresponding category of algebras. Via the Grothendieck construction, this is equivalent to a fibration \(\int_{\mathsf{Operad}}\mathfrak{I}\to\mathsf{Operad}\) and the category of elements \(\int_{\mathsf{Operad}}\mathfrak{I}\) is equivalent to the category \(\mathsf{OprPair}\) of operadic pairs. Init admits a left adjoint \(\mathsf{Env}\) (cf. [2]), which sends an operadic pair \((\mathcal{P};A)\) to the corresponding enveloping operad \(\mathsf{Env}(\mathcal{P};A)\colon=\mathcal{P}^{(A)}\). Following the description provided by [11, Section 4.1.3], \(\mathcal{P}^{(A)}\) is generated by the symbols \((\mu;a_{1},\ldots,a_{k}]\), for every \(\mu\in\mathcal{P}(m+k)\), \(a_{1},\ldots,a_{k}\in A\) and every non-negative integer \(k\) (when \(k=0\), (\(\mu\) are the only terms) which fulfill the following relations: \[(\mu;a_{1},\ldots,\nu(a_{i},\ldots,a_{i+n}),\ldots,a_{k+n}]=(\mu\circ_{i}\nu;a_ {1},\ldots,a_{k+n}] \tag{4.1}\] for \(\mu\in\mathcal{P}(m+k)\), \(v\in\mathcal{P}(n)\) and \(a_{1},\ldots,a_{k+n}\in A\), where we used the notation \(\mu\circ_{i}v\) for \(\mu(1_{\mathcal{P}},\ldots,v,\ldots,1_{\mathcal{P}})\). In particular, it is not hard to see that \(\mathcal{P}^{(A)}(0)\cong A\). 
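As a concrete illustration of the construction (a standard example, with notation of our choosing, not worked out in the text): take \(\mathcal{P}=\mathcal{C}om\), the operad of commutative algebras, and a commutative algebra \(A\). Relation (4.1) allows one to multiply the parameters \(a_{1},\ldots,a_{k}\) together inside \(A\), so every arity-\(m\) generator is identified with a single element of \(A\) acting by multiplication: \[\mathcal{C}om^{(A)}(m)\;\cong\;A,\qquad(\mu;a_{1},\ldots,a_{k}|(b_{1},\ldots,b_{m})\;=\;a_{1}\cdots a_{k}\,b_{1}\cdots b_{m},\] consistently with \(\mathcal{C}om^{(A)}(0)\cong A\). Accordingly, \(\mathcal{C}om^{(A)}\)-algebras are precisely commutative \(A\)-algebras, in agreement with the coslice description recalled below.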
So, the functor \(\mathsf{Env}\) sends a morphism of operadic pairs \((f,\varphi)\colon(\mathcal{P};A)\to(\mathcal{G};B)\) to the morphism of operads \(\mathsf{Env}(f;\varphi)\colon\mathcal{P}^{(A)}\to\mathcal{G}^{(B)}\) defined on generators as follows: \[(\mu;a_{1},\ldots,a_{k}|\mapsto(f(\mu);\varphi(a_{1}),\ldots,\varphi(a_{k})|\] From this description of the enveloping operad, it is not hard to see that an algebra \(B\) of the enveloping operad \(\mathcal{P}^{(A)}\) is precisely given by a \(\mathcal{P}\)-algebra \(C^{*}B\), \(C\colon\mathcal{P}\to\mathcal{P}^{(A)}\) being the canonical inclusion \(\mu\mapsto(\mu|\), together with a morphism of \(\mathcal{P}\)-algebras \(A\to C^{*}B\) induced by the structure map \(A=\mathcal{P}^{(A)}(0)\to C^{*}B\) of \(B\). Conversely, every morphism of \(\mathcal{P}\)-algebras \(f\colon A\to B\) induces over \(B\) a \(\mathcal{P}^{(A)}\)-algebra structure defined as follows: \[(\mu;a_{1},\ldots,a_{k}|(b_{1},\ldots,b_{m})\colon=\mu_{B}(f(a_{1}),\ldots,f(a _{k}),b_{1},\ldots,b_{m})\] for \(\mu\in\mathcal{P}(m+k)\), \(a_{1},\ldots,a_{k}\in A\) and \(b_{1},\ldots,b_{m}\in B\). This proves that the category of \(\mathcal{P}^{(A)}\)-algebras is equivalent to the coslice category of \(\mathcal{P}\)-algebras over \(A\) (cf. [2, Lemma 1.7]). ### The geometric tangent category of the enveloping operad Theorem 3.10 establishes that \(\mathsf{Term}\) and \(\mathsf{Slice}\) form an adjunction and from our discussion on the enveloping operad we also know that also \(\mathsf{Env}\) and \(\mathsf{Init}\) form an adjunction. We would like to compare \(\mathsf{Term}\) with \(\mathsf{Init}\) and \(\mathsf{Slice}\) with \(\mathsf{Env}\). However, \(\mathsf{Term}\) is a left adjoint, while \(\mathsf{Init}\) is a right adjoint and similarly, \(\mathsf{Slice}\) is a right adjoint and \(\mathsf{Env}\) is a left adjoint. To solve this issue, we transpose the adjunction \(\mathsf{Env}\dashv\mathsf{Init}\) to the opposite categories. To compare these functors, notice that \(\mathsf{Geom}^{*}\) extends to operadic pairs as follows: \[\mathsf{Geom}^{*}\colon\mathsf{OprPair}^{\mathsf{op}}\to\mathsf{ TngPair}\] \[\mathsf{Geom}^{*}(\mathcal{P};A)\colon=(\mathsf{Geom}(\mathcal{P });A)\] \[\mathsf{Geom}^{*}((f,\varphi)\colon(\mathcal{P};A)\to(\mathcal{G}; B))\colon=(\mathsf{Geom}^{*}(f)=(f^{*},\alpha^{*});\varphi^{\mathsf{op}}\colon A \leftarrow\varphi^{*}B)\colon(\mathsf{Geom}(\mathcal{P});A)\to(\mathsf{Geom}( \mathcal{G});B)\] Note that, since \(\mathsf{Alg}_{\mathcal{P}}\) is cocomplete, \(\mathsf{Geom}(\mathcal{P})\) is sliceable. **Lemma 4.2**.: _The following diagram:_ _commutes._ Proof.: It is straightforward to see that, for an operad \(\mathcal{P}\): \[\mathsf{Geom}^{*}(\mathsf{Init}((\mathcal{P}))=(\mathsf{Geom}(\mathcal{P}); \mathcal{P}(0))=\mathsf{Term}(\mathsf{Geom}(\mathcal{P}))=\mathsf{Term}( \mathsf{Geom}^{*}(\mathcal{P}))\] and for a morphism of operads \(f\colon\mathcal{P}\to\mathcal{G}\): \[\mathsf{Geom}^{*}(\mathsf{Init}(f))=\mathsf{Geom}^{*}(f;1\colon f ^{*}\mathcal{G}(0)\leftarrow\mathcal{P}(0))=\] \[=(f^{*},\alpha^{*};1\colon\mathcal{P}(0)\to f^{*}\mathcal{G}(0))= \mathsf{Term}(f^{*},\alpha^{*})=\mathsf{Term}(\mathsf{Geom}^{*}(f))\] So, the diagram commutes. Thanks to Lemma 4.2 we can now also compare the functors \(\mathsf{Env}\) and \(\mathsf{Slice}\). Crucially, to do that we are going to use that \(\mathsf{Init}+\mathsf{Env}\) (on the opposite categories) and that \(\mathsf{Term}+\mathsf{Slice}\) form adjunctions. 
In general, given a square diagram as follows: with \((\eta,\varepsilon)\colon F+U\) and \((\eta^{\prime},\varepsilon^{\prime})\colon F^{\prime}+U^{\prime}\) forming adjunctions, then if the diagram: commutes, then, by using mates, we can define the following natural transformation: \[G\circ U^{\prime}\xrightarrow{\eta\circ\mu^{\prime}}U\circ F\circ G\circ U^{ \prime}=U\circ H\circ F^{\prime}\circ U^{\prime}\xrightarrow{UH\varepsilon^ {\prime}}U\circ H\] A priori, there is no reason to conclude that such a natural transformation is a natural isomorphism. In order to prove that the natural transformation induced by the adjunctions \(\mathsf{Init}+\mathsf{Env}\), \(\mathsf{Term}+\mathsf{Slice}\), and by Lemma 4.2 is an isomorphism, we need to show that the counit of \(\mathsf{Init}+\mathsf{Env}\) induces a Cartesian morphism of tangent pairs over the geometric tangent pairs. **Lemma 4.3**.: _The counit, regarded as a morphism of \(\mathsf{OpPair}\), \((C,\varepsilon)\colon(\mathcal{P};A)\to\mathsf{Init}(\mathsf{Env}(\mathcal{P}; A))=(\mathcal{P}^{A};\mathcal{P}^{(A)}(0))\) of the adjunction \(\mathsf{Init}+\mathsf{Env}\) induces a Cartesian morphism of tangent pairs:_ \[\mathsf{Geom}^{*}(C,\varepsilon)\colon\mathsf{Geom}^{*}(\mathcal{P}^{(A)}; \mathcal{P}^{(A)}(0))\to\mathsf{Geom}^{*}(\mathcal{P};A)\] Proof.: Let's start by recalling the definition of the counit. \(C\colon\mathcal{P}\to\mathcal{P}^{(A)}\) is the morphism of operads which includes \(\mathcal{P}\) into \(\mathcal{P}^{(A)}\) by mapping \(\mu\in\mathcal{P}(m)\) into \((\mu|\in\mathcal{P}^{(A)}(m)\). Moreover, \(\varepsilon\colon A\to C^{*}\mathcal{P}^{(A)}(0)\) is the isomorphism \(A\ni a\mapsto(1_{\mathcal{P}};a|\in C^{*}\mathcal{P}^{(A)}(0)\), where \(1_{\mathcal{P}}\in\mathcal{P}(1)\) is the unit of \(\mathcal{P}\). To see that this is an isomorphism, notice that the generators of \(\mathcal{P}^{(A)}(0)\) are all the symbols \((\mu;a_{1},\ldots,a_{m}|\) for every \(\mu\in\mathcal{P}(m)\) and \(a_{1},\ldots,a_{m}\in A\), but thanks to the relations (4.1) we also have: \[(\mu;a_{1},\ldots,a_{m}|=(1_{\mathcal{P}}(\mu);a_{1},\ldots,a_{m}|=(1_{ \mathcal{P}};\mu(a_{1},\ldots,a_{m})|\] So, with the identification \(a=(1_{\mathcal{P}};a|\) we have that \(\mathcal{P}^{(A)}(0)\) is equal to \(A\). Notice also that, given a \(\mathcal{P}^{(A)}\)-algebra \(B\), \(C^{*}B\) is the \(\mathcal{P}\)-algebra over \(B\) with structure map defined by: \[\mu(b_{1},\ldots,b_{m})\colon=(\mu|_{B}(b_{1},\ldots,b_{m})\] To distinguish between the different tangent structures, for this proof we adopt the following convention: we denote by \(\mathbb{T}\) the geometric tangent structure of \(\mathcal{P}\), by \(\mathbb{T}^{(B)}\) the slice tangent structure over \(B\), and by \(\mathbb{T}_{A}\) the geometric tangent structure of \(\mathcal{P}^{(A)}\). The Cartesianity of \(\mathsf{Geom}^{*}(C,\varepsilon)\) means that for a morphism \(f\colon B\to E\) of \(\mathscr{P}^{(A)}\)-algebras the diagrams in the category of \(\mathscr{P}\)-algebras: are all pushout diagrams, where \(f_{*}\) is the morphism defined by the pushout diagram in \(\mathsf{Alg}_{\mathscr{P}^{(A)}}\): Notice that the third diagram is trivially a pushout since \(\varepsilon\) is an isomorphism. Let's focus on the first diagram and let's consider two morphisms \(g\colon\mathbb{C}^{*}\mathrm{TE}\to K\) and \(h\colon\mathbb{C}^{*}A\to K\) of \(\mathscr{P}\)-algebras satisfying the commutativity of the diagram: So, we have that: \[g(b)=h(f(b))\] \[h(df(b))=0\] for every \(b\in B\). 
Recall that \(\mathscr{P}^{(A)}\)-algebras are equivalent to \(\mathscr{P}\)-algebra morphisms with \(A\) for domain. Since \(B\) is a \(\mathscr{P}^{(A)}\)-algebra, we obtain a \(\mathscr{P}\)-algebra morphism \(\beta\colon A\to\mathbb{C}^{*}B\). So, by post-composing by \(g\) we get a new \(\mathscr{P}\)-algebra morphism \(A\xrightarrow{\beta}C^{*}B\xrightarrow{\delta}K\), thus, we get a \(\mathscr{P}^{(A)}\)-algebra structure over \(K\). Concretely, the structure map of this \(\mathscr{P}^{(A)}\)-algebra \(\overline{K}\) is defined by: \[(\mu;a_{1},\dots,a_{k}|_{\overline{K}}(x_{1},\dots,x_{m})\colon=\mu_{K}(g( \beta(a_{1}),\dots,g(\beta(a_{k})),x_{1},\dots,x_{m})\] Moreover, we can also lift \(g\) to a morphism of \(\mathscr{P}^{(A)}\)-algebras \(\overline{\overline{g}}\colon B\to\overline{K}\), defined simply by \(b\mapsto g(b)\). Let's now do the same for \(h\): define a morphism of \(\mathscr{P}^{(A)}\)-algebras \(\overline{h}\colon\mathrm{T}_{A}E\to\overline{K}\). To do so, note also that \(\mathrm{T}_{A}E\) is a \(\mathscr{P}^{(A)}\)-algebra, which corresponds to a morphism of \(\mathscr{P}\)-algebras \(\gamma\colon A\to\mathbb{C}^{*}\mathrm{T}_{A}E\) but because \(f\) is a morphism of \(\mathscr{P}^{(A)}\)-algebras, we have that for any \(a\in A\): \[\gamma(a)=p_{A}(f(\beta(a)))=f(\beta(a))\] where we used that the projection \(E\to\mathrm{T}_{A}E\) sends each element \(x\in E\) to itself, \(\mathrm{T}_{A}E\) being generated by all \(x\) and all \(\mathrm{d}x\). This implies that we can define \(\overline{h}\) as the morphism which sends each \(y\in\mathrm{T}_{A}E\) to \(h(y)\). To see that this is a morphism of \(\mathcal{P}^{(A)}\)-algebras note that: \[(\mu;a_{1},\ldots,a_{k}\frac{1}{|K|}(\overline{h}(y_{1}),\ldots, \overline{h}(y_{m}))\] \[= \mu_{K}(g(\beta(a_{1})),\ldots,g(\beta(a_{k})),h(y_{1}),\ldots,h( y_{m}))\] \[= \mu_{K}(h(f(\beta(a_{1}))),\ldots,h(f(\beta(a_{k}))),h(y_{1}), \ldots,h(y_{m}))\] \[= h(\mu_{C^{*}\mathrm{T}_{A}E}(f(\beta(a_{1})),\ldots,f(\beta(a_{ k})),y_{1},\ldots,y_{m}))\] \[= h(\mu_{C^{*}\mathrm{T}_{A}E}(\gamma(a_{1}),\ldots,\gamma(a_{k}),y_{1},\ldots,y_{m}))\] \[= h((\mu;a_{1},\ldots,a_{k}|_{\mathrm{T}_{A}E}(y_{1},\ldots,y_{m}))\] where we used that \(g(b)=h(f(b))\), for any \(b\in B\). Moreover, note that \(C^{*}\overline{K}=K\), \(C^{*}\overline{g}=g\) and that \(C^{*}\overline{h}=h\). Thus, we now have the following commutative diagram in \(\mathrm{Alg}_{\mathcal{P}^{(A)}}\): To see that recall that \(g(b)=h(f(b))\) and that \(h(\mathrm{d}f(b))=0\), which precisely implies the commutativity of this diagram. Therefore, we have a unique morphism \(\overline{[g,h]}\colon(\mathrm{T}_{A})^{(B)}E\to\overline{K}\) of \(\mathcal{P}^{(A)}\)-algebras. So, let's introduce: \[[g,h]\colon=C^{*}(\overline{[g,h]})\colon C^{*}(\mathrm{T}_{A})^{(B)}E\to C^ {*}\overline{K}=K\] However: \[C^{*}f[g,h]=C^{*}fC^{*}(\overline{[g,h]})=C^{*}(f\overline{[g,h] })=C^{*}\overline{g}=g\] \[C^{*}v_{f}[g,h]=C^{*}v_{f}C^{*}(\overline{[g,h]})=C^{*}(v_{f} \overline{[g,h]})=C^{*}\overline{h}=h\] Finally, suppose that \(r\colon C^{*}(\mathrm{T}_{A})^{(B)}E\to K\) is a second morphism of \(\mathcal{P}\)-algebra such that \((C^{*}f)r=g\) and \((C^{*}v_{f})r=h\). However, in a similar fashion we can also lift \(r\) to a morphism of \(\mathcal{P}^{(A)}\)-algebras \(\overline{r}\colon(\mathrm{T}_{A})^{(B)}E\to\overline{K}\) such that \(C^{*}\overline{r}=r\). But this implies that \(\overline{r}=\overline{[g,h]}\) and thus, \(r=C^{*}\overline{r}=C^{*}(\overline{[g,h]})=[g,h]\). 
This proves that the first diagram is a pushout. Finally, let's prove that the diagram which expresses the naturality of \(\alpha^{*}\) is also a pushout. The first step is to lift \(\alpha^{*}\) to a morphism of \(\mathcal{P}^{(A)}\)-algebras \(\overline{\alpha^{*}}\) so that \(C^{*}(\overline{\alpha^{*}})=\alpha^{*}\). Secondly, we are going to show that \(\overline{\alpha^{*}}\) is a coequalizer morphism from direct inspection, and finally, we use that \(C^{*}\) preserves the universality property of \(\overline{\alpha^{*}}\) to conclude our result. Let's start by noticing that, since \(B\) is a \(\mathcal{P}^{(A)}\)-algebra it corresponds to a morphism of \(\mathcal{P}\)-algebras \(\beta\colon A\to C^{*}B\). Moreover, using the projection we obtain a morphism \(A\xrightarrow{\beta}C^{*}B\xrightarrow{p}\mathrm{TC}^{*}B\) of \(\mathcal{P}\)-algebras which defines a new \(\mathcal{P}^{(A)}\)-algebra \(\overline{\mathrm{TC}^{*}B}\). Concretely, this is the \(\mathcal{P}^{(A)}\)-algebra defined over \(\mathrm{TC}^{*}B\) whose structure map is defined by: \[(\mu;a_{1},\ldots,a_{k}|(x_{1},\ldots,x_{m})\colon=\mu_{\mathrm{TC}^{*}B}( \beta(a_{1}),\ldots,\beta(a_{k}),x_{1},\ldots,x_{m})\] Then, it is not hard to see that \(\alpha^{*}\) can be lifted to a morphism of \(\mathcal{P}^{(A)}\)-algebras \(\overline{\alpha^{*}}\colon\overline{\mathrm{TC}^{*}B}\to\mathrm{T}_{A}B\), which sends an element \(y\in\mathrm{TC}^{*}B\) to \(\alpha^{*}(y)\in\mathrm{T}_{A}B\). Recall also that, by construction, \(\alpha^{*}\) sends the generators \(b\) and \(\mathrm{d}b\) of \(\mathrm{TC}^{*}B\) to the corresponding generators \(b\) and \(\mathrm{d}_{A}b\) of \(\mathrm{C}^{*}\mathrm{T}_{A}B\). By direct inspection we see that the \(\mathcal{P}^{(A)}\)-algebra \(\mathrm{T}_{A}B\) is generated by all \(b\in B\) and by symbols \(\mathrm{d}_{A}b\) for \(b\in B\), satisfying the following properties: \[(\mu;a_{1},\ldots,a_{k}|_{\mathrm{T}_{A}B}(b_{1},\ldots,b_{m})=( \mu;a_{1},\ldots,a_{k}|_{B}(b_{1},\ldots,b_{m})=\mu_{C^{*}B}(\beta(a_{1}),\ldots,\beta(a_{k}),b_{1},\ldots,b_{m})\] \[\mathrm{d}_{A}((\mu;a_{1},\ldots,a_{k}|(b_{1},\ldots,b_{m}))= \sum_{j=1}^{m}(\mu;a_{1},\ldots,a_{k}|(b_{1},\ldots,\mathrm{d}_{A}b_{j},\ldots,b_{m})\] \[=\sum_{j=1}^{m}\mu_{C^{*}\mathrm{T}_{A}B}(\beta(a_{1}),\ldots, \beta(a_{k}),b_{1},\ldots,\mathrm{d}_{A}b_{j},\ldots,b_{m})\] Similarly, it is not hard to see that \(\overline{\mathrm{TC}^{*}B}\) is also generated by \(b\in B\) and by symbols \(\mathrm{d}b\), for \(b\in B\), satisfying the following properties: \[(\mu;a_{1},\ldots,a_{k}|_{\overline{\mathrm{TC}^{*}B}}(b_{1}, \ldots,b_{m})=\mu_{\mathrm{TC}^{*}B}(\beta(a_{1}),\ldots,\beta(a_{k}),b_{1}, \ldots,b_{m})=\mu_{C^{*}B}(\beta(a_{1}),\ldots,\beta(a_{k}),b_{1},\ldots,b_{m})\] \[\mathrm{d}(\mu(b_{1},\ldots,b_{m}))=\sum_{j=1}^{m}\mu(b_{1}, \ldots,\mathrm{d}b_{j},\ldots,b_{m})\] It is clear from this that the relations of \(\mathrm{T}_{A}B\) imply the ones of \(\overline{\mathrm{TC}^{*}B}\). Since \(\overline{\alpha^{*}}\) sends generators to corresponding generators, this implies that \(\mathrm{T}_{A}B\) can be represented as a quotient algebra of \(\overline{\mathrm{TC}^{*}B}\) over a specific ideal \(I\), that is \(\mathrm{T}_{A}B\cong\overline{\mathrm{TC}^{*}B}/I\), and that \(\overline{\alpha^{*}}\) is the quotient map \(\overline{\mathrm{TC}^{*}B}\to\overline{\mathrm{TC}^{*}B}/I\). 
Direct inspection shows that the ideal \(I\) is generated by all the \(\mathrm{d}_{A}(\beta(a))\) for every \(a\in A\), that is in \(\mathrm{T}_{A}B\), \(\mathrm{d}_{A}(\beta(a))=0\). Using a similar argument as the one we used to prove that the first diagram was a pushout, we conclude also that \(\alpha^{*}\) is a quotient map \(\mathrm{TC}^{*}B\to C^{*}\mathrm{T}_{A}B\), so that \(C^{*}\mathrm{T}_{A}B\) is a quotient algebra of \(\mathrm{TC}^{*}B\) over an ideal \(I\) generated by \(\mathrm{d}_{A}(\beta(a))=0\). Let's now come back to the naturality diagram and consider \(g\colon\mathrm{TC}^{*}E\to K\) and \(h\colon C^{*}\mathrm{T}_{A}B\to K\) as follows: This implies that: \[h(b)=g(f(b))\] \[h(\mathrm{d}_{A}b)=g(f(\mathrm{d}b))=g(\mathrm{d}f(b))\] for every \(b\in B\). Notice that, since \(E\) is a \(\mathcal{P}^{(A)}\)-algebra, we can also define a morphism of \(\mathcal{P}\)-algebras \(\gamma\colon A\to E\) and that since \(f\) is a morphism of \(\mathcal{P}^{(A)}\)-algebras we have that \(f(\beta(a))=\gamma(a)\). So, to lift \(g\) to \(C^{*}\mathrm{T}_{A}E\) we need to show that \(g(\mathrm{d}\gamma(a))=0\), however, we have the following: \[g(\mathrm{d}\gamma(a))\] \[= g(\mathsf{d}f(\beta(a))\] \[= g(\mathsf{d}h(\mathsf{d}_{A}\beta(a)))\] \[= 0\] where we used that \(\mathsf{d}_{A}\beta(a)=0\). This finally proves that we can lift \(g\) to \(\mathrm{TC}^{*}E/I=C^{*}\mathrm{T}_{A}E\), that is we find a morphism \(\overline{[g,h]}\colon C^{*}\mathrm{T}_{A}B\to K\). We leave to the reader to prove that such a morphism is the unique morphism which makes commuting the following diagram: This concludes the proof. We can finally prove the main result of this paper. **Proposition 4.4**.: _Consider the tangent morphism obtained as follows:_ \[\mathsf{Geom}^{*}\circ\mathsf{Env}\xrightarrow{(U,\eta)_{\mathrm{Coom}^{*} \mathsf{Env}}}\mathsf{Slice}\circ\mathsf{Term}\circ\mathsf{Geom}^{*}\circ \mathsf{Env}\cong\] \[\cong\mathsf{Slice}\circ\mathsf{Geom}^{*}\circ\mathsf{Init}\circ \mathsf{Env}\xrightarrow{\mathsf{Slice}(\mathsf{Geom}^{*}(C,\varepsilon))} \mathsf{Slice}\circ\mathsf{Geom}^{*}\] _This defines an equivalence of pseudofintors which makes commutative the following diagram:_ Proof.: By Theorem 3.10, \((U,\eta)\) is an equivalence of tangent categories. Moreover, thanks to Lemma 4.3, \(\mathsf{Geom}^{*}(C,\varepsilon)\) is a Cartesian morphism of tangent pairs. By Lemma 3.7, \(\mathsf{Slice}\) maps Cartesian morphisms into strong tangent morphisms. Thus, \(\mathsf{Slice}(\mathsf{Geom}^{*}(C,\varepsilon))\) is strong. Finally, thanks to [2, Lemma 1.7] the functorial component of \(\mathsf{Slice}(\mathsf{Geom}^{*}(C,\varepsilon))\) is an isomorphism between the categories of \(\mathcal{P}^{(A)}\)-algebras and the coslice category of \(\mathcal{P}\)-algebras under \(A\), i.e. the slice category \(\mathsf{Alg}_{\mathcal{P}}^{\mathsf{op}}/A\). Therefore, \(\mathsf{Slice}(\mathsf{Geom}^{*}(C,\varepsilon))\) is an equivalence of tangent categories. **Theorem 4.5**.: _Given an operad \(\mathcal{P}\) and a \(\mathcal{P}\)-algebra \(A\), the geometric tangent category of the enveloping operad \(\mathcal{P}^{(A)}\) of \(\mathcal{P}\) over \(A\) is equivalent, as a tangent category, to the slice tangent category over \(A\) of the geometric tangent category of \(\mathcal{P}\). In formulas:_ \[\mathsf{Geom}(\mathcal{P}^{(A)})=\mathsf{Geom}(\mathcal{P})/A\] Thanks to this characterization, we can now understand the vector fields over a \(\mathcal{P}^{(A)}\)-algebra. 
For this purpose, recall that for a morphism of \(\mathcal{P}\)-algebras \(\beta\colon A\to B\) and a \(B\)-module \(M\) (see Section 4.2 for details) an \(\beta\)-relative derivation is a derivation \(\delta\colon B\to M\), i.e. an \(R\)-linear morphism which satisfies the Leibniz rule: \[\delta(\mu(b_{1},\dots,b_{m}))=\sum_{k=1}^{m}\mu(b_{1},\dots,\delta(b_{k}),\dots, b_{m})\] and moreover \(\delta\circ\beta=0\). **Corollary 4.6**.: _For an operad \(\mathcal{P}\), a \(\mathcal{P}\)-algebra \(A\), and a \(\mathcal{P}^{(A)}\)-algebra \(B\), the vector fields over \(B\) in the geometric tangent category of \(\mathcal{P}^{(A)}\) are in bijective correspondence with \(\beta\)-relative derivations, \(\beta\colon A\to C^{*}B\) being the morphism of \(\mathcal{P}\)-algebras corresponding to the \(\mathcal{P}^{(A)}\)-algebra \(B\)._ Proof.: Recall that in [13, Corollary 4.5.3] it was proved that vector fields in a geometric tangent category of an operad correspond to derivations over the operadic algebras. Concretely, a vector field \(v\colon\mathrm{T}A\to A\), regarded a morphism of \(\mathcal{P}\)-algebras, corresponds to a derivation \(\delta_{v}\colon A\to A\) defined by: \[\delta_{v}(a)\colon =v(\mathsf{d}a)\] Viceversa, a derivation \(\delta\) defines a vector field \(v_{\delta}\colon\mathrm{T}A\to A\) by: \[v(a)\colon =a\] \[v(\mathsf{d}a)\colon =\delta(a)\] Thanks to Theorem 4.5, we have that \(\mathsf{Geom}(\mathcal{P}^{(A)})\cong\mathsf{Geom}(\mathcal{P})/A\), thus, given a morphism \(\beta\colon A\to C^{*}B\), by definition of the slice tangent category, the tangent bundle functor \(\mathrm{T}^{(A)}\) of \(\mathsf{Geom}(\mathcal{P})/A\) is given by the coequalizer (in the category of \(\mathcal{P}\)-algebras): \[\mathrm{TC}^{*}A\xrightarrow[\mathbb{T}\beta]{\mathsf{T}\beta\mathcal{P}} \mathrm{TC}^{*}B\xrightarrow[\mathbb{T}^{(A)}B]{\mathsf{T}^{(A)}B}\] or equivalently, by the pushout diagram: This implies that \(\mathrm{T}^{(A)}B\) is the quotient of \(\mathrm{TC}^{*}B\) by the ideal generated by \(\mathsf{d}\beta(a)\), for every \(a\in A\). Therefore, a vector field \(v\colon\mathrm{T}^{(A)}B\to B\) corresponds to a derivation \(\delta_{v}\colon B\to B\) defined by \(\delta_{v}(b)\colon=v(\mathsf{d}b)\), and satisfying the following: \[\delta_{v}(\beta(a))=v(\mathsf{d}\beta(a))=0\] that is a \(\beta\)-relative derivation of \(B\). Conversely, a \(\beta\)-relative derivation \(\delta\colon B\to B\) being a derivation over \(B\), defines a vector field \(v_{\delta}\colon\mathrm{TC}^{*}B\to C^{*}B\) over \(C^{*}B\) by \(v_{\delta}(b)\colon=b\) and \(v_{\delta}(\mathsf{d}b)=\delta(b)\), but since \(\delta\) is \(\beta\)-relative, \(v_{\delta}(\mathsf{d}\beta(a))=\delta(\beta(a))=0\), thus \(v_{\delta}\) lifts to \(\mathrm{T}^{(A)}B\to B\). ### The differential bundles of affine schemes over an operad In [13, Section 4.6], the classification of differential objects for the geometric tangent category of an operad \(\mathcal{P}\) was given. Roughly speaking, differential objects in a tangent category, first introduced by Cockett and Cruttwell in [5, Definition 4.8], are the objects whose tangent bundle is trivial. In the tangent category of (connected) finite-dimensional smooth manifolds, differential objects correspond to the manifolds \(\mathbb{R}^{m}\), for all integers \(m\). 
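Returning for a moment to Corollary 4.6, here is a concrete instance in the commutative case (writing, as in the earlier remark, \(\mathsf{uCom}\) for the operad of unital commutative \(R\)-algebras): for a morphism of commutative algebras \(\beta\colon A\to B\), a \(\beta\)-relative derivation is an \(R\)-linear derivation \(\delta\colon B\to B\) with \(\delta\circ\beta=0\), and the Leibniz rule then gives
\[\delta(\beta(a)\,b)=\beta(a)\,\delta(b)+b\,\delta(\beta(a))=\beta(a)\,\delta(b)\]
for every \(a\in A\) and \(b\in B\). In other words, vector fields over \(B\) in \(\mathsf{Geom}(\mathsf{uCom}^{(A)})\simeq\mathsf{Geom}(\mathsf{uCom})/A\) are precisely the classical \(A\)-linear derivations \(\operatorname{Der}_{A}(B)\).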
For the geometric tangent category \(\mathsf{Geom}(\mathcal{P})\) of an operad \(\mathcal{P}\), differential objects are in bijective correspondence with \(\mathcal{P}(1)\)-left modules, where we recall that \(\mathcal{P}(1)\) becomes a unital and associative ring, once equipped with the unit and the composition of \(\mathcal{P}\). A related concept is the notion of differential bundles, introduced by Cockett and Cruttwell in [4, Definition 2.3]. Roughly speaking, differential bundles are bundles whose fibres are differential objects (cf. [4, Corollary 3.5]). More precisely, a differential bundle over \(A\in\mathbb{X}\) in a tangent category \((\mathbb{X},\mathbb{T})\) consists of a morphism \(q\colon E\to A\) which admits pullbacks along any other morphism \(B\to A\), together with a zero morphism \(z_{q}\colon A\to E\), a sum morphism \(s_{q}\colon E_{2}\to E\), \(E_{2}\) being the pullback of \(q\) along itself, and a vertical lift \(l_{q}\colon E\to\operatorname{\mathrm{TE}}\) satisfying a similar universality property of the vertical lift of the tangent structure \(\mathbb{T}\). In [15], MacAdam proved that in the tangent category of finite-dimensional smooth manifolds, differential bundles are precisely vector bundles.1 Footnote 1: There is a slight difference between vector bundles and differential bundles over smooth manifolds. Vector bundles are defined as fibre bundles whose _typical fibre_ is a vector space. In general, differential bundles don’t have a typical fibre and, when the manifold is not connected, they allow different connected components to have fibres with different dimensions. The two notions coincide for connected smooth manifolds. We also recall that a linear morphism \(f\colon(q\colon E\to A)\to(q^{\prime}\colon E^{\prime}\to A)\) of differential bundles over \(A\in\mathbb{X}\) is a morphism \(f\colon E\to E^{\prime}\) compatible with the lifts. In this section, we are going to prove an important result: differential bundles over an operadic affine scheme \(A\) in the geometric tangent category \(\mathsf{Geom}(\mathcal{P})\) of an operad \(\mathcal{P}\) are equivalent to \(A\)-modules in the operadic sense. We recall that a module over a \(\mathcal{P}\)-algebra \(A\) consists of an \(R\)-module \(M\) equipped with a collection of \(R\)-linear morphisms \(\mathcal{P}(m+1)\otimes A^{\otimes m}\otimes M\to M\) satisfying an equivariance condition with respect to the symmetric action, and associativity and unitality with respect to the structure map of \(A\). We invite the interested reader to consult [14], [2] and [11] for a detailed definition of modules over operadic algebras. First, we prove that the correspondence between differential objects and left \(\mathcal{P}(1)\)-modules shown in [13, Theorem 4.6.8] extends to a functorial equivalence between the category \(\mathsf{DObj}_{\mathsf{Im}}(\mathcal{P})\) of differential objects and linear morphisms of the geometric tangent category \(\mathsf{Geom}(\mathcal{P})\) of \(\mathcal{P}\) and the opposite of the category of left \(\mathcal{P}(1)\)-modules. We also prove that this equivalence is indeed an equivalence of tangent categories. To understand what is the tangent structure over \(\mathsf{Mod}^{\mathsf{op}}_{\mathcal{P}(1)}\), notice that, for any associative and unital \(R\)-algebra \(A\), \(\mathsf{Mod}_{A}\) is a semi-additive category, that is it has finite biproducts, denoted by \(\oplus\). 
Thus, it comes with the canonical tangent structure \(\underline{\mathbb{L}}_{A}\), whose tangent bundle functor is the diagonal functor \(\mathbb{L}_{A}M\colon=M\oplus M\). It is straightforward to see that \(\mathbb{L}_{A}\) is left-adjoint to itself, thus it also defines an adjoint tangent structure \(\mathbb{T}_{A}\) over the opposite category \(\mathsf{Mod}^{\mathsf{op}}_{A}\). It is interesting to note that (see [13, Example 4.6.10]) given an associative and unital \(R\)-algebra \(A\), the geometric tangent category of the associated operad \(A^{\bullet}\) whose only non-trivial entry is \(A^{\bullet}(1)=A\), is precisely \((\mathsf{Mod}^{\mathsf{op}}_{A},\mathbb{T}_{A})\). So, in particular, \((\mathsf{Mod}^{\mathsf{op}}_{\mathcal{P}(1)},\mathbb{T}_{\mathcal{P}(1)})\) is the geometric tangent category of \(\mathcal{P}(1)^{\bullet}\). Note also that this construction extends to operadic algebras. To see that take into account a \(\mathcal{P}\)-algebra \(A\) and let \(\mathsf{Env}_{\mathcal{P}}(A)\) be its enveloping algebras. Concretely, the enveloping algebra \(\mathsf{Env}_{\mathcal{P}}(A)\) of \(A\) corresponds to the associative and unital algebra \(\mathcal{P}^{(A)}(1)\) which satisfies the following property: the category of modules over \(A\) is equivalent to the category of left modules over \(\mathsf{Env}_{\mathcal{P}}(A)\). Thus, let \(A^{\bullet}\) be the operad whose only non-trivial entry is \(A^{\bullet}(1)\colon=\mathsf{Env}_{\mathcal{P}}(A)\). Thus, \(\mathsf{Geom}(A^{\bullet})\cong(\mathsf{Mod}^{\mathsf{op}}_{\mathsf{Env}(A)}, \mathbb{T}_{\mathsf{Env}(A)})\cong(\mathsf{Mod}^{\mathsf{op}}_{A},\mathbb{T} _{A})\). **Proposition 4.7**.: _For an operad \(\mathcal{P}\), the tangent category \(\mathsf{DOb}_{\mathsf{lin}}(\mathcal{P})\) of differential objects and linear morphisms of the geometric tangent category \(\mathsf{Geom}(\mathcal{P})\) of \(\mathcal{P}\) is equivalent to the geometric tangent category \(\mathsf{Geom}(\mathcal{P}(1)^{\bullet})=(\mathsf{Mod}^{\mathsf{op}}_{\mathcal{P }(1)},\mathbb{T}_{\mathcal{P}(1)})\) associated with the associative and unital \(R\)-algebra \(\mathcal{P}(1)\)._ Proof.: First, recall that \(\mathcal{P}(1)\) is the enveloping algebra of the initial \(\mathcal{P}\)-algebra \(\mathcal{P}(0)\) (cf. [2, Lemma 1.4]), thus \(\mathsf{Mod}_{\mathcal{P}(1)}\cong\mathsf{Mod}^{(\mathcal{P})}_{\mathcal{P}(0)}\). Recall also that in [13] it was proved the existence of a functor, for every \(\mathcal{P}\)-algebra \(A\), \(\mathsf{Free}_{A}\colon\mathsf{Mod}_{A}\to A/\mathsf{Alg}_{\mathcal{P}}\), which sends an \(A\)-module \(M\), in the operadic sense, to a morphism of \(\mathcal{P}\)-algebras \(A\to\mathsf{Free}_{A}M\). In particular, this defines a functor \(\mathsf{Free}_{A}\colon\mathsf{Mod}_{A}\to\mathsf{Alg}_{\mathcal{P}}\), which maps each \(M\) to \(\mathsf{Free}_{A}M\). In [13, Theorem 4.6.8] it was proved that, for every \(\mathcal{P}(0)\)-module \(M\), \(\mathsf{Free}_{\mathcal{P}(0)}M\) comes equipped with a canonical differential structure, so that \(\mathsf{Free}_{\mathcal{P}(0)}M\) is a differential object of \(\mathsf{Geom}(\mathcal{P})\). Conversely, for a differential object \(A\in\mathsf{Geom}(\mathcal{P})\), there is a canonical vertical lift \(l\colon\mathbb{T}A\to A\) (regarded as a morphism of \(\mathcal{P}\)-algebras). 
In particular, \(l\) defines a derivation over \(A\), as follows: \[\delta_{l}(a)\mapsto l(\mathsf{d}a)\] It was shown that the image of \(\delta_{l}\) gives a \(\mathcal{P}(0)\)-module \(UA\) and that the correspondence \(M\mapsto\mathsf{Free}_{\mathcal{P}(0)}M\) and \(A\to UA\) are inverses to each other, up to a canonical isomorphism. It is not hard to prove that this correspondence extends to a correspondence between linear morphisms. In particular, this means that given a \(\mathcal{P}(0)\)-linear morphism \(f\colon M\to N\) of \(\mathcal{P}(0)\)-modules, \(\mathsf{Free}_{\mathcal{P}(0)}f\) is again linear, in the sense that is compatible with the lifts of the corresponding differential objects. Similarly, given a linear morphism of differential objects \(g\colon A\to B\) (regarded as a morphism of \(\mathcal{P}\)-algebras), define \(Ug\) as the morphism whose domain is the image of \(\delta_{l}\), \(l\) being the vertical lift of \(A\). So, for each \(a\in A\), \(Ug(\delta_{l}(a))\colon=g(l(\mathsf{d}a))\). However, since \(g\) is compatible with the lifts, we have that \(g(l(\mathsf{d}a))=l^{\prime}(\mathsf{d}g(a))=\delta_{l^{\prime}}(g(a))\), \(l^{\prime}\) being the vertical lift of \(B\). Thus, \(Ug\) is well-defined and also \(\mathcal{P}(0)\)-linear. Finally, this correspondence is functorial and it extends to an equivalence of categories. Finally, notice that the tangent structure over differential objects reduces to a Cartesian differential structure (cf. [5, Theorem 4.11]), thus the tangent bundle functor \(\mathrm{T}\) sends a differential object \(A\) to \(A\times A\), being \(\times\) the Cartesian product. Moreover, since all morphisms are linear, the same is true for morphisms as well, i.e. \(\mathrm{T}f\cong f\times f\). However, Cartesian products in \(\mathsf{Geom}(\mathcal{P})\) are coproducts in \(\mathsf{Alg}_{\mathcal{P}}\) and \(\mathsf{Free}_{\mathcal{P}(0)}\) preserves coproducts, thus \(\mathsf{Free}_{\mathcal{P}(0)}\mathrm{T}\) is isomorphic to the tangent bundle functor over \(\mathsf{Mod}^{\mathsf{op}}_{\mathcal{P}(0)}\). Similarly, \(\mathsf{Free}_{\mathcal{P}(0)}\) maps all the natural transformations of the tangent structure \(\mathbb{T}^{(\mathcal{P})}\) to the ones of \(\mathbb{T}_{\mathcal{P}(0)}\). This concludes the proof. Cockett and Cruttwell in [4, Proposition 5.12]) proved that differential bundles over an object \(A\in\mathbb{X}\) of a tangent category \((\mathbb{X},\mathbb{T})\) are equivalent to differential objects of the slice tangent category \((\mathbb{X},\mathbb{T})/A\) of \((\mathbb{X},\mathbb{T})\) over \(A\). It is not hard to see that this correspondence extends to an equivalence of tangent categories: \[\mathsf{DBnd}(\mathbb{X},\mathbb{T};A)\cong\mathsf{DObj}((\mathbb{X},\mathbb{T })/A)\] between the tangent category \(\mathsf{DBnd}(\mathbb{X},\mathbb{T};A)\) of differential bundles over \(A\) and the tangent category \(\mathsf{DObj}((\mathbb{X},\mathbb{T})/A)\) of differential objects of the slice tangent category \((\mathbb{X},\mathbb{T})/A\). Moreover, this equivalence restricts to linear morphisms, that is: \[\mathsf{DBnd}_{\mathsf{linr}}(\mathbb{X},\mathbb{T};A)\cong\mathsf{DObj}_{ \mathsf{linr}}((\mathbb{X},\mathbb{T})/A)\] where \(\mathsf{lnr}\) indicates that morphisms are only linear morphisms (cf. [4, Definition 2.3]). 
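Before specializing to operadic affine schemes, it is worth recalling what the enveloping algebra \(\mathsf{Env}_{\mathcal{P}}(A)=\mathcal{P}^{(A)}(1)\) looks like in the classical cases; these are standard facts from the operad literature rather than results of this paper, and the notation for the three operads is local to this remark. For the operads encoding unital commutative algebras, unital associative algebras and Lie algebras one has, respectively,
\[\mathsf{Env}_{\mathsf{uCom}}(A)\cong A,\qquad\mathsf{Env}_{\mathsf{uAs}}(A)\cong A\otimes A^{\mathsf{op}},\qquad\mathsf{Env}_{\mathcal{L}ie}(\mathfrak{g})\cong U(\mathfrak{g}),\]
so that modules in the operadic sense recover \(A\)-modules, \(A\)-bimodules and representations of the Lie algebra \(\mathfrak{g}\). The classification below can therefore be read as an operadic generalization of these familiar module categories.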
Let's denote by \(\mathsf{DBnd}_{\mathsf{linr}}(\mathcal{P};A)\) the tangent category of differential bundles and linear morphisms over a \(\mathcal{P}\)-affine scheme \(A\) in the geometric tangent category \(\mathsf{Geom}(\mathcal{P})\) of an operad \(\mathcal{P}\). **Theorem 4.8**.: _Let \(\mathcal{P}\) be an operad and \(A\) a \(\mathcal{P}\)-affine scheme. Then the tangent category \(\mathsf{DBnd_{\mathsf{Inr}}}(\mathcal{P};A)\) of differential bundles over \(A\) and linear morphisms in the geometric tangent category of \(\mathcal{P}\) is equivalent to the geometric tangent category of the operad \(A^{\bullet}\):_ \[\mathsf{DBnd_{\mathsf{Inr}}}(\mathcal{P};A)\cong\mathsf{Geom}(A^{\bullet}) \cong(\mathsf{Mod}_{A}^{\mathrm{op}},\mathbb{T}_{A})\] _In particular, differential bundles over \(A\) are equivalent to \(A\)-modules in the operadic sense and linear morphisms of differential bundles over \(A\) are equivalent to \(A\)-linear morphisms of \(A\)-modules (in the opposite of the category of \(A\)-modules)._ Proof.: Take into account an operad \(\mathcal{P}\) and a \(\mathcal{P}\)-algebra \(A\). Then, the tangent category \(\mathsf{DBnd_{\mathsf{Inr}}}(\mathcal{P};A)\) of differential bundles over \(A\) and linear morphisms in the geometric tangent category \(\mathsf{Geom}(\mathcal{P})\) of \(\mathcal{P}\) is equivalent to the tangent category \(\mathsf{DObj_{\mathsf{Inr}}}(\mathsf{Geom}(\mathcal{P})/A)\) of differential objects and linear morphisms of the slice tangent category \(\mathsf{Geom}(\mathcal{P})/A\). Thanks to Theorem 4.5, \(\mathsf{Geom}(\mathcal{P})/A\cong\mathsf{Geom}(\mathcal{P}^{(A)})\), \(\mathcal{P}^{(A)}\) being the enveloping operad of \(\mathcal{P}\) over \(A\). By Proposition 4.7, differential objects over \(\mathsf{Geom}(\mathcal{P}^{(A)})\) are \(\mathcal{P}^{(A)}(1)\)-left modules; in particular, \(\mathsf{DObj_{\mathsf{Inr}}}(\mathsf{Geom}(\mathcal{P}^{(A)}))\cong\mathsf{ Geom}(\mathcal{P}^{(A)}(1)^{\bullet})\), but \(\mathcal{P}^{(A)}(1)\) is the enveloping algebra of \(A\) (cf. [2, Definition 1.11]), thus \(\mathsf{Geom}(\mathcal{P}^{(A)}(1)^{\bullet})\cong\mathsf{Geom}(A^{\bullet})\): \[\mathsf{DBnd_{\mathsf{Inr}}}(\mathcal{P};A)\] \[= \mathsf{DBnd_{\mathsf{Inr}}}(\mathsf{Geom}(\mathcal{P});A)\] Diff. bundles are diff. objects in the slice tangent cat. \[\cong \mathsf{DObj_{\mathsf{Inr}}}(\mathsf{Geom}(\mathcal{P})/A))\] Theorem 4.5 \[\cong \mathsf{DObj_{\mathsf{Inr}}}(\mathsf{Geom}(\mathcal{P}^{(A)}))\] Proposition 4.7 \[\cong \mathsf{Geom}(\mathcal{P}^{(A)}(1)^{\bullet})\] \[\cong \mathsf{Geom}(A^{\bullet})\] This concludes the proof. ## 5 Conclusion The main results of this paper are the following: **Theorem 3.10**: In Section 3.1 we gave a new characterization for the operation which takes a tangent pair \((\mathbb{X},\mathbb{T};A)\) to its associated slice tangent category \((\mathbb{X},\mathbb{T})/A\) in terms of the adjunction \(\mathsf{Term}+\mathsf{Slice}\). **Theorem 4.5**: In Section 4.1 we proved that the geometric tangent category of the enveloping operad of a \(\mathcal{P}\)-algebra \(A\) is equivalent to the slice tangent category \(\mathsf{Geom}(\mathcal{P})/A\) of the geometric tangent category of \(\mathcal{P}\) over \(A\). **Theorem 4.8**: In Section 4.2 we classified differential bundles over operadic affine schemes in the geometric tangent category of an operad \(\mathcal{P}\). We showed that differential bundles correspond to modules over the \(\mathcal{P}\)-algebras. 
We also proved some minor but striking results: **Proposition 2.2**: We showed that tangent categories are organized in a double category whose horizontal and vertical morphisms are respectively lax and colax tangent morphisms. We also classified conjunctions in this double category in terms of a colax and a lax tangent morphism whose underlying functors form an adjunction and whose distributive laws are mates along this adjunction. **Proposition 2.1**: We proved that the operation which takes an operad to its algebraic tangent category extends to a pair of functors \(\mathsf{Alg}^{*}\) and \(\mathsf{Alg}_{!}\). **Proposition 2.14**: We proved that the operation which takes an operad to its geometric tangent category extends to a pair of functors \(\mathsf{Geom}^{*}\) and \(\mathsf{Geom}_{!}\). **Lemma 3.7**: We proved that Cartesian morphisms of tangent pairs lift to the slice tangent categories as strong tangent morphisms. **Corollary 4.6**: We classified vector fields over the geometric tangent category of the enveloping operad \(\mathscr{P}^{(A)}\) as relative derivations. ### Future work This paper is not just a natural continuation of [13] but also the beginning of a fruitful program of research dedicated to understanding the intimate relationship between operads and the geometrical features of their corresponding operadic affine schemes. The classification of differential bundles, of vector fields (already covered in [13]), and the classification of the geometric tangent category of the enveloping operads represent the starting point of this program. Here is a list of some possible future directions of research: 1. Classification of connections. Connections, introduced in [3], are probably one of the most important geometrical tools available in a tangent category; 2. Classification of principal bundles and principal connections. Principal bundles and principal connections were first introduced in the context of tangent categories by Cruttwell during a talk at Foundational Methods in Computer Science 2017 (General connections in tangent categories - FMCS 2017); 3. Study of sector forms and cohomology for operadic affine schemes (cf. [8]); 4. Study of curve objects and of differential equations for operadic affine schemes (cf. [\(\mathcal{C}\)]); 5. An important application of this program is the study of associative affine schemes, which leads to a description of non-commutative algebraic geometry via tangent categories; 6. An important construction in the theory of operads is Koszul duality (cf. [14, Chapter 7]). A natural question is what is the geometric tangent category of the Koszul dual of an operad \(\mathscr{P}\); 7. There is a notion of distributive laws between operads which allows two operads to be composed together. An example of an operad obtained via a distributive law between the operads \(\mathcal{A}s\) and \(\mathcal{L}ie\) is the operad \(\mathcal{P}ois\), whose algebras are Poisson algebras. What kind of relationship exists between the geometric tangent categories of two operads \(\mathscr{P}\) and \(\mathscr{C}\) and the one of the operad obtained by composing \(\mathscr{P}\) and \(\mathscr{C}\), provided a distributive law between them? These are only a few of the possible new paths of research that this paper inspires.
2308.02366
The Formation of Star-forming Disks in the TNG50 Simulation
We investigate the disk formation process in the TNG50 simulation, examining the profiles of SFR surface density ($\Sigma_{\rm SFR}$), gas inflow and outflow, and the evolution of the angular momentum of inflowing gas particles. The TNG50 galaxies tend to have larger star-forming disks, and also show larger deviations from exponential profiles in $\Sigma_{\rm SFR}$ when compared to real galaxies in the MaNGA (Mapping Nearby Galaxies at APO) survey. The stellar surface density of TNG50 galaxies show good exponential profiles, which is found to be the result of strong radial migration of stars over time. However, this strong radial migration of stars in the simulation produces flatter age profiles in TNG50 disks compared to observed galaxies. The star formation in the simulated galaxies is sustained by a net gas inflow and this gas inflow is the primary driver for the cosmic evolution of star formation, as expected from simple gas-regulator models of galaxies. There is no evidence for any significant loss of angular momentum for the gas particles after they are accreted on to the galaxy, which may account for the large disk sizes in the TNG50 simulation. Adding viscous processes to the disks, such as the magnetic stresses from magneto-rotational instability proposed by Wang & Lilly 2022, will likely reduce the sizes of the simulated disks and the tension with the sizes of real galaxies, and may produce more realistic exponential profiles.
Enci Wang, Simon J. Lilly
2023-08-04T14:57:24Z
http://arxiv.org/abs/2308.02366v1
# The Formation of Star-Forming Disks in the TNG50 Simulation ###### Abstract We investigate the disk formation process in the TNG50 simulation, examining the profiles of SFR surface density (\(\Sigma_{\rm SFR}\)), gas inflow and outflow, and the evolution of the angular momentum of inflowing gas particles. The TNG50 galaxies tend to have larger star-forming disks, and also show larger deviations from exponential profiles in \(\Sigma_{\rm SFR}\) when compared to real galaxies in the MaNGA (Mapping Nearby Galaxies at APO) survey. The stellar surface density of TNG50 galaxies show good exponential profiles, which is found to be the result of strong radial migration of stars over time. However, this strong radial migration of stars in the simulation produces flatter age profiles in TNG50 disks compared to observed galaxies. The star formation in the simulated galaxies is sustained by a net gas inflow and this gas inflow is the primary driver for the cosmic evolution of star formation, as expected from simple gas-regulator models of galaxies. There is no evidence for any significant loss of angular momentum for the gas particles after they are accreted on to the galaxy, which may account for the large disk sizes in the TNG50 simulation. Adding viscous processes to the disks, such as the magnetic stresses from magneto-rotational instability proposed by Wang & Lilly (2022), will likely reduce the sizes of the simulated disks and the tension with the sizes of real galaxies, and may produce more realistic exponential profiles. galaxies: evolution -- galaxies: ISM -- galaxies: star formation + Footnote †: journal: ApJ ## 1 Introduction Massive star-forming galaxies in the local universe are widely seen to have a disky morphology (e.g. Simard et al., 2011; Meert et al., 2013). The surface brightness profiles of disk galaxies are typically composed of two components (e.g. Freeman, 1970; Kent, 1984; Allen & Martos, 1986; Weiner et al., 2001; Simard et al., 2011; Casasola et al., 2017): a central spheroidal core and a highly flattened disk. The disk components are observed to have a nearly exponential broad-band photometric profile, extending over four scale lengths or more (e.g. Kent, 1985; Weiner et al., 2001; Pohlen & Trujillo, 2006; Meert et al., 2013). This characteristic exponential profile is found not only in the radial distribution of stars, as traced by optical-to-near infrared broad-band images, but also in the radial profile of the star formation rate (SFR) surface density (\(\Sigma_{\rm SFR}\); e.g. Wyder et al., 2009; Gonzalez-Lopezira et al., 2012; Gonzalez Delgado et al., 2016; Casasola et al., 2017; Wang et al., 2019), as traced by ultraviolet continuum, thermal infrared emission, and/or the H\(\alpha\) emission of ionized gas. The most recent generation of cosmological hydrodynamical simulations, such as EAGLE (Schaye et al., 2015) and IllustrisTNG (Pillepich et al., 2018), represents a remarkable success in reproducing the development of the galaxy population over cosmic time (see Vogelsberger et al., 2020, and references therein). In general, such simulations successfully match many observational constraints, including the galaxy colour bimodality, cold gas fractions, the statistical properties of galaxy morphology and other properties (Trayford et al., 2015; Furlong et al., 2015; Nelson et al., 2018; Genel et al., 2018; Diemer et al., 2019; Donnari et al., 2019; Rodriguez-Gomez et al., 2019). For instance, Nelson et al.
(2018) found that, including the dust attenuation of stellar light, the simulated color distributions for massive galaxies (\(M_{*}>10^{9}\)M\({}_{\odot}\)) in TNG100 are in excellent quantitative agreement with those of SDSS (the Sloan Digital Sky Survey; York et al., 2000) galaxies. Rodriguez-Gomez et al. (2019) also showed that the optical morphologies of IllustrisTNG galaxies are in good agreement with the galaxies of Pan-STARRS (Panoramic Survey Telescope and Rapid Response System; Chambers et al., 2016) observations. However, it is not clear to what extent the simulations reproduce the detailed internal structure of disks that is seen in the observations, i.e. specifically the exponential form of the stellar disks and star-formation profiles. The origin of the exponential disks in galaxies has been studied for more than half century, but is still not well understood. Since the radial distribution of mass in a rotating disk is closely linked to the angular momentum distribution of the material in the disk, much attention has been paid to the initial angular momentum of the material that ultimately ends up in the disk, invoking conservation of specific angular momentum of each element during the collapse of proto-galaxy (e.g. Mestel, 1963; Freeman, 1970; Fall & Efstathiou, 1980). For instance, Freeman (1970) showed that the angular momentum distribution of a self-gravitating exponential disk is remarkably similar to that of a uniformly rotating sphere of uniform density. This basic concept has been developed in many subsequent discussions, based on realistic N-body simulations of the formation of cosmic structure (e.g. Fall & Efstathiou, 1980; Mo et al., 1998; Dutton & van den Bosch, 2009). These have ultimately tried to connect the specific angular momentum distribution of the baryonic material in galactic disks to the original distribution of that material, which is taken to match that of the dark matter particles in the host halos. On the other hand, by investigating the angular momentum transport of gas particles for individual disk galaxies in SPH simulations, Kaufmann et al. (2007) found that angular momentum of the particles can be lost in such simulations (presumably transported outwards), but that the severity of this decreases substantially as the mass-resolution is increased and that this was therefore an artificial effect. Kaufmann et al. (2007) claimed that, with \(10^{6}\) gas and dark matter particles, disc particles lose only 10-20 per cent of their original angular momentum, and that exponential disks cannot be obtained. Another idea with a long history is that the formation of an exponential stellar disk is the result of secular evolution of the distribution of stars, after they are formed, as a result of bar formation (Hohl, 1971; Foyle et al., 2008), and/or by the scattering of stars by massive clumps (Elmegreen & Struck, 2013; Wu et al., 2020). Consistent with this, Herpich et al. (2017) suggested that the exponential stellar disk could be viewed as the maximum entropy state for the distribution of specific angular momentum of stars. However, these ideas do not explain the exponential form of the SFR surface density in disk galaxies, since there is no clear reason why star-formation should follow a similar radial distribution to that of the long-lived stars. A third idea that has been explored over the years is that the exponential disk is produced by the operation of a viscous accretion disk (e.g. 
Lynden-Bell & Pringle, 1974; Pringle, 1981; Lin & Pringle, 1987; Yoshii & Sommer-Larsen, 1989; Wang et al., 2009; Wang & Lilly, 2022, 2020). The viscosity in an accretion disk transports mass inwards and angular momentum outwards. The viscosity in a gas disk can be produced by a number of processes, such as cloud-cloud collisions, turbulence of the gas disk from supernova feedback and/or the motions produced by gravitational instabilities of gas clouds (e.g. Lynden-Bell & Pringle, 1974; Pringle, 1981; Ferguson & Clarke, 2001; Stevens et al., 2016), or, as discussed below, also by magnetic fields (Wang & Lilly, 2022). The short gas depletion timescales of galaxies suggest that the gas in galaxies is being continually replenished from outside. Assuming that the rate of star-formation of a galaxy depends on the available gas, and that outflows of gas are related to the star-formation rate, this leads to the idea that galaxies may be thought of as "gas-regulator" systems (Lilly et al., 2013), in which the gas-content of a galaxy continually adjusts itself to achieve a quasi-equilibrium between inflow, star-formation and outflow. Variations of this idea have been explored in a number of papers (e.g. Bouche et al., 2010; Schaye et al., 2010; Dave et al., 2011; Lilly et al., 2013; Belfiore et al., 2019; Wang & Lilly, 2021). In parallel, multiple simulations based on different hydrodynamical codes show that the inflowing gas is almost co-planar and co-rotating with the gas disk regardless of its thermal history, at least at low redshifts (e.g. Keres et al., 2005; Stewart et al., 2011; Danovich et al., 2015; Stewart et al., 2017; Peroux et al., 2020; Trapp et al., 2022; Hafen et al., 2022; Gurvich et al., 2022). Specifically, based on the FIRE-2 (Feedback In Realistic Environments; Hopkins et al., 2018) simulation, Hafen et al. (2022) found that the inflowing gas becomes coherent and angular momentum-supported prior to accretion onto the disks. On the other hand, the outflow of gas, driven by stellar winds and/or supernova (SN) explosions, preferentially leaves the gas disks along the direction perpendicular to the disks (e.g. Nelson et al., 2019; Peroux et al., 2020; Trapp et al., 2022). Motivated by these developments, we recently revisited the viscous disk idea in Wang & Lilly (2022). We focused, via a reverse-engineering approach, on what was required in order for a viscous co-planar gas disk to produce an exponential profile of star-formation in the disk. In such a picture, the exponential stellar disk is then a natural outcome of the exponential form of the star-forming disk (although some subsequent migration or rearrangement of stars is not ruled out). We showed that if galaxy disks can indeed be viewed as "modified accretion disks", in which the dominant gas flow is co-planar inflow within the disk but in which (unlike a classical accretion disk) gas is continually extracted from the disk due to star-formation (and any associated ex-planar outflows of gas), then the required viscous stresses \(W(r)\) can be derived _solely_ from the steady-state star-formation rate profile \(\Sigma_{\rm SFR}(r)\). We further argued that magneto-rotational instability (MRI) is an attractive and plausible source of the viscosity in galactic-scale disks.
We showed that exponential \(\Sigma_{\rm SFR}(r)\) profiles over several scale-lengths will be established if (and only if) there is a relation between the magnetic field strength and the \(\Sigma_{\rm SFR}(r)\) that is of the precise type that has actually been indicated from observations of galaxies. It has been widely accepted for a long time that MRI is likely to be an important source of the viscosity in classical accretion disks around compact objects, (e.g. Shakura & Sunyaev, 1973; Blandford, 1989; Balbus & Hawley, 1991). Balbus & Hawley (1991) found that the combination of a negative gradient in the angular velocity with a weak magnetic field of any plausible astrophysical strength would lead to a dynamical instability, called the MRI. Based on MHD simulations, Hawley et al. (1995) found that the transportation of angular momentum is dominated, for Keplerian disks, by the magnetic stress (or Maxwell stress), rather than by the kinetic stress (or Reynolds stress). As in all accretion disks, this viscosity causes an inward transport of mass within the galactic disk, which is required to sustain the exponential SFR surface density profile. There is also an outward transport of angular momentum within the gas disk associated with the loss of angular momentum as the gas moves down to the center of the disk. The observed rotation curve means that, in this picture, the gas must lose a substantial fraction of its angular momentum as it flows inwards through the gas disk, e.g. losing 50% of its specific angular momentum as it moves from a radius of 4 disk scalelengths down to 1 disk scalelength. In Wang & Lilly (2022) we examined the expected metallicity gradients in this "modified accretion disk" model of galaxy disks, and showed that this also is determined primarily by the \(\Sigma_{\rm SFR}(r)\) profile, and is, perhaps counter-intuitively, _independent_ of the gas content or star-formation efficiency of the disk. The model naturally produces a negative gradient of gas-phase metallicity due to the progressive enrichment of gas by in-situ star formation in the disk as the gas flows down towards the center of the galaxy. The expected profiles can quantitatively match observational results (e.g. Crockett et al., 2006; Magrini et al., 2007; Bresolin et al., 2012; Scarano & Lepine, 2013). In this paper, we focus on the formation of galaxy disks in hydro-simulations. We take advantage of the publicly available state-of-art simulation, TNG50 (e.g. Pillepich et al., 2018; Nelson et al., 2018), which is the successor of the Illustris simulation (Vogelsberger et al., 2014; Genel et al., 2014), and appears to successfully reproduce many observational facts about the galaxy population(e.g. Diemer et al., 2019; Nelson et al., 2019). Interestingly, Cannarozzo et al. (2022) focused on the early-type galaxies, and found that TNG100 generally produces an excellent agreement with the observations in the shape of stellar surface density, stellar age and metallicity. In this work, we instead focus on the disk galaxies and examine whether the simulation reproduces the exponential stellar disks, as well as the exponential star-forming disk. 
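For reference, the "gas-regulator" framework of Lilly et al. (2013) mentioned above, and returned to later in this paper, can be summarized by a single equation for the gas reservoir, \(\dot{M}_{\rm gas}=\Phi-(1+\lambda)\,M_{\rm gas}/\tau_{\rm dep}\), where \(\Phi\) is the gas inflow rate, \(\tau_{\rm dep}\) the depletion time (so that \({\rm SFR}=M_{\rm gas}/\tau_{\rm dep}\)) and \(\lambda\) the mass-loading factor of the outflow, neglecting the return of mass from evolved stars. The short sketch below is an illustrative toy calculation (not part of the TNG50 analysis; function names and parameter values are arbitrary) showing the expected quasi-equilibrium behaviour, \({\rm SFR}\simeq\Phi/(1+\lambda)\), when the inflow varies slowly:

```python
import numpy as np

def gas_regulator(phi, t, m_gas0=0.0, tau_dep=2.0, lam=0.5):
    """Forward-Euler integration of dM_gas/dt = phi(t) - (1 + lam) * M_gas / tau_dep,
    with SFR = M_gas / tau_dep and outflow = lam * SFR."""
    m_gas = m_gas0
    sfr = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        sfr_now = m_gas / tau_dep
        m_gas += (phi(t[i - 1]) - (1.0 + lam) * sfr_now) * dt
        sfr[i] = m_gas / tau_dep
    return sfr

t = np.linspace(0.0, 10.0, 2001)                   # time in Gyr
inflow = lambda time: 1e10 * np.exp(-time / 8.0)   # slowly declining inflow rate [Msun/Gyr]
sfr = gas_regulator(inflow, t)
# After a few effective depletion times tau_dep/(1+lam), the regulator reaches
# quasi-equilibrium and the SFR closely tracks phi(t) / (1 + lam).
```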
In addition to the general comparison between the output of simulations with observations, TNG50 and other similar simulations contain a wealth of information to address questions about how gas is accreted onto disks, whether gas accretion or the pre-existing gas is dominant for sustaining star formation in galaxies, and how the angular momentum changes after the gas particles are accreted onto galaxies. As the simulation of the highest resolution within the IllustrisTNG suite, TNG50 is able to address these questions. The paper is organized as follows. In Section 2, we will briefly introduce the TNG50 simulation, as well as the selection of the sample of simulated galaxies used in this work. We examine whether the \(\Sigma_{\rm SFR}\) profiles of simulated galaxies are close to exponential or not in Section 3, and provide a quantitative comparison with the observational results from MaNGA galaxies. Another focus of this work is to investigate the inflow and outflow of gas particles on the disks of individual galaxies, in order to examine whether the gas disk of different radii can be treated as a gas-regulator system or not in simulations. These results are shown in Section 4. In Section 5, we will track individual inflowing gas particles to examine the change of their angular momentum after being accreted onto the gas disks. We then summarize this work in Section 6. Throughout the paper, we adopt a flat cold dark matter cosmology model with cosmological parameters derived from Planck Collaboration et al. (2016), i.e. \(\Omega_{\Lambda,0}=\)0.6911, \(\Omega_{m,0}=\)0.3089, \(\Omega_{b,0}=\)0.0486, and H\({}_{0}\)=67.74 km s\({}^{-1}\)Mpc\({}^{-1}\). This is the same as the one adopted in IllustrisTNG simulation (Nelson et al., 2019). ## 2 The TNG50 Simulation ### Brief introduction of TNG50 As the successor to the original Illustris simulation(e.g. Vogelsberger et al., 2014; Sijacki et al., 2015), IllustrisTNG (e.g. Springel et al., 2018; Nelson et al., 2018) is a suite of state-of-the-art magneto-hydrodynamic cosmological simulations with an updated physical model, run with the moving-mesh code AREPO(Springel, 2010). The implemented physical processes include star formation, stellar evolution, chemical enrichment, outflows driven by stellar feedback and etc (see Pillepich et al., 2018, for details). In IllustrisTNG, star formation and pressurization of the multi-phase ISM are implemented following the model of Springel & Hernquist (2003). Specifically, stars form stochastically following the empirically defined Kennicutt-Schmidt relation, in gas clouds above a density threshold of \(n_{\rm H}=0.1\) cm\({}^{-3}\). The recipe of star formation driven kinetic winds is refined in several ways with respect to the approach in the Illustris simulation (Vogelsberger et al., 2013; Torrey et al., 2014). For instance, winds are injected isotropically in IllustrisTNG, and the initial speed of the wind particles is set to be redshift-dependent. The total energy release rate that is available to drive galactic winds, is set by the instantaneous star formation rate and the energy released by Type II SN per unit stellar mass that is formed. Both depend on spatial location within a galaxy and on time. With these improved treatments of physical processes, the output of IllustrisTNG has been shown to be consistent with a wide range of observational data in addition to those that were used to calibrate or tune the model (Nelson et al., 2019). 
These test data include the galaxy stellar mass functions up to \(z\sim 4\) (Pillepich et al., 2018), the spatial clustering of red and blue galaxies up to tens of Mpc (Springel et al., 2018), the color-magnitude diagram of blue and red galaxies (Nelson et al., 2018), the stellar sizes of star-forming and quiescent galaxies up to \(z\sim 2\) (Genel et al., 2018), and the optical morphologies of galaxies (Rodriguez-Gomez et al., 2019). These, and the publicly available output at the particle level, make IllustrisTNG one of the best choices to study the formation of disks in galaxies within cosmological simulations. The IllustrisTNG project includes three simulation volumes with different resolutions. In this work, we use the highest-resolution version of IllustrisTNG, TNG50-1 (hereafter, TNG50 for short). The TNG50 simulation starts at \(z=127\) and evolves down to \(z=0\), with a box side length of \(\sim\)50 Mpc. It initially contains \(2160^{3}\) dark matter particles and \(2160^{3}\) gas cells. The mass of each dark matter particle is \(3.07\times 10^{5}\)M\({}_{\odot}/h\), and the average gas cell mass is \(5.74\times 10^{4}\)M\({}_{\odot}/h\). The softening length employed in TNG50 is 0.29 kpc for both dark matter and stellar particles, and an adaptive gravitational softening length for gas cells is adopted, with a minimum value of 0.074 kpc. In this work, the gas cells are also referred to as gas particles. In the released data, the halos and subhalos are identified by the FoF and Subfind algorithms (Springel et al., 2005; Dolag et al., 2009), and the merger trees are constructed by the SubLink algorithm (Rodriguez-Gomez et al., 2015). Simulated galaxies are associated with the subhalos, while the merger trees are useful to trace their histories of formation and evolution. ### The sample selection from TNG50 Since star-forming galaxies almost always show well-defined disks, we select the sample of galaxies to be studied from the SFR-\(M_{*}\) diagram. The left panel of Figure 1 shows the SFR-\(M_{*}\) relation for TNG50 galaxies at redshift \(z=0\). Throughout the paper, the stellar mass of the simulated galaxies is defined as the total mass of the member stellar particles within the radius at which the mean surface brightness profile (computed from all member stellar particles) drops below the limit of 20.7 mag arcsec\({}^{-2}\) in the K band (the SubhaloStellarPhotometricsMassInRad of the galaxy catalog from the TNG50 data release; Nelson et al., 2019). Here, the SFR is defined as the total SFR of the cells within twice the stellar half-mass radius (the SubhaloSFRinRad of the galaxy catalog from the TNG50 data release). These definitions of SFR and stellar mass are only used in the current work to define the sample selection. We note that different definitions of SFR are adopted later when calculating the \(\Sigma_{\rm SFR}\) of galaxies, as specified in Section 3 and Section 4. As can be seen, the TNG50 galaxies are found on a tight sequence in the SFR-\(M_{*}\) diagram, reproducing the so-called star formation main sequence (SFMS; Brinchmann et al., 2004; Daddi et al., 2007; Elbaz et al., 2011) of real galaxies. We perform a linear fitting to the SFMS of the TNG50 galaxies iteratively. Specifically, we first fit a straight line to the SFR-\(M_{*}\) relation of all the TNG50 galaxies. Then we re-fit the relation after excluding the galaxies that are more than 1 dex below the fitted SFMS. Repeating this process a few times, we obtain the fitted SFMS, which is shown as the green solid line in the left panel of Figure 1.
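A schematic version of this iterative fit might look as follows (an illustrative sketch rather than the actual fitting code; the function name, the fixed number of iterations and the use of numpy.polyfit are assumptions):

```python
import numpy as np

def fit_sfms(log_mstar, log_sfr, cut_dex=1.0, n_iter=5):
    """Iteratively fit log SFR = a * log M* + b, re-fitting after excluding
    galaxies that fall more than `cut_dex` below the current fit."""
    use = np.isfinite(log_sfr)              # start from all galaxies with a measured SFR
    for _ in range(n_iter):
        a, b = np.polyfit(log_mstar[use], log_sfr[use], deg=1)
        use = log_sfr > (a * log_mstar + b - cut_dex)
    return a, b                             # slope and intercept of the SFMS

# Demarcation between star-forming and quenched galaxies: 1 dex below the SFMS.
# star_forming = log_sfr > a * log_mstar + b - 1.0
```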
Repeating the above process a few times, we obtain the fitted SFMS, shown as the green solid line in the left panel of Figure 1. The green dashed line is the demarcation line between star-forming and quenched galaxies, which is 1 dex below the final fitted SFMS. For comparison, we show the observed SFMS of real SDSS galaxies as the cyan solid line, taken from Bluck et al. (2016). The fitted SFMS of TNG50 galaxies appears to be in excellent agreement with that of the observations.

In this work, in order to select a representative sample of manageable size for detailed analysis, we select four subsamples of star-forming galaxies based on the SFR-\(M_{*}\) diagram. Specifically, we randomly select 50 star-forming TNG50 galaxies, excluding galaxies with significant mergers (mass ratio greater than 0.1) since a redshift of 0.4, in each of three stellar mass intervals: \(9.5<\log M_{*}/({\rm M}_{\odot}h^{-1})<10.0\), \(10.0<\log M_{*}/({\rm M}_{\odot}h^{-1})<10.5\) and \(10.5<\log M_{*}/({\rm M}_{\odot}h^{-1})<11.0\). In a fourth mass bin, \(11.0<\log M_{*}/({\rm M}_{\odot}h^{-1})<11.5\), where there are fewer galaxies, we take all the star-forming galaxies, again excluding mergers as before; this results in 11 galaxies in this bin1. In the remaining analysis of this work, we will focus on only these 161 selected sample galaxies, which are shown with red triangles in Figure 1.

Footnote 1: One galaxy that matches our selection criteria is further excluded, as it frequently caused problems in computing the net inflow rate on the TNG JupyterLab. This may be due to the fact that this galaxy contains too many gas particles. We therefore do not include this galaxy here or in the following analysis.

The right panel of Figure 1 shows the mass-size relation of TNG50 galaxies at a redshift of \(z=0\). The size used here is the half-stellar-mass radius, denoted as \(R_{1/2}\). For comparison, we show the mass-size relation of late-type galaxies in SDSS as a cyan solid line, taken from Shen et al. (2003). However, the circularized radii2 used by Shen et al. (2003) are typically a factor of 1.4 smaller than the non-circularized, or major-axis, radii (van der Wel et al., 2014; Furlong et al., 2015). We therefore shift the mass-size relation of Shen et al. (2003) vertically by a factor of 1.4 in the right panel of Figure 1, to account for this effect.

Footnote 2: The circularized radius is defined as \(\sqrt{b/a}\) times the major-axis size, where \(a\) and \(b\) are the major- and minor-axis sizes, respectively.

We find the overall trend of the mass-size relation of TNG50 to be in quite good agreement with that from observations, over four orders of magnitude in stellar mass, but with an offset of about 0.11 dex at the high-mass end. Similarly, Genel et al. (2018) investigated the evolution of the mass-size relation for galaxies in TNG100, and found that a quantitative comparison of the projected \(r\)-band sizes in TNG100 and in observations (van der Wel et al., 2014) also shows overall agreement to within 0.25 dex over \(0.0<z<0.2\), but with the TNG100 galaxies having systematically larger sizes than the real galaxies at \(M_{*}>10^{10.0}{\rm M}_{\odot}h^{-1}\), which is similar to what we found for TNG50 galaxies in the right panel of Figure 1.
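For concreteness, the selection described in this subsection — an iterative linear fit to the SFMS, a demarcation 1 dex below it, and a random draw of up to 50 star-forming, merger-free galaxies per stellar mass bin — could be sketched roughly as below. This is an illustrative sketch only: the input arrays and the merger flag are placeholders for quantities taken from the TNG50 group catalogs, not the exact code used in this work.

```python
import numpy as np

def fit_sfms_iteratively(log_mstar, log_sfr, n_iter=5, cut_dex=1.0):
    """Iteratively fit a straight line to the SFMS, at each step excluding
    galaxies more than `cut_dex` below the current fit."""
    keep = np.ones(log_mstar.size, dtype=bool)
    for _ in range(n_iter):
        slope, intercept = np.polyfit(log_mstar[keep], log_sfr[keep], 1)
        keep = log_sfr > slope * log_mstar + intercept - cut_dex
    return slope, intercept

def select_subsample(log_mstar, star_forming, merger_free, rng, n_per_bin=50):
    """Randomly draw up to `n_per_bin` star-forming, merger-free galaxies in
    each stellar-mass bin (log M* in units of Msun/h); in the highest bin all
    qualifying galaxies are kept."""
    bins = [(9.5, 10.0), (10.0, 10.5), (10.5, 11.0), (11.0, 11.5)]
    selected = []
    for lo, hi in bins:
        idx = np.where((log_mstar > lo) & (log_mstar < hi)
                       & star_forming & merger_free)[0]
        if idx.size > n_per_bin:
            idx = rng.choice(idx, n_per_bin, replace=False)
        selected.append(idx)
    return np.concatenate(selected)

# Usage sketch:
# slope, intercept = fit_sfms_iteratively(log_mstar, log_sfr)
# star_forming = log_sfr > slope * log_mstar + intercept - 1.0
# sample = select_subsample(log_mstar, star_forming, merger_free,
#                           rng=np.random.default_rng(42))
```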
## 3 Do the simulations form exponential disks?

### The profiles of SFR surface density

As mentioned in the introduction, the \(\Sigma_{\rm SFR}\) profiles of star-forming galaxies typically have an exponential form, which may well naturally account for the exponential form of the stellar disks. Specifically, Wang & Lilly (2022b) looked at the deviations of individual \(\Sigma_{\rm SFR}(r)\) profiles of MaNGA (Mapping Nearby Galaxies at APO; Bundy et al., 2015) star-forming galaxies when compared with pure exponential profiles. In the current analysis, the \(\Sigma_{\rm SFR}(r)\) of MaNGA galaxies at a given radius was calculated as the mean \(\Sigma_{\rm SFR}\) of pixels within the corresponding annulus, rather than the median, in order to match our treatment of the simulated TNG50 galaxies. It was found that more than half (or 86%) of the galaxies have rms deviations of less than 0.1 dex (or 0.2 dex). Observationally, exponential star-forming disks seem to be a very common feature of galaxies, especially for galaxies with low-to-intermediate stellar mass (\(\log M_{*}/\mathrm{M}_{\odot}<10.7\); Wang & Lilly 2022b). Therefore, in this subsection, we investigate the \(\Sigma_{\rm SFR}\) profiles of TNG50 galaxies, and examine whether the exponential form is found in them, as compared to real galaxies.

The red lines in Figure 2 show the mean \(\Sigma_{\rm SFR}\) profiles of individual galaxies for the selected TNG50 sample, based on the \(z=0\) snapshot. In calculating the radial profile of \(\Sigma_{\rm SFR}(r)\) for each simulated galaxy, we first determine the center of the galaxy, and then determine the 3-d orientation of the disk by minimizing the mean absolute perpendicular distance of all star particles from the plane of the disk, testing all possible orientations of the disk in space that are centered on the galactic center. We stress that, throughout this paper, the orientation of the disk is always determined based on the distribution of stellar particles. We can then obtain the face-on distribution of stellar particles for each individual galaxy. By radially binning the stellar particles into a set of annuli, the mean \(\Sigma_{\rm SFR}(r)\) of each TNG50 galaxy in Figure 2 is then computed from the _initial mass_ of the stellar particles that formed within each annulus within the last 100 Myr, and is therefore an average \(\Sigma_{\rm SFR}\) on a 100 Myr timescale. This reduces the shot noise. Based on this method, we estimate that the signal-to-noise ratio of \(\Sigma_{\rm SFR}\), as set by shot noise, is typically greater than 3.0 for \(\Sigma_{\rm SFR}\sim 10^{5}\,\mathrm{M}_{\odot}\,h^{-1}\,\mathrm{Gyr}^{-1}\,(\mathrm{kpc}/h)^{-2}\) with a radial bin of 1 kpc at 8 kpc, given the typical mass of \(5.7\times 10^{4}\,\mathrm{M}_{\odot}h^{-1}\) for a gas particle.
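As a rough illustration of the profile measurement just described, the binning of newly formed stellar particles into annuli might look like the sketch below; the input arrays (face-on radii, initial masses and ages of the stellar particles of one galaxy) are placeholders for quantities read from the snapshot, and the units here are illustrative rather than those used in the figures.

```python
import numpy as np

def sfr_surface_density_profile(r_kpc, initial_mass_msun, age_myr,
                                r_max=30.0, dr=1.0, window_myr=100.0):
    """Mean SFR surface density in radial annuli of width `dr` (kpc), averaged
    over the last `window_myr`, from the face-on radii, initial masses and ages
    of the stellar particles of one galaxy."""
    young = age_myr < window_myr                        # stars formed within the window
    edges = np.arange(0.0, r_max + dr, dr)
    mass_formed, _ = np.histogram(r_kpc[young], bins=edges,
                                  weights=initial_mass_msun[young])
    area = np.pi * (edges[1:]**2 - edges[:-1]**2)       # annulus areas [kpc^2]
    sigma_sfr = mass_formed / (window_myr * 1e6) / area  # [Msun yr^-1 kpc^-2]
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, sigma_sfr
```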
For comparison, we show the mean \(\Sigma_{\rm SFR}\) for a well-defined sample of MaNGA star-forming galaxies as the blue lines in Figure 2. These are taken from Wang et al. (2019). Here we only briefly describe the sample definition and the calculation of \(\Sigma_{\rm SFR}\), and refer the reader to Wang et al. (2019) for further details. This galaxy sample was originally selected from the SDSS Data Release 14 (Abolfathi et al., 2018), excluding quenched galaxies, mergers, irregulars, and heavily disturbed galaxies. This results in a sample of 976 MaNGA star-forming galaxies, which is a good representation of normal SFMS galaxies. The SFR for MaNGA galaxies is calculated from the dust-corrected H\(\alpha\) luminosity with the Kennicutt (1998) star formation law and a Chabrier (2003) initial mass function (IMF). The spatial coverage of the MaNGA observations is typically larger than 1.5 effective radii. The \(\Sigma_{\rm SFR}\) profiles of MaNGA galaxies are corrected for the projection effect, based on the minor-to-major axis ratio from the NSA catalog (Blanton et al., 2011). The spatial resolution of the MaNGA observations is 1-2 kpc, so in generating the profiles for the TNG50 galaxies, a radial bin width of 1 kpc was chosen to minimize the effect of this spatial resolution. We have tested the effect of bin size, and find that our main result is unchanged when using a bin size of 0.5 kpc or 2 kpc.

Figure 1: Left panel: The stellar mass-SFR relation for the galaxies in the TNG50 simulation at a redshift of \(z=0\). We perform a linear fit to the star-formation main sequence for the TNG50 galaxies iteratively, which is shown as the green solid line. The green dashed line is the adopted demarcation line between star-forming and quenched galaxies, and is 1 dex below the green solid line. For comparison, we show the SFMS of SDSS galaxies as the cyan solid line (Bluck et al., 2016). The selected subsample, investigated in the present work, is shown as red triangles. Right panel: The stellar mass-size relation for the TNG50 galaxies. The selected subsample is shown as red triangles. For comparison, we show the mass-size relation of SDSS late-type galaxies as the cyan line (Shen et al., 2003), multiplied by 1.4 to account for the use of circularized radii (Furlong et al., 2017).

As shown in Figure 2, although the individual \(\Sigma_{\rm SFR}\) profiles appear to be noisy, we find that the \(\Sigma_{\rm SFR}\) profiles of TNG50 galaxies overall overlap with those of MaNGA galaxies in all four stellar mass bins, suggesting a good overall consistency. However, when looking in more detail, the shapes of the \(\Sigma_{\rm SFR}\) profiles of the simulated TNG50 galaxies generally appear to be flatter and more extended than those of real MaNGA galaxies. This is in line with the fact that TNG50 galaxies are overall somewhat larger than the real galaxies in the mass bins we considered, as was shown in the right panel of Figure 1.

To reduce the effect of different sizes between different galaxies and different samples, we replot the \(\Sigma_{\rm SFR}\) profiles of TNG50 galaxies and MaNGA galaxies in Figure 3, normalizing the radius by the half-stellar-mass radius (or the effective radius for MaNGA galaxies). We also re-normalize the \(\Sigma_{\rm SFR}\), so that it is computed as the SFR within an area that itself scales as \(\rm R_{1/2}^{2}\). This ensures that the visual integration of a profile on the \(\Sigma_{\rm SFR}\)-radius diagram reflects the actual integrated quantity in physical terms (see also Wang et al., 2019). As shown in Figure 3, the profiles of all the galaxies tend to be more parallel after this normalization, which makes the exponential form easier to see on the diagram. In addition, for each stellar mass bin, we also show the median \(\Sigma_{\rm SFR}\) profile of the population with red and blue diamonds for the TNG50 and MaNGA samples, respectively. These median profiles can be treated as representative profiles that capture the general features of the individual profiles within the corresponding stellar mass bin. As can be seen in Figure 3, the median \(\Sigma_{\rm SFR}\) profiles of MaNGA galaxies can be well fitted by an exponential function for all the stellar mass bins except the very highest one (also see Wang et al., 2019).
However, the median \(\Sigma_{\rm SFR}\) profiles of TNG50 galaxies show quite large deviations from a single exponential function, and this is true for all the stellar mass bins examined. We see a gradual change of the median \(\Sigma_{\rm SFR}\) profile of TNG50 galaxies with increasing stellar mass: the median \(\Sigma_{\rm SFR}\) profiles of the two lowest stellar mass bins are clearly centrally peaked (\(<0.4R_{1/2}\)), and become nearly flat out to a radius of \(1.6R_{1/2}\) or more; the median \(\Sigma_{\rm SFR}\) profiles of the two highest stellar mass bins, in contrast, clearly show centrally suppressed star formation (\(<0.6R_{1/2}\)), and become flat out to the largest radius of \(1.6R_{1/2}\) considered here.

Figure 2: The mean SFR surface density profiles of TNG50 galaxies (red lines) and MaNGA galaxies (blue lines). The galaxies are separated into 4 stellar mass bins, as denoted in each panel.

It is clear that the \(\Sigma_{\rm SFR}\) profiles of TNG50 galaxies show larger deviations from a pure exponential function than those of MaNGA galaxies. We will further explore this in the next section.

### Quantifying the deviation from the exponential profiles

In this subsection, we try to further quantify the deviations of the \(\Sigma_{\rm SFR}(r)\) profiles of TNG50 and MaNGA galaxies from a pure exponential function. To do this, we define a set of quantities for individual galaxies based on their \(\Sigma_{\rm SFR}(r)\) profiles in the following way. For an individual MaNGA galaxy, we first calculate the cumulative profile of SFR(\(<r\)), i.e. the total SFR that is enclosed within the (de-projected) radius \(r\). This cumulative SFR profile therefore stops at the radius corresponding to the coverage of the MaNGA observation for that individual galaxy, which is denoted as \(r_{\rm max}\). Then we fit the observed cumulative SFR profile with the cumulative function of a pure exponential \(\Sigma_{\rm SFR}(r)\propto\exp(-r/h_{\rm R})\), which can be written as (also see equation 8 in Wang & Lilly 2022b):

\[{\rm SFR}(<r)={\rm SFR}_{\rm tot}\cdot[1-(r/h_{\rm R}+1)\cdot\exp(-r/h_{\rm R})], \tag{1}\]

where \({\rm SFR}_{\rm tot}\) is the _total_ SFR of the whole galaxy, and \(h_{\rm R}\) is the scalelength of the idealised exponential \(\Sigma_{\rm SFR}(r)\). For illustration, we show an example (MaNGA ID: 8326-12701) in the left panel of Figure 4, where the black line is the observed SFR(\(<r\)) and the red line is the best-fit curve of Equation 1. We then define \(\Delta\)SFR as the maximum absolute deviation of the observed SFR(\(<r\)) curve from the fitted one, which is shown as the green line segment in the left panel of Figure 4. The ratio of \(\Delta\)SFR to SFR(\(<r_{\rm max}\)) is then a good parameter with which to quantify the deviation of \(\Sigma_{\rm SFR}\) from a pure exponential function. It is a measure of the fraction of the star formation in the galaxy (or at least of that which is spatially observable) that does _not_ follow a perfect exponential profile. As just noted, the finite area of the MaNGA data does not cover all of the star formation in these galaxies.

Figure 3: The same as Figure 2, but showing the profiles as a function of the normalized radius r/R\({}_{1/2}\) (or the effective radius for MaNGA galaxies). We also plot a normalized SFR surface density, which is computed as the SFR in an area that itself scales as R\({}_{1/2}^{2}\). This ensures that the visual integration of a profile on the \(\Sigma_{\rm SFR}\)-radius diagram reflects the actual integrated quantity in physical terms (see also Wang et al. 2019). In each panel, the blue and red diamonds show the median \(\Sigma_{\rm SFR}\) of that stellar mass bin for the MaNGA and TNG50 galaxies, respectively. The median \(\Sigma_{\rm SFR}\) profiles of MaNGA galaxies are close to exponential functions, and we therefore fit an exponential function to the MaNGA median \(\Sigma_{\rm SFR}\) profile, shown as the blue solid line in each panel.
The right panel of Figure 4 shows the distribution of the ratio of SFR(\(<r_{\rm max}\)) to the fitted SFR\({}_{\rm tot}\). The SFR(\(<r_{\rm max}\))/SFR\({}_{\rm tot}\) distribution peaks at \(\sim\)0.8, which means that the MaNGA observations have typically covered \(\sim\)80% of the total star formation in these galaxies. In order to apply as similar a procedure to the TNG50 galaxies as possible, we therefore define an artificial \(r_{\rm max}\) for the TNG50 galaxies to be the radius within which 80% of their total star formation is enclosed. We then measure \(\Delta\)SFR/SFR(\(<r_{\rm max}\)) for the TNG50 galaxies as for the MaNGA ones.

Figure 5 shows the distributions of \(\Delta\)SFR/SFR(\(<r_{\rm max}\)) for TNG50 galaxies (red histograms) and MaNGA galaxies (blue histograms) for the four stellar mass bins. As shown, the \(\Delta\)SFR/SFR(\(<r_{\rm max}\)) of MaNGA galaxies is systematically less than that of the TNG50 galaxies across the full range of stellar mass that we considered. Specifically, the median \(\Delta\)SFR/SFR(\(<r_{\rm max}\)) values of MaNGA galaxies for the four stellar mass bins are 0.029, 0.039, 0.048 and 0.054 with increasing mass, while those of the TNG50 galaxies are 0.101, 0.093, 0.116 and 0.103, respectively. In general, the median \(\Delta\)SFR/SFR(\(<r_{\rm max}\)) of TNG50 galaxies is typically a factor of 2-3 larger than that of MaNGA galaxies, indicating that TNG50 galaxies show significantly larger deviations of their \(\Sigma_{\rm SFR}\) profiles from a pure exponential function than do MaNGA galaxies.

In a similar way, we show the distribution of \(r_{\rm max}\) for the TNG50 galaxies and MaNGA galaxies in the four stellar mass bins in Figure 6. Since \(r_{\rm max}\) (roughly) corresponds to the radius that contains 80% of the star formation, the value of \(r_{\rm max}\) can be taken to reflect a measure of the size of the star-forming disk. As shown in Figure 6, TNG50 galaxies typically have larger star-forming disks than the MaNGA galaxies for all the stellar mass bins except the highest one. Specifically, the median \(r_{\rm max}\) of MaNGA galaxies is 5.0, 6.1, 8.4, and 12.3 kpc/\(h\) with increasing stellar mass for the four mass bins, while the median \(r_{\rm max}\) of TNG50 galaxies is 8.0, 10.0, 13.5 and 11.0 kpc/\(h\), respectively. Except in the highest mass bin, TNG50 galaxies typically have larger star-forming disks (as defined here by our \(r_{\rm max}\) parameter) by a factor of \(\sim\)1.6 (or 0.2 dex) than MaNGA galaxies with \(\log M_{*}/({\rm M}_{\odot}h^{-1})<11.0\).
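The deviation statistic and the 80% radius introduced above can be evaluated directly from a cumulative SFR profile. A minimal sketch, assuming the radius and cumulative-SFR arrays of one galaxy are already in hand, is:

```python
import numpy as np
from scipy.optimize import curve_fit

def cum_sfr_exponential(r, sfr_tot, h_r):
    """Cumulative SFR(<r) of a pure exponential Sigma_SFR profile (Equation 1)."""
    return sfr_tot * (1.0 - (r / h_r + 1.0) * np.exp(-r / h_r))

def deviation_from_exponential(r, cum_sfr):
    """Fit Equation 1 to a cumulative SFR profile and return the fractional
    maximum deviation Delta_SFR / SFR(<r_max), plus the best-fit parameters."""
    p0 = (cum_sfr[-1], 0.3 * r[-1])                                # rough starting guess
    (sfr_tot, h_r), _ = curve_fit(cum_sfr_exponential, r, cum_sfr, p0=p0)
    delta_sfr = np.max(np.abs(cum_sfr - cum_sfr_exponential(r, sfr_tot, h_r)))
    return delta_sfr / cum_sfr[-1], sfr_tot, h_r

def r_max_80(r, cum_sfr):
    """Radius enclosing 80% of the total star formation, used as the TNG50
    analogue of the MaNGA field of view."""
    return np.interp(0.8 * cum_sfr[-1], cum_sfr, r)
```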
In fact, it is quite common for cosmological simulations to produce larger galaxies than observed, not just in IllustrisTNG (also see Genel et al., 2018). For instance, by investigating the sizes of galaxies in the EAGLE simulation (Schaye et al., 2015), Furlong et al. (2015) found that the predicted mass-size relation is systematically shifted with respect to that of observations, with the simulated galaxies 0.2 dex larger than real ones at a given stellar mass. By comparing the sizes of galaxies from Illustris and SDSS, Bottrell et al. (2017) found that the simulated galaxies are also roughly twice as large (0.3 dex) on average as real galaxies at matched stellar masses.

The general conclusion from this section is that star formation is occurring too far out in the simulated galaxies, causing them to deviate further from pure exponential profiles and giving larger overall sizes. This is presumably because not enough gas is penetrating down to small radii to fuel star formation there. This suggests the need for one or more physical processes to transport the angular momentum of disk gas outward in the simulations, in order to reduce the sizes of the simulated disks.

Figure 4: Left panel: An example to illustrate the definitions of \(r_{\rm max}\), SFR(\(<r_{\rm max}\)), \(\Delta\)SFR and SFR\({}_{\rm tot}\). The black solid line shows the cumulative SFR as a function of radius for an individual MaNGA galaxy (8326-12701), and the red solid line shows the fit of the cumulative SFR with an exponential \(\Sigma_{\rm SFR}\) profile. The \(\Delta\)SFR is defined as the maximum (absolute) deviation of the black line from the red fitted line, shown as the green line segment. The \(r_{\rm max}\) is the maximum radius of the coverage of the MaNGA observation, and SFR(\(<r_{\rm max}\)) is the SFR within the radius \(r_{\rm max}\). The SFR\({}_{\rm tot}\) is the fitted total SFR, i.e. SFR(\(<+\infty\)). Right panel: The distribution of the ratio of SFR(\(<r_{\rm max}\)) to SFR\({}_{\rm tot}\) for MaNGA galaxies. This distribution peaks at SFR(\(<r_{\rm max}\))/SFR\({}_{\rm tot}\sim\) 0.8. We therefore perform a similar analysis for TNG50 galaxies, defining \(r_{\rm max}\) as the radius within which 80% of the star formation is enclosed.

### The stellar surface density profiles

In the previous subsection, we showed that the \(\Sigma_{\rm SFR}(r)\) profiles of TNG50 galaxies show stronger deviations from pure exponential functions than do real MaNGA galaxies. In this subsection, we turn to examine the resulting profiles of stellar surface density (\(\Sigma_{*}(r)\)) for the TNG50 galaxies. Figure 7 shows the \(\Sigma_{*}(r)\) profiles of the TNG50 galaxies for the four stellar mass bins. Interestingly, we find from simple visual inspection that the \(\Sigma_{*}(r)\) profiles typically show very good exponential disks, with a Sersic core (or bulge) in the galactic center. This appears to be in good agreement with observations of the stellar distribution in disk galaxies (e.g. Kent, 1985; Weiner et al., 2001; Pohlen & Trujillo, 2006; Meert et al., 2013). It is interesting that the \(\Sigma_{\rm SFR}(r)\) profiles do not show a good exponential form but the \(\Sigma_{*}(r)\) profiles do, since \(\Sigma_{*}\) might be expected to be the time integration of \(\Sigma_{\rm SFR}\), if there is no radial migration of stars3.
Indeed, Lilly & Carollo (2016) have shown that good exponential stellar disks with a superposed bulge, as well as realistic radial gradients of sSFR with radius, can be produced by a straightforward superposition of (pure) exponential \(\Sigma_{\rm SFR}(r)\), assuming that the evolution of the overall SFR and of the exponential scalelength of \(\Sigma_{\rm SFR}(r)\) follow the relations indicated by observations of real galaxies at different redshifts.

Footnote 3: Significant mergers have already been excluded in the sample selection of TNG50 galaxies since a redshift of 0.4 (see Section 2).

To explore this question further, we examine the importance of the radial migration of stars in two individual galaxies (TNG50 ID: 342447 and 503437). These two galaxies are not special, and can be treated as representative of the TNG50 galaxies, at least for the question considered here. The radial redistribution of stars can be seen by comparing the radial distribution of (all) stars when they are actually formed with the radial distribution of the same set of stars at the current epoch. The left panels of Figure 8 show the \(\Sigma_{*}(r)\) profiles _at the current epoch_ (\(z=0.0\)) for the stellar particles in these two galaxies that were formed prior to a set of different redshifts. The right panels of Figure 8 show the evolution of \(\Sigma_{*}(r)\) for these two galaxies _at different epochs_, which is obtained by tracking their main progenitors through the simulation.

Figure 5: The distribution of \(\Delta\)SFR/SFR(\(<r_{\rm max}\)) for TNG50 (red histograms) and MaNGA (blue histograms) galaxies. In each panel, the red and blue vertical dashed lines indicate the median values of \(\Delta\)SFR/SFR(\(<r_{\rm max}\)) for TNG50 and MaNGA, respectively. As above, galaxies are separated into the four stellar mass bins.

If there were no redistribution of stars after they formed, then these two representations would obviously be identical. As can be seen in the right panels of Figure 8, the size of the main progenitor gradually increases with time for both galaxies. This is consistent with the significant size evolution seen in observations (e.g. Buitrago et al., 2008; Newman et al., 2012; Mosleh et al., 2012; van der Wel et al., 2014; Genel et al., 2018). Specifically, the size of the stellar disk of the main progenitors is very small before \(z=1.5\). However, the current-epoch distribution of those stars that formed before \(z=1.5\) is much more extended than it was when they were actually formed. Some of this could well be due to mergers of the galaxies, but it is noticeable that the effect is seen even at redshifts below \(z=0.4\), when significant mergers have been excluded. This comparison indicates that there is strong radial migration of stars during the evolution of the stellar disk in the simulated TNG50 galaxies.

It is then interesting to understand why the radial migration of stars can result in an exponential stellar disk. As mentioned in the Introduction (Section 1), Elmegreen and Struck (2013) proposed that stellar scattering by encounters with massive clumps can redistribute stars and produce exponential profiles in a self-gravitating system (Wu et al., 2020), which may be due to the fact that the exponential stellar disk corresponds to the maximum-entropy state for the distribution of the specific angular momentum of stars (Herpich et al., 2017). However, it is not clear whether the scattering of stars is as important in real galaxies as in the simulations.
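The left-hand comparison of Figure 8 amounts to selecting stellar particles by formation time and re-binning them at \(z=0\). A schematic version is given below, assuming the face-on radii, current masses and formation scale factors (stored in the snapshots as GFM_StellarFormationTime) have been read in for one galaxy.

```python
import numpy as np

def sigma_star_formed_before(r_kpc, mass_msun, a_form, z_cut, r_max=30.0, dr=1.0):
    """Stellar surface density profile *at the current epoch*, using only the
    stellar particles formed before redshift `z_cut` (i.e. with formation scale
    factor a_form < 1/(1+z_cut)), as in the left panels of Figure 8."""
    old = a_form < 1.0 / (1.0 + z_cut)
    edges = np.arange(0.0, r_max + dr, dr)
    m_annulus, _ = np.histogram(r_kpc[old], bins=edges, weights=mass_msun[old])
    area = np.pi * (edges[1:]**2 - edges[:-1]**2)            # [kpc^2]
    return 0.5 * (edges[1:] + edges[:-1]), m_annulus / area  # [Msun kpc^-2]
```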
One diagnostic of radial redistribution is the radial profile of stellar age, which is likely to be flattened by the radial redistribution of stars. Therefore, we examine the profiles of mass-weighted stellar age for TNG50 galaxies (red lines), and as estimated for MaNGA galaxies (blue lines), in the four stellar mass bins in Figure 9. In each mass bin, we also show the median age profiles of the TNG50 sample and the MaNGA sample as the red and blue diamonds, respectively. The stellar age profiles are taken from Wang et al. (2018), obtained from the output of the STARLIGHT code (Cid Fernandes et al., 2005). In the STARLIGHT fitting, the templates are 45 single stellar populations taken from Bruzual and Charlot (2003), assuming a Chabrier (2003) IMF and a Cardelli et al. (1989) stellar extinction law. Although the stellar ages of MaNGA galaxies obtained from the STARLIGHT fitting are not model-independent, Wang et al. (2018) pointed out that the gradients in stellar age obtained by STARLIGHT for MaNGA galaxies show good consistency with those of the output from pPXF and FIREFLY (Zheng et al., 2017; Li et al., 2018; Goddard et al., 2017).

Figure 6: The distribution of \(r_{\rm max}\) for TNG50 (red histograms) and MaNGA (blue histograms) galaxies. The \(r_{\rm max}\) roughly corresponds to the radius within which 80% of the star formation is enclosed for each galaxy. The red and blue vertical dashed lines indicate the median values of \(r_{\rm max}\) for TNG50 and MaNGA, respectively. As above, galaxies are separated into the four stellar mass bins.

As shown in Figure 9, the overall negative gradients in the stellar age profiles are in good general agreement with an inside-out growth scenario for disk galaxies (e.g. Perez et al., 2013; Li et al., 2015; Ibarra-Medel et al., 2016; Wang et al., 2018), for both TNG50 and MaNGA. However, the TNG50 galaxies show significantly flatter age gradients than do real MaNGA galaxies, across the full range of stellar mass. This supports our previous suspicion that the radial migration of stars in TNG50 galaxies is much stronger than in real galaxies, and that the good exponential distributions of stellar mass that are seen in TNG50 galaxies (despite their less exponential \(\Sigma_{\rm SFR}(r)\) profiles) may plausibly reflect this.

## 4 Gas Inflow, Outflow and Star Formation in the TNG50 Disks

In the previous section, we found that simulated TNG50 galaxies typically show larger sizes and larger deviations from pure exponential \(\Sigma_{\rm SFR}\) profiles than do real MaNGA galaxies. The good exponential profiles of stellar mass density \(\Sigma_{*}\) in TNG50 galaxies are likely to be the result of large-scale radial migration of stars through stellar scattering, which may not be realistic because it produces age gradients that are too flat compared to observations. In this section, we focus on the gas accretion onto gas disks, in order to examine to what extent galaxies in the simulation behave as gas-regulator systems (e.g. Lilly et al., 2013).

### The definition of inflow and outflow rates

There are usually two approaches to computing the inflow or outflow rate: 1) deriving instantaneous mass fluxes based on the gas velocities at a given time, i.e. in one snapshot of the simulation (e.g. Ocvirk et al., 2008; Nelson et al., 2019), and 2) deriving mass fluxes across a boundary by tracking the movement of gas particles between two different snapshots (e.g. Nelson et al., 2013), which necessarily averages the inflow/outflow over the time interval between the snapshots.
In this work, we adopt the latter approach, since our main focus is the gas inflow and outflow on relatively long timescales, up to a few Gyr. We compute the inflow rate and outflow rate of our sample of TNG50 galaxies in the following way. For a given galaxy in the simulation, we first extract the particle information (at different snapshots) for its main progenitor from the merger tree. We then define the average (between any two snapshots) inflow and outflow rate across any given boundary. Specifically, for two snapshots S1 (at redshift \(z1\)) and S2 (at redshift \(z2\), with \(z1>z2\)), the inflowing gas particles within the time interval are defined as those gas particles, plus those stellar particles that are formed between the two snapshots, that appear within the given volume in S2 but are not there in S1. In the same way, the outflowing gas particles are defined as the gas particles that are in a given volume at S1 but not in the corresponding volume at S2. The inflow rate (or outflow rate) is then the summed mass of all inflowing (or outflowing) gas particles divided by the time interval between the two snapshots.

Figure 7: The stellar surface mass density profiles for TNG50 galaxies. As above, we show the result for the four stellar mass bins, as denoted in each panel.

The tracing of gas mass by particle IDs, as adopted here, ignores the change of mass of individual particles due to the mass exchange between them that occurs during the "refinement" and "de-refinement" procedures used in the TNG simulation to maintain the average mass of gas particles within a factor of two of a given value. We have checked that the net effect of these changes of individual particle masses is in fact negligible with respect to the total gas mass, and it may, as here, be ignored, at least for the purpose of this analysis.

The boundary of the volume is defined in the following way. For a given TNG50 galaxy, we first convert the coordinates of all the particles in this galaxy into cylindrical coordinates, with the zero point at the galactic center and the \(z\)-axis along the direction of the disk axis, calculated as above. We calculate the inflow and outflow rates using the above approach across a cylindrical boundary with \(|z|=5\) kpc and a variable radius \(r\). By increasing the radius \(r\), we can therefore obtain the inflow (or outflow) rate of the disk as a function of galactocentric radius.
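The particle-ID bookkeeping described above might be sketched as follows; the ID and mask arrays are placeholders for the particle data of the main progenitor at the two snapshots, and using the mean gas-cell mass rather than the individual particle masses is a simplification.

```python
import numpy as np

def inside_cylinder(r_cyl_kpc, z_cyl_kpc, r_bound_kpc, z_bound_kpc=5.0):
    """Particles inside the cylinder r < r_bound, |z| < z_bound (coordinates
    already centred on the galaxy and aligned with the disk axis)."""
    return (r_cyl_kpc < r_bound_kpc) & (np.abs(z_cyl_kpc) < z_bound_kpc)

def inflow_outflow_rate(ids_gas_s1, in_s1, ids_gas_s2, in_s2,
                        ids_newstars_s2, in_newstars_s2,
                        mean_particle_mass_msun, delta_t_yr):
    """Average inflow and outflow rates between snapshots S1 and S2 from the
    particle-ID bookkeeping described in the text: inflow = gas particles (plus
    stars formed between the snapshots) inside the boundary at S2 but not at S1;
    outflow = gas particles inside at S1 but no longer inside at S2."""
    set_s1 = set(ids_gas_s1[in_s1])
    set_s2 = set(ids_gas_s2[in_s2]) | set(ids_newstars_s2[in_newstars_s2])
    n_in = len(set_s2 - set_s1)
    n_out = len(set_s1 - set_s2)
    # Using the mean gas-cell mass is a simplification; the actual particle
    # masses could be summed instead.
    inflow = n_in * mean_particle_mass_msun / delta_t_yr     # [Msun/yr]
    outflow = n_out * mean_particle_mass_msun / delta_t_yr
    return inflow, outflow        # net inflow rate Phi = inflow - outflow
```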
Figure 8: Illustration of the radial migration of stellar particles with two individual TNG50 galaxies: ID-342447 (top panels) and ID-503437 (bottom panels). Left panels: The radial distribution of \(\Sigma_{*}\) _at the current epoch_ (\(z=0.0\)) for those stellar particles that formed prior to a set of different redshifts. Right panels: The evolution of \(\Sigma_{*}\) for the individual galaxies, obtained by tracing their main progenitors in the merger tree at different snapshots. By comparing the left and right panels, one can infer that the differences are due to the radial migration of stellar particles and to mergers.

For illustration, Figure 10 shows the inflow rate and outflow rate as a function of radius for one particular galaxy (TNG50 ID: 342447) between different pairs of snapshots, as denoted in each panel. In each panel, we show as a blue solid line the net inflow rate as a function of radius, denoted as \(\Phi(r)\) and defined as the inflow rate minus the outflow rate through the cylindrical surface of radius \(r\) described above. For comparison, we also show the cumulative SFR(\(<r\)) as a function of radius (as the red solid lines), computed as the time-averaged (between the two snapshots) SFR within the radius \(r\). In addition, we also show, as a green solid line, the average rate of change of the total gas mass within the radius \(r\), denoted as \(\dot{M}_{\rm gas}(<r)\). Although we set a fixed cylindrical boundary of \(|z|=5\) kpc, the resulting net inflow rate is not sensitive to this choice. We have checked that the net inflow rates generally change by only a few percent when \(|z|\) is varied between 3 kpc and 10 kpc, and the main conclusion from this section therefore does not change. The conservation of mass then implies that these quantities should be related by

\[\dot{M}_{\rm gas}(<r)=\Phi(r)-{\rm SFR}(<r). \tag{2}\]

We verify this equation by showing \(\Phi(r)-{\rm SFR}(<r)-\dot{M}_{\rm gas}(<r)\) as a function of radius as the red dashed line in each panel of Figure 10.

It should be noted that our estimate of the net inflow rate will be affected by stellar mass loss. Although in TNG50 the effects of stellar mass loss are incorporated by increasing the mass of nearby gas particles, the refinement/de-refinement processes ensure that the average mass of gas particles stays approximately constant, and therefore the mass injection from gas return from stars eventually leads (effectively) to the creation of new gas particles that, in our approach, will lead to an over-estimate of the mass inflow rate4. Apart from this effect of stellar mass return, the splitting or merging of gas particles during refinement/de-refinement should not affect the net inflow, because the perturbations of the inflow and outflow, which will arise from splitting and merging respectively, should cancel out. It will be shown in Section 4.2 that this gas return component is small compared with the true inflow of pre-existing gas particles.

Footnote 4: This is also the reason why we can verify the mass conservation in Equation 2.

It should be noted that the SFR we consider throughout this work is the instantaneous formation rate of newly formed stars, rather than the formation rate of long-lived stars, sometimes known as the reduced SFR. Therefore, a comparison between our defined \(\Phi\) and the SFR is appropriate for examining whether the TNG galaxies can be treated as gas-regulator systems; this choice also means that Equation 2 is satisfied. To eliminate this effect, one could subtract the gas return rate, which is \(\sim\)30% of the SFR for the TNG simulation (Pillepich et al., 2018), from our defined \(\Phi\) (and the SFR). This does not change our main conclusions.

Figure 9: The mass-weighted age profiles for TNG50 galaxies (red lines) and MaNGA galaxies (blue lines). The red and blue diamonds show the median age profiles for TNG50 and MaNGA, respectively. As above, we show the result in the four stellar mass bins.

In the top left panel of Figure 10, all the quantities are computed between \(0.0<z<0.4\). However, for the other three panels, the quantities are computed for three shorter intervals, \(0.24<z<0.4\), \(0.11<z<0.24\), and \(0.0<z<0.11\), respectively. Interestingly, the inflow (and outflow) rate is higher by a factor of \(\sim\)1.6 when measured in the narrower time intervals, while the net inflow rate calculated between \(0.0<z<0.4\) appears to be comparable to the ones calculated within the three narrower time intervals (of order 1.5 Gyr).
This suggests that gas recycling is very important in the simulated galaxies, in the sense that a significant number of gas particles are registered as entering the galaxy in one of the shorter time intervals and then registered as leaving in another interval, and are therefore not counted as having crossed the boundary within the longer time interval. We have examined this for the galaxy (ID: 342447) during the epochs after a redshift of 0.4. About 40% of the inflowing gas particles, defined as those appearing between a given pair of snapshots (with mean \(\Delta t\sim\) 170 Myr), have then left the galaxy before the next snapshot, and are therefore classified as outflowing particles between this next pair of snapshots. To avoid this issue, in this work we only use the _net_ inflow rate, computed over the whole interval \(0.0<z<0.4\), in the following analysis.

Figure 10: Illustration of the definition of the net inflow rate with one individual TNG50 galaxy (ID: 342447). Top left panel: the red line shows the cumulative SFR as a function of radius, i.e. SFR(\(<r\)). The blue dashed line shows the inflow rate as a function of radius, the cyan dashed line shows the outflow rate as a function of radius, and the blue solid line shows the net inflow rate as a function of radius. The green line shows the rate of change of the cumulative gas mass as a function of radius. We can check that the net inflow rate within a given \(r\) equals the SFR(\(<r\)) plus the \(\dot{M}_{\rm gas}(<r)\), which is shown by the red dashed line (net inflow rate minus SFR(\(<r\)) and \(\dot{M}_{\rm gas}(<r)\)). We note that all these quantities are measured between the two snapshots at \(z=0.0\) and \(z=0.4\). The other three panels: The same as the top left panel, but with all these quantities measured between two snapshots separated by shorter time intervals, as denoted in each panel.

### Gas accretion supports star formation in galaxies

Comparison of the relative sizes of the net inflow rate, the star-formation rate, and the rate of change of gas mass provides a basic check of the gas-regulator picture, which requires that the rate of change of gas mass be smaller than the other two. Figure 11 shows the net inflow rate \(\Phi(r)\), the cumulative SFR(\(<r\)), and the \(\dot{M}_{\rm gas}(<r)\) as a function of radius for individual TNG50 galaxies in each of the four stellar mass bins. The \(\dot{M}_{\rm gas}(<r)\) of TNG50 galaxies is typically negative across the full range of stellar mass and over the full range of radii. This indicates an overall reduction of gas mass in the disk over the last 4.4 Gyr. In the gas-regulator picture, such a reduction is required in order to be consistent with the cosmic evolution of individual galaxies implied by the evolution of the sSFR of the SFMS (e.g. Noeske et al., 2007; Daddi et al., 2007; Peng et al., 2010; Stark et al., 2013), provided that the star-formation efficiency is more or less constant. As a whole, both \(\Phi(r)\) and SFR(\(<r\)) appear to be larger than the absolute value of \(\dot{M}_{\rm gas}(<r)\). Specifically, we calculate the mean value of \(\Phi/|\dot{M}_{\rm gas}|\) at \(r=\)30 kpc for the four stellar mass bins, and find these to be 5.3, 4.2, 1.7 and 1.5 respectively with increasing stellar mass. This indicates that the gas fuel for star formation in TNG50 galaxies is primarily provided by gas accretion, rather than by consumption of a pre-existing gas reservoir within the galaxies, at least for galaxies with \(\log[M_{*}/({\rm M}_{\odot}h^{-1})]<10.5\).
For galaxies at higher masses, the change of gas mass \(\dot{M}_{\rm gas}\) becomes more significant. We suggest that this is due to the overall reduction of the SFR for massive star-forming galaxies (see Figure 12 below).

To illustrate this more clearly, we directly show the _ratio_ of \(\Phi(r)\) to SFR(\(<r\)) as a function of radius for individual TNG50 galaxies in Figure 12. For each stellar mass bin, we compute the mean value of \(\Phi(r)\)/SFR(\(<r\)), shown as the red solid line. For comparison, we also show, as green horizontal lines, the values of \(\Phi\)/SFR that are expected from the observed cosmic evolution of the SFMS for the corresponding stellar mass bins. We predict this \(\Phi\)/SFR in the following way. Motivated by observations (e.g. Daddi et al., 2007; Elbaz et al., 2011; Stark et al., 2013), we assume that the evolution of the specific SFR, defined as SFR/\(M_{*}\), follows the formula (Lilly and Carollo, 2016; Wang and Lilly, 2022):

\[{\rm sSFR}(M_{*},z)=\frac{0.07}{1-R}\times\left(\frac{M_{*}}{3\times 10^{10}M_{\odot}}\right)^{-0.2}\times(1+z)^{2}\,{\rm Gyr}^{-1}, \tag{3}\]

where \(R\) is the fraction of mass formed in new stars that is subsequently returned to the interstellar medium through winds and supernova explosions. We take \(R=0.4\) for the Chabrier (2003) IMF (see Vincenzo et al., 2016). Based on Equation 3, we can then derive the star formation histories of individual galaxies of any stellar mass at the current epoch. We adopt a typical gas depletion timescale of \(\tau_{\rm gas}=\)5.4 Gyr (including both atomic and molecular gas), taken from Saintonge et al. (2017), which is nearly independent of galaxy mass. Based on the above assumptions, we can then obtain the predicted \(\Phi\)/SFR between redshifts of 0.0 and 0.4 for galaxies of any given stellar mass:

\[\Phi/\langle{\rm SFR}\rangle=1-\frac{\dot{M}_{\rm gas}}{\langle{\rm SFR}\rangle}=1-\frac{\Delta{\rm SFR}}{\Delta t}\times\tau_{\rm gas}/\langle{\rm SFR}\rangle, \tag{4}\]

where \(\langle{\rm SFR}\rangle\) is the mean SFR within the time interval.

Figure 11: The cumulative SFR(\(<r\)), net inflow rate and \(\dot{M}_{\rm gas}(<r)\) as a function of radius for the TNG50 galaxies. As above, galaxies are separated into the four stellar mass bins in showing the result.

As can be seen in Figure 12, the radial profiles of \(\Phi(r)/\)SFR(\(<r\)) are overall flat for individual TNG50 galaxies across the full range of stellar mass. The mean values of \(\Phi(r)/\)SFR(\(<r\)) are slightly less than unity, as expected from the cosmic evolution of the SFR in galaxies. The key requirement of the gas-regulator picture is that the rate of change in gas mass is small compared with the SFR and the net inflow of the system. This is clearly satisfied for TNG50 galaxies, at least for galaxies with \(\log[M_{*}/(\mathrm{M}_{\odot}h^{-1})]<10.5\). Allowing for the cosmic reduction of SFR in massive star-forming galaxies, TNG50 galaxies can therefore be treated as gas-regulator systems in which gas continuously flows through the system, rather than as simple "closed-box" or "leaky-box" scenarios in which a gas reservoir is "consumed" (Lilly et al., 2013). The gas reservoir, which must represent a balance between inflow, star formation and wind-driven outflow, adjusts so as to maintain the SFR close to that required to consume the inflowing gas (Schaye et al., 2010; Bouche et al., 2010; Dave et al., 2011; Lilly et al., 2013; Belfiore et al., 2019).
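As a rough numerical sketch of this prediction, one can integrate Equation 3 backwards in time to obtain the SFR of a main-sequence galaxy at the two epochs, and then combine mass conservation (Equation 2, applied to the whole galaxy) with \(M_{\rm gas}=\tau_{\rm gas}\times{\rm SFR}\). The cosmology object and the simple time-stepping below are illustrative choices rather than the exact procedure used for the green lines in Figure 12.

```python
import numpy as np
from astropy.cosmology import Planck15   # close to the Planck (2016) parameters adopted here

R_RETURN = 0.4   # return fraction for a Chabrier IMF
TAU_GAS = 5.4    # gas depletion timescale in Gyr (Saintonge et al. 2017)

def ssfr_gyr(mstar_msun, z):
    """Equation 3: sSFR(M*, z) in Gyr^-1."""
    return 0.07 / (1.0 - R_RETURN) * (mstar_msun / 3e10) ** -0.2 * (1.0 + z) ** 2

def predicted_phi_over_sfr(mstar_z0_msun, z_start=0.4, n_steps=1000):
    """Predicted Phi/<SFR> between z_start and z=0 for a galaxy that stays on
    the SFMS and has stellar mass `mstar_z0_msun` at z=0."""
    zs = np.linspace(0.0, z_start, n_steps)
    t_gyr = Planck15.age(zs).to_value('Gyr')
    mstar, sfr = mstar_z0_msun, []
    for i, z in enumerate(zs):                       # step backwards in time
        sfr_i = ssfr_gyr(mstar, z) * mstar           # SFR in Msun/Gyr
        sfr.append(sfr_i)
        if i < n_steps - 1:
            mstar -= (1.0 - R_RETURN) * sfr_i * (t_gyr[i] - t_gyr[i + 1])
    sfr = np.array(sfr)
    delta_t = t_gyr[0] - t_gyr[-1]                   # ~4.4 Gyr for z_start = 0.4
    mean_sfr = float(np.mean(sfr))                   # crude time average over the interval
    dmgas_dt = TAU_GAS * (sfr[0] - sfr[-1]) / delta_t   # M_gas = tau_gas * SFR
    return 1.0 + dmgas_dt / mean_sfr                 # Phi = SFR + dM_gas/dt (Equation 2)
```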
The notion of the "gas-regulator" was proposed by Lilly et al. (2013), following the work of Dave et al. (2012). This simple idea of galaxy formation appears to explain a large range of both observational and simulated data. Specifically, Dave et al. (2012) provided an analytic formalism to describe the evolution of the stellar mass, gas mass and metallicity of galaxies, assuming an equilibrium state in which the mass of the gas reservoir is constant with time. This scenario is known as the "bathtub" model (Bouche et al., 2010; Dave et al., 2012). However, relaxing this restriction of zero change in gas mass, Lilly et al. (2013) proposed that the SFR in galaxies is regulated by the instantaneous gas mass, continually adjusting to the inflow rate. This is known as the "gas-regulator" model. In the gas-regulator framework, the SFR emerges naturally as a second parameter in the mass-metallicity relation (Lilly et al., 2013), and therefore provides a natural explanation for the claimed universal (epoch-independent) mass-metallicity-SFR relation (known as the fundamental metallicity relation; Mannucci et al., 2010; Nakajima et al., 2012; Dayal et al., 2013; Salim et al., 2014; Cresci et al., 2019; Curti et al., 2020).

Further, in the same framework of the gas-regulator picture, Wang et al. (2019) found that the dispersion of \(\Sigma_{\mathrm{SFR}}\) across the galaxy population can be quantitatively explained by the response of gas-regulator systems to temporal variations of the gas inflow rate on timescales of a few Gyr. For a given variation of the inflow, the amplitude of the response of the regulator depends on the gas consumption timescale of the system.

Figure 12: The ratio of the net inflow rate to SFR(\(<r\)) as a function of radius for the TNG50 galaxies in the four stellar mass bins. In each panel, the red line shows the mean profile of the TNG50 galaxies of the corresponding stellar mass bin. The green horizontal line shows the value of \(\Phi/\)SFR expected from the cosmic evolution of the SFMS (see text).

Interestingly,
It is then interesting to examine the histories of the inflowing gas particles, especially the evolution of their angular momenta, in order to better understand the reason for the somewhat large disk sizes and non-exponential \(\Sigma_{\rm SFR}\) profiles that are seen in TNG50 galaxies (see Section 3.1). Therefore, we examine the angular momentum (\(J\)) of the gas particles when they are first accreted through our boundary, and any change of \(J\) afterwards. Figure 13 illustrates the evolution of inflowing gas particles for two individual galaxies (TNG50 ID: 342447 and 503437) taken as examples. For individual galaxies, we record all the gas particles that are identified as inflowing, beginning at \(z\sim 0.4\), and then follow these particles in all later snapshots down to \(z\sim 0.0\). For any two adjacent snapshots, the inflowing particles are defined in the same way as in Section 4, with the boundary being set as \(|z|<5\) kpc and \(r<10\) kpc in cylindrical coordinates centered on the galactic center in each of the snapshot. We adopt this same boundary for all the individual galaxies in the analysis of this section. These identified inflowing gas particles can then, at some later time, either leave the volume as outflowing gas particles, be converted into stellar particles through star-formation, or remain as gas particles within the volume. We stress that in this section, we only consider those outflowing, stellar particles or gas particles that entered the boundary as inflowing gas particles at times after \(z\sim 0.4\). We consider the angular momentum for the different type of particles in the following way. The \(J\) of inflowing particles are computed only when the gas particles first appear within the defined boundary of the galaxies. In the subsequent evolution, the \(J\) of outflowing gas particles are computed using the last snapshot in which they were within the defined boundary. We may therefore conceivably underestimate the \(J\) of such particles, if they acquire additional angular momentum immediately before they pass through the boundary. This is a technical limitation that we cannot easily track these particles after they have left the simulated galaxies (without loading the full set of particles of a given snapshot). The \(J\) of the stellar particles are computed in the snapshot in which they first become stellar particles through star-formation. In the left panels of Figure 13, we show the evolution of the cumulative mass of inflowing gas particles as the blue solid line, and the three other components in orange (cumulative mass of outflowing gas), green (instantaneous mass of stellar particles) and red (instantaneous mass of remaining gas particles) solid lines. The first three of these steadily increase with time. As required, the cumulative mass of the inflowing gas particles equals at all times the cumulative mass of the outflow plus the instantaneous mass of the stars and remaining gas. We show the cumulative \(J\) for inflowing gas particles as a function of redshift, the cumulative \(J\) of the outflowing particles, and the instantaneous \(J\) of the stellar and gas particles, in the middle panels of Figure 13, as denoted by different colors. Note that the cumulative inflow is much larger than the mass of stars, and thus of the mass return from stars, so most of the "inflow" is real inflow and not mass-return from stars. Additionally we show the sum of the three last of these (outflow, stars and remaining gas) as a purple line. 
Interestingly, we find this combined angular momentum to be slightly less than (though by \(<\)10%) the \(J\) of the inflowing particles. This indicates a small loss of angular momentum after particles appear as inflow, which may be due to the underestimation of the angular momentum of the outflowing particles discussed above. However, the fraction of angular momentum lost is very small compared to the original \(J\), which further suggests that the processes for transporting the angular momentum of gas particles within the simulation are not very effective.

In the right panels of Figure 13, we show the evolution of the mean (mass-weighted) specific angular momentum, defined as \(j=J/M\), for the inflowing particles, outflowing particles, newly formed stellar particles and remaining gas particles. It should be noted that, unlike the left and middle panels of Figure 13, the \(j\) of the inflowing, outflowing and newly formed stellar particles in the right panels of Figure 13 is not a cumulative quantity, but represents the \(j\) of the particles that are identified (as inflow, outflow, newly formed star particles, or within the gas reservoir) at each epoch. We find that the \(j\) of the inflowing particles does not show significant evolution since a redshift of 0.4, for these two particular galaxies. We have in fact examined the evolution of the \(j\) of the inflowing particles for the sample galaxies, and find that this is true for the majority of TNG50 galaxies. This is also the case for the outflowing, newly formed stellar and remaining gas particles. It appears that the inflowing particles show slightly higher \(j\) than the other three kinds of particles, while the newly formed stellar particles show slightly lower or comparable \(j\) relative to the other three kinds of particles. This suggests that the initial angular momentum of inflowing particles may be weakly related to their fate, in the sense that inflowing gas particles with lower \(j\) have a somewhat larger chance of being converted into a star particle, rather than being ejected as an outflowing particle later. This might be thought to be due to the fact that gas particles of lower \(j\) may fall into the inner regions of galaxies, where the star formation efficiency is relatively high.

Figure 13: The angular momentum evolution of the inflowing gas particles since a redshift of 0.4 for two individual galaxies: ID-342447 (top panels) and ID-503437 (bottom panels). The inflowing gas particles can later be either outflowing gas particles, stellar particles or remaining gas particles (see text). We therefore show the cumulative mass of inflowing gas particles, as well as the mass of the three different components (the cumulative mass of outflowing particles, and the instantaneous mass of stellar and gas particles), in the left column of panels. As a consistency check, the combination of the three components equals the cumulative mass of inflowing gas particles at any epoch. The middle column of panels shows the cumulative angular momentum of the inflowing gas particles, as well as that of the three different components (the cumulative angular momentum of the outflowing gas particles, and the angular momentum of the instantaneous stellar and gas particles). The right column of panels shows the evolution of the mean specific angular momentum of the inflowing gas, outflowing gas, remaining gas and newly formed stellar particles (defined at the corresponding epochs rather than for the cumulative particles).

We then explore the evolution of \(j\) for individual inflowing particles, by comparing the \(j\) of individual particles at the time of their being accreted with the \(j\) of the same particles when they are later converted into stars, when they leave the system as outflowing particles, or at the last snapshot we consider if they are still gas particles. We define the change of \(j\) for individual particles between entry and their eventual fate as \(\Delta j\). We track the inflowing gas particles for all our sample galaxies of TNG50 from a redshift of 0.4 to 0.25.
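The per-particle bookkeeping behind \(\Delta j\) reduces to recording the specific angular momentum of each inflowing particle at entry and again at its eventual fate. A schematic version, with placeholder arrays and with positions and velocities assumed to be in the disk-aligned galaxy frame, is:

```python
import numpy as np

def specific_jz(pos_kpc, vel_kms):
    """z-component of the specific angular momentum, j_z = (r x v)_z, for
    positions and velocities already in the galaxy rest frame with the z-axis
    along the disk axis."""
    return pos_kpc[:, 0] * vel_kms[:, 1] - pos_kpc[:, 1] * vel_kms[:, 0]

def delta_j_statistics(j_entry, j_fate, mass_msun):
    """Change in specific angular momentum between entry and eventual fate
    (star formation, ejection, or the last snapshot considered), together with
    its mass-weighted mean and the implied fractional loss of the initial J."""
    dj = j_fate - j_entry
    mean_dj = np.average(dj, weights=mass_msun)
    frac_j_lost = -np.sum(mass_msun * dj) / np.sum(mass_msun * j_entry)
    return dj, mean_dj, frac_j_lost
```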
Figure 14 shows the mass-weighted distribution of \(\Delta j\) for the inflowing particles, split by whether they end up as outflowing gas particles, stellar particles or remaining gas particles, for all our sample of TNG50 galaxies in the four stellar mass bins. The histograms are normalized by the total number of particles in each category. As can be seen, for all four stellar mass bins the distribution of \(\Delta j\) is nearly symmetric. The mass-weighted average is slightly negative, showing a small offset from zero, as indicated by the vertical dashed line in each panel. These small offsets correspond to the loss of 4.8%, 8.2%, 8.5% and 7.6% of the initial \(J\) of the inflowing gas particles for the four stellar mass bins, respectively. Most strikingly, the distribution of \(\Delta j\) evidently does _not_ depend on the ultimate fate of the particles, i.e. whether they later become outflowing particles or form stars, or remain in the volume as gas particles. In other words, there is no evidence that those gas particles that are eventually formed into stars have preferentially lost angular momentum.

Figure 14: The mass-weighted distribution of the change of specific angular momentum, \(\Delta\mathrm{j}\) (see the detailed definition in the text), for individual inflowing gas particles (which later become either outflowing gas particles, stellar particles or remaining gas particles). We calculate the \(\Delta\mathrm{j}\) for all the sample galaxies of TNG50 by tracing the inflowing particles from \(z=0.4\) to \(z=0.2\) (to save computing time). As above, the results are shown for the four stellar mass bins. In each panel, we indicate the mass-weighted average \(\Delta j\) for the three cases with the corresponding colors, and show the mass-weighted average \(\Delta\mathrm{j}\) for all of the inflowing particles as the vertical dashed line.

It is clear that gas particles in the TNG50 simulation that flow into galaxies, or more specifically flow through the cylindrical boundary considered here, lose only a very small fraction of their angular momentum after they are accreted. In other words, there appears to be a lack of effective mechanisms in the TNG50 simulation to remove angular momentum from the inflowing gas and transport it outwards. We propose that this may be the reason why the simulated star-forming disks are typically larger than real disk galaxies, in TNG50 and in many hydrodynamical simulations (e.g. Furlong et al., 2015; Bottrell et al., 2017; Genel et al., 2018), as well as why the radial profiles of \(\Sigma_{\rm SFR}\) for TNG50 galaxies show larger deviations from an exponential function than those of observations, typically with holes in \(\Sigma_{\rm SFR}\) at the center for galaxies in the two highest mass bins.

Wang & Lilly (2022) have proposed a disk formation model in which the galactic gas disk is viewed as a "modified accretion disk", where coplanar gas inflow, driven by viscous processes in the disk, provides the fuel for star formation. In this scenario, Wang & Lilly (2022) found that magnetic stresses arising from the magneto-rotational instability are the most plausible source of the required viscosity for the formation and maintenance of exponential star-forming disks \(\Sigma_{\rm SFR}(r)\). As with all viscous disks, the viscous stresses remove angular momentum from the inspiralling gas and transport it outwards. Specifically, by linking the magnetic field strength to the local \(\Sigma_{\rm SFR}\) (\(B_{\rm tot}\propto\Sigma_{\rm SFR}^{\alpha}\)), Wang & Lilly (2022) showed that the gas disk can reach a stable steady state with an exponential \(\Sigma_{\rm SFR}\) of reasonable scalelength, as long as the index \(\alpha\) is close to
Specifically, by linking the magnetic field strength to the local \(\Sigma_{\rm SFR}\) (\(B_{\rm tot}\propto\Sigma_{\rm SFR}^{\alpha}\)), Wang & Lilly (2022) showed that the gas disk can reach a stable steady-state with exponential \(\Sigma_{\rm SFR}\) of reasonable scalelength, as long as Figure 14: The mass-weighted distribution of the change of specific angular momentum, \(\Delta\mathrm{j}\) (see the detailed definition in text), for individual inflowing gas particles (being either outflowing gas particles, stellar particles or remaining gas particles). We calculate the \(\Delta\mathrm{j}\) for all the sample galaxies of TNG50 by tracing the inflowing particles from \(z=0.4\) to \(z=0.2\) (to save computing time). As above, the results are shown for the four stellar mass bins. In each panel, we indicate the mass-weighted average \(\Delta j\) for the three cases with the corresponding colors, and show the mass-weighted average \(\Delta\mathrm{j}\) for all of the inflowing particles in the vertical dashed line in each panel. 0.15, the value indicated from spatially-resolved observations of nearby galaxies (e.g. Tabatabaei et al., 2013; Heesen et al., 2014). It should be stressed that the setting up of the exponential profile in \(\Sigma_{\rm SFR}\) is independent of the assumed star-formation law and thus also the gas profile. If this picture is correct, the size (or the angular momentum) of the resultant stellar disk is not closely connected to the angular momentum of the inflowing gas. Instead, the angular momentum of the gas disk is the result of the action of the viscous process within the disk which remove substantial amounts of angular momentum from the inflowing gas. In this scenario, 50%-70% of the initial \(J\) of the gas at the boundary of 10 kpc is lost by the time that gas is eventually formed into stars, for a typical disk scalelength of 2-5 kpc (see figure 14 in Wang and Lilly, 2022). This is much larger than the loss of angular momentum in the simulated galaxies in TNG50 (\(<10\%\)). Transportation of angular momentum by magnetic stress is not implemented in the current magneto-hydrodynamic simulations (e.g. Schaye et al., 2015; Vogelsberger et al., 2014; Nelson et al., 2018). We suggest that doing so is potentially a way to produce smaller disks with exponential profiles of \(\Sigma_{\rm SFR}\), as seen observations. ## 6 Summary In recent years, cosmological hydrodynamical simulations, such as EAGLE and IllustrisTNG, have made great advances in understanding galaxy formation (see Vogelsberger et al., 2020, and references therein). They successfully reproduce many observational facts about the galaxy population, including galaxy colour bimodality, cold gas fraction, statistical properties of galaxy morphology and etc (Trayford et al., 2015; Furlong et al., 2015; Nelson et al., 2018; Genel et al., 2018; Diemer et al., 2019; Donnari et al., 2019; Rodriguez-Gomez et al., 2019). However, it is not clear whether these simulations are currently able to reproduce the detailed internal structure of galaxies. We take advantage of the publicly-released state-of-art simulation, TNG50 (e.g. Pillepich et al., 2018; Nelson et al., 2018), which is the successor of the Illustris simulation with an updated physical model (Vogelsberger et al., 2014; Genel et al., 2014). In this work, we focus on the disks and ask whether these simulated galaxies have exponential stellar and star-forming disks. 
IllustrisTNG has been shown to be in excellent agreement with a wide range of observational data (see details in Nelson et al., 2019). In addition to the general comparison between simulations and observations, in this work we also perform a particle-level analysis of gas evolution, in order to understand whether the simulated galaxies can be treated as gas-regulator systems, and also to investigate how the angular momentum of gas particles changes after they are accreted onto gas disks. We examine a randomly selected sample of main-sequence star-forming galaxies from TNG50, excluding galaxies with significant mergers since a redshift of 0.4. The main results of this analysis are as follows.
* TNG50 star-forming galaxies tend to have larger star-forming disks, and show larger deviations from exponential profiles in the star-formation surface density \(\Sigma_{\rm SFR}\), when compared with MaNGA galaxies. Specifically, the \(\Sigma_{\rm SFR}\) profiles in TNG50 galaxies are often quite flat out to radii of \(1.6R_{1/2}\) across the range of stellar mass, with a central peak for \(\log M_{*}/({\rm M}_{\odot}h^{-1})<10.5\) and a central suppression for galaxies with \(\log M_{*}/({\rm M}_{\odot}h^{-1})>10.5\). Real galaxies have much more nearly exponential profiles in \(\Sigma_{\rm SFR}\).
* The stellar surface density profiles of TNG50 galaxies do, however, show good exponential profiles, in good agreement with observations. By comparing the radial distributions of stars when they are formed at earlier epochs and of the same set of stars at the current epoch, we find that the exponential stellar disks are the result of strong radial migration of stars. However, this strong radial migration may not be realistic, because the radial profiles of mass-weighted age for TNG50 galaxies appear to be significantly flatter than those of real MaNGA galaxies.
* By investigating the net inflow rate of individual TNG50 galaxies between the redshifts \(z\sim 0.4\) and \(z\sim 0\), we find that the net inflow rate is slightly less than, or comparable to, the SFR, while both are considerably larger than the absolute rate of change in the gas mass, at least for galaxies with \(\log M_{*}/({\rm M}_{\odot}h^{-1})<10.5\). Allowing for the cosmic reduction of SFR in massive star-forming galaxies, we conclude that star formation in TNG50 galaxies is being sustained by continuous gas accretion, rather than by the consumption of pre-existing gas in galaxies. As expected, the simulated galaxies can be treated as gas-regulator systems in which gas "flows through" the galaxies on short timescales, with attendant implications for the chemical enrichment and other properties of galaxies.
* By tracking the evolution of individual gas particles that enter the galaxies, we find that there is no significant systematic loss of angular momentum after gas is accreted onto the disks. Furthermore, there is no correlation between the change of angular momentum of a gas particle after it is accreted and its eventual fate, i.e. whether it is later transformed into a star particle, is ejected from the galaxy in an outflow, or remains as a gas particle within the galaxy. These results suggest that the simulations lack an effective mechanism for removing angular momentum from the gas in the disk and transporting it outwards, and that this may account for the somewhat large sizes, and non-exponential profiles, of the star-forming disks in the simulated galaxies.
We have argued in a previous paper (Wang & Lilly, 2022b) that magnetic stresses from the magneto-rotational instability within the disk are the most plausible source of the viscosity that is required to maintain the inward flow of gas and produce an exponential star-forming disk. Adding such viscosity in future simulations may reduce the simulated disk sizes, and also produce more nearly exponential star-forming disks. We thank the referee for their report on our paper, which has helped us improve the paper and better understand some aspects of the TNG simulations. E.W. acknowledges support from the start-up fund of the University of Science and Technology of China (no. KY2030000200). The IllustrisTNG simulations were undertaken with compute time awarded by the Gauss Centre for Supercomputing (GCS) under GCS Large-Scale Projects GCS-ILLU and GCS-DWAR on the GCS share of the supercomputer Hazel Hen at the High Performance Computing Center Stuttgart (HLRS), as well as on the machines of the Max Planck Computing and Data Facility (MPCDF) in Garching, Germany.
2303.17419
Zero loci of nullvectors and skew zero forcing in graphs and hypergraphs
There is interesting internal structure in the nullspaces of graph and hypergraph adjacency matrices, especially for trees, bipartite graphs, and related combinatorial classes. The zero loci of nullvectors, i.e., their zero coordinates' indices, encode information about matchings, coverings, and edges' influence on rank. This set system is the lattice of flats of a ``kernel matroid'', a subsystem of which are the ``stalled'' sets closed under skew zero forcing (SZF), a graph percolation/infection model known to have connections with rank and nullity. For a wide variety of graphs, the lattice of SZF-closed sets is also a matroid, a fact which can be used to obtain a polynomial-time algorithm for computing the skew zero forcing number. This contrasts with the general case, where we show that the corresponding decision problem is NP-hard. We also define skew zero forcing for hypergraphs, and show that, for linear hypertrees, the poset of SZF-closed sets is dual to the lattice of ideals of the hypergraph's nullvariety; while, for complete hypergraphs, the SZF-closed sets and the zero loci of nullvectors are more loosely related.
Joshua Cooper, Grant Fickes
2023-03-30T14:39:55Z
http://arxiv.org/abs/2303.17419v1
# Zero loci of nullvectors and skew zero forcing in graphs and hypergraphs ###### Abstract There is interesting internal structure in the nullspaces of graph and hypergraph adjacency matrices, especially for trees, bipartite graphs, and related combinatorial classes. The zero loci of nullvectors, i.e., their zero coordinates' indices, encode information about matchings, coverings, and edges' influence on rank. This set system is the lattice of flats of a "kernel matroid", a subsystem of which are the "stalled" sets closed under skew zero forcing (SZF), a graph percolation/infection model known to have connections with rank and nullity. For a wide variety of graphs, the lattice of SZF-closed sets is also a matroid, a fact which can be used to obtain a polynomial-time algorithm for computing the skew zero forcing number. This contrasts with the general case, where we show that the corresponding decision problem is NP-hard. We also define skew zero forcing for hypergraphs, and show that, for linear hypertrees, the poset of SZF-closed sets is dual to the lattice of ideals of the hypergraph's nullvariety; while, for complete hypergraphs, the SZF-closed sets and the zero loci of nullvectors are more loosely related. ## 1 Introduction It is classical that the multiplicity of zero as Laplacian eigenvalue of a graph is the number of connected components (see [4]), but the nullity of adjacency matrices is much subtler. Significant attention has been paid to understanding the adjacency nullity of graphs (an important survey is [13]), and to a much lesser extent, hypergraphs (e.g., [6]). The present work is an attempt to understand not just the multiplicity of zero, but its associated nullspace - or, in the case of hypergraphs, its "nullvariety". In particular, we show that the set of adjacency nullvectors can be decomposed into combinatorially informative components according to their "zero loci": the coordinates where the vectors are zero. Inspired in part by work of Sciriha et al - for example, [26] - we begin by examining the minimal zero locus of trees' nullvectors, which we call their "generating set". (Some literature refers to these sets as "core-forbidden vertices.") In Section 2, we show that this set has a number of interesting interpretations from the perspective of maximum matchings, vertex covers, the Dulmage-Mendelsohn/Gallai-Edmonds decomposition, the effect on rank of edge deletion and contraction, and _skew zero forcing_. Graph forcing is a topic that emerged from studying the maximum rank and nullity of real matrices in a special class \(\mathcal{P}\) (symmetric, skew-symmetric, positive semidefinite, etc) whose nonzero entries correspond to edges of a given graph. See [16] for an exploration of this rapidly developing topic. The term "forcing" refers to iteratively applying a color-change rule to a unfilled/filled vertex coloring wherein the filled set spreads until it "stalls". (Some literature, e.g., [18], calls stalled sets "derived sets" and refers to unfilled/filled as white/blue or white/black.) The size of the smallest set which only stalls when the whole graph is filled is the "\(\mathcal{P}-\)zero forcing number", and the size of the largest stalled proper subset is the "failed \(\mathcal{P}\)-zero forcing number". The most common classes considered are "zero forcing" (for \(\mathcal{P}\) the symmetric matrices) and "skew zero forcing" (for \(\mathcal{P}\) the skew-symmetric matrices). For concision, we typically write "SZF" for "skew zero forcing". 
In Section 3, we show that the skew zero forcing rule is a closure operator, and the family of SZF-stalled sets is _often_ the collection of closed sets of a matroid we term the "SZF matroid." Furthermore, the collection of zero loci of nullvectors is always a matroid, the "kernel matroid", a quotient of which is the SZF matroid. We show that these two matroids are identical - a property we term "SZF-completeness" - for trees, cycles of length divisible by 4, complete bipartite graphs, and graphs derived by various operations applied to smaller SZF-complete graphs. We also characterize nonsingular bipartite SZF-complete graphs, extending results of [3], and answer a question of theirs by showing that, while it is NP-hard in general to decide if the skew zero forcing number - the size of a smallest set which SZF-closes to the full vertex set - is at most \(k\), this quantity can be computed in polynomial time if the set system is a matroid. In particular, minimal sets whose closure is the whole vertex set are bases, so the rank of the matroid is the SZF number. (The maximal sets whose closure is _not_ the whole vertex set - "failed sets" - are hyperplanes/coatoms.) In Section 4, we extend the story to hypergraphs, giving a new SZF rule for hypergraphs. (Hogben has studied another choice of rule with nice properties: [15].) We show that SZF-closed sets are a special kind of vertex cover which are in bijection with irreducible components of the nullvariety for linear hypertrees, and describe the nullvarieties' components and SZF-closed families for complete hypergraphs via symmetric polynomials. Finally, in Section 5, we mention several open problems arising from the present work. Throughout the sequel, we refer to the set of adjacency nullvectors of a graph or hypergraph \(G\) by \(\ker(G)\). For graphs, this means that \(\ker(G)=\{\mathbf{v}:A(G)\mathbf{v}=\mathbf{0}\}\), where \(A(G)\) is the adjacency matrix of \(G\); for hypergraphs, we explain the more complicated definition in Section 4. Given a vector \(\mathbf{v}=(v_{u_{1}},\ldots,v_{u_{n}})\in\mathbb{C}^{V(G)}\), the set \(Z(\mathbf{v}):=\{u\in V(G):v_{u}=0\}\) is the _zero locus_ of \(\mathbf{v}\). ## 2 Minimal Zero Loci of Trees In this section, we investigate the minimal zero loci of nullvectors of trees. First, we describe the relationship between edges which are mandatory/optional/forbidden in maximum matchings and vertices which are mandatory/optional/forbidden in minimum vertex covers, via a "thermal decomposition" of trees. In the next subsection, we formally introduce skew zero forcing ("SZF") and show that it gives rise to a closure operator on vertex sets. For trees, it turns out that the SZF-closed sets are exactly the zero loci of nullvectors. In the third subsection, the previous results are connected with the Dulmage-Mendelsohn decomposition in one of our main theorems: a multifaceted characterization of trees' generating sets. Then, the last subsection gives another description of the thermal decomposition in terms of the effect on rank of deletion or contraction of edges. ### Matchings, Coverings, and the Thermal Decomposition Recall that a _matching_ of a graph \(G\) is a set \(F\subseteq E(G)\) of pairwise disjoint edges, and that a _cover_ is a set \(S\subseteq V(G)\) of vertices so that every edge \(e\in E(G)\) has a nonempty intersection with \(S\). A cover is _minimum_ if it has minimum cardinality among all covers, and a matching is _maximum_ if it has maximum cardinality among all matchings.
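As a concrete illustration of these matching and cover notions (this example is ours, not part of the paper), the short Python sketch below computes a maximum matching and a minimum vertex cover of a small tree with networkx; since trees are bipartite, the equality \(|M|=|C|\) invoked repeatedly below can be checked directly. The graph choice and variable names are illustrative.

```python
# A minimal sketch (our own): maximum matching and minimum vertex cover of a
# small tree via networkx.  Trees are bipartite, so Konig's theorem applies.
import networkx as nx
from networkx.algorithms import bipartite

T = nx.path_graph(5)                     # the tree 0-1-2-3-4

left, right = bipartite.sets(T)          # a bipartition of the tree
M = bipartite.maximum_matching(T, top_nodes=left)   # dict: vertex -> partner
C = bipartite.to_vertex_cover(T, M, top_nodes=left)

matching_size = len(M) // 2              # each matched edge is listed twice
print("maximum matching size:", matching_size)       # 2
print("a minimum vertex cover:", sorted(C))           # e.g. [1, 3]
assert matching_size == len(C)           # |M| = |C| for bipartite graphs
```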
A vertex \(v\in V(G)\) is a _pendant vertex_ if it has degree \(1\), and an edge is a _leaf edge_ if it contains a pendant vertex. A matching \(M\)_saturates_ a vertex \(v\) if there exists \(e\in M\) with \(v\in e\), and it is a _perfect_ matching if it saturates all of \(V(G)\). **Definition 2.1**.: _Let \(T\) be a tree. We say the edge \(e\in E(T)\) is **matching-frozen** if either \(e\) is contained in every maximum matching of \(T\) or contained in no maximum matching of \(T\). If the edge \(e\) is not matching-frozen, then we call \(e\)**matching-thawed**, i.e., \(e\) is contained in some but not all maximum matchings of \(T\)._ **Definition 2.2**.: _Let \(T\) be a tree. We say the **thermal decomposition** of \(T\) is a partition of \(E(T)\) into three classes, \((M_{T},F_{T},O_{T})\), so that \(M_{T}\) ("mandatory") is the collection of edges of \(T\) in every maximum matching, \(F_{T}\) ("forbidden") is the collection of edges of \(T\) in no maximum matching, and \(O_{T}\) ("optional") is the collection of edges of \(T\) in some but not all maximum matching of \(T\). Furthermore, let \(F^{\prime}_{T}\subseteq F_{T}\) be the collection of forbidden edges of \(T\) which are incident to at least one edge of \(O_{T}\). Lastly, define \(\mathcal{F}(T)=(V(T),E(T)\setminus F^{\prime}_{T})\)._ **Definition 2.3**.: _Let \(T\) be a tree. We say the vertex \(v\in V(T)\) is **cover-frozen** if \(v\) is contained in every minimum cover of \(T\) or if \(v\) is contained in no minimum cover of \(T\). If the vertex \(v\) is not cover-frozen, then we call \(v\)**cover-thawed**, i.e., if \(v\) is in some, but not all minimum covers of \(T\)._ We recall the following basic fact about matchings and covers. We refer the interested reader to [19] for a great deal more on this subject. **Proposition 2.4**.: _Let \(T\) be a tree, \(M\subseteq E(T)\) a maximum matching of \(T\) and \(C\subseteq V(T)\) a minimum vertex cover of \(T\). Then every edge of \(M\) contains exactly one vertex of \(C\). Furthermore, for every \(v\in C\), there exists \(e\in M\) so that \(v\in e\)._ Proof.: Since \(T\) is a tree, \(|M|=|C|\), by the Konig-Egervary Theorem. Since \(C\) is a cover of \(T\) and \(M\subseteq E(T)\), clearly every edge of \(M\) contains at least one vertex of \(C\). On the other hand, \(M\) is an independent collection of edges, so no two elements of \(M\) contain the same vertex of \(C\). Thus, \(|M|=|C|\). As for the second part of the claim, \(|M|=|C|\) together with the result given by the previous paragraph give the proof. **Proposition 2.5**.: _Let \(T\) be a tree. Then no edge of \(O_{T}\) is incident to an edge of \(M_{T}\). Moreover, if \(v\in V(T)\) then the edges incident to \(v\) consist of one of the following._ * _Exactly one edge incident to_ \(v\) _is in_ \(M_{T}\) _and the rest are in_ \(F_{T}\)_._ * _Some edges incident to_ \(v\) _are in_ \(O_{T}\) _while all others are in_ \(F_{T}\)_._ Proof.: We first justify that no edge of \(O_{T}\) is incident to an edge of \(M_{T}\). If \(e\in M_{T}\), then all other edges incident to \(e\) are elements of \(F_{T}\), since the degree of any vertex in a matching is no more than one. As for the enumeration in the second part of the claim, if \(v\in V(T)\) is incident to an edge of \(M_{T}\), then clearly \(v\) is incident to just one edge of \(M_{T}\). The first part of the proof shows that all other edges incident to \(v\) come from \(F_{T}\). 
If \(v\) is not incident to an edge of \(M_{T}\), then the edges incident to \(v\) fall into the other category. By the preceding proposition, the components of \(\mathcal{F}(T)\) come in exactly two types: either all edges belong to \(O_{T}\), or all edges belong to \(M_{T}\cup F_{T}\). The former matching-thawed components are referred to as bc-trees below (see Theorem 2.7), while the latter matching-frozen components have a perfect matching. **Proposition 2.6**.: _Let \(T\) be a tree. Then \(M\) is a maximum matching of \(T\) if and only if \(M\) restricts to a maximum matching of the components of \(\mathcal{F}(T)\). Additionally, \(C\) is a minimum cover of \(T\) if and only if \(C\) restricts to a minimum cover of the components of \(\mathcal{F}(T)\)._ Proof.: It was already noted that if \(M\) is a maximum matching and \(C\) is a minimum cover, then \(|M|=|C|\). Therefore, the two statements in question are equivalent. We elect to show the matching form of the result. Note that by Proposition 2.5, the components of \(\mathcal{F}(T)\) take two forms, trees whose edges come from \(F_{T}\cup M_{T}\), i.e., trees with a perfect matching, and trees with all edges in \(O_{T}\). \((\Rightarrow):\) Clearly the claim holds for the first kind of component. Let \(X\) be a component of \(\mathcal{F}(T)\) with \(E(X)\subseteq O_{T}\). If \(v\in V(X)\) so that \(\deg_{X}(v)<\deg_{T}(v)\), the definition of \(\mathcal{F}(T)\) gives that edges of \(E(T)\setminus E(X)\) incident to \(v\) are forbidden. Thus, the maximum matching \(M\) of \(T\) restricts to a maximum matching \(M_{X}=M\cap E(X)\) of \(X\), as this would otherwise contradict the maximality of \(M\) in \(T\). \((\Leftarrow)\): Let \(M\) be the union of a maximum matching of each component of \(\mathcal{F}(T)\). Clearly, \(M\) is a matching of \(T\). Suppose \(M^{\prime}\) is a maximum matching of \(T\) with \(|M|<|M^{\prime}|\). Since \(E(T)\setminus E(\mathcal{F}(T))=F_{T}^{\prime}\), \(M^{\prime}\) is a matching of \(\mathcal{F}(T)\). By the other direction of the proof, \(M^{\prime}\) restricts to a maximum matching of every component of \(\mathcal{F}(T)\). Since \(|M|<|M^{\prime}|\), there exists a component \(X\) of \(\mathcal{F}(T)\) so that \(|E(X)\cap M|<|E(X)\cap M^{\prime}|\), contradicting that \(M\) is the union of a maximum matching on all components of \(\mathcal{F}(T)\). Thus, \(M\) is a maximum matching of \(T\). The following theorem appears (stated slightly differently) in work of Harary and Plummer from 1967. **Theorem 2.7**.: _[_14_]_ _The following statements are equivalent for any tree \(T\)._ 1. \(E(T)=M_{T}\cup O_{T}\) _(equivalently:_ \(F_{T}=\emptyset\)_)_ 2. \(T\) _has a unique minimum cover, the cover is independent, and contains no pendant vertex._ 3. _If_ \(v,u_{1},\ldots,u_{p}\) _are the pendant vertices of_ \(T\)_, then_ \(\operatorname{dist}(v,u_{i})\) _is even for all_ \(1\leq i\leq p\)_._ 4. \(T\) _is a bc-tree, i.e., the distance between any pair of pendant vertices is even._ 5. \(T\) _is the block-cutpoint-tree of some connected graph_ \(G\)_._ Some components of \(\mathcal{F}(T)\) have edge sets fully contained in \(O_{T}\). Thus, the previous theorem implies such a component is a bc-tree, a fact we use repeatedly below. **Proposition 2.8**.: _Let \(T\) be a bc-tree with unique minimum cover \(C\). Then a vertex \(v\in V(T)\) is saturated in every maximum matching of \(T\) if and only if \(v\in C\)._ Proof.: Clearly, the backwards implication is given by Proposition 2.4.
For the forward implication, suppose \(v\in V(T)\) is saturated in every maximum matching of \(T\). Moreover, by way of contradiction, suppose \(v\notin C\). Since \(v\notin C\) and \(C\) is a cover of \(T\), \(N(v)\subseteq C\). Consider rooting \(T\) at \(v\). Since \(T\) is a bc-tree, the pendant vertices of \(T\) are either all at an even distance from \(v\) or all at an odd distance from \(v\). Moreover, assuming the root occurs at height zero, \(C\) contains all vertices at odd heights. Thus, since \(C\) contains no pendant vertices, the pendant vertices of \(T\) rooted at \(v\) occur at even heights. Since the height of a vertex in \(T\) is the same as its distance from \(v\), \(v\) is at an even distance from every pendant vertex of \(T\). Let \(M\) be a maximum matching of \(T\), and let \(P\) be a maximal alternating path in \(T\) with respect to edges in \(M\) so that \(v\) is a pendant vertex of \(P\) and the edge of \(P\) incident to \(v\) is contained in \(M\). Since every vertex at odd height is saturated in \(M\), \(P\) terminates at a pendant vertex, \(\ell\), of \(T\). Since \(\operatorname{dist}(v,\ell)\) is even, the last edge of \(P\) does not belong to \(M\). Thus, \(M^{\prime}:=(M\setminus E(P))\cup(E(P)\setminus M)\) is a matching of \(T\) satisfying \(|M|=|M^{\prime}|\) and \(v\) is unsaturated in \(M^{\prime}\), a contradiction. **Proposition 2.9**.: _Let \(T\) be a tree. If \(X\) is a bc-tree component of \(\mathcal{F}(T)\) containing vertex \(v\) satisfying \(\deg_{T}(v)>\deg_{X}(v)\), then \(v\) is contained in the unique minimum cover of \(X\)._ Proof.: Since \(X\) is a bc-tree, \(X\) contains a unique minimum cover, \(C\subseteq V(X)\), and Proposition 2.5 gives that \(|E(X)|>1\), so \(|C|>0\). Suppose \(v\in V(X)\) with \(\deg_{T}(v)>\deg_{X}(v)\). Proposition 2.5 gives the existence of an edge \(vu=e\in E(T)\) with \(e\in F_{T}\). By way of contradiction, suppose \(v\) is not in the unique minimum cover of \(X\). By Proposition 2.8, there exists a maximum matching \(M_{X}\) of \(X\) which leaves \(v\) unsaturated. Moreover, Proposition 2.6 implies there exists a maximum matching \(M_{T}\) of \(T\) so that \(M_{T}\cap E(X)=M_{X}\). Since \(e\notin M_{T}\) and \(v\) is unsaturated in \(M_{T}\), \(u\) is saturated in \(M_{T}\), since otherwise \(M_{T}\cup\{e\}\) would be a larger matching of \(T\). Let \(e^{\prime}\) be the edge of \(M_{T}\) incident to \(u\). Then \((M_{T}\setminus\{e^{\prime}\})\cup\{e\}\) is a maximum matching of \(T\), contradicting that \(e\in F_{T}\), completing the proof. **Lemma 2.10**.: _Let \(T\) be a tree with a perfect matching. Then \(T\) contains two minimum covers, \(C_{1}\) and \(C_{2}\), which partition \(V(T)\)._ Proof.: Let \(M\) be a perfect matching of \(T\). Root \(T\) at a vertex \(v\). Let \(C_{1}\) be the vertices of \(T\) at odd heights of the tree, and \(C_{2}\) the vertices of \(T\) at even heights. Clearly, \(C_{1}\) and \(C_{2}\) are vertex covers of \(T\), since every edge of \(T\) contains one vertex from each height parity, so \(|C_{1}|,|C_{2}|\geq|M|\). On the other hand, since every edge of \(M\) contains one vertex from each height parity class, no edge of \(M\) has both vertices in either cover, so \(|C_{1}|,|C_{2}|\leq|M|\). Therefore, \(|C_{1}|=|C_{2}|=|M|\), so both covers are minimum covers. The observation that \(C_{1}\) and \(C_{2}\) partition \(V(T)\) is clear. **Theorem 2.11**.: _Let \(T\) be a tree._ 1. \(v\in V(T)\) _is cover-frozen if and only if_ \(v\) _is not incident to any edges of_ \(M_{T}\)_._ 2.
\(v\in V(T)\) _is cover-thawed if and only if every edge incident to_ \(v\) _is matching-frozen._ 3. \(e\in E(T)\) _is matching-thawed if and only if_ \(e\) _is incident to two cover-frozen vertices, one in every minimum cover and the other omitted from every minimum cover._ 4. \(e\in E(T)\setminus F_{T}^{\prime}\) _is matching-frozen if and only if_ \(e\) _is incident to two cover-thawed vertices._ Proof.: We start with the proof of (1). (\(\Rightarrow\)): If \(v\in V(T)\) is cover-frozen, then \(v\) is contained in a component of \(\mathcal{F}(T)\) which is a bc-tree, by the remark after Proposition 2.5 combined with Lemma 2.10. Another application of Proposition 2.5 completes the argument. (\(\Leftarrow\)): If \(v\) is not incident to any edges of \(M_{T}\), then again, \(v\) is contained in a component of \(\mathcal{F}(T)\) which is a bc-tree. Proposition 2.6 completes the proof. Note that (2) is merely the contrapositive of (1) since cover-thawed and cover-frozen vertices partition \(V(T)\), and Proposition 2.5 implies the edges of \(T\) incident to a vertex \(v\in V(T)\) are either (1) some in \(O_{T}\) and some in \(F_{T}\), or (2) one in \(M_{T}\) and the rest in \(F_{T}\). Thus, a vertex \(v\) being incident to an edge of \(M_{T}\) implies the collection of edges incident to \(v\) falls into type (2), meaning every edge incident to \(v\) is matching-frozen. Now we prove (3). \((\Rightarrow)\): If \(e\in E(T)\) is matching-thawed, then \(e\) is contained in a component of \(\mathcal{F}(T)\) which is a bc-tree. Proposition 2.6 completes the proof, noting that the minimum cover of a bc-tree is independent. \((\Leftarrow)\): We prove this direction by cases. If \(e\) is incident to two cover-frozen vertices, then there are two cases. Either \(e\) is contained in a component of \(\mathcal{F}(T)\) which is a bc-tree, in which case the desired result holds, or \(e\in F_{T}^{\prime}\). If \(e\in F_{T}^{\prime}\), clearly \(e\) is matching-frozen. By the definition of \(F_{T}^{\prime}\), at least one endpoint of \(e\) is incident to edges of \(O_{T}\). Without loss of generality, suppose \(e=xy\) and \(x\) is incident to edges of \(O_{T}\). Then \(x\) is contained in a component of \(\mathcal{F}(T)\) which is a bc-tree, so Proposition 2.9 gives that \(x\) is a cover vertex. If \(y\) is incident to edges of \(O_{T}\), then \(y\) is also a cover vertex. On the other hand, if \(y\) is not incident to edges of \(O_{T}\), then \(y\) is contained in a component of \(\mathcal{F}(T)\) which has a perfect matching. Lemma 2.10 and Proposition 2.6 together imply \(y\) is not cover-frozen, providing a contradiction. Now we prove (4). \((\Rightarrow)\): Suppose \(e\in E(T)\setminus F_{T}^{\prime}\) is matching-frozen. Then \(e\) is contained in a component of \(\mathcal{F}(T)\) which has a perfect matching. Thus, Lemma 2.10 and Proposition 2.6 together imply the endpoints of \(e\) are cover-thawed. \((\Leftarrow)\): Now suppose \(e\in E(T)\) is incident to two cover-thawed vertices. By Proposition 2.9, \(e\notin F_{T}^{\prime}\), so \(e\) is contained in a component of \(\mathcal{F}(T)\) which has a perfect matching. Since every edge of a perfectly matched component of \(\mathcal{F}(T)\) lies in \(M_{T}\cup F_{T}\), \(e\) is matching-frozen, completing the proof.
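To make the thermal decomposition of Definitions 2.1 and 2.2 concrete, here is a small brute-force sketch (our own illustration, not code from the paper) that classifies the edges of a tiny tree as mandatory, optional, or forbidden by enumerating all of its maximum matchings; it is exponential-time and only meant for very small examples.

```python
from itertools import combinations
import networkx as nx

def thermal_decomposition(T):
    """Brute-force (M_T, O_T, F_T) for a small tree T: an edge is mandatory if
    it lies in every maximum matching, forbidden if it lies in none, and
    optional otherwise."""
    edges = list(T.edges())
    # Enumerate all matchings by checking every subset of edges for disjointness.
    matchings = []
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            verts = [v for e in subset for v in e]
            if len(verts) == len(set(verts)):      # pairwise disjoint edges
                matchings.append(set(subset))
    nu = max(len(m) for m in matchings)            # the matching number
    maxima = [m for m in matchings if len(m) == nu]
    M_T = {e for e in edges if all(e in m for m in maxima)}
    F_T = {e for e in edges if all(e not in m for m in maxima)}
    O_T = set(edges) - M_T - F_T
    return M_T, O_T, F_T

# Example: the 5-vertex path 0-1-2-3-4.  Every edge lies in some but not all
# maximum matchings, so M_T and F_T are empty and all four edges are optional.
T = nx.path_graph(5)
print(thermal_decomposition(T))
```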
### Skew Zero Forcing **Definition 2.12**.: _[_18_]_ _Let \(G=(V,E)\) be a graph._ * _A subset_ \(S\subseteq V\) _defines an initial coloring by filling all vertices of_ \(S\) _and leaving all the vertices not in_ \(S\) _unfilled._ * _The skew zero forcing rule says: If a vertex_ \(v\in V\) _has exactly one unfilled neighbor,_ \(w\)_, change the color of_ \(w\) _to filled. In this case we say that_ \(v\) _forces_ \(w\)_._ * _The skew derived set of an initial filled set_ \(S\) _is the result of applying the skew zero forcing rule until no more changes are possible._ Given a graph \(G\) and \(S\subseteq V(G)\), define a _skew zero forcing closure_ of \(S\) (SZF-closure of \(S\)) to be a skew derived set of the initial coloring \(S\). **Proposition 2.13**.: _Let \(G\) be a graph and \(S\subseteq V(G)\). If \(S_{1}\) and \(S_{2}\) are skew derived sets of \(S\), then \(S_{1}=S_{2}\)._ Proof.: Suppose \(S_{1}\) and \(S_{2}\) are two closures of \(S\) so that \(x\in S_{1}\). Since \(S\subseteq S_{1}\cap S_{2}\) it suffices to assume \(x\notin S\). If there exists a vertex \(v\in V(G)\) so that \(x\in N(v)\) and \(N(v)\setminus S=\{x\}\), then \(x\in S_{2}\), as no sequence of applications of the skew zero forcing rule can increase the number of unfilled neighbors of \(v\). If this is not the case, there exist sequences of vertices \(\{v_{i}\}_{i=1}^{t},\{x_{i}\}_{i=1}^{t}\subseteq V(G)\) so that \(v_{i}\) forces \(x_{i}\) for \(1\leq i\leq t\) and then \(v_{t+1}:=v\) forces \(x_{t+1}:=x\) in \(S_{1}\). By the previous paragraph, \(x_{1}\in S_{2}\), since \(v_{1}\) has the property that \(x_{1}\in N(v_{1})\) and \(N(v_{1})\setminus S=\{x_{1}\}\). Let \(S^{1}=S\cup\{x_{1}\}\), so that \(S^{1}\subseteq S_{2}\). Recursively define \(S^{i}=S^{i-1}\cup\{x_{i}\}\) for \(2\leq i\leq t+1\). Suppose also that there exists \(k\in\mathbb{Z}\) so that \(1\leq k\leq t\) and \(S^{k}\subseteq S_{2}\). Then \(x_{k+1}\in S_{2}\), since the construction of \(S_{1}\) gives that \(v_{k+1}\) has the property that \(x_{k+1}\in N(v_{k+1})\) and \(N(v_{k+1})\setminus S^{k}=\{x_{k+1}\}\), so that \(x_{k+1}\) is eventually added to \(S_{2}\) if it is not already an element of \(S^{k}\subseteq S_{2}\). By induction, \(x=x_{t+1}\in S_{2}\), so \(S_{1}\subseteq S_{2}\); by symmetry, \(S_{2}\subseteq S_{1}\), completing the proof. Thus, _the_ skew zero forcing closure of a set \(S\subseteq V(G)\) is well-defined. We denote it by \(\overline{S}\). If \(S=\overline{S}\), then we say that \(S\) is skew zero forcing _closed_, which some literature refers to as "stalled". This is a meaningful term for any closure operator; see the beginning of Section 3 for more. We sometimes refer to any set \(S\subseteq V(G)\) so that \(\overline{S}=V(G)\) as a "skew zero forcing set".

Figure 1: A uniformly random tree \(T\) on 30 vertices with thermal decomposition: solid = mandatory (\(M_{T}\)), dashed = optional (\(O_{T}\)), dotted = forbidden (\(F_{T}\)). Filled vertices are the generating set.

**Proposition 2.14**.: _Let \(G\) be a graph. If \(A,B\subseteq V(G)\) so that \(A\subseteq B\), then \(\overline{A}\subseteq\overline{B}\)._ Proof.: Let \(v_{1},\ldots,v_{t}\in V(G)\) be the vertices of \(\overline{A}\setminus A\), in order of inclusion via repeated application of the skew zero forcing rule. Further, let \(u_{1},\ldots,u_{t}\in V(G)\) so that the skew zero forcing rule applied to \(u_{i}\) resulted in the inclusion of vertex \(v_{i}\). By induction, we show that \(\{v_{i}\}_{i=1}^{t}\subseteq\overline{B}\). Let \(j\in\mathbb{Z}\) so that \(1\leq j\leq t\). If \(j=1\), then all neighbors of \(u_{1}\) except \(v_{1}\) are contained in \(A\).
Since \(A\subseteq B\), all neighbors of \(u_{1}\) except \(v_{1}\) are in \(B\). Regardless of whether \(v_{1}\) is contained in \(B\) or not, it is clear that \(A\cup\{v_{1}\}\subseteq\overline{B}\). Now suppose there exists \(k\in\mathbb{Z}\) so that \(k\geq 1\) and \(A\cup\{v_{i}\}_{i=1}^{k}\subseteq\overline{B}\). We consider \(v_{k+1}\). By the definition of the skew zero forcing rule, all neighbors of \(u_{k+1}\) except \(v_{k+1}\) are contained in \(A\cup\{v_{i}\}_{i=1}^{k}\). By the induction hypothesis, \(A\cup\{v_{i}\}_{i=1}^{k}\subseteq\overline{B}\), so all neighbors of \(u_{k+1}\) except (possibly) \(v_{k+1}\) are contained in \(\overline{B}\). Thus, \(v_{k+1}\in\overline{B}\), completing the proof of the desired inclusion. **Proposition 2.15**.: _Let \(T\) be a tree. The set \(S\subseteq V(T)\) is skew zero forcing closed if and only if there exists \(\mathbf{x}\in\ker(T)\) so that the zeros of \(\mathbf{x}\) occur exactly at the vertices of \(S\)._ Proof.: (\(\Leftarrow\)): This implication is clear. (\(\Rightarrow\)): Let \(S\subseteq V(T)\) be a skew zero forcing closed set. Consider rooting \(T\) at a pendant vertex \(\ell\). We construct a nullvector \(\mathbf{x}\) with entries \(x_{v}\) for \(v\in V(T)\) by iteratively working through the tree \(T\). Start by assigning zeros to coordinates corresponding to vertices of \(S\), so let \(x_{v}=0\) for each \(v\in S\). Now we choose values for all nonzero coordinates of \(\mathbf{x}\). If \(\ell\notin S\), let \(x_{\ell}=1\). Furthermore, let \(v_{1}\) be the unique neighbor of \(\ell\) (note that \(v_{1}\) is the only vertex of \(T\) at height one). Since \(S\) is skew zero forcing closed, \(v_{1}\in S\), so \(x_{v_{1}}=0\). Let \(v\in V(T)\) be at height \(h\geq 2\), so that \(v\) is not a pendant vertex, and if \(u\in V(T)\) is at height less than \(h\), \(x_{u}\) has already been assigned. Let \(w\) be the unique neighbor of \(v\) at height \(h-1\). If \(x_{w}=0\), then either all neighbors of \(v\) are in \(S\) or at least two are not. If the former is the case, \(x_{u}=0\) for all neighbors \(u\) of \(v\) at height \(h+1\). If the latter is the case, let \(N(v)\setminus S=\{u_{1},u_{2},\ldots,u_{m}\}\). Then, define \(x_{u_{1}}=1\) and \(x_{u_{i}}=-1/(m-1)\) for \(2\leq i\leq m\). In either case, we have that \(\sum_{w:vw\in E(T)}x_{w}=0\). The other case considers if \(x_{w}\neq 0\). In this case, we note that \(x_{w}\) has already been assigned. Let \(N(v)\setminus S=\{u_{1},u_{2},\ldots,u_{m}\}\) and define \(x_{u_{i}}=-x_{w}/m\). Thus \(\sum_{w:vw\in E(T)}x_{w}=0\). Note that using this recursive algorithm, it is possible to populate all entries of \(\mathbf{x}\). Moreover, the above argument shows \(\sum_{w:vw\in E(T)}x_{w}=0\) for all vertices \(v\) of \(T\) except pendant vertices. Since pendant vertices have a unique neighbor and \(S\) is skew zero forcing closed, we have that \(\sum_{w:vw\in E(T)}x_{w}=0\) for all pendant vertices \(v\in V(T)\) as well, completing the proof. Note that the backwards implication above holds for any graph, not just trees: if \(S\subseteq V(G)\) is the zero locus of some nullvector \(\mathbf{v}\), then \(S\) is SZF-closed. Indeed, if any vertex \(x\) had exactly one neighbor \(y\) for which \(v_{y}\neq 0\), i.e., \(y\not\in S\), then the \(x\) coordinate of \(A(G)\mathbf{v}\) would also be nonzero, a contradiction. This is made more precise in Proposition 3.11.
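The following sketch (again our own, with illustrative names) implements the skew zero forcing closure of Definition 2.12 and, in the spirit of Proposition 2.15, compares the SZF-closure of the empty set for a small tree against the zero locus of a numerically computed nullvector; scipy is assumed to be available.

```python
import networkx as nx
from scipy.linalg import null_space

def szf_closure(G, S):
    """Skew zero forcing closure: repeatedly, if any vertex (filled or not)
    has exactly one unfilled neighbor, fill that neighbor."""
    filled = set(S)
    changed = True
    while changed:
        changed = False
        for v in G:
            unfilled = [w for w in G[v] if w not in filled]
            if len(unfilled) == 1:
                filled.add(unfilled[0])
                changed = True
    return filled

# Example: the 5-vertex path 0-1-2-3-4.
T = nx.path_graph(5)
print(sorted(szf_closure(T, set())))   # [1, 3]: the SZF-closure of the empty set

# The nullspace of this tree's adjacency matrix is spanned by (1, 0, -1, 0, 1),
# whose zero locus is exactly {1, 3}, as Proposition 2.15 predicts.
A = nx.to_numpy_array(T, nodelist=sorted(T))
print(null_space(A).round(3))
```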
### Dulmage-Mendelsohn Decomposition and Characterizing Generating Sets **Definition 2.16**.: [9, 21] _Let \(G\) be a bipartite graph, \(M\) a maximum-cardinality matching in \(G\), and \(V_{0}\) the set of vertices of \(G\) unsaturated by \(M\) (the "free vertices"). Then \(G\) can be partitioned into three parts_ * \(E\) _- the vertices reachable from_ \(V_{0}\) _by an_ \(M\)_-alternating path of even length._ * \(O\) _- the vertices reachable from_ \(V_{0}\) _by an_ \(M\)_-alternating path of odd length._ * \(U\) _- the vertices unreachable from_ \(V_{0}\) _by an_ \(M\)_-alternating path._ It is well-known that the Dulmage-Mendelsohn decomposition is a special case of the Gallai-Edmonds decomposition, and these decompositions have highly useful properties when considering maximum matchings and minimum covers of graphs. **Theorem 2.17** ([19]).: _If \(M\) and \(N\) are two maximum-matchings of bipartite graph \(G\), then \(M\) and \(N\) define the same \((U,E,O)\) decomposition._ For any graph \(G\), it will be convenient to define the subset \(\mathcal{V}(S)\) of \(\ker(G)\)_generated_ by a set \(S\subseteq V(G)\). This definition is extended in Section 4 to hypergraphs. **Definition 2.18**.: _If \(S\subseteq V(G)\), then we denote by \(\mathcal{V}^{G}(S)\) the subspace \(\{\mathbf{v}\in\ker(G):x\in S\Rightarrow v_{x}=0\}\). If \(X\subseteq\ker(G)\), then we say that \(S\) "generates" \(X\) if \(\mathcal{V}^{G}(S)=X\)._ Note that, given a set \(X\subseteq\ker(G)\), if there is a set \(S\subseteq V(G)\) which generates \(X\), then there is a maximal set which does so: \(\{x:(\forall\mathbf{v}\in X)(v_{x}=0)\}\), the intersection of all zero loci of vectors in \(X\), which we refer to as "the" generating set of \(X\). **Theorem 2.19**.: _Let \(T\) be a tree, and \(S\subseteq V(T)\). Then the following are equivalent._ 1. \(\mathcal{V}^{T}(S)=\ker(T)\)_, i.e.,_ \(S\) _is the generating set for the nullspace of_ \(T\)_, i.e.,_ \(S\) _is the set of common indices of zeros of all nullvectors._ 2. \(S\) _is the skew zero forcing closure of_ \(\emptyset\)_._ 3. \(S\) _is the union of all vertices in matching-frozen components of_ \(\mathcal{F}(T)\) _and the unique minimum cover of matching-thawed components of_ \(\mathcal{F}(T)\)_._ 4. \(S\) _is the union of all minimum covers of_ \(T\)_._ 5. \(S\) _is the intersection of all sets of saturated vertices in maximum matchings of_ \(T\)_._ 6. \(S=U\cup O\) _in the Dulmage-Mendelsohn Decomposition of_ \(T\) Proof.: \(2\Leftrightarrow 5\): By Corollary 3.14 (which does not use results from outside subsection 3.3; see discussion preceding the proof), the sets unsaturated by maximum matchings are precisely the minimal sets \(S\subseteq V(T)\) so that \(\overline{S}=V(T)\). Therefore, if \(M\) is a maximum matching of \(T\), and \(X_{M}\) is the collection of vertices unsaturated by \(M\), then the minimality of the skew zero forcing set \(X_{M}\) gives \(X_{M}\cap\overline{\emptyset}=\emptyset\), since \(\overline{X_{M}\setminus\overline{\emptyset}}\supseteq(X_{M}\setminus \overline{\emptyset})\cup\overline{\emptyset}=V(T)\) by Proposition 2.14. Thus, \(\overline{\emptyset}\subseteq V(M)\) for every maximum matching \(M\), so if \(S\subseteq V(T)\) is the collection of vertices saturated by every maximum matching, \(\overline{\emptyset}\subseteq S\). Now, suppose \(x\not\in\overline{\emptyset}\), and let \(M\) be a maximum matching. If there is a maximum matching \(M\) which does not saturate \(x\), then \(x\not\in S\). 
Suppose \(x_{0}:=x\) is saturated in \(M\); we construct a maximal \(M\)-alternating path \(P=x_{0}x_{1}x_{2}\ldots x_{m}\) of even length starting with \(x\) and the edge \(xx_{1}\in M\) so that \(x_{2t}\not\in\overline{\emptyset}\) for each \(t\geq 0\). We claim \(x_{m}\) is unmatched: otherwise, \(m\) is odd, but then \(x_{m}\) has only one neighbor - namely, \(x_{m-1}\) - outside the set \(\overline{\emptyset}\), contradicting that \(\overline{\emptyset}\) is SZF-closed. Thus, \(M^{\prime}=(M\setminus E(P))\cup(E(P)\setminus M)\) is a maximum matching which does not saturate \(x\), so \(x\not\in S\). We may conclude that \(S\subseteq\overline{\emptyset}\). \(1\Rightarrow 2\): Since \(S\) is the collection of common zeros of all nullvectors of \(T\), \(S\) is clearly skew zero forcing closed by the remark following Proposition 2.15. Since \(\emptyset\subseteq S\), \(\overline{\emptyset}\subseteq\overline{S}=S\). It remains to show \(S\subseteq\overline{\emptyset}\). Since \(S\) is the generating set for \(\ker(T)\), \(v_{s}=0\) for every \(\mathbf{v}\in\ker(T)\) and every \(s\in S\). By definition, \(\mathcal{V}^{T}(\overline{\emptyset})\subseteq\ker(T)\), so this inclusion implies that \(v_{s}=0\) for every \(\mathbf{v}\in\mathcal{V}^{T}(\overline{\emptyset})\) and every \(s\in S\). Thus, \(S\subseteq\overline{\emptyset}\). \(2\Rightarrow 1\): Let \(S=\overline{\emptyset}\), and let \(G\) be the generating set for the nullspace of \(T\). By Proposition 2.15, \(\overline{\emptyset}\) is the zero locus of some nullvector of \(T\). Thus, since \(G\) is the collection of all coordinates which are zero for every nullvector of \(T\), \(G\subseteq\overline{\emptyset}\). On the other hand, Proposition 2.15 implies that \(G\) is skew zero forcing closed, meaning \(G=\overline{G}\). Thus, Proposition 2.14 and \(\emptyset\subseteq G\) give \(S=\overline{\emptyset}\subseteq\overline{G}=G\), completing the proof of \(S=G\). \(3\Leftrightarrow 4\): Let \(S\) be the union of all vertices in matching-frozen components of \(\mathcal{F}(T)\) and the unique minimum cover of matching-thawed components of \(\mathcal{F}(T)\). Recall from the comment after Proposition 2.5 that components of \(\mathcal{F}(T)\) are each either a bc-tree or have a perfect matching. Moreover, by Theorem 2.11, the matching-frozen components of \(\mathcal{F}(T)\) are those components with a perfect matching, since every vertex of a perfectly matched component of \(\mathcal{F}(T)\) is incident only to matching-frozen edges. Similarly, Theorem 2.11 implies that the matching-thawed components of \(\mathcal{F}(T)\) are the bc-tree components, since every vertex of a bc-tree component of \(\mathcal{F}(T)\) is not incident to any edge of \(M_{T}\). Now, let \(\mathcal{C}\) be the collection of all minimum covers of \(T\). By Proposition 2.6, any cover in \(\mathcal{C}\) restricts to a minimum cover of each component of \(\mathcal{F}(T)\). Let \(X\) be a component of \(\mathcal{F}(T)\). If \(X\) is a bc-tree, then Theorem 2.7 implies \(X\) has a unique minimum cover. Proposition 2.6 further implies \(C\cap V(X)\) is the unique minimum cover in \(X\) for any \(C\in\mathcal{C}\). Thus, \(S\cap V(X)=(\bigcup\mathcal{C})\cap V(X)\) in this case for the bc-tree/matching-thawed components of \(\mathcal{F}(T)\). On the other hand, if \(X\) contains a perfect matching, Lemma 2.10 gives the existence of two covers \(C_{1},C_{2}\in\mathcal{C}\) so that \((C_{1}\cap V(X))\cup(C_{2}\cap V(X))=V(X)\).
Thus, \(S\cap V(X)=(\cup\mathcal{C})\cap V(X)\) in this case for the perfect matching/matching-frozen components of \(\mathcal{F}(T)\). \(4\Leftrightarrow 5\): Let \(\mathcal{C}\) be the collection of all minimum covers of \(T\), and define \(S=\bigcup\mathcal{C}\). By Proposition 2.4, for every \(C\in\mathcal{C}\) and every \(v\in C\), \(v\) is saturated by every maximum matching of \(T\). Thus, \(S\) is contained in the intersection of sets of vertices saturated by maximum matchings of \(T\). As for the other inclusion, suppose \(v\in V(T)\) so that \(v\) is saturated by every maximum matching of \(T\). Let \(X\) be the component of \(\mathcal{F}(T)\) containing \(v\). If \(X\) contains a perfect matching, then Lemma 2.10 and Proposition 2.6 give that \(V(X)\subseteq S\), so \(v\in S\). On the other hand, if \(X\) is a bc-tree, then Proposition 2.8 gives that \(v\) is contained in the minimum cover of \(X\), and Proposition 2.6 gives that \(v\in S\), completing the proof of the desired equality. \(5\Leftrightarrow 6\): (This is easily deduced from Theorem 3.2.1 in [19]; we include a proof here for completeness.) Suppose that \(S\) is the intersection of sets of saturated vertices in maximum matchings of \(T\). As is given by previous proofs, \(S\) contains every vertex of perfect matching components of \(\mathcal{F}(T)\) and exactly the unique minimum cover of bc-tree components of \(\mathcal{F}(T)\). Let \((U,E,O)\) be the Dulmage-Mendelsohn decomposition of \(T\). Let \(v\in V(T)\), and let \(X\) be the component of \(\mathcal{F}(T)\) which contains \(v\). If \(X\) is a bc-tree, then Proposition 2.8 gives that \(v\) is not an element of the unique minimum cover of \(X\) if and only if \(v\) is unsaturated in some maximum matching of \(T\). Thus, if \(C\) is the unique minimum cover of \(X\), then \(V(X)\setminus C\subseteq E\), because \(E\) contains all unsaturated vertices. Moreover, since \(C\) is independent, for every \(c\in C\), \(N_{X}(c)\subseteq V(X)\setminus C\subseteq E\). Thus, \(C\subseteq O\). On the other hand, if \(X\) contains a perfect matching, then all vertices of \(X\) are saturated by every maximum matching of \(T\). Let \(M\) be such a matching, and \(V_{0}\) be the vertices of \(T\) unsaturated by \(M\). By way of contradiction, suppose \(v\notin U\), meaning there exists path \(P\) with terminal vertices \(v\) and \(u\in V(T)\) so that \(P\) is alternating with respect to \(M\) and \(u\in V_{0}\). Let \(Y\) be the component of \(\mathcal{F}(T)\) containing \(u\). Since \(u\in V_{0}\), \(Y\) is a bc-tree, and \(u\) is not contained in the minimum cover of \(Y\). Let this minimum cover be \(C_{Y}\). Since \(Y\) and \(X\) are different components of \(\mathcal{F}(T)\), there exists edge \(e\in F_{T}^{\prime}\cap E(P)\) so that \(\{u_{1}\}:=e\cap V(Y)\). By Proposition 2.9, \(u_{1}\in C_{Y}\), meaning \(u_{1}\neq u\). Thus, if \(P^{\prime}\) denotes the alternating (with respect to \(M\)) subpath of \(P\) with endpoints \(u_{1}\) and \(u\), \(|E(P^{\prime})|>0\). Since \(P\) is an alternating path and \(e\notin M\), the edge of \(P^{\prime}\) incident to \(u_{1}\) is in \(M\). Since \(u\in V(P^{\prime})\) is unmatched in \(M\), the edge of \(P^{\prime}\) incident to \(u\) is not in \(M\), so \(P^{\prime}\) an alternating path implies \(\operatorname{dist}(u_{1},u)\) is even. However, \(u_{1}\in C_{Y}\) and Theorem 2.7 imply \(C_{Y}\) is exactly the vertices of \(Y\) an even distance from \(u_{1}\). 
Thus, \(u\in C_{Y}\), a contradiction, and so \(v\in U\), whence \(V(X)\subseteq U\) and we may conclude that \(S\subseteq U\cup O\). To show \(U\cup O\subseteq S\), we show instead \(S^{\prime}\subseteq E\), where we define \(S^{\prime}=V(T)\setminus S\). In a bc-tree component \(X\) of \(\mathcal{F}(T)\), we know \(S^{\prime}\cap V(X)\) is exactly the vertices not in the minimum cover. By Proposition 2.9, each of these vertices are omitted from some maximum matching of \(T\). Thus, Theorem 2.17 implies \(S^{\prime}\cap V(X)\subseteq E\). For a perfect matching component \(X\) in \(\mathcal{F}(T)\), \(S^{\prime}\cap V(X)=\emptyset\), so the inclusion \(S^{\prime}\cap V(X)\subseteq E\) is trivial. The equivalence of (1) and (5) appears essentially as Corollary 19 of [25]. ### Edge Influence on Rank in the Thermal Decomposition In this section, we describe another equivalent formulation of the thermal decomposition which is convenient for computation. Let \(\operatorname{rank}(G)\) denote \(\operatorname{rank}(A(G))\) for any graph \(G\). Let \(\nu(G)\) denote the size of a maximum matching in \(G\). Let \(\eta(G)\) denote the nullity of any graph \(G\). Finally, let \(c(G)\) denote the size of a minimum cover of \(G\). The authors of [7] prove that for a tree \(T\) on \(n\geq 1\) vertices with maximum matching size \(\nu(T)\), the nullity of \(T\) is given by \(\eta(T)=n-2\nu(T)\). Thus, it is also the case that \(\operatorname{rank}(T)=2\nu(T)\). **Proposition 2.20**.: _Let \(T\) be a tree with \(e\in E(T)\). Then \(e\in M_{T}\) if and only if \(\operatorname{rank}(T)=\operatorname{rank}(T-e)+2\)._ Proof.: Note that if \(\nu(T^{\prime})\) denotes the size of the maximum matching of the tree \(T^{\prime}\), then \(\nu(T^{\prime})=\operatorname{rank}(T^{\prime})/2\). Suppose first that \(\operatorname{rank}(T)=\operatorname{rank}(T-e)+2\). Then, \(\nu(T)=\nu(T-e)+1\). Thus, every maximum matching of \(T\) contains \(e\), giving \(e\in M_{T}\). Now suppose that \(e\in M_{T}\). Then, every maximum matching of \(T\) contains \(e\), so \(\nu(T)=\nu(T-e)+1\). By the relationship between \(\nu(X)\) and \(\operatorname{rank}(X)\) for any tree \(X\), we have the desired result. **Proposition 2.21**.: _Let \(T\) be a tree, and let \(e\in E(T)\). If \(\operatorname{rank}(T)=\operatorname{rank}(T-e)+2\), then \(\operatorname{rank}(T-e)=\operatorname{rank}(T/e)\)._ Proof.: Since \(\operatorname{rank}(T)=\operatorname{rank}(T-e)+2\), the proof of Proposition 2.20 gives that every maximum matching of \(T\) contains \(e\). Clearly, there are two possibilities for the relationship between \(\nu(T)\) and \(\nu(T/e)\). Either \(\nu(T)=\nu(T/e)\) or \(\nu(T)=\nu(T/e)+1\). If the former is true, then let \(M\) be a maximum matching of \(T/e\). Then \(M\) is also a maximum matching of \(T\) which omits \(e\), a contradiction. Thus, \(\nu(T)=\nu(T/e)+1\), implying \(\operatorname{rank}(T)=\operatorname{rank}(T/e)+2\). **Proposition 2.22**.: _Let \(T\) be a tree with \(e\in E(T)\), so that \(\operatorname{rank}(T)=\operatorname{rank}(T-e)\). Then \(e\in O_{T}\) if and only if \(\operatorname{rank}(T)=\operatorname{rank}(T/e)\)._ Proof.: First suppose \(\operatorname{rank}(T)=\operatorname{rank}(T/e)\). Since \(\operatorname{rank}(T)=\operatorname{rank}(T-e)\), \(\nu(T)=\nu(T-e)\). Thus, there exists a maximum matching of \(T\) which does not contain \(e\), so \(e\notin M_{T}\). So, at least one endpoint of \(e\) is saturated by every maximum matching of \(T\). 
On the other hand, since \(\operatorname{rank}(T)=\operatorname{rank}(T/e)\), there exists a maximum matching of \(T\) which leaves one endpoint of \(e\) unsaturated, since we can simply take a maximum matching of \(T/e\) and extend it to a maximum matching of \(T\). Let \(M\) be this matching, \(e=uv\), and suppose \(u\) is the unique endpoint of \(e\) saturated in \(M\). Let \(e^{\prime}\) be the matching edge incident to \(u\). Then \((M\setminus\{e^{\prime}\})\cup\{e\}\) is another maximum matching of \(T\), proving \(e\in O_{T}\), finishing the proof in this direction. Now, suppose \(\operatorname{rank}(T)=\operatorname{rank}(T/e)+2\), and let \(e=uv\). Note that \(\operatorname{rank}(T)=\operatorname{rank}(T/e)+2\) implies \(\nu(T)=\nu(T/e)+1\) and \(c(T)=c(T/e)+1\). Let the contracted vertex in \(T/e\) be \(v^{\prime}\). Suppose, for contradiction, that \(M\) is a maximum matching of \(T\) that contains \(e\). Let \(M^{\prime}=M\setminus\{e\}\), so that \(M^{\prime}\) is a matching of \(T/e\). Note that \(\nu(T)=\nu(T/e)+1\) implies \(M^{\prime}\) is a maximum matching of \(T/e\). Since \(v^{\prime}\) is unsaturated by \(M^{\prime}\), Proposition 2.4 gives that every minimum cover of \(T/e\) omits \(v^{\prime}\), meaning all neighbors of \(v^{\prime}\) in \(T/e\) are contained in every minimum cover of \(T/e\). Let \(C^{\prime}\) be such a minimum cover. Then \(C^{\prime}\) is also a cover of \(T-e\), giving that \(c(T/e)\geq c(T-e)\), but the assumption \(\operatorname{rank}(T)=\operatorname{rank}(T-e)\) implies \(c(T)=c(T-e)\). Substitution gives \(c(T/e)\geq c(T)\), but \(c(T)=c(T/e)+1\), a contradiction. Therefore, no maximum matching of \(T\) containing \(e\) exists, proving \(e\in F_{T}\). From the previous propositions, it is clear that \(e\in F_{T}\) if and only if \(\operatorname{rank}(T)=\operatorname{rank}(T-e)=\operatorname{rank}(T/e)+2\). Therefore, we summarize these results as follows. **Theorem 2.23**.: _For any tree \(T\) and edge \(e\in E(T)\),_ 1. \(e\in M_{T}\) _iff_ \(\operatorname{rank}(T)=\operatorname{rank}(T-e)+2\) _(this also implies_ \(\operatorname{rank}(T)=\operatorname{rank}(T/e)+2\)_)._ 2. \(e\in O_{T}\) _iff_ \(\operatorname{rank}(T)=\operatorname{rank}(T/e)\) _(this also implies_ \(\operatorname{rank}(T)=\operatorname{rank}(T-e)\)_)._ 3. \(e\in F_{T}\) _iff_ \(\operatorname{rank}(T)=\operatorname{rank}(T-e)\) _and_ \(\operatorname{rank}(T)=\operatorname{rank}(T/e)+2\)_._ ## 3 Kernel and Skew Zero Forcing Matroids In this section, we relate the set system of zero loci of nullvectors and SZF-closed sets of a graph. We begin by showing the former is a linear matroid, the "kernel matroid". In the next subsection, we show that, when the SZF-closure is a matroid closure operator, there is a polynomial time algorithm for computing the SZF number of a graph, despite the fact that it is NP-hard in general. We then show in the next subsection that this property of the SZF-closed sets holds for trees and graphs which are "SZF-complete", i.e., for which the zero loci of nullvectors and the SZF-closed sets are the same. We show that trees are SZF-complete; in general, the complement of any skew zero forcing set is saturated by some matching; and that, for some special bipartite graphs (including trees), they are precisely the complements of matching-saturated sets, and are therefore "gammoids". In the last subsection, we show that many classes of graphs are SZF-complete, and we characterize the nonsingular bipartite SZF-complete graphs.
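As an aside, here is a quick numerical illustration of the rank characterization in Theorem 2.23 above (our own sketch, not code from the paper); edge deletion and contraction are handled with networkx, and the helper names are ours.

```python
import numpy as np
import networkx as nx

def adj_rank(G):
    """Rank of the adjacency matrix of G."""
    return np.linalg.matrix_rank(nx.to_numpy_array(G))

def classify_edge(T, e):
    """Classify an edge of a tree via the rank conditions of Theorem 2.23."""
    r = adj_rank(T)
    T_del = T.copy()
    T_del.remove_edge(*e)                               # T - e
    T_con = nx.contracted_edge(T, e, self_loops=False)  # T / e
    if r == adj_rank(T_del) + 2:
        return "mandatory"                              # in every maximum matching
    if r == adj_rank(T_con):
        return "optional"                               # in some but not all
    return "forbidden"                                  # in no maximum matching

# The path 0-1-2-3 has the unique maximum matching {01, 23}, so the outer
# edges are mandatory and the middle edge is forbidden.
T = nx.path_graph(4)
for e in T.edges():
    print(e, classify_edge(T, e))
```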
First, a useful definition: **Definition 3.1**.: _A set \(E\) together with a map \(\overline{\cdot}:\mathscr{P}(E)\to\mathscr{P}(E)\) defines a closure operator on \(E\) if the following are satisfied._ 1. _(Extensive Property)_ \(X\subseteq\overline{X}\) _for each_ \(X\in\mathscr{P}(E)\)_._ 2. _(Idempotent Property)_ \(\overline{X}=\overline{\overline{X}}\) _for each_ \(X\in\mathscr{P}(E)\)_._ 3. _(Monotone Property)_ \(\overline{X}\subseteq\overline{Y}\) _for any_ \(X,Y\in\mathscr{P}(E)\) _with_ \(X\subseteq Y\)_._ _If, in addition, the closure operator satisfies the following, it is a matroid closure operator._ 1. _(Mac Lane-Steinitz Exchange Property) For all elements_ \(a,b\) _of_ \(E\) _and all subsets_ \(X\) _of_ \(E\)_, if_ \(a\in\overline{X\cup\{b\}}\setminus\overline{X}\)_, then_ \(b\in\overline{X\cup\{a\}}\setminus\overline{X}\) Whole libraries have been written about the subline world of matroids, among which [29] is a valuable resource. Here we simply remark that, given a matroid closure operator \(\overline{\cdot}\): closed sets or "flats" are those \(S\subseteq E\) so that \(S=\overline{S}\), which form a lattice under set inclusion; bases are minimal sets so that \(\overline{S}=E\), which always have the same cardinality; independent sets are subsets of bases; dependent sets are non-independent sets; circuits are minimal dependent sets; and hyperplanes are maximal proper closed sets. Furthermore, the "rank" function which assigns to \(S\subseteq E\) the size of the smallest set \(F\subseteq S\) so that \(\overline{F}\supseteq S\) is also the rank function of the lattice of flats as a poset. Note that the results of subsection 2.2 imply that the SZF-closure is a bona fide closure operator. It is not always a matroid closure, however, so we investigate when it is below in subsection 3.2. ### The Kernel Matroid Given a graph \(G\), if \(S\subseteq V(G)\), then we call \(S\)_realizable_ if there exists \(\mathbf{x}\in\ker(G)\) so that \(S=Z(\mathbf{x})\). We examine the structure of the collection of realizable subsets of \(V(G)\). **Proposition 3.2**.: _Let \(G\) be a graph, and \(\mathcal{X}=\{X_{i}\}_{i=1}^{t}\) a collection of realizable subsets of \(V(G)\). If \(S=\cap_{i\in[t]}X_{i}\), then \(S\) is realizable._ Proof.: Clearly the \(t=1\) case is trivial, so we show the result for \(t=2\), from which the general case follows by induction. Since \(X_{1}\) and \(X_{2}\) are realizable, let \(\mathbf{x},\mathbf{y}\in\ker(G)\) so that \(X_{1}=Z(\mathbf{x})\) and \(X_{2}=Z(\mathbf{y})\). Further, let \(Q=\{-x_{v}/y_{v}:v\in V(G),y_{v}\neq 0\}\). Take any \(r\in\mathbb{R}^{*}\setminus Q\). Then, \(r\mathbf{y}+\mathbf{x}\in\ker(G)\), and \(Z(r\mathbf{y}+\mathbf{x})=S\), as quick examination shows \(ry_{v}+x_{v}=0\) if and only if \(y_{v}=x_{v}=0\) for any \(v\in V(G)\). Write \(\mathbf{e}_{j}\) to denote the elementary vector whose support is \(\{j\}\). For graph \(G\), let \(\operatorname{im}(G)\) denote the image of the adjacency matrix \(A(G)\). Furthermore, if \(X\subseteq V(G)\), define \(\operatorname{span}(X)\subseteq\mathbb{R}^{V(G)}\) to be the vector space spanned by elementary vectors corresponding to vertices of \(X\), i.e., \(\operatorname{span}(X)=\operatorname{span}(\{\mathbf{e}_{v}:v\in X\})\). **Definition 3.3**.: _Let \(G\) be a graph and \(S\subseteq V(G)\). Define \(\widehat{S}:=\{v:\mathbf{e}_{v}\in\operatorname{im}(G)+\operatorname{span}(S)\}\)._ **Proposition 3.4**.: _Let \(G\) be a graph and \(S\subseteq V(G)\). 
Then \(\widehat{S}\) is the intersection of all realizable sets containing \(S\)._ Proof.: Let \(\mathcal{X}\) be the collection of all realizable sets containing \(S\), let \(Y=\bigcap\mathcal{X}\), and suppose \(X\in\mathcal{X}\). By definition, \(X\subseteq\widehat{X}=\{x:\mathbf{e}_{x}\in\operatorname{im}(G)+\operatorname{ span}(X)\}\). Since \(S\subseteq X\), \(\operatorname{span}(S)\subseteq\operatorname{span}(X)\), so \(\operatorname{im}(G)+\operatorname{span}(S)\subseteq\operatorname{im}(G)+ \operatorname{span}(X)\), giving \(\widehat{S}\subseteq Y\). It remains to show the reverse inclusion. Since \(\mathcal{X}\) is a finite collection, Proposition 3.2 gives \(Y\) is a realizable set. By definition, \(\widehat{S}\) is a realizable set containing \(S\), so \(Y\subseteq\widehat{S}\), completing the proof. **Corollary 3.5**.: \(\widehat{S}\) _is the minimum realizable set containing \(S\) with respect to inclusion. Further, \(S\) is realizable if and only if \(\widehat{S}=S\)._ **Proposition 3.6**.: _Let \(G\) be a graph and \(\mathscr{P}(V(G))\) the collection of subsets of \(V(G)\). Then the map which sends \(v\mapsto\mathbf{e}_{v}+\operatorname{im}(G)\) is an isomorphism between the matroid \(\widehat{\cdot}\) on \(\mathscr{P}(V(G))\) and the linear matroid given by the collection of vectors \(\{\mathbf{e}_{\mathbf{v}}+\operatorname{im}(G)\}_{v\in V(G)}\)._ Proof.: Let \(f:V(G)\to\mathbb{R}^{V(G)}/\operatorname{im}(G)\) be the coset map, i.e., \(f(v)=\mathbf{e}_{v}+\operatorname{im}(G)\). Let \(S\subseteq V(G)\). Suppose \(v\in\widehat{S}\). By Definition 3.3, \(\mathbf{e}_{v}\in\operatorname{im}(G)+\operatorname{span}(S)\), so \(f(v)\in\operatorname{span}(f(S))\). Conversely, if \(\mathbf{w}\in\operatorname{span}(f(S))\cap f(V(G))\), then \(\mathbf{w}=\mathbf{e}_{u}+\operatorname{im}(G)\) for some \(u\in V(G)\), so \(\mathbf{w}=\mathbf{e}_{u}+\operatorname{im}(G)\in\operatorname{span}(f(S))= \operatorname{span}(S)+\operatorname{im}(G)\) implies \(u\in\widehat{S}\) and \(\mathbf{w}\in f(\widehat{S})\). Then \(\widehat{S}=f^{-1}\left(\operatorname{span}(f(S))\cap f(V(G))\right)\), from which the result follows. ### Skew Zero Forcing Matroids The authors of [3] asked for the computational complexity of computing various forcing numbers, mentioning that it is known that the decision problems of bounding by \(k\) the zero forcing numbers ([1]) and positive semidefinite zero forcing numbers ([10]) are NP-hard. Since then, [27] established that the failed zero-forcing number and failed skew zero forcing number are NP-hard to bound as well. We add to this by showing the skew zero forcing number is NP-hard to bound in general, although by contrast, there is a polynomial time algorithm to compute skew zero forcing numbers for graphs whose SZF-closure gives rise to a matroid. Then, below, we show that there are many graphs whose SZF-closure is a matroid closure: trees, cycles with length divisible by \(4\), bipartite graphs which have a unique perfect matching, complete bipartite graphs, graphs derived from these by appending a path of length \(2\) or subdividing an edge into a path of length \(5\), and bipartite graphs in which no maximum matching admits an alternating cycle. In the next few results, we refer to "ordinary" zero forcing, where the rule is that a vertex \(v\) which belongs to the set \(X\subseteq V(G)\) can force the addition of its neighbor \(w\) to \(X\) if \(w\) is the only unfilled neighbor of \(v\). 
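To see the difference between the two rules just described, the following sketch (our own; the function and graph names are illustrative) runs the ordinary zero forcing closure on a small graph: with nothing filled, an empty initial set can never force anything, whereas under the skew rule even the empty set closes up, as in the earlier sketch.

```python
# Minimal sketch (ours): the *ordinary* zero forcing closure, for contrast
# with the skew rule.  Here only a filled vertex can force.
import networkx as nx

def zero_forcing_closure(G, S):
    filled = set(S)
    changed = True
    while changed:
        changed = False
        for v in list(filled):
            unfilled = [w for w in G[v] if w not in filled]
            if len(unfilled) == 1:      # v is filled and has one unfilled neighbor
                filled.add(unfilled[0])
                changed = True
    return filled

P = nx.path_graph(4)                    # 0-1-2-3
print(zero_forcing_closure(P, set()))   # set(): with nothing filled, nothing forces
print(zero_forcing_closure(P, {0}))     # {0, 1, 2, 3}: one end vertex suffices
# Under the skew rule (szf_closure from the earlier sketch), even the empty
# set closes to all of V(P), because the pendant vertices force immediately.
```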
Note that this rule contrasts with the skew zero forcing rule in that it requires that \(v\) be filled for it to force.

**Proposition 3.7**.: _Let \(G\) be a graph on \(n\) vertices, with zero forcing number \(z(G)\). Define \(G^{\prime}\) to be a graph with vertex set \(V(G)\times\{1,2,3\}\) and edges \(E(G^{\prime})=A\cup B\), where \(A=\{\{(v,1),(w,1)\}:vw\in E(G)\}\) and \(B=\{\{(v,i),(v,j)\}:1\leq i<j\leq 3\}\). If \(X\subseteq V(G^{\prime})\) is minimal such that \(\overline{X}=V(G^{\prime})\), then \(|X|=z(G)\)._

Proof.: Denote by \(\Delta_{v}\) the triangle induced by \(\{v\}\times[3]\) for each \(v\in V(G)\), and write \(X=X_{0},X_{1},\ldots,X_{t}=\overline{X}\) for the results of a sequence of one-vertex SZF rule applications. If there exists a \(v\) so that \(\Delta_{v}\subseteq X\), let \(X^{\prime}=X\setminus\{(v,1)\}\). Then taking \(X^{\prime}_{0}=X^{\prime}\) and \(X^{\prime}_{r}=X_{r-1}\) for each \(1\leq r\leq t+1\) yields an SZF rule application sequence, so \(\overline{X^{\prime}}=V(G^{\prime})\), contradicting the minimality of \(X\). Furthermore, if \((v,j)\in X_{r}\) for some \(r\) and \(j\in[3]\), then \(\Delta_{v}\subseteq\overline{X_{r}}\). Therefore, \(\Delta_{v}\cap X\) is either empty or contains exactly one vertex, in which case we assume without loss of generality that \((v,1)\in X\). Note that, if \((v,1)\not\in X_{r}\) for some \(r\), then \(\Delta_{v}\cap X_{k}=\emptyset\) for each \(k\leq r\). In fact, no SZF rule can be applied at \((v,1)\) in \(X_{k}\) for \(k\leq r\), since \((v,1)\) has at least two unfilled neighbors. Thus, any \(r\) so that an SZF rule is applied at \((v,1)\) must have \((v,1)\in X_{r}\). In other words, the sequence \(X_{r}\cap(V(G)\times\{1\})\) is (other than some steps when no changes occur) an ordinary zero forcing rule application sequence applied to \(V(G)\times\{1\}\), and this sequence projects onto \(V(G)\) as an ordinary zero forcing rule application sequence there. So \(\operatorname{proj}_{V(G)}X\) is a zero forcing set for \(G\). Conversely, if \(X\) is a zero forcing set for \(G\), it is easy to see that \(X\times\{1\}\) is a skew zero forcing set for \(G^{\prime}\).

**Corollary 3.8**.: _The decision problem, "Is the SZF number of \(G\) less than or equal to \(k\)?", is NP-hard._

Proof.: This follows from the above reduction and Aazami's result ([1]) that the ordinary zero forcing number decision problem is NP-hard.

**Theorem 3.9**.: _If SZF-closure in \(G\) is a matroid closure operator, then there is an \(O(n^{3})\) algorithm to compute its skew zero forcing number._

Proof.: The rank of a matroid is the size of the smallest set whose closure is the whole ground set, so the rank of the SZF matroid of \(G\) is the cardinality of any minimal set whose SZF-closure contains all of \(V(G)\), i.e., the skew zero forcing number of \(G\). A form of the result then follows from [24], since they show that an oracle capable of computing closures can be used to compute rank in polynomial time. To be concrete, we provide the algorithm here. Identify the vertex set \(V(G)\) with \([n]\), where \(n=|V(G)|\), for convenience.

\(S\leftarrow\emptyset\)
\(k\gets 0\)
**while** \(\overline{S}\neq V(G)\) **do**
\(\quad x\leftarrow\min(V(G)\setminus\overline{S})\)
\(\quad S\gets S\cup\{x\}\)
\(\quad k\gets k+1\)
**end while**
**return** \(k\) \(\triangleright\) Return the SZF-number of \(G\)

This algorithm succeeds because the vertices counted by \(k\) are a basis: each chosen \(x\) lies outside the closure of the current set \(S\), so the vertices placed in \(S\) form an independent set, and the loop terminates only once their closure is all of \(V(G)\). Note that the iteration executes at most \(n\) times.
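For concreteness, both steps of this procedure fit in a few lines of Python. The sketch below is our own illustration rather than code from the source (the adjacency-list input format and the function names are ours); `szf_closure` plays the role of the closure oracle, an \(O(n^{2})\) version of which is spelled out next, and by the discussion above the value returned by `greedy_szf_count` equals the skew zero forcing number exactly when SZF-closure is a matroid closure operator, and is an upper bound otherwise.

```python
def szf_closure(adj, initial):
    """Skew zero forcing closure of `initial` in the graph given by `adj`
    (a dict mapping each vertex to the set of its neighbors; assumed
    symmetric).  A vertex with exactly one unfilled neighbor forces that
    neighbor, regardless of whether the vertex itself is filled."""
    filled = set(initial)
    unfilled = {v: set(adj[v]) - filled for v in adj}
    changed = True
    while changed:
        changed = False
        for v in adj:
            if len(unfilled[v]) == 1:
                (u,) = tuple(unfilled[v])
                filled.add(u)
                for w in adj[u]:        # u is now filled everywhere
                    unfilled[w].discard(u)
                changed = True
    return filled


def greedy_szf_count(adj):
    """Greedy count from Theorem 3.9: the SZF number when SZF-closure is a
    matroid closure operator, and an upper bound on it in general."""
    vertices = set(adj)
    S, k = set(), 0
    closure = szf_closure(adj, S)
    while closure != vertices:
        x = min(v for v in vertices if v not in closure)
        S.add(x)
        k += 1
        closure = szf_closure(adj, S)
    return k


# Example: the path on three vertices has skew zero forcing number 1.
path3 = {1: {2}, 2: {1, 3}, 3: {2}}
assert greedy_szf_count(path3) == 1
```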
Furthermore, a subroutine is needed to compute \(\overline{S}\) from \(S\); this can be done in time \(O(n^{2})\) by the following algorithm:

\((\forall x\in V(G))(A_{x}\gets N(x)\setminus S)\)
**while** \((\exists x)(|A_{x}|=1)\) **do**
\(\quad S\gets S\cup A_{x}\)
\(\quad X\gets A_{x}\)
\(\quad\)**for** \(z\in V(G)\) **do**
\(\qquad A_{z}\gets A_{z}\setminus X\)
\(\quad\)**end for**
**end while**
**return** \(S\) \(\triangleright\) Return the SZF-closure \(\overline{S}\) of \(S\)

The set \(A_{x}\) keeps track of the vertices which are neighbors of \(x\) but have not yet joined \(\overline{S}\). This subroutine takes \(O(n^{2})\) time because (1) the time complexity of computing \(N(x)\setminus S\) is \(O(n)\) for each of \(n\) vertices; and (2) the **while** loop executes at most \(n\) times, with each iteration of its body taking \(O(n)\) time. In total, we have an \(O(n^{3})\) execution time. Note that the above algorithm always returns the cardinality of _some_ set whose closure is \(V(G)\), even if the SZF-closed sets are not the flats of a matroid. Therefore, it provides an upper bound in general.

### Matchings and the Skew Zero Forcing Matroid

**Definition 3.10**.: _Call the graph \(G\) **skew zero forcing complete (SZF-complete)** if every SZF-closed \(S\subseteq V(G)\) is the zero locus of a nullvector of \(G\), i.e., \(G\) is SZF-complete if and only if the families of sets closed under \(\widehat{\cdot}\) and \(\overline{\cdot}\), respectively, are the same._

By Proposition 3.6, if a graph \(G\) is SZF-complete, then SZF-closure also gives rise to a matroid, which we term the _SZF matroid_ of \(G\). Note that Proposition 2.15 says that trees are SZF-complete. Thus, we ask: which other graphs are SZF-complete?

**Proposition 3.11**.: _Let \(G\) be a graph, and \(X\subseteq V(G)\). If \(X=\widehat{X}\), then \(X=\overline{X}\)._

Proof.: If \(X=\widehat{X}\), then \(X\) is realizable by Corollary 3.5. Suppose \(X\neq\overline{X}\), meaning there exists \(v\in V(G)\) and \(u\not\in X\) so that \(v\) forces \(u\) under the SZF rule. If \(\mathbf{x}\in\ker(G)\) so that \(Z(\mathbf{x})=X\), then \((A\mathbf{x})_{v}=\sum_{w\sim v}x_{w}=x_{u}=0\), so \(u\in X\), a contradiction.

The following results serve to identify the bases of the skew zero forcing and kernel matroids for trees. The authors of [13] extend the work of [7] by showing that if \(G\) is a \(C_{4s}\)-free bipartite graph, then \(\eta(G)=|V(G)|-2\nu(G)\).

**Proposition 3.12**.: _Let \(G\) be a \(C_{4s}\)-free bipartite graph, and \((U,E,O)\) the Dulmage-Mendelsohn decomposition of \(G\). Then \(\eta(G)=|E|-|O|\)._

Proof.: By the work of [13], \(\eta(G)=|V(G)|-2\nu(G)\). The following computation completes the proof, noting that \(|V(G)|=|E|+|O|+|U|\), and \(\nu(G)=|O|+|U|/2\):

\[\begin{aligned}\eta(G)&=|V(G)|-2\nu(G)\\ &=(|E|+|O|+|U|)-(2|O|+|U|)\\ &=|E|-|O|.\end{aligned}\]

In particular, Proposition 3.12 gives the size of any basis for the SZF/kernel matroid of a tree. The next few results explore which vertex sets with size \(|E|-|O|\) actually form a basis for any \(G\).

**Proposition 3.13**.: _Let \(G\) be a graph, \(X\subseteq V(G)\) and \(\overline{X}\) its SZF-closure. Then there is a matching \(M\) of \(G\) in which all vertices of \(\overline{X}\setminus X\) are saturated._

Proof.: Let \(x_{1},\ldots,x_{m}\) be the sequence of vertices added to \(X=X_{0},X_{1},\ldots,X_{m}=\overline{X}\) as the skew zero forcing rule is applied, where \(m=|\overline{X}\setminus X|\). Note that \(x_{j}\) is added only because there was a vertex \(y_{j}\in V(G)\) whose only neighbor outside \(X_{j}\) is \(x_{j}\).
A vertex \(y_{j}\) never recurs in such a sequence, since, once its single unfilled neighbor is filled, it never has any unfilled neighbors again. No \(x_{j}\) occurs twice in the sequences of \(x_{i}\)'s, since it only gets filled once. Furthermore, \(x_{j}y_{j}\in E(G)\) for each \(1\leq j\leq m\), since \(y_{j}\) is a neighbor of \(x_{j}\). Therefore, the set \(D\) of directed edges \(\{(y_{j},x_{j})\}_{j=1}^{m}\) is an oriented subgraph of \(G\) with out-degrees and in-degrees at most one. Some components of \(D\) are cycles and others are paths; furthermore, \(D\) spans \(\overline{X}\). We claim that any such cycle has length at most 2. Suppose the vertex set of a cycle of length \(t\) in \(D\) is (in order) \(v_{0},\ldots,v_{t-1}\). Note that each \(v_{i}\) is a \(y_{j}\) for some \(j\) and an \(x_{k}\) for some \(k\). Without loss of generality, \(v_{1}=x_{j}\) is the first vertex of \(D\) added by SZF rule application. Then, if \(v_{t-1}=x_{k}\), then \(k>j\) and so \(x_{k}=v_{t-1}\not\in X_{j}\). But then \(v_{t-1}\) and \(v_{1}\) are neighbors of \(v_{0}=y_{j}\) not contained in the set \(X_{j}\), contradicting the fact that \(v_{1}=x_{j}\) was the only unfilled neighbor of \(v_{0}\) when \(x_{j}\) was added to \(X_{j}\) to obtain \(X_{j+1}\) unless \(v_{1}=v_{t-1}\), i.e., \(t=2\). Thus, \(D\) consists of 2-cycles and paths. From each such cycle, add the corresponding undirected edge to a set \(M\); for each path, add alternating edges to \(M\), starting with the sink vertex. Since the source \(y\) of such a path has in-degree zero, it is not \(x_{j}\) for any \(j\), so \(y\not\in\overline{X}\). Thus, \(M\) is a matching of all vertices of \(\overline{X}\) except a subset of \(S\subseteq X\). The following result is stated with only one direction of proof in the unpublished manuscript [8]. An important consequence is that, for trees \(T\), minimal sets \(X\subseteq V(T)\) so that \(\overline{X}=V(T)\) are precisely the sets of vertices unsaturated by some maximum matching. Note that the proof only invokes Corollary 3.13, which itself does not use any consequences of Theorem 2.19. **Corollary 3.14**.: _For any bipartite graph \(G\) in which no maximum matching admits an alternating cycle, the SZF matroid is dual to the matching matroid of \(G\). That is, the SZF matroid is a "gammoid."_ Proof.: DeAlba ([8] Proposition 4.1) showed that the unsaturated vertices of every maximum matching are a minimal skew forcing set, i.e., a basis of the SZF matroid. The proof proceeds as follows: Let \(M\) be a maximum matching, of cardinality \(r=|M|\), and let \(X=S\cup S^{\prime}\) be the set of its saturated vertices bipartitioned according to the bipartition of \(G\). For a vertex \(v\in V(M)\), write \(v^{\prime}\) for the vertex to which it is matched. Note that \(V(G)\setminus X\) are filled. Since \(G[X]\) contains no cycles, there is a vertex \(v_{1}\in S\) of degree one in this subgraph, so \(v_{1}\) forces \(v^{\prime}_{1}\in S^{\prime}\). Then \(G[X\setminus\{v_{1},v^{\prime}_{1}\}]\) is acyclic, so there is another vertex \(v_{2}\in S\) which forces \(v^{\prime}_{2}\in S^{\prime}\), and so on until \(v_{r}\) forces \(v^{\prime}_{r}\). For each \(j\in[r]\), \(v_{j}\) has no edges in \(G\) to \(v^{\prime}_{i}\) if \(i>j\). 
Now, \(v_{r}\) is the only unfilled neighbor of \(v^{\prime}_{r}\), so the former is forced by the latter; then, \(v_{r-1}\) is the only unfilled neighbor of \(v^{\prime}_{r-1}\), so \(v_{r-1}\) is forced; and so on, until finally all of \(X\), and therefore \(V(G)\), is filled. Conversely, if \(B\) is a basis of the SZF matroid, then \(\overline{B}=V(G)\), so Proposition 3.13 implies that \(G\) admits a matching \(M\) which saturates \(V(G)\setminus B\). Then \(M\) can always be turned into a maximum matching \(M^{\prime}\) so that \(\bigcup M\subseteq\bigcup M^{\prime}\) by applying augmenting paths, per Berge's Lemma.

### SZF-Completeness for Classes of Graphs

Here we describe some explicit classes of graphs which are SZF-complete.

**Proposition 3.15**.: _The cycle \(C_{n}\) is SZF-complete if and only if \(4|n\)._

Proof.: For convenience, suppose the vertices of \(C_{n}\) are labeled by \([n]\) in order around the cycle. Suppose \(4|n\). By Proposition 3.11, it suffices to show the SZF-closed sets are realizable. Quick examination shows \(C_{n}\) has four SZF-closed sets, namely \(\emptyset\), \(V(C_{n})\), \(\{x\in[n]:2|x\}\), and \(\{x\in[n]:2\nmid x\}\). For \(\emptyset\), take the corresponding vector whose \(j\)th coordinate is \((-1)^{\lfloor j/2\rfloor}\) (the pattern \(1,-1,-1,1\) repeated around the cycle, which closes up consistently precisely because \(4|n\)). For \(V(C_{n})\), the corresponding vector is the trivial \(\mathbf{0}\). For the remaining two SZF-closed sets, the corresponding vector places zeros at vertices of the specified set and alternates \(1\) and \(-1\) at vertices outside the specified set. It is straightforward to verify that these are nullvectors. Now suppose \(4\nmid n\). It is well known that the eigenvalues of the cycle \(C_{n}\) are \(2\cos(2\pi j/n)\) for \(0\leq j\leq n-1\). Thus, \(0\) is an eigenvalue of a cycle if and only if its length is divisible by four. Therefore, zero is not an eigenvalue of \(C_{n}\), but \(\emptyset\) is an SZF-closed set, completing the proof.

**Proposition 3.16**.: _The complete bipartite graph \(K_{m,n}\) is SZF-complete._

Proof.: Let the partition classes of \(K_{m,n}\) be \(M\) and \(N\) so that \(|M|=m\) and \(|N|=n\). By Proposition 3.11, it suffices to show an arbitrary SZF-closed set is realizable. Suppose \(X\subseteq V(K_{m,n})\) is an SZF-closed set so that \(X\cap M=R\) and \(X\cap N=B\). By definition, \(|M|-|R|\neq 1\) and \(|N|-|B|\neq 1\). If \(|M|-|R|>1\) (resp. \(|N|-|B|>1\)), then define \(R^{\prime}:=M\setminus R\) (resp. \(B^{\prime}:=N\setminus B\)). In the corresponding vector, assign \(1\) to the coordinate of one arbitrarily chosen vertex of \(R^{\prime}\) (resp. \(B^{\prime}\)), and assign the remaining vertices of \(R^{\prime}\) (resp. \(B^{\prime}\)) the value \(-1/(|R^{\prime}|-1)\) (resp. \(-1/(|B^{\prime}|-1)\)). All other coordinates (i.e., the ones corresponding to vertices of \(X\)) are assigned \(0\). It is easy to see that this is a nullvector realizing the set \(X\).

Appending a path of length \(2\) to an SZF-complete graph yields another SZF-complete graph.

**Proposition 3.17**.: _Suppose \(G\) is SZF-complete and \(x\in V(G)\). If \(G^{\prime}=(V(G)\cup\{y,z\},E(G)\cup\{xy,yz\})\), where \(y,z\notin V(G)\), then \(G^{\prime}\) is also SZF-complete._

Proof.: By Proposition 3.11 it suffices to show an arbitrary SZF-closed set is realizable. Suppose \(S^{\prime}\subseteq V(G^{\prime})\) is an SZF-closed set. Note that \(y\in S^{\prime}\), since otherwise \(z\) would have exactly one non-\(S^{\prime}\) neighbor.
Furthermore, \(z\in S^{\prime}\) if and only if \(x\in S^{\prime}\), since otherwise \(y\) would have exactly one non-\(S^{\prime}\) neighbor. Let \(S=V(G)\cap S^{\prime}\). The number of non-\(S\) neighbors of every vertex of \(V(G)\), including \(x\), is equal to its number of non-\(S^{\prime}\) neighbors in \(G^{\prime}\), so \(S\) is SZF-closed in \(G\). Since \(G\) is SZF-complete, there is a nullvector \(\mathbf{v}\) with zeros given by \(S\). Let \(\mathbf{v}^{\prime}\) be the vector in \(\mathbb{R}^{V(G^{\prime})}\) given by \(\mathbf{v}\oplus(0,-\mathbf{v}_{x})\). Then it is easy to check that the zero locus of \(\mathbf{v}^{\prime}\) is exactly \(S^{\prime}\), and that \(\mathbf{v}^{\prime}\) is a nullvector for \(G^{\prime}\), completing the proof.

A graph \(G\) is **uniquely perfectly matchable (UPM)** if \(G\) has exactly one perfect matching. Godsil showed ([12]) that bipartite UPM graphs are exactly the perfectly-matchable subgraphs of the half-graph. We will employ the following additional characterization from Theorem 2 of [28]:

**Lemma 3.18**.: _A bipartite graph \(G\) with bipartition \((X,Y)\) is a UPM-graph if and only if_ 1. _each of_ \(X\) _and_ \(Y\) _contains a pendant vertex, and_ 2. _when the pendant vertices and their neighbors are deleted, the resulting subgraph has a unique perfect matching._

The following result adds a property to the TFAE statement Theorem 2.12 of [3], and characterizes SZF-completeness for nonsingular bipartite graphs.

**Theorem 3.19**.: _A nonsingular bipartite graph \(G\) is SZF-complete if and only if it is UPM._

Proof.: (\(\Rightarrow\)): We proceed by strong induction on \(|V(G)|\). It is easy to check the statement holds for 1 or 2 vertices. Suppose \(G\) is bipartite, SZF-complete, and nonsingular on at least 3 vertices. Since \(G\) is nonsingular, \(V(G)\) is the only zero locus of a nullvector of \(G\). Moreover, since \(G\) is SZF-complete, \(\emptyset\) is not SZF-closed. So, there must be a vertex \(v\) of degree one, with neighbor \(w\). Then (see [2], Lemma 1), \(|\det(G-\{v,w\})|=|\det(G)|\), so \(G^{\prime}=G-\{v,w\}\) is nonsingular. Take a set \(S^{\prime}\) which is SZF-closed for \(G^{\prime}\). If \(w\) has at least one neighbor in \(V(G^{\prime})-S^{\prime}\), let \(S=S^{\prime}\cup\{w\}\). Then \(S\) is SZF-closed for \(G\), so there is a nullvector \(\mathbf{x}\) whose zero coordinates are exactly \(S\). Restrict \(\mathbf{x}\) to \(G^{\prime}\) to obtain \(\mathbf{x}^{\prime}\), which is a nullvector for \(G^{\prime}\) corresponding to \(S^{\prime}\). If \(w\) has no neighbors in \(V(G^{\prime})-S^{\prime}\), let \(S=S^{\prime}\cup\{v,w\}\). Then \(S\) is SZF-closed for \(G\), so there is a nullvector \(\mathbf{x}\) whose zero coordinates are exactly \(S\). Restrict \(\mathbf{x}\) to \(G^{\prime}\) to obtain \(\mathbf{x}^{\prime}\), which is a nullvector for \(G^{\prime}\) corresponding to \(S^{\prime}\). Therefore, each SZF-closed set for \(G^{\prime}\) corresponds to a nullvector for \(G^{\prime}\), and \(G^{\prime}\) is SZF-complete as well.
Now, since \(G\) is nonsingular, the permanent of the adjacency matrix of \(G\) is nonzero; since this permanent is the square of the number of perfect matchings in \(G\) (consider its adjacency matrix written in block form), \(G\) must have a perfect matching. Every such perfect matching must include the edge \(vw\). Now \(G^{\prime}\) is bipartite, nonsingular, and SZF-complete, so \(G^{\prime}\) also has a unique perfect matching \(M\) by the induction hypothesis, and \(M\cup\{vw\}\) is the unique perfect matching of \(G\). (\(\Leftarrow\)): Again, \(G\) is UPM implies that \(|\det(G)|\) is 1, so \(G\) is nonsingular. We show SZF-completeness by induction. The base case is easy. By Lemma 3.18, \(G\) has a pendant vertex \(v\) with neighbor \(w\) so that \(G^{\prime}=G-\{v,w\}\) is also bipartite UPM and therefore nonsingular. Let \(S\) be an SZF-closed set of vertices for \(G\). Then \(S\) contains \(w\), or else \(v\) would have exactly one non-\(S\) neighbor. Let \(S^{\prime}=S-\{v,w\}\) in \(G^{\prime}\). Then \(S^{\prime}\) is SZF-closed in \(G\) because the number of non-\(S^{\prime}\) neighbors of vertices \(u\) in \(V(G^{\prime})\) is the same as the number of non-\(S\) neighbors of \(u\) in \(G\). Thus, by the induction hypothesis, there is a nullvector \(\mathbf{x}^{\prime}\) for \(G^{\prime}\) whose zero coordinates are exactly \(S^{\prime}\). Extend \(\mathbf{x}^{\prime}\) to a vector \(\mathbf{x}\) by setting the \(w\) coordinate to \(0\) and the \(v\) coordinate equal to the negative of the sum \(C\) of the \(\mathbf{x}^{\prime}\) coordinates arising from neighbors of \(w\). Then \(\mathbf{x}\) is a nullvector for \(G\), and its set of zeros is exactly \(S\) unless \(S\) does not contain \(v\), but the \(v\)-coordinate \(-C\) of \(\mathbf{x}\) is zero. Then the nullspace of \(G\) has a nontrivial intersection with the space where the \(v\) and \(w\) coordinates are equal, contradicting the nonsingularity of \(G\), unless \(S^{\prime}\) is all of \(V(G^{\prime})\). But then \(S=V(G)\), and the zero vector corresponds to \(S\). So \(G\) is SZF-complete. Subdividing an edge of an SZF-complete graph into a path of length \(5\) yields another SZF-complete graph. **Proposition 3.20**.: _Let \(G\) be an SZF-complete graph, and \(xy\in E(G)\). If \(G^{\prime}=(V(G)\cup\{x_{i}\}_{i=1}^{4},[E(G)\setminus\{xy\}]\cup\{xx_{1},x_{1 }x_{2},x_{2}x_{3},x_{3}x_{4},x_{4}y\})\), then \(G^{\prime}\) is SZF-complete._ Proof.: Let \(X^{\prime}\subseteq V(G^{\prime})\) be SZF-closed, and let \(X=X^{\prime}\cap V(G)\). Notice that \(x\in X^{\prime}\) if and only if \(x_{2},x_{4}\in X^{\prime}\). Moreover, \(y\in X^{\prime}\) if and only if \(x_{1},x_{3}\in X^{\prime}\). We show \(X\) is SZF-closed in \(G\). Clearly, \(N_{G}(v)=N_{G^{\prime}}(v)\) for each \(v\in V(G)\setminus\{x,y\}\). Additionally, \(N_{G}(x)=(N_{G^{\prime}}(x)\setminus\{x_{1}\})\cup\{y\}\), and \(N_{G}(y)=(N_{G^{\prime}}(y)\setminus\{x_{4}\})\cup\{x\}\). Since \(x\in X^{\prime}\) if and only if \(x_{4}\in X^{\prime}\), and \(y\in X^{\prime}\) if and only if \(x_{1}\in X^{\prime}\), we have that \(X\) is SZF-closed in \(G\). Since \(G\) is SZF-complete, there exists \(\mathbf{z}\in\ker(G)\) so that \(X\) is the zero locus of \(\mathbf{z}\). Extend \(\mathbf{z}\in\mathbb{R}^{V(G)}\) to vector \(\mathbf{z}^{\prime}\in\mathbb{R}^{V(G^{\prime})}\) so that \(z_{v}=z_{v}^{\prime}\) for each \(v\in V(G)\), \(z_{x_{2}}^{\prime}:=-z_{x}\), \(z_{x_{4}}^{\prime}:=z_{x}\), \(z_{x_{1}}^{\prime}:=z_{y}\), and \(z_{x_{3}}^{\prime}:=-z_{y}\). 
Clearly, \(\mathbf{z}^{\prime}\in\ker(G^{\prime})\), so \(X^{\prime}\) is realizable, and Proposition 3.11 completes the proof. Note that SZF-completeness is _not_ required for a graph's SZF-closed set system to arise from a matroid. For example, the SZF-closed sets of \(C_{6}\) with vertex set [6] are \(\emptyset\), \(\{1,3,5\}\), \(\{2,4,6\}\), and [6], which form a \(2\)-dimensional Boolean lattice, but \(C_{6}\) is not SZF-complete by Proposition 3.15. ## 4 Hypergraphs The present work began with an investigation into adjacency nullvectors of hypergraphs, so we return to that topic here. As is often the case, hypergraphs add significant additional complexities to the situation for ordinary graphs. At least for linear hypertrees - connected hypergraphs with no nontrivial cycles and for which pairs of edges intersect in at most one vertex - we extend some of Theorem 2.19 in the first subsection below. This involves a new definition of SZF-closed sets for hypergraphs, which we show gives rise to a set system in containment-preserving bijection with the lattice of subvarieties of linear hypertrees' nullvarieties (terminology defined below). The following section then describes the kernel-closed and SZF-closed sets of complete hypergraphs. Spectral hypergraph theory is a large and growing area, so it is not possible to offer a thorough introduction here. We provide only key definitions. For a starting point on hypergraph spectra, we refer the reader to [5]; for more on eigenvarieties, see [11]; for a broader view from the theory of tensors, see [23]. A _hypergraph_\(\mathcal{H}\) is a pair \((V(\mathcal{H}),E(\mathcal{H}))\) of vertices and edges, with \(E(\mathcal{H})\subseteq\mathscr{P}(V(\mathcal{H}))\), where we assume that \(|e|>1\) for each \(e\in E(\mathcal{H})\); we denote the number of vertices of \(\mathcal{H}\) by \(n=n_{\mathcal{H}}\) and typically identify \(V(\mathcal{H})\) with \([n]\). The _rank_ of an edge \(e\in E(\mathcal{H})\) is its cardinality, and \(\mathcal{H}\) is said to be \(k\)-uniform, or a \(k\)-graph, if all edges have rank \(k\). The _adjacency hypermatrix_\(\mathcal{A}_{\mathcal{H}}\) of a \(k\)-uniform hypergraph \(\mathcal{H}\) on \(n\) vertices is a dimension-\(n\), order-\(k\) hypermatrix (often identified with the tensor of which it is the coordinate matrix), i.e., an element of \(\mathbb{C}^{[n]^{k}}\), whose \((i_{1},\ldots,i_{k})\) entry \(\mathcal{A}_{\mathcal{H}}(i_{1},\ldots,i_{k})\) is \(1/(k-1)!\) if \(\{i_{1},\ldots,i_{k}\}\) is an edge of \(\mathcal{H}\) and zero otherwise. The factor of \(1/(k-1)!\) is sometimes omitted in this definition - for present purposes, it is immaterial. The value \(\lambda\in\mathbb{C}\) is an eigenvalue with eigenvector \(\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathbb{C}^{n}\) if \[\sum_{i_{2},\ldots,i_{k}}\mathcal{A}_{\mathcal{H}}(i,i_{2},\ldots,i_{k})x_{i_ {2}}\cdots x_{i_{k}}=\lambda x_{i}^{k-1}\] for each \(i\in[n]\). The \(k\)-form \(p(x_{1},\ldots,x_{k})=\sum_{i_{1},i_{2},\ldots,i_{k}}\mathcal{A}_{\mathcal{H} }(i_{1},\ldots,i_{k})x_{i_{1}}\cdots x_{i_{k}}\) whose gradient appears on the left-hand side above is sometimes known as the _Lagrangian polynomial_ of \(\mathcal{H}\). It is straightforward to show (see [22] proof of Proposition 1) that the coordinate \(\partial p/\partial x_{i}\) is \(k\) times the Lagrangian polynomial of the _link_ of vertex \(v_{i}\) in \(\mathcal{H}\), i.e., the hypergraph whose edges are \(\{e\setminus\{v_{i}\}\,|\,\,v_{i}\in e\in E(\mathcal{H})\}\). 
So, for \(k\)-uniform hypergraphs, it is equivalent to define an eigenvalue \(\lambda\) with eigenvector \(\mathbf{x}\) as a simultaneous solution to \[\sum_{\begin{subarray}{c}e\in E(\mathcal{H})\\ v\in e\end{subarray}}\prod_{\begin{subarray}{c}w\in e\\ w\neq v\end{subarray}}x_{w}=\lambda x_{v}^{k-1} \tag{1}\] for all \(v\in V(\mathcal{H})\). For ease of notation, we use \(f_{\mathcal{H},v}\) to denote the polynomial on the left-hand side of (1), the Lagrangian polynomial of the link of \(v\) in \(\mathcal{H}\). Even for mixed rank hypergraphs, i.e., non-uniform hypergraphs, we adopt this definition for eigenpairs \((\lambda,\mathbf{x})\). Notice that for graphs, i.e., when \(k=2\), this agrees with standard adjacency spectra in graph theory. Throughout this section, we are interested in the eigenvalue zero, in the same way as for graphs previously. For hypergraph \(\mathcal{H}\), the collection of all eigenvectors associated to the eigenvalue zero, called _nullvectors_, form an affine variety, called the _nullvariety_. The authors of [6] use \(\mathcal{V}_{0}(\mathcal{H})\) to denote this collection, but since these vectors comprise the kernel of \(\mathcal{A}_{\mathcal{H}}\) as a \((k-1)\)-form (at least for \(k\)-uniform hypergraphs), we denote the collection of nullvectors by \(\ker(\mathcal{H})\) here to emphasize its relationship to graphs' nullspaces. Given the collection of links' polynomials, \(f_{\mathcal{H},v}\), we require a notation for evaluating some variables at zero. Write \(\mathbf{x}_{U}=\{x_{v}\}_{v\in U}\) for any set \(U\subseteq V(\mathcal{H})\) and \(\langle\mathcal{F}\rangle\) for the polynomial ideal in \(\mathbb{C}[\mathbf{x}]=\mathbb{C}[\{x_{v}\}_{v\in V(\mathcal{H})}]\) generated by a collection of polynomials \(\mathcal{F}\) over \(\{x_{v}\}_{v\in V(\mathcal{H})}\). Furthermore, let \(\phi_{U}:\mathbb{C}[\mathbf{x}]\to\mathbb{C}[\mathbf{x}]\) be the evaluation homomorphism obtained by extending the maps \(x_{v}\mapsto 0\) if \(v\in U\) and \(x_{v}\mapsto x_{v}\) otherwise. Lastly, for \(U\subseteq V(\mathcal{H})\), define \(\mathcal{V}^{\mathcal{H}}(U)\) to be the affine variety defined by the ideal \(\langle\mathbf{x}_{U}\cup\{f_{\mathcal{H},v}:v\in V(\mathcal{H})\}\rangle= \langle\mathbf{x}_{U}\rangle+\langle\phi_{U}f_{\mathcal{H},v}:v\in V(\mathcal{ H})\}\rangle\), which captures what is left of \(\langle\{f_{\mathcal{H},v}:v\in V(\mathcal{H})\}\rangle\) after variables indexed by elements of \(U\) are set to zero. Notice that for any \(U\subseteq V(\mathcal{H})\), \(\mathcal{V}^{\mathcal{H}}(U)\) is a subvariety of \(\ker(\mathcal{H})\), and \(\ker(\mathcal{H})=\mathcal{V}^{\mathcal{H}}(\emptyset)\) in particular. The collection of graph nullvectors form a vector space; analogously, the collection of hypergraph nullvectors forms an algebraic variety. While there is a unique generating set for \(\ker(T)\) when \(T\) is a tree, for hypertrees \(\mathcal{T}\), \(\ker(\mathcal{T})\) breaks into many irreducible components, each having its own generating set. We therefore examine the generating sets of irreducible components of \(\ker(\mathcal{T})\). ### Linear Hypertrees A hypergraph \(\mathcal{H}\) is _linear_ if every pair of edges intersect in at most one vertex. A _cycle_ in \(\mathcal{H}\) is a sequence \(x_{0},e_{1},x_{1},\ldots,x_{t-1},e_{t},x_{t}\) of alternating vertices \(x_{j}\in V(\mathcal{H})\) and edges \(e_{j}\in E(\mathcal{H})\) so that the \(x_{j}\) are distinct except that \(x_{0}=x_{t}\), and the \(e_{j}\) are distinct. 
A hypergraph \(\mathcal{H}\) is a _hypertree_ if it admits no cycles. A _pendant vertex_ is a vertex of degree one, and a _leaf edge_ is an edge containing at most one non-pendant vertex. **Proposition 4.1**.: _Let \(\mathcal{T}\) be a linear hypertree on \(n\) vertices so that every edge has rank at least two. Let \(\mathbf{x}\in\mathbb{C}^{n}\) be a nullvector of \(\mathcal{T}\). Then for each edge \(e\in E(\mathcal{T})\), there exists vertex \(v\in e\) so that \(x_{v}=0\)._ Proof.: By way of contradiction, suppose that there exists an edge \(e\) so that \(x_{v}\neq 0\) for all \(v\in e\). Let \(E\) be the collection of all such edges of \(\mathcal{T}\) with this property, and further define \(\mathcal{T}^{\prime}\) to be the subgraph of \(\mathcal{T}\) containing all edges in \(E\) (and no isolated vertices). Then \(\mathcal{T}^{\prime}\) is a nonempty forest, so \(\mathcal{T}^{\prime}\) contains a pendant vertex, \(v\). Since \(f_{\mathcal{T}^{\prime},v}\) contains as an addend exactly one monomial which evaluates to a product of nonzero values, \(f_{\mathcal{T}^{\prime},v}(\mathbf{x})\neq 0\). By the construction of \(\mathcal{T}^{\prime}\), \(f_{\mathcal{T}^{\prime},v}(\mathbf{x})=f_{\mathcal{T},v}(\mathbf{x})\) (since any monomials corresponding to edges in \(E(\mathcal{T})\setminus E\) incident to \(v\) have at least one vertex \(u\) with \(x_{u}=0\)), further implying \(f_{\mathcal{T},v}(\mathbf{x})\neq 0\), a contradiction. **Proposition 4.2**.: _Let \(\mathcal{T}\) be a (not necessarily uniform) linear hypertree with pendant edge \(e\). Furthermore, let \(S\subseteq V(\mathcal{T})\) so that \(\mathcal{V}^{\mathcal{T}}(S)\) is an irreducible component of \(\ker(\mathcal{T})\). Then \(|e\cap S|\leq 2\)._ Proof.: By way of contradiction, suppose \(|e\cap S|\geq 3\). Let \(A=e\cap S\). Then at least two vertices of \(A\) are pendant vertices of \(e\), namely \(v_{1}\) and \(v_{2}\). Then \(\mathcal{V}^{\mathcal{T}}(S)\subsetneq\mathcal{V}^{\mathcal{T}}(S\setminus\{v_ {1}\})\), but \(\mathcal{V}^{\mathcal{T}}(S\setminus\{v_{1}\})\) is irreducible since \(\phi_{S\setminus\{v_{1}\}}f_{\mathcal{T},v}=\phi_{S}f_{\mathcal{T},v}\) for every \(v\) not a pendant vertex of \(e\), and \(\phi_{S\setminus\{v_{1}\}}f_{\mathcal{T},u}=0\) for every pendant vertex \(u\in e\), since \(|A\setminus\{v_{1}\}|\geq 2\). Thus, the irreducibility of \(\mathcal{V}^{\mathcal{T}}(S\setminus\{v_{1}\})\) contradicts that \(\mathcal{V}^{\mathcal{T}}(S)\) is an irreducible component of \(\ker(\mathcal{T})\), completing the proof. Notice also that Proposition 4.1 applies to \(2\)-trees as well as general linear hypertrees: the zero locus of any null vector is a vertex cover. However, in both the tree and hypertree settings, this proposition is not an equivalence, i.e., there are vertex covers of trees/hypertrees which do not correspond to the zero set of a nullvector. Proposition 2.15 gives that the vertex cover must also be skew zero forcing closed. A similar situation arises for hypertrees. Consider the following example (note that smaller examples exist), where the filled vertices denote a vertex cover of the given hypertree. If \(U\) is the set of filled vertices depicted in the previous figure and \(v\) is the unique vertex of degree four, then \(\phi_{U}f_{\mathcal{T},v}\) contains exactly one monomial, namely the monomial corresponding to \(e_{4}\). If a product of variables is zero, then at least one of the variables is zero, so \(U\) is not the set of zero entries for any nullvector of this hypergraph. 
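The obstruction in this example is easy to check mechanically. The short Python sketch below is our own illustration; the encoding of the \(3\)-uniform hyperstar of Figure 2 and the particular filled set \(U\) are our reconstruction of the depicted situation (the figure itself is not reproduced here), so the labels `c`, `a1`, ..., `b4` are hypothetical.

```python
def single_monomial_vertices(edges, U):
    """For each vertex v, count the incident edges e with (e - {v}) disjoint
    from U; if that count is exactly 1, then phi_U f_{T,v} is a single
    monomial, so U cannot be the zero locus of a nullvector."""
    vertices = set().union(*edges)
    bad = []
    for v in vertices:
        live = [e for e in edges if v in e and not ((e - {v}) & U)]
        if len(live) == 1:
            bad.append((v, live[0]))
    return bad


# A 3-uniform hyperstar on four edges: centre c, edges {c, a_i, b_i}.
edges = [frozenset({"c", f"a{i}", f"b{i}"}) for i in range(1, 5)]
# A vertex cover hitting e1, e2, e3 away from the centre, but covering e4
# only through the centre itself (one choice consistent with the text):
U = {"c", "a1", "a2", "a3"}
print(single_monomial_vertices(edges, U))
# The centre is reported together with e4: the unique surviving monomial of
# phi_U f at the degree-four vertex comes from e4, so U is not the zero
# locus of any nullvector, even though it is a vertex cover.
```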
This observation leads to a hypergraph skew zero forcing rule (which differs from the hypergraph zero forcing rule presented as Def. 1.5 in [15]).

**Definition 4.3**.: _Let \(\mathcal{H}\) be a hypergraph._

* _A subset_ \(Z\subseteq V(\mathcal{H})\) _defines an initial coloring by filling all vertices of_ \(Z\)_, while all other vertices remain unfilled._
* _The skew zero forcing rule at a vertex_ \(v\) _says: if_ \(v\) _is incident to exactly one edge_ \(e\) _with no filled vertices in_ \(e\setminus\{v\}\)_, then change the color of one vertex in_ \(e\setminus\{v\}\) _to filled._
* _An (SZF-)derived set of an initial coloring_ \(Z\) _is the result of applying the skew zero forcing rule until no more changes are possible._

In contrast to trees, the skew zero forcing rule for hypergraphs, even just for hypertrees, does not necessarily generate a unique derived set. For example, if \(\mathcal{T}\) is the hypergraph consisting of a single \(3\)-uniform edge with \(V(\mathcal{T})=\{u,v,w\}\) and \(Z=\{u\}\), then \(\{u,v\}\) and \(\{u,w\}\) are both derived sets of \(Z\). Similarly, if \(Z=\emptyset\), then all elements of \(\binom{V(\mathcal{T})}{2}\) are derived sets of \(Z\). Thus, the "skew zero forcing closure of \(\emptyset\)" that is considered in Theorem 2.19 is not well defined for hypertrees, as there are many sets derived from \(\emptyset\).

Figure 2: \(3\)-uniform hyperstar on four edges

However, we still sometimes refer to sets which are "stalled", in the sense that the SZF rule cannot be applied to them anywhere, as "SZF-closed sets". On the other hand, a reasonable choice of analogue to kernel-closed sets in this context is the family of zero loci of nullvectors. Indeed, throughout the sequel, we refer to a set \(S\subseteq V(\mathcal{T})\) as "kernel-closed" if it is the zero locus of a nullvector. This raises the question of whether the family of SZF-closed vertex covers and the family of kernel-closed sets are the same. Unfortunately, they are not. Consider the hypergraph given in Figure 2. Let \(\mathcal{T}\) be the subhypertree given by edges \(e_{1}\) and \(e_{2}\). If \(v\) is the vertex of degree \(2\) in \(\mathcal{T}\), the generating sets for irreducible components of \(\ker(\mathcal{T})\) are \(\{v\}\) and \(V(\mathcal{T})\setminus\{v\}\). However, \(v\) together with a pendant vertex on each edge forms a derived set of \(\emptyset\). So, not all sets derived from \(\emptyset\) generate irreducible components of \(\ker(\mathcal{T})\). However, the converse is true, as is shown by the following result.

**Theorem 4.4**.: _For each irreducible component \(C\) of \(\ker(\mathcal{T})\), there is an SZF-closed vertex cover \(S\subseteq V(\mathcal{T})\) so that \(C=\mathcal{V}^{\mathcal{T}}(S)\)._

We postpone the proof to build a few useful tools.

**Proposition 4.5** (Prop. 5.20 in [20]).: _If \(V\) and \(W\) are irreducible affine varieties over an algebraically closed field, then \(V\times W\) is as well._

In fact, the way we will often use Proposition 4.5 is: if \(I\subset\mathbb{C}[x_{1},\ldots,x_{n}]\) and \(J\subset\mathbb{C}[y_{1},\ldots,y_{m}]\) are prime ideals and \(I^{\prime},J^{\prime}\) are the ideals they generate in \(\mathbb{C}[x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}]\), respectively, then \(I^{\prime}+J^{\prime}\) is also a prime ideal, and \(\mathcal{V}(I^{\prime}+J^{\prime})=\mathcal{V}(I)\times\mathcal{V}(J)\). The following lemma establishes that some ideals of the form \(L_{\mathcal{T},V}=\langle\mathbf{x}_{V}\cup\{\phi_{V}f_{\mathcal{T},v}:v\in V(\mathcal{T})\}\rangle\) are prime.
**Lemma 4.6**.: _Let \(\mathcal{T}\) be a linear hypertree where each edge has rank at least three. For any \(U\subseteq V(\mathcal{T})\) with \(U\) an SZF-closed vertex cover of \(\mathcal{T}\), \(L_{\mathcal{T},U}\) is a prime ideal, and \(\mathbf{x}_{U}\cup\{\phi_{U}f_{\mathcal{T},v}:v\in V(\mathcal{T})\}\setminus\{0\}\) is an irredundant set of generators for it._ Proof.: The generators of \(L_{\mathcal{T},U}\) are a finite collection of variables, as well as polynomials of a specific form: sums of monomials which are products of all but one vertex variable of an edge. Let \(\mathcal{K}=\{\phi_{U}f_{\mathcal{T},v}:v\in V(\mathcal{T})\}\setminus\{0\}\). Since \(U\) is a vertex cover of \(\mathcal{T}\), \(\mathcal{K}=\{\phi_{U}f_{\mathcal{T},v}:v\in V(\mathcal{T})\}\setminus\{0\}=\{ \phi_{U}f_{\mathcal{T},v}:v\in U\}\setminus\{0\}\). Let \(U^{\prime}\subseteq U\) be any minimal set of \(K:=|\mathcal{K}|\) vertices \(v\) so that \(\mathcal{K}=\{\phi_{U}f_{\mathcal{T},v}:v\in U^{\prime}\}\setminus\{0\}\). Since \(U\) is SZF-closed, \(U^{\prime}\) contains no pendant vertices of \(\mathcal{T}\). Let \(\ell\) be a pendant vertex of \(\mathcal{T}\), and label the elements of \(U^{\prime}=\{u_{i}\}_{i=1}^{K}\) so that if \(i<j\), then \(\operatorname{dist}(\ell,u_{i})\leq\operatorname{dist}(\ell,u_{j})\). We show by induction on \(k\in\{0,\ldots,K\}\) that the ideal generated by \(\mathbf{x}_{U}\cup\{\phi_{U}f_{\mathcal{T},u_{i}}\}_{i=1}^{k}\) is prime, and that \(\mathbf{x}_{U}\cup\{\phi_{U}f_{\mathcal{T},u_{i}}\}_{i=1}^{k}\) is irredundant as generators. Then the base case \(k=0\) holds because \(\mathbf{x}_{U}\), as just a collection of variables, generates a prime ideal and all such variables are necessary to generate \(\langle\mathbf{x}_{U}\rangle\). Fix an integer \(0\leq k<K\) and suppose that the result holds for \(\mathbf{x}_{U}\cup\{\phi_{U}f_{\mathcal{T},u_{i}}\}_{i=1}^{k}\). Let \(\mathcal{K}^{\prime}=\mathbf{x}_{U}\cup\{\phi_{U}f_{\mathcal{T},u_{i}}\}_{i=1}^ {k+1}\) and define \(g:=\phi_{U}f_{\mathcal{T},u_{k+1}}\). Since \(\mathcal{T}\) is a tree and \(\operatorname{dist}(\ell,u_{i})\leq\operatorname{dist}(\ell,u_{k+1})\) for all \(1\leq i\leq k\), some variables in \(g\) do not appear as variables in \(\mathbf{x}_{U}\cup\{\phi_{U}f_{\mathcal{T},u_{i}}\}_{i=1}^{k}\), e.g., any variables of vertices incident to \(u_{k+1}\) at distance \(\operatorname{dist}(\ell,u_{k+1})+1\) from \(\ell\), which exist since \(u_{k+1}\) is not pendant in \(\mathcal{T}\). Let \(X\) be the set of variables of \(g\) also occurring as variables of polynomials in \(\mathcal{K}^{\prime}\setminus\{g\}\). Define \(Y\) to be the variables that appear in \(g\) that are not contained in \(X\), which is nonempty by the argument above. Additionally, let \(Z\) be the collection of variables in polynomials of \(\mathcal{K}^{\prime}\) except the variables contained in \(X\). Define a collection of new variables \(X^{\prime}:=\{x^{\prime}_{m}:x_{m}\in X\}\). Let the polynomial \(g^{\prime}\) be \(g\) after application of the evaluation map that sends \(x_{m}\mapsto x^{\prime}_{m}\) for each \(x_{m}\in X\). Let \(I\) be the ideal generated by \(\mathcal{K}^{\prime}\setminus\{g\}\). The induction hypothesis gives that \(I\) is prime. Note that \(g=\phi_{U}f_{\mathcal{T},u_{k+1}}\) is a sum of at least two monomials of positive degree by the skew-closedness of \(U\). 
Thus, the ideal \(\langle g^{\prime}\rangle\) is prime because \(g^{\prime}\) is irreducible, since the linearity of \(\mathcal{T}\) implies that no variable divides more than one monomial of \(g^{\prime}\). Proposition 4.5 gives the primality of the ideal generated by \(I+\langle g^{\prime}\rangle\). Let \(\sigma:\mathbb{C}[X\cup Z]\times\mathbb{C}[X^{\prime}\cup Y]\to\mathbb{C}[X \cup Y\cup Z]\) be the quotient homomorphism \(\sigma:f\mapsto f+\langle\{x_{i}-x^{\prime}_{i}:x_{i}\in X\}\rangle\). Clearly, \(\sigma\) is surjective, so Proposition 3.34b in [20] (that surjective homomorphisms preserve primality) completes the proof of primality. Since \(Y\) is nonempty, \(g\) introduces a new variable in \(\mathcal{K}^{\prime}\). Since \(\mathcal{K}^{\prime}\setminus\{g\}\) is irredundant by induction, we may conclude that \(\mathcal{K}^{\prime}\) is also irredundant. Notice then that if \(\mathcal{T}\) is a linear hypertree where each edge has rank at least three and \(U\subseteq V(\mathcal{T})\) is an SZF-closed vertex cover, then the codimension of \(\mathcal{V}^{\mathcal{T}}(U)\) is \(|U\cup\{\phi_{U}f_{\mathcal{T},v}:v\in V(\mathcal{T})\}\setminus\{0\}|\). Proof of Theorem 4.4.: If \(\mathbf{x}\) is a nullvector of \(\mathcal{T}\), then \(Z(\mathbf{x})\) is an SZF-closed vertex cover of \(\mathcal{V}(T)\) by Proposition 4.1. Therefore, Lemma 4.6 gives that \(\mathcal{V}^{\mathcal{T}}(Z(\mathbf{x}))\) is an irreducible variety. Note that \(\mathbf{x}\in\mathcal{V}^{\mathcal{T}}(Z(\mathbf{x}))\), and \(\mathcal{V}^{\mathcal{T}}(Z(\mathbf{x}))\subseteq\ker(\mathcal{T})\), so, \[\ker(\mathcal{T})=\bigcup_{\mathbf{x}\in\ker(\mathcal{T})}\mathcal{V}^{ \mathcal{T}}(Z(\mathbf{x})).\] Thus, the minimal sets \(\mathcal{V}^{\mathcal{T}}(Z(\mathbf{x}))\) are in bijection with the irreducible components of \(\ker(\mathcal{T})\) (essentially because \(\mathbb{C}[x_{1},\ldots,x_{n}]\) is Noetherian), so \(C=\mathcal{V}^{\mathcal{T}}(Z(\mathbf{x}))\) for some \(\mathbf{x}\). **Lemma 4.7**.: _Let \(\mathcal{T}\) be a hypertree so that every edge has rank at least three. Let \(U\subseteq V(\mathcal{T})\) be given so that \(\mathcal{V}^{\mathcal{T}}(U)\) is an irreducible component of \(\ker(\mathcal{T})\). Then for each \(v\in V(\mathcal{T})\), \(|\{e\in E(\mathcal{T}):e\cap U=\{v\}\}|\neq 1\), i.e., the skew zero forcing rule stalls on \(U\)._ Proof.: If \(|\{e\in E(\mathcal{T}):e\cap U=\{v\}\}|=1\), then \(\phi_{U}f_{\mathcal{T},v}\) is one monomial. Since every edge of \(\mathcal{T}\) has rank at least three, \(\phi_{U}f_{\mathcal{T},v}\) is a product of at least two variables, contradicting that \(\mathcal{V}^{\mathcal{T}}(U)\) forms an irreducible component of \(\ker(\mathcal{T})\). **Lemma 4.8**.: _Let \(\mathcal{T}\) be a linear hypertree where each edge has rank at least three. Let \(U\subseteq V(\mathcal{T})\). Then \(\{\phi_{U}f_{\mathcal{T},v}:v\in V(\mathcal{T})\}\) does not contain any polynomials with exactly one monomial if and only if \(\mathcal{V}^{\mathcal{T}}(U)\) is irreducible if and only if \(U\) is SZF-closed._ Proof.: The backward direction of the first equivalence is Lemma 4.7, since the number of nonzero monomials appearing in \(\phi_{U}f_{\mathcal{T},v}\) equals \(|\{e\in E(\mathcal{T}):e\cap U=\{v\}\}|\), which also implies the second equivalence in the statement. This also implies that, if \(\{\phi_{U}f_{\mathcal{T},v}:v\in V(\mathcal{T})\}\) contains no single-monomial polynomials, then \(U\) is SZF-closed. 
The forward direction follows from Lemma 4.6 once we show that \(U\) is a vertex cover. We proceed by induction on the size of \(E(\mathcal{T})\). If \(|E(\mathcal{T})|=1\), then \(U\) - which is nonempty because it is SZF-closed - is clearly a vertex cover of \(\mathcal{T}\). Suppose the result holds for all \(1\leq|E(\mathcal{T})|\leq k\), and let \(|E(\mathcal{T})|=k+1\). Since \(\mathcal{T}\) is a hypertree, \(\mathcal{T}\) contains a leaf edge \(\ell\). Let \(v_{\ell}\) be a pendant vertex of \(\ell\), meaning \(f_{\mathcal{T},v_{\ell}}\) contains one monomial. Thus, \(U\) contains at least one element of \(\ell\). Let \(U_{\ell}=U\cap\ell\) and \(A:=\{e\in E(\mathcal{T}):e\cap U_{\ell}\neq\emptyset\}\). Further, let \(\mathcal{T}^{\prime}\) be the subhyperforest of \(\mathcal{T}\) induced by the non-isolated vertices of \((V(\mathcal{T}),E(\mathcal{T})\setminus A)\). If \(v\in V(\mathcal{T}^{\prime})\), then there exists \(e\in E(\mathcal{T})\) so that \(v\in e\) and \(e\cap U_{\ell}=\emptyset\), meaning \(v\notin U_{\ell}\). Thus, if \(e^{\prime}\in A\) is incident to \(v\), all vertices of \(e^{\prime}\cap U_{\ell}\) are distinct from \(v\). As a result, the monomial of \(f_{\mathcal{T},v}\) given by edge \(e^{\prime}\) does not appear in \(\phi_{U}f_{\mathcal{T},v}\). Since no monomials corresponding to edges of \(A\) appear in \(\phi_{U}f_{\mathcal{T},v}\) for any \(v\in V(\mathcal{T}^{\prime})\), \(\{\phi_{U}f_{\mathcal{T},v}:v\in V(\mathcal{T}^{\prime})\}=\{\phi_{U\setminus U _{\ell}}f_{\mathcal{T}^{\prime},v}:v\in V(\mathcal{T}^{\prime})\}\). Thus, \(\{\phi_{U\setminus U_{\ell}}f_{\mathcal{T}^{\prime},v}:v\in V(\mathcal{T}^{ \prime})\}\) does not contain any polynomials with exactly one monomial. Since \(|E(\mathcal{T}^{\prime})|<|E(\mathcal{T})|\), the induction hypothesis gives that \(U\setminus U_{\ell}\) is a vertex cover of \(\mathcal{T}^{\prime}\). Therefore, since \(U_{\ell}\) is a vertex cover of the edges in \(A\) and \(E(\mathcal{T})=A\cup E(\mathcal{T}^{\prime})\), \(U\) is a vertex cover of \(\mathcal{T}\), completing the proof. **Proposition 4.9**.: _Let \(\mathcal{T}\) be a linear hypertree. If \(A,B\subseteq V(\mathcal{T})\) so that \(A\subseteq B\) and \(B^{\prime}\subseteq V(\mathcal{T})\) is SZF-derived from \(B\), then there exists \(A^{\prime}\subseteq V(\mathcal{T})\) so that \(A^{\prime}\) is SZF-derived from \(A\) and \(A^{\prime}\subseteq B^{\prime}\)._ Proof.: Let \(A,B,B^{\prime}\subseteq V(\mathcal{T})\) so that \(A\subseteq B\) and \(B^{\prime}\) is SZF-derived from \(B\). Let \(\{v_{i}\}_{i=1}^{l}\), \(\{u_{i}\}_{i=1}^{l}\) be sequences of vertices so that the SZF rule applied to \(v_{i}\) forces \(u_{i}\) for each \(1\leq i\leq l\) when deriving \(B^{\prime}\) from \(B\). Furthermore, define \(\{v^{\prime}_{i}\}_{i=1}^{m},\{u^{\prime}_{i}\}_{i=1}^{m}\subseteq V(\mathcal{ T})\) to be maximal subsequences of \(\{v_{i}\}\) and \(\{u_{i}\}\) so that the SZF rule can be applied at \(v^{\prime}_{i}\) to force \(u^{\prime}_{i}\) starting from set \(A\). Let \(S=A\cup\{u^{\prime}_{i}\}_{i=1}^{m}\). Since \(A\subseteq B\), and \(B^{\prime}=B\cup\{u_{i}\}_{i=1}^{l}\), \(S\subseteq B^{\prime}\). If \(S\) is SZF-closed, then defining \(A^{\prime}:=S\) completes the proof. Suppose instead that \(S\) is not SZF-closed. Then there exists vertex \(w\in V(\mathcal{T})\) so that the SZF rule can be applied at \(w\). Thus, \(|\{e\ni w:(e\setminus\{w\})\cap S=\emptyset\}|=1\). 
Let \(f\) be the unique edge containing \(w\) so that \((f\setminus\{w\})\cap S=\emptyset\). Since \(S\subseteq B^{\prime}\) and \(B^{\prime}\) is SZF-closed, \((f\setminus\{w\})\cap B^{\prime}\neq\emptyset\). Let \(z\in(f\setminus\{w\})\cap B^{\prime}\), and define \(S_{1}:=S\cup\{z\}\), which is SZF-derived from \(S\) by applying the SZF-rule at \(w\). Then \(S_{1}\subseteq B^{\prime}\) and \(S_{1}\) is SZF-derived from \(A\). However, this contradicts the maximality of \(S\).

**Proposition 4.10**.: _Let \(\mathcal{T}\) be a linear hypertree so that every edge has rank at least three. Let \(S\subseteq V(\mathcal{T})\) so that \(\mathcal{V}^{\mathcal{T}}(S)\) is an irreducible component of \(\ker(\mathcal{T})\) and \(S\) is the intersection of all zero loci of elements of \(\mathcal{V}^{\mathcal{T}}(S)\). Then \(S\) is SZF-derived from \(\emptyset\)._

Proof.: Let \(S\) be as given in the statement. By Proposition 4.1, \(S\) is a vertex cover of \(\mathcal{T}\). Additionally, Lemma 4.8 gives that \(S\) is derived from itself under the skew zero forcing rule, i.e., \(S\) is SZF-closed. Let \(A\subseteq S\) be a set of minimum size so that \(S\) is SZF-derived from \(A\). By way of contradiction, suppose \(A\neq\emptyset\). Then, since \(\emptyset\subseteq A\), Proposition 4.9 gives the existence of an SZF-closed set \(A_{0}\subseteq V(\mathcal{T})\) so that \(A_{0}\subseteq S\) and \(A_{0}\) is SZF-derived from \(\emptyset\). Notice that \(A\neq\emptyset\) implies \(A_{0}\subsetneq S\). Since \(A_{0}\) and \(S\) are SZF-closed, they are the intersections of the zero loci of the elements of \(\mathcal{V}^{\mathcal{T}}(A_{0})\) and of \(\mathcal{V}^{\mathcal{T}}(S)\), respectively. Thus, \(\mathcal{V}^{\mathcal{T}}(S)\subsetneq\mathcal{V}^{\mathcal{T}}(A_{0})\). Furthermore, since \(A_{0}\) is SZF-closed, Lemma 4.8 implies \(\mathcal{V}^{\mathcal{T}}(A_{0})\) is an irreducible variety, contradicting that \(\mathcal{V}^{\mathcal{T}}(S)\) is an irreducible component of \(\ker(\mathcal{T})\). Therefore, \(A=\emptyset\).

**Corollary 4.11**.: _Let \(\mathcal{T}\) be a linear hypertree where every edge has cardinality at least \(3\). Then the SZF-closed sets form a poset under the inclusion relation; the minimal elements of this poset are exactly the generators of the irreducible components of \(\ker(\mathcal{T})\), and they are SZF-derived from \(\emptyset\)._

The above Corollary indicates the origin of the term "generating set" to refer to minimal zero loci of nullvectors. Now we turn our attention to kernel-closed sets in linear hypertrees and their relation to SZF-closed sets. The following result mirrors that of Proposition 2.15 for trees, in that we show that SZF-closed sets are the zero loci of individual nullvectors.

**Theorem 4.12**.: _Let \(\mathcal{T}\) be a linear hypertree. If \(S\subseteq V(\mathcal{T})\) is SZF-closed, then \(S\) is kernel-closed._

Proof.: Let \(S\subseteq V(\mathcal{T})\) be SZF-closed. Consider rooting \(\mathcal{T}\) at a pendant vertex \(w\) of leaf edge \(\ell\). We construct a nullvector \(\mathbf{x}\) with entries \(x_{v}\) for \(v\in V(\mathcal{T})\) by iteratively working through the hypertree \(\mathcal{T}\). Start by assigning zeros to coordinates corresponding to vertices of \(S\), i.e., let \(x_{v}=0\) for each \(v\in S\). Now we choose values for all nonzero coordinates of \(\mathbf{x}\). Let \(x_{v}=1\) for every pendant vertex \(v\in\ell\setminus S\). Since \(S\) is SZF-closed, \(\phi_{S}f_{\mathcal{T},v}=0\) for every pendant vertex \(v\in\ell\).
If \(\mathcal{T}\) is a single edge, we have completed the proof. Otherwise, let \(z\in\ell\) be the unique vertex satisfying \(\deg(z)>1\). If \(z\in S\), then we already know \(x_{z}=0\). If not, let \(x_{z}=1\). Therefore, we have determined the entries of \(\mathbf{x}\) for every vertex with distance at most one from \(w\). Let \(v\in V(\mathcal{T})\) be distance \(h\geq 1\) from \(w\), and if \(u\in V(\mathcal{T})\) satisfies \(\operatorname{dist}(w,u)\leq h\), then \(x_{u}\) has already been assigned. Furthermore, assume that if \(u\in V(\mathcal{T})\) satisfies \(\operatorname{dist}(u,w)<h\), then \(f_{\mathcal{T},u}(\mathbf{x})=0\). If every edge incident to \(v\) contains a vertex of \(S\setminus\{v\}\), then define \(x_{y}=1\) for every \(y\in V(\mathcal{T})\setminus S\) satisfying \(\operatorname{dist}(w,y)=h+1\) and \(y\) is adjacent to \(v\). In this case, \(\phi_{S}f_{\mathcal{T},v}=0\), so \(f_{\mathcal{T},u}(\mathbf{x})=0\) holds. If this is not the case, let edges \(\{e_{i}\}_{i=1}^{m}\) contain no vertices of \(S\setminus\{v\}\) for some \(m\geq 2\). The bound on \(m\) comes from the assumption that \(S\) is SZF-closed. We split into two cases. Case 1: There exists \(1\leq j\leq m\) so that \(e_{j}\) contains vertices at distance \(h-1\) from \(w\). Then all entries of \(\mathbf{x}\) corresponding to vertices of \(e_{j}\) have already been assigned. Define \(c:=\prod_{u\in e_{j}\setminus\{v\}}x_{u}\), and note that by assumption, \(c\neq 0\). Choose \(u_{i}\in e_{i}\setminus\{v\}\) for each \(1\leq i\leq m\) so that \(i\neq j\). Then define \(x_{u_{i}}=-c/(m-1)\) and if \(u\in\bigcup_{i\neq j}(e_{i}\setminus\{v,u_{i}\})\), then let \(x_{u}=1\). Lastly, if \(u\) is a neighbor of \(v\) so that \(x_{u}\) has not been assigned (these are vertices outside \(S\) and outside \(\{e_{i}\}\)), let \(x_{u}=1\). It is straightforward to see that \(f_{\mathcal{T},v}(\mathbf{x})=0\). Case 2: There does not exist \(1\leq j\leq m\) so that \(e_{j}\) contains vertices at distance \(h-1\) from \(w\). For each \(u\in e_{1}\setminus\{v\}\), define \(x_{u}=1\). Choose \(u_{i}\in e_{i}\setminus\{v\}\) for each \(2\leq i\leq m\). Then define \(x_{u_{i}}=-1/(m-1)\) and if \(u\in\bigcup_{i\geq 2}(e_{i}\setminus\{v,u_{i}\})\), then let \(x_{u}=1\). Lastly, if \(u\) is a neighbor of \(v\) so that \(x_{u}\) has not been assigned (these are vertices outside \(S\) and outside \(\{e_{i}\}\)), let \(x_{u}=1\). Again we have that \(f_{\mathcal{T},v}(\mathbf{x})=0\) The last result of this section provides a hypergraph analogue to the statement (Proposition 2.15) that trees are SZF-complete. **Corollary 4.13**.: _For linear hypertree \(\mathcal{T}\) with every edge having rank at least three, a set \(S\subseteq V(\mathcal{T})\) is SZF-closed if and only if it is kernel-closed._ Proof.: The only content of the statement beyond Theorem 4.12 is that each kernel-closed set \(U=Z(\mathbf{x})\) is SZF-closed. Suppose not; then \(|\{e\in E(\mathcal{T}):e\cap U=\{v\}\}|=1\) for some \(v\in V(\mathcal{T})\), so \(\phi_{U}f_{\mathcal{T},v}\) is one monomial which evaluates to a nonzero value at \(\mathbf{x}\), contradicting that \(\mathbf{x}\) is a nullvector. ### Complete Hypergraph Nullvariety In this section, we investigate the nullvariety and SZF-closed sets of complete hypergraphs. We let \(\mathcal{K}_{n}^{(k)}\) denote the \(k\)-uniform complete hypergraph on \(n\) vertices. Let \(\mathcal{X}\) be a collection of variables. 
Then define \(e_{k}(\mathcal{X})\) to be the \(k\)th elementary symmetric polynomial in the variables of \(\mathcal{X}\), i.e., if \(\mathcal{X}=\{x_{i}\}_{i=1}^{m}\), then \[e_{k}(\mathcal{X})=\sum_{s\in\binom{[m]}{k}}\prod_{i\in s}x_{i}.\] One important property of the complete hypergraph \(\mathcal{K}_{n}^{(k)}=([n],\binom{[n]}{k})\), is the following equality, where we assume \(v\in[n]\). \[f_{\mathcal{K}_{n}^{(k)},v}=\frac{\partial}{\partial x_{v}}e_{k}\left(\{x_{u }\}_{u\in[n]}\right)\] We exploit this in the following result. The proof is a formalization of a sketch given by [17]. **Theorem 4.14**.: _If \(n\geq k\geq 2\), let \(\mathcal{G}=\mathcal{K}_{n}^{(k)}\). Then the collection of irreducible components of \(\ker(\mathcal{G})\) is given by_ \[\left\{\mathcal{V}^{\mathcal{G}}(S):S\in\binom{[n]}{n-k+2}\right\}_{.}\] Proof.: Let \(\mathbf{y}\in\ker(\mathcal{G})\), and let \(\mathcal{X}=\{x_{u}\}_{u\in[n]}\). Since \(f_{\mathcal{G},u}(\mathbf{y})=0\) for all \(u\in[n]\), we are interested in the simultaneous vanishing of the following set. \[\left\{\frac{\partial}{\partial x_{u}}e_{k}(\mathcal{X})\right\}_{u\in U}\] Note that for every \(S\in\binom{[n]}{n-k+2}\), we have \(\mathcal{V}^{\mathcal{G}}(S)\subseteq\ker(\mathcal{G})\), as for any \(\mathbf{z}\in\mathcal{V}(S)\), every monomial of \(\frac{\partial}{\partial x_{u}}e_{k}(\mathcal{X})\) evaluates to zero by the pigeonhole principle. Thus, it suffices to show that at least \(n-k+2\) coordinates of \(\mathbf{y}\) are zero. We prove the result by induction on \(k\) for fixed \(n\). Let \(n\geq 2\) be given. If \(k=2\), then \(\frac{\partial}{\partial x_{i}}e_{k}(\mathcal{X})=\left(\sum_{j\in[n]}x_{j} \right)-x_{i}\). Thus, if \(n=2\), then \(y_{1}=y_{2}=0\). Otherwise, if \(n>2\), then \(0=(\frac{\partial}{\partial x_{i}}e_{k}(\mathcal{X}))(\mathbf{y})-(\frac{ \partial}{\partial x_{j}}e_{k}(\mathcal{X}))(\mathbf{y})=y_{j}-y_{i}\) for each distinct pair \(i,j\in[n]\). Therefore, \(\mathbf{y}\) is a constant vector, so \(\mathbf{y}=\mathbf{0}\), giving the desired result in this case. Now suppose the result holds for some \(k\geq 2\) so that \(k<n\), and for all \(n^{\prime}\) so that \(k\leq n^{\prime}\leq n\). Now we consider \(\mathcal{G}^{\prime}:=\mathcal{K}_{n}^{(k+1)}\). Let \(\mathbf{y}^{\prime}\in\ker(\mathcal{G}^{\prime})\). Take note of the following equality: \[\frac{\partial}{\partial x_{u}}e_{k+1}(\mathcal{X})=e_{k}(\mathcal{X})-x_{u} \cdot\frac{\partial}{\partial x_{u}}e_{k}(\mathcal{X})\] Notice also that \[e_{k}(\mathcal{X})=\frac{1}{k}\sum_{u\in[n]}x_{u}\frac{\partial}{\partial x_{u }}e_{k}(\mathcal{X}). \tag{2}\] Thus, \(\frac{\partial}{\partial x_{u}}e_{k+1}(\mathcal{X})=(e_{k}(\mathcal{X}))( \mathbf{y}^{\prime})-(x_{u}\cdot\frac{\partial}{\partial x_{u}}e_{k}(\mathcal{ X}))(\mathbf{y}^{\prime})=0\) for all \(u\in[n]\) implies that every \((x_{u}\cdot\frac{\partial}{\partial x_{u}}e_{k}(\mathcal{X}))(\mathbf{y}^{ \prime})=0\), as otherwise contradicts (2). We seek to show that at least \(n-(k+1)+2\) coordinates of \(\mathbf{y}^{\prime}\) are zero. Suppose that \(q\in\mathbb{N}\) coordinates of \(\mathbf{y}^{\prime}\) are zero and the remaining \(n-q\) are nonzero. For the \(n-q\) coordinates \(u\) which are nonzero, \((x_{u}\cdot\frac{\partial}{\partial x_{u}}e_{k}(\mathcal{X}))(\mathbf{y}^{ \prime})=0\) implies \((\frac{\partial}{\partial x_{u}}e_{k}(\mathcal{X}))(\mathbf{y}^{\prime})=0\). Therefore, if \(n-q\geq k\), this contradicts the induction hypothesis for \(n^{\prime}=n-q\). 
Thus, \(n-q\leq k-1\), giving \(q\geq n-k+1=n-(k+1)+2\), completing the proof. **Corollary 4.15**.: _If \(n\geq k\geq 2\), then \(S\subseteq V(\mathcal{K}_{n}^{(k)})\) is kernel-closed if and only if \(|S|\geq n-k+2\)._ Proof.: Let \(S\subseteq V(\mathcal{K}_{n}^{(k)})\). If \(|S|<n-k+2\), then \(S\) is the not the zero locus of a nullvector, as otherwise would contradict Theorem 4.14. Conversely, if \(|S|\geq n-k+2\), there are at most \(k-2\) vertices outside \(S\). Since \(\mathcal{K}_{n}^{(k)}\) is \(k\)-uniform, \(S\) contains at least two vertices from every edge of \(E(\mathcal{K}_{n}^{(k)})\). Therefore, if \(v\in V(\mathcal{K}_{n}^{(k)})\), then \(\phi_{S}f_{\mathcal{K}_{n}^{(k)},v}=0\). Theorem 4.14 gives that \(\ker(\mathcal{K}_{n}^{(k)})\) contains \(\binom{n}{n-k+2}=\binom{n}{k-2}\) irreducible components each of codimension \(n-k+2\), i.e., dimension \(k-2\). Furthermore, Corollary 4.15 completely describes the collection of kernel-closed sets for \(\mathcal{K}_{n}^{(k)}\). What about the collection of SZF-closed sets? Those are summarized by the following result. **Proposition 4.16**.: _Let \(\mathcal{K}_{n}^{(k)}\) be a complete \(k\)-uniform hypergraph for some \(2\leq k\leq n\) and \(U\subseteq[n]=V(\mathcal{K}_{n}^{(k)})\). Then \(U\) is a stalled set if and only if \(|U|\notin\{n-k,n-k+1\}\)._ Proof.: Let \(U\subseteq[n]\) and \(v\in[n]\) be arbitrary. We split into cases according to \(|U|\). Case 1: \(|U|>n-k+1\). Then every edge of \(\mathcal{K}_{n}^{(k)}\) incident to \(v\) contains at least two representatives of \(U\), so the SZF rule cannot be applied at \(v\). Case 2: \(|U|=n-k+1\). Since \(n\geq k\), \(U\neq\emptyset\). If \(v\in U\), then the fact that \(\mathcal{K}_{n}^{(k)}\) is \(k\)-uniform and \(V(\mathcal{K}_{n}^{(k)})\setminus U\) contains exactly \(k-1\) vertices implies that there is exactly one edge \(e\) incident to \(v\) so that \((e\setminus\{v\})\cap U=\emptyset\) (namely, the edge containing \(v\) and all vertices outside \(U\)). Thus, \(U\) is not stalled as the SZF rule can be applied at \(v\). Case 3: \(|U|=n-k\). Since \(k\geq 2\), \(n-k<n\), so \(U\neq V(\mathcal{K}_{n}^{(k)})\). If \(v\notin U\), then the fact that \(\mathcal{K}_{n}^{(k)}\) is \(k\)-uniform and \(V(\mathcal{K}_{n}^{(k)})\setminus U\) contains exactly \(k\) vertices implies that exactly one edge \(e\) incident to \(v\) satisfies \((e\setminus\{v\})\cap U=\emptyset\) (namely the edge containing all vertices outside \(U\), which contains \(v\)). Thus, \(U\) is not stalled as the SZF rule can be applied at \(v\). Case 4: \(|U|<n-k\). If \(v\in U\), then the fact that \(\mathcal{K}_{n}^{(k)}\) is \(k\)-uniform and \(V(\mathcal{K}_{n}^{(k)})\setminus U\) contains at least \(k+1\) vertices implies that at least \(\binom{k+1}{k-1}>1\) edges \(e\) incident to \(v\) satisfy \((e\setminus\{v\})\cap U=\emptyset\). So, the SZF rule cannot be applied to \(v\). On the other hand, if \(v\notin U\), then the fact that \(\mathcal{K}_{n}^{(k)}\) is \(k\)-uniform and \(V(\mathcal{K}_{n}^{(k)})\setminus U\) contains at least \(k+1\) vertices implies that at least \(\binom{k}{k-1}>1\) edges \(e\) incident to \(v\) satisfy \((e\setminus\{v\})\cap U=\emptyset\). So, the SZF rule cannot be applied at \(v\). The previous proposition shows the SZF-closed sets can be partitioned into two classes, those with size at least \(n-k+2\) and those with size at most \(n-k-1\). The partition class with larger sets agrees with the kernel-closed, while those in the other partition class are not kernel-closed. 
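Both descriptions are easy to confirm by brute force on small complete hypergraphs. The sketch below is our own check (the function name and the choice \(n=5\), \(k=3\) are ours): it enumerates the stalled sets of \(\mathcal{K}_{n}^{(k)}\) directly from the rule of Definition 4.3 and compares their sizes against Proposition 4.16 and Corollary 4.15.

```python
from itertools import combinations

def stalled_sets(n, k):
    """Subsets U of [n] on which the hypergraph SZF rule cannot be applied
    in the complete k-uniform hypergraph K_n^(k)."""
    V = set(range(n))
    edges = [frozenset(e) for e in combinations(V, k)]
    stalled = []
    for r in range(n + 1):
        for U in map(set, combinations(V, r)):
            # U is stalled iff no vertex v lies in exactly one edge e
            # with (e - {v}) disjoint from U.
            if all(sum(1 for e in edges
                       if v in e and not ((e - {v}) & U)) != 1
                   for v in V):
                stalled.append(frozenset(U))
    return stalled

n, k = 5, 3
for U in stalled_sets(n, k):
    # Proposition 4.16: stalled sets have size outside {n-k, n-k+1};
    # Corollary 4.15: U is kernel-closed iff |U| >= n - k + 2.
    assert len(U) not in {n - k, n - k + 1}
    print(sorted(U), "kernel-closed" if len(U) >= n - k + 2 else "not kernel-closed")
```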
However, if \(n=k\), the partition class containing sets of size \(n-k-1\) does not exist, meaning the collections agree only for the single edge, \(\mathcal{K}_{k}^{(k)}\). Thus, complete hypergraphs satisfying \(n>k\) provide an example where the collections of SZF-closed and kernel-closed sets are not the same. ## 5 Future Directions Here, we list a few problems that remain open. 1. Is there a combinatorial characterization of graphs which are SZF-complete, at least for bipartite graphs? What about for hypergraphs? 2. Describe the class of graphs whose SZF-closure is a matroid closure operator. 3. How can the rest of Theorem 2.19 be generalized to hypertrees? 4. Describe the structure of the kernel and SZF matroids. What are the atoms, coatoms, independent/dependent sets, cycles, hyperplanes, etc.? 5. Can the frameworks of closure operators and matroids, or generalizations thereof, be applied to describe the kernel and SZF-closed set systems for hypergraphs? 6. Is Corollary 3.14 true for a broader range of graphs \(G\)? ## 6 Acknowledgements Thanks to Alex Duncan and Darren Narayan for helpful discussions during the current work, to Vladimir Nikiforov for asking the right questions, and to Leslie Hogben for giving fascinating talks that sparked our interest in zero forcing.
2308.02400
Work-in-Progress: A Universal Instrumentation Platform for Non-Volatile Memories
Emerging non-volatile memories (NVMs) represent a disruptive technology that allows a paradigm shift from the conventional von Neumann architecture towards more efficient computing-in-memory (CIM) architectures. Several instrumentation platforms have been proposed to interface NVMs allowing the characterization of single cells and crossbar structures. However, these platforms suffer from low flexibility and are not capable of performing CIM operations on NVMs. Therefore, we recently designed and built the NeuroBreakoutBoard, a highly versatile instrumentation platform capable of executing CIM on NVMs. We present our preliminary results demonstrating a relative error < 5% in the range of 1 k$\Omega$ to 1 M$\Omega$ and showcase the switching behavior of a HfO$_2$/Ti-based memristive cell.
Felix Staudigl, Mohammed Hossein, Tobias Ziegler, Hazem Al Indari, Rebecca Pelke, Sebastian Siegel, Dirk J. Wouters, Dominik Sisejkovic, Jan Moritz Joseph, Rainer Leupers
2023-08-03T14:24:57Z
http://arxiv.org/abs/2308.02400v1
# Work-in-Progress: A Universal Instrumentation Platform for Non-Volatile Memories ###### Abstract Emerging non-volatile memories (NVMs) represent a disruptive technology that allows a paradigm shift from the conventional von Neumann architecture towards more efficient computing-in-memory (CIM) architectures. Several instrumentation platforms have been proposed to interface NVMs allowing the characterization of single cells and crossbar structures. However, these platforms suffer from low flexibility and are not capable of performing CIM operations on NVMs. Therefore, we recently designed and built the NeuroBreakoutBoard, a highly versatile instrumentation platform capable of executing CIM on NVMs. We present our preliminary results demonstrating a relative error \(<5\%\) in the range of \(1\,\mathrm{k}\Omega\) to \(1\,\mathrm{M}\Omega\) and showcase the switching behavior of a HfO\({}_{2}\)/Ti-based memristive cell. ReRAM, CIM, LIM, memristor, instrumentation ## I Introduction Emerging non-volatile memories (NVMs) represent an ideal substrate for enabling computing-in-memory (CIM) by offering high density and non-volatility properties [1]. CIM can be implemented in analog [2] or digital [3] fashion. Although the former offers the best computational efficiency, analog CIM requires expensive ADCs/DACs to convert inputs and outputs from the digital to the analog domain and vice versa. The latter uses so-called logic families to implement digital gates within the NVM, thereby circumventing the conversion but suffering from lower computational efficiency. Several instrumentation platforms have been proposed to characterize NVMs at the device and crossbar levels. However, most of these platforms only support passive crossbar structures [4, 5, 6, 7, 8, 9, 10]. To the best of our knowledge, there are currently no platforms capable of executing either analog or digital CIM on NVMs. While the CIM operation is performed within the memory, the instrumentation platform must provide the surrounding circuitry to generate synchronized voltage pulses (inputs) and process the resulting currents (outputs). Hence, we designed and built the NeuroBreakoutBoard (NBB), a universal instrumentation platform to perform CIM on NVMs. Our platform supports all memristor-based memories in both passive (1R) and active (1T1R) crossbar configurations. Furthermore, the unique combination of the interconnection matrix, custom designed transimpedance amplifiers (TIAs), and the implemented firmware enable the NBB to perform both analog and digital CIM operations by using different NVMs. ## II NeuroBreakoutBoard (NBB) The NBB generates the required input pulses with a multichannel 12-bit DAC (\(0\,\mathrm{V}\) to \(10\,\mathrm{V}\), \(\pm 2.5\,\mathrm{V}\), \(\pm 10\,\mathrm{V}\)). These inputs can be arbitrarily mapped by the interconnection matrix to the _NVM interface_ shown in Fig. 2. The west/east/south multiplexers provide five independent potentials, one external measurement/supply line, and the ground potential. Fig. 1: Overview of the implemented structure and modules. Fig. 2: Image of the NeuroBreakoutBoard and its main components. Additionally, the north multiplexers connect to the sensing module capable of measuring currents and voltages. The sensing module consists of an array of programmable TIAs, together with high-performance 14-bit ADCs enabling precise measurements of resistances in the range of \(1\,\mathrm{k}\Omega\) to \(1\,\mathrm{M}\Omega\). 
Each TIA offers four distinct sensitivity levels subdividing the broad range of input currents/voltages by four to enhance the overall measurement accuracy. The NBB can control up to 68 lines simultaneously and connects via the _NVM interface_ to various extension boards implementing additional digital circuitry, chip packages, and measuring capabilities. The NBB is orchestrated by the _controller interface_, which bundles all critical control and data signals. The _controller unit_ connects to this interface and provides a microcontroller which runs the implemented NBB firmware. The software offers a Python/C++ interface to issue write/read/compute operations to be executed on the crossbar array. The implemented firmware executes operations by assigning the respective voltage pulses on the corresponding pins. Likewise, the firmware controls the sensing module by iteratively adjusting the programmable TIAs and triggering the respective ADCs. ## III Result In this section, we discuss our preliminary results of two experiments that showcase the measurement accuracy and flexibility of the NeuroBreakoutBoard. Each of the four sensitivity stages has been calibrated with high-precision resistances to obtain maximum accuracy. **Accuracy:** To determine the measurement accuracy of the NBB, we applied a similar methodology described by Berdan et al. [4]. Fig. 3 (a) illustrates the result of 1,000 measurements of a reference resistor. Each measurement incorporates 50 consecutive ADC samplings to omit the impact of noise. We determined the actual resistance of the resistor using the Keithley DMM7510 (marked red). Furthermore, Fig. 3 (b) depicts the relative error and standard deviation of various reference resistances calculated based on 10,000 measurements per resistance. _Both experiments report a high measurement precision (\(<5\%\) relative error and \(<1\%\) of relative \(\sigma\)) considering the additional line resistances and parasitic effects of the interconnection matrix. **Switching characteristics:** Finally, we performed several write/read operations on a HfO\({}_{2}\)/Ti 1T1R crossbar array. The crossbar is fabricated with the MAD200 process offered by CMP/CEA-LETI [11] in \(130\,\mathrm{nm}\) consisting of a demultiplexer and two \(512\times 32\) crossbar structures. To facilitate the chip on the NBB, we manufactured an extension board connecting to the NVM interface. The extension board offers the required digital circuitry (I/O expanders) to drive and control the chip. Fig. 3 (c) illustrates the resistance change of a single cell within the crossbar array over the course of alternating RESET/SET operations. ## IV Conclusion The NeuroBreakoutBoard is a versatile instrumentation platform to characterize NVMs and execute CIM operations. Our preliminary results indicate a relative error in the range of \(1\,\mathrm{k}\Omega\) to \(1\,\mathrm{M}\Omega\) lower than \(5\%\) and a high precision (\(\sigma<1\%\)). Moreover, we performed several read/write operations on a 1T1R crossbar structure. In the future, we intend to provide the NeuroBreakoutBoard as a commercial solution, thereby facilitating the accessibility of memristor-based CIM to both academia and industry. Additionally, we aim to integrate the NeuroBreakoutBoard into a hardware-in-the-loop simulation environment to conduct thorough investigations on the reliability of non-volatile memories (NVMs) under realistic workloads.
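As a companion to the accuracy methodology of Sect. III, the snippet below illustrates how the two reported figures of merit, relative error and relative \(\sigma\), can be computed from repeated measurements against a reference value. It is only a sketch with synthetic readings and our own function names; it is not the NBB firmware or its Python/C++ interface.

```python
import numpy as np

def measure_resistance(adc_resistance_samples):
    """Average 50 consecutive ADC-derived resistance readings into one
    measurement, mirroring the averaging described in Sect. III."""
    return float(np.mean(adc_resistance_samples))

def accuracy_metrics(measurements, r_reference):
    """Relative error and relative standard deviation of repeated measurements
    with respect to a reference value (e.g. a bench DMM reading)."""
    m = np.asarray(measurements, dtype=float)
    rel_error = abs(m.mean() - r_reference) / r_reference
    rel_sigma = m.std(ddof=1) / m.mean()
    return rel_error, rel_sigma

# Illustration with synthetic readings only (no real NBB data): a 100 kOhm
# reference "measured" 10,000 times with 50 noisy samples per measurement.
rng = np.random.default_rng(0)
r_ref = 100e3
readings = [measure_resistance(rng.normal(1.01 * r_ref, 500.0, size=50))
            for _ in range(10_000)]
rel_err, rel_sig = accuracy_metrics(readings, r_ref)
print(f"relative error = {rel_err:.2%}, relative sigma = {rel_sig:.3%}")
```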
2303.00165
Diffusion Probabilistic Fields
Diffusion probabilistic models have quickly become a major approach for generative modeling of images, 3D geometry, video and other domains. However, to adapt diffusion generative modeling to these domains the denoising network needs to be carefully designed for each domain independently, oftentimes under the assumption that data lives in a Euclidean grid. In this paper we introduce Diffusion Probabilistic Fields (DPF), a diffusion model that can learn distributions over continuous functions defined over metric spaces, commonly known as fields. We extend the formulation of diffusion probabilistic models to deal with this field parametrization in an explicit way, enabling us to define an end-to-end learning algorithm that side-steps the requirement of representing fields with latent vectors as in previous approaches (Dupont et al., 2022a; Du et al., 2021). We empirically show that, while using the same denoising network, DPF effectively deals with different modalities like 2D images and 3D geometry, in addition to modeling distributions over fields defined on non-Euclidean metric spaces.
Peiye Zhuang, Samira Abnar, Jiatao Gu, Alex Schwing, Joshua M. Susskind, Miguel Ángel Bautista
2023-03-01T01:37:24Z
http://arxiv.org/abs/2303.00165v1
# Diffusion Probabilistic Fields ###### Abstract Diffusion probabilistic models have quickly become a major approach for generative modeling of images, 3D geometry, video and other domains. However, to adapt diffusion generative modeling to these domains the denoising network needs to be carefully designed for each domain independently, oftentimes under the assumption that data lives in a Euclidean grid. In this paper we introduce Diffusion Probabilistic Fields (DPF), a diffusion model that can learn distributions over continuous functions defined over metric spaces, commonly known as _fields_. We extend the formulation of diffusion probabilistic models to deal with this field parametrization in an explicit way, enabling us to define an end-to-end learning algorithm that side-steps the requirement of representing fields with latent vectors as in previous approaches (Dupont et al., 2022; Du et al., 2021). We empirically show that, while using the same denoising network, DPF effectively deals with different modalities like 2D images and 3D geometry, in addition to modeling distributions over fields defined on non-Euclidean metric spaces. ## 1 Introduction Diffusion probabilistic modeling has quickly become a central approach for learning data distributions, obtaining impressive empirical results across multiple domains like images (Nichol and Dhariwal, 2021), videos (Ho et al., 2022) or even 3D geometry (Luo and Hu, 2021). In particular, Denoising Diffusion Probabilistic Models (often referred to as DDPMs or diffusion generative models) (Ho et al., 2020; Nichol and Dhariwal, 2021) and their continuous-time extension (Song et al., 2021) both present a training objective that is more stable than precursors like generative adversarial nets (GANs) (Goodfellow et al., 2014) or energy-based models (EBMs) (Du et al., 2020). In addition, diffusion generative models have shown to empirically outperform GANs in the image domain (Dhariwal and Nichol, 2021) and to suffer less from mode-seeking pathologies during training (Kodali et al., 2017). A diffusion generative model consists of three main components: the forward (or _diffusion_) process, the backward (or _inference_) process, and the denoising network (also referred to as the _score network1_ due to its equivalence with denoising score-matching approaches Dickstein et al. (2015)). A substantial body of work has addressed different definitions of the forward and backward processes (Rissanen et al., 2022; Bansal et al., 2022; Song et al., 2021), focusing on the image domain. However, there are two caveats with current diffusion models that remain open. The first one is that data is typically assumed to live on a discrete Euclidean grid (exceptions include work on molecules (Hoogeboom et al., 2022) and point clouds (Luo and Hu, 2021)). The second one is that the denoising network is heavily tuned for each specific data domain, with different network architectures used for images (Nichol and Dhariwal, 2021), video (Ho et al., 2022), or geometry (Luo and Hu, 2021). Footnote 1: We use the terms score network/function and denoising network/function exchangeably in the paper. In order to extend the success of diffusion generative models to the large number of diverse areas in science and engineering, a unification of the score formulation is required. 
Importantly, a unification enables use of the same score network across different data domains exhibiting different geometric structure without requiring data to live in or to be projected into a discrete Euclidean grid. To achieve this, in this paper, we introduce the Diffusion Probabilistic Field (DPF). DPFs make progress towards the ambitious goal of unifying diffusion generative modeling across domains by learning distributions over continuous functions. For this, we take a functional view of the data, interpreting a data point \(\mathbf{x}\in\mathbb{R}^{d}\) as a function \(f:\mathcal{M}\to\mathcal{Y}\)(Dupont et al., 2022; Du et al., 2021). The function \(f\) maps elements from a metric space \(\mathcal{M}\) to a signal space \(\mathcal{Y}\). This functional view of the data is commonly referred to as a _field_ representation (Xie et al., 2022), which we use to refer to functions of this type. An illustration of this field interpretation is provided in Fig. 1. Using the image domain as an illustrative example we can see that one can either interpret images as multidimensional array \(\mathbf{x}_{i}\in\mathbb{R}^{h\times w}\times\mathbb{R}^{3}\) or as field \(f:\mathbb{R}^{2}\to\mathbb{R}^{3}\) that maps 2D pixel coordinates to RGB values. This field view enables a unification of seemingly different data domains under the same parametrization. For instance, 3D geometry data is represented via \(f:\mathbb{R}^{3}\to\mathbb{R}\), and spherical signals become fields \(f:\mathbb{S}^{2}\to\mathbb{R}^{d}\). In an effort to unify generative modeling across different data domains, field data representations have shown promise in three recent approaches: From data to functa (Functa) (Dupont et al., 2022), GEnerative Manifold learning (GEM) (Du et al., 2021) and Generative Adversarial Stochastic Process (GASP) (Dupont et al., 2022). The first two approaches adopt a latent field parametrization (Park et al., 2019), where a field network is parametrized via a Hypernetwork (Ha et al., 2017) that takes as input a trainable latent vector. During training, a latent vector for each field is optimized in an initial reconstruction stage (Park et al., 2019). In Functa (Dupont et al., 2022) the authors then propose to learn the distribution of optimized latents in an independent second training stage, similar to the approach by Rombach et al. (2022); Vahdat et al. (2021). Du et al. (2021) define additional latent neighborhood regularizers during the reconstruction stage. Sampling is then performed in a non-parametric way: one chooses a random latent vector from the set of optimized latents and projects it into its local neighborhood before adding Gaussian noise. See Fig. 2 for a visual summary of the differences between Functa (Dupont et al., 2022), GEM (Du et al., 2021) and our DPF. GASP (Dupont et al., 2022) employs a GAN paradigm: the generator produces a field while the discriminator operates on discrete points from the field, and distinguishes input source, _i.e._, either real or generated. In contrast to prior work (Dupont et al., 2022; Du et al., 2021; Dupont et al., 2022), we formulate a _diffusion generative model_ over functions in a _single-stage_ approach. This permits efficient end-to-end training without relying on an initial reconstruction stage or without tweaking the adversarial game, which we empirically find to lead to compelling results. 
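As a concrete illustration of this field view, an image tensor can be flattened into coordinate-signal pairs; the helper below and the normalization of pixel coordinates to \([-1,1]\) are our own choices rather than anything prescribed by the paper.

```python
import numpy as np

def image_to_field(image):
    """Turn an (h, w, 3) image into its field representation: coordinate-signal
    pairs (m, y) with m in R^2 (pixel location) and y in R^3 (RGB value)."""
    h, w, _ = image.shape
    rows, cols = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    coords = np.stack([rows, cols], axis=-1).reshape(-1, 2)   # m: 2D coordinates
    signal = image.reshape(-1, 3).astype(np.float32)          # y = f(m): RGB values
    return coords, signal

# A 32x32 RGB image becomes 1024 coordinate-RGB pairs; 3D geometry (R^3 -> R) or
# spherical signals (S^2 -> R^d) would only change the two dimensionalities.
image = np.zeros((32, 32, 3), dtype=np.uint8)
coords, signal = image_to_field(image)
print(coords.shape, signal.shape)   # (1024, 2) (1024, 3)
```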
Our contributions are summarized as follows: * We introduce the Diffusion Probabilistic Field (DPF) which extends the formulation of diffusion generative models to field representations. * We formulate a probabilistic generative model over fields in a single-stage model using an explicit field parametrization, which differs from recent work (Dupont et al., 2022; Du et al., 2021) and simplifies the training process by enabling end-to-end learning. * We empirically demonstrate that DPF can successfully capture distributions over functions across different domains like images, 3D geometry and spherical data, outperforming recent work (Dupont et al., 2022; Du et al., 2021; Dupont et al., 2022). ## 2 Background: Denoising Diffusion Probabilistic Models Denoising Diffusion Probabilistic Models (DDPMs) belong to the broad family of latent variable models. We refer the reader to Everett (2013) for an in-depth review. In short, to learn a parametric data distribution \(p_{\theta}(\mathbf{x}_{0})\) from an empirical distribution of finite samples \(q(\mathbf{x}_{0})\), DDPMs reverse a diffusion Markov Chain (_i.e._, the forward diffusion process) that generates latents \(\mathbf{x}_{1:T}\) by gradually adding Gaussian noise to the data \(\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\) for \(T\) time-steps as follows: \[q(\mathbf{x}_{t}|\mathbf{x}_{0}):=\mathcal{N}\left(\mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I}\right). \tag{1}\] Here, \(\bar{\alpha}_{t}\) is the cumulative product of fixed variances with a handcrafted scheduling up to time-step \(t\). Ho et al. (2020) highlight two important observations that make training of DDPMs efficient: i) Eq. (1) adopts sampling in closed form for the forward diffusion process. ii) reversing the diffusion process is equivalent to learning a sequence of denoising (or score) networks \(\epsilon_{\theta}\), with tied weights. Reparametrizing Eq. (1) as \(\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon\) results in the "simple" DDPM loss \[\mathcal{L}_{\theta}=\mathbb{E}_{t\sim[0,T],\mathbf{x}_{0}\sim q(\mathbf{x}_{0}),\epsilon\sim\mathcal{N}(0,\mathbf{I})}\left[\|\epsilon-\epsilon_{\theta}(\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon,t)\|^{2}\right], \tag{2}\] which makes learning of the data distribution \(p_{\theta}(\mathbf{x}_{0})\) both efficient and scalable. At inference time, we compute \(\mathbf{x}_{0}\sim p_{\theta}(\mathbf{x}_{0})\) via ancestral sampling (Ho et al., 2020). Concretely, we start by sampling \(\mathbf{x}_{T}\sim\mathcal{N}(0,\mathbf{I})\) and iteratively apply the score network \(\epsilon_{\theta}\) to denoise \(\mathbf{x}_{T}\), thus reversing the diffusion Markov Chain to obtain \(\mathbf{x}_{0}\). Sampling \(\mathbf{x}_{t-1}\sim p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) is equivalent to computing the update \[\mathbf{x}_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(\mathbf{x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(\mathbf{x}_{t},t)\right)+\mathbf{z}, \tag{3}\] where at each inference step a stochastic component \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) is injected, resembling sampling via Langevin dynamics (Welling and Teh, 2011). A central part of the learning objective in Eq. (2) is the score network \(\epsilon_{\theta}\), which controls the marginal distribution \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\). Notably, score networks are heavily tailored for each specific data domain. 
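Both the training objective (Eq. 2) and the ancestral sampling update (Eq. 3) fit in a few lines. The sketch below is ours: it treats \(\epsilon_{\theta}\) as a black box with signature \(\epsilon_{\theta}(\mathbf{x}_{t},t)\) and follows the simplified update of Eq. (3), in which the stochastic term is added unscaled. Which architecture to use for that black box is precisely the domain-specific choice discussed next.

```python
import torch

def ddpm_training_step(score_net, x0, alpha_bar):
    """One "simple loss" evaluation (Eq. 2): corrupt x0 via the closed-form
    forward process (Eq. 1) and regress the injected noise."""
    b = x0.shape[0]
    t = torch.randint(0, len(alpha_bar), (b,))                 # t drawn uniformly
    a_bar = alpha_bar[t].view(b, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps         # reparametrized Eq. 1
    return ((eps - score_net(x_t, t)) ** 2).mean()             # Eq. 2

@torch.no_grad()
def ddpm_ancestral_step(score_net, x_t, t, alpha, alpha_bar):
    """One reverse update (Eq. 3); fresh noise z is injected while t > 0."""
    t_batch = torch.full((x_t.shape[0],), t, dtype=torch.long)
    eps_hat = score_net(x_t, t_batch)
    mean = (x_t - (1 - alpha[t]) / (1 - alpha_bar[t]).sqrt() * eps_hat) / alpha[t].sqrt()
    z = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + z
```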
For example, in the image domain, score networks are based on a UNet (Ronneberger et al., 2015) with multiple self-attention blocks (Nichol and Dhariwal, 2021). In contrast, for 3D structures like molecules, score networks are based on graph neural nets (Hoogeboom et al., 2022). To unify the design of score networks across data domains, in this paper, we present the diffusion probabilistic field (DPF), which introduces a unified formulation of the score network that can be applied to multiple domains by representing data samples as fields. ## 3 Diffusion Probabilistic Fields A diffusion probabilistic field (DPF) is a diffusion generative model that captures distributions over fields. We are given observations in the form of an empirical distribution \(q(f_{0})\) over fields (living in an unknown field manifold) where a field \(f_{0}:\mathcal{M}\rightarrow\mathcal{Y}\) maps elements from a metric space \(\mathcal{M}\) to a signal space \(\mathcal{Y}\). For example, in the image domain an image can be defined as a field that maps 2D pixel coordinates to RGB values \(f_{0}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{3}\). In DPF the latent variables \(f_{1:T}\) are fields that can be continuously evaluated. To tackle the problem of learning a diffusion generative model over fields Figure 1: The left panel shows the parametrization of each data domain as a field. For visualization purposes we use the color red to denote the input to the field function (_e.g._, a metric space \(\mathcal{M}\) where the field is defined). In the right panel we show the graphical model of DPF, a diffusion generative model to capture distributions over fields. In DPF the latent variables are fields \(f_{1:T}\) that can be evaluated continuously. By taking a field parametrization of data we unify diffusion generative modeling across different domains (images, 3D shapes, spherical images) using the same score network implementation. we need to successfully deal with the infinite dimensional nature of the field representation in the forward process as well as in the score network and backward process. We adopt an explicit field parametrization, where a field is characterized by a set of coordinate-signal pairs \(\{(\mathbf{m}_{c},\mathbf{y}_{(c,0)})\}\), \(\mathbf{m}_{c}\in\mathcal{M},\mathbf{y}_{(c,0)}\in\mathcal{Y}\), which we denote as _context pairs_. For clarity we row-wise stack context pairs and refer to the resulting matrix via \(\mathbf{C}_{0}~{}=~{}[\mathbf{M}_{c},~{}\mathbf{Y}_{(c,0)}]\). Here, \(\mathbf{M}_{c}\) denotes the coordinate portion of all context pairs and \(\mathbf{Y}_{(c,0)}\) denotes the signal portion of all context pairs at time \(t=0\). Note that the coordinate portion does not depend on the time-step by design.2 This is a key difference with respect to Functa (Dupont et al., 2022) or GEM (Du et al., 2021), both of which adopt a latent parametrization of fields, where a learnt field \(\hat{f}:\Psi(\mathbf{z}_{0})\times\mathcal{M}\rightarrow\mathcal{Y}\) is parametrized by a latent weight vector \(\mathbf{z}_{0}\) through a hypernetwork model \(\Psi\)(Ha et al., 2017). Using a latent parametrization forces a reconstruction stage in which latents \(\mathbf{z}_{0}\) are first optimized to reconstruct their corresponding field (Park et al., 2019; Du et al., 2021) (_i.e._, defining an empirical distribution of latents \(q(\mathbf{z}_{0})\) living in an unknown latent manifold). 
A prior \(p_{\theta}(\mathbf{z}_{0})\) over latents is then learnt _independently_ in a second training stage (Dupont et al., 2022). In contrast, our explicit field parametrization allows us to formulate a _score field network_, enabling DPF to directly model a distribution over fields, which results in improved performance (cf. Sect. 4). Fig. 2 depicts the differences between latent field parametrizations in Functa (Dupont et al., 2022) and GEM (Du et al., 2021), and the explicit field parametrization in DPF. Footnote 2: We assume the geometry of fields does not change. Adopting an explicit field parametrization, we define the forward process for context pairs by diffusing the signal and keeping the coordinates fixed. Consequently the forward process for context pairs reads as follows: \[\mathbf{C}_{t}=[\mathbf{M}_{c},\mathbf{Y}_{(c,t)}=\sqrt{\bar{ \alpha}_{t}}\mathbf{Y}_{(c,0)}+\sqrt{1-\bar{\alpha}_{t}}\epsilon_{c}], \tag{4}\] where \(\epsilon_{c}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) is a noise vector of the appropriate size. We now turn to the task of formulating a score network for fields. By definition, the score network needs to take as input the context pairs (_i.e._, the field parametrization), and needs to accept being evaluated continuously in \(\mathcal{M}\) in order to be Figure 2: In DPF we explicitly parameterize a field by a set of coordinate-signal pairs, or _context pairs_, as opposed to a latent vector \(\mathbf{z}_{0}\) as in Functa (Dupont et al., 2022) or GEM (Du et al., 2021). This explicit field parametrization allows us to side-step the reconstruction training stage of prior approaches and instead _directly model the distribution of fields_ rather than the distribution of latents that encode fields. a field. We do this by using _query pairs_\(\{\mathbf{m}_{q},\mathbf{y}_{(q,0)}\}\). Equivalently to context pairs, we row-wise stack query pairs and denote the resulting matrix as \(\mathbf{Q}_{0}~{}=~{}[\mathbf{M}_{q},~{}\mathbf{Y}_{(q,0)}]\). Note that the forward diffusion process is equivalently defined for both context and query pairs: \[\mathbf{Q}_{t}=[\mathbf{M}_{q},\mathbf{Y}_{(q,t)}=\sqrt{\bar{\alpha}_{t}} \mathbf{Y}_{(q,0)}+\sqrt{1-\bar{\alpha}_{t}}\epsilon_{q}], \tag{5}\] where \(\epsilon_{q}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) is a noise vector of the appropriate size. However, the underlying field is solely defined by context pairs, and query pairs merely act as points on which to evaluate the score network. The resulting _score field_ model is formulated as follows, \(\acute{\epsilon}_{q}=\epsilon_{\theta}(\mathbf{C}_{t},t,\mathbf{Q}_{t})\). The design space for the score field model is spans all architectures that can process data in the form of sets, like transformers or MLPs. In particular, efficient transformer architectures offer a straightforward way to deal with large numbers of context and query pairs, as well as good mechanism for query pairs to interact with context pairs via attention. For most of our experiments we use a PerceiverIO (Jaegle et al., 2022), an efficient transformer encoder-decoder architecture, see Fig. 11 and Sect. B for details. In addition, in Sect. D we show that other architectures like vanilla Transformer Encoders (Vaswani et al., 2017) and MLP-mixers (Tolstikhin et al., 2021) are also viable candidates offering a very flexible design space without sacrificing the generality of the formulation of DPF. 
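A minimal sketch of this forward corruption and of the corresponding denoising objective on query pairs is given below (the training procedure is made explicit as Alg. 1 in the next paragraph). Tensor shapes, names, and the omission of a batch dimension are our simplifications; `score_field` stands for any \(\epsilon_{\theta}(\mathbf{C}_{t},t,\mathbf{Q}_{t})\), e.g. a PerceiverIO.

```python
import torch

def corrupt_pairs(coords, signal0, alpha_bar, t):
    """Forward process for coordinate-signal pairs (Eqs. 4-5): the coordinate
    portion stays fixed, only the signal portion is diffused."""
    a_bar = alpha_bar[t]
    eps = torch.randn_like(signal0)
    signal_t = a_bar.sqrt() * signal0 + (1 - a_bar).sqrt() * eps
    return torch.cat([coords, signal_t], dim=-1), eps          # row-wise stack [M, Y_t]

def dpf_loss(score_field, ctx_coords, ctx_signal0, qry_coords, qry_signal0, alpha_bar):
    """Denoise the query-pair signal given the context pairs; score_field plays
    the role of eps_theta(C_t, t, Q_t)."""
    t = int(torch.randint(0, len(alpha_bar), (1,)))
    C_t, _ = corrupt_pairs(ctx_coords, ctx_signal0, alpha_bar, t)
    Q_t, eps_q = corrupt_pairs(qry_coords, qry_signal0, alpha_bar, t)
    eps_hat = score_field(C_t, t, Q_t)
    return ((eps_hat - eps_q) ** 2).mean()
```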
Using the explicit field characterization and the score field network, we obtain the training and inference procedures in Alg. 1 and Alg. 2, respectively, which are accompanied by illustrative examples for a field representation of images. For training, we uniformly sample context and query pairs from \(f_{0}\sim\mathrm{Uniform}(q(f_{0}))\) and only corrupt their signal using the forward process in Eq. (4) and Eq. (5). We then train the score field network \(\epsilon_{\theta}\) to denoise the signal in query pairs, given context pairs (see Fig. 11 for a visualization using a PerceiverIO implementation). During sampling, to generate a field \(f_{0}\sim p_{\theta}(f_{0})\) we first define query pairs \(\mathbf{Q}_{T}~{}=~{}[\mathbf{M}_{q},~{}\mathbf{Y}_{(q,T)}\sim\mathcal{N}( \mathbf{0},~{}\mathbf{I})]\) on which the field will be evaluated. Note that the number of points on which the score field is evaluated during sampling has to be fixed (_e.g._, to generate an image with \(32\times 32=1024\) pixels we define \(1024\) query pairs \(\mathbf{Q}_{T}\) at time \(t=T\)). We then let context pairs be a random subset of the query pairs. We use the context pairs to denoise query pairs and follow ancestral sampling as in the vanilla DDPM (Ho et al., 2020).3 Note that during inference the coordinates of the context and query pairs do not change, only their corresponding signal value. The result of sampling is given by \(\mathbf{Q}_{0}\) which is the result of evaluating \(f_{0}\) at coordinates \(\mathbf{M}_{q}\). Footnote 3: More efficient sampling approaches like DDIM (Song et al., 2021) are trivially adapted to DPF. ## 4 Experimental Results We present results on multiple domains: 2D image data, 3D geometry data, and spherical data. Across all domains we use the same score network architecture. We only adjust the dimensionality of the metric space \(\mathcal{M}\) and the signal space \(\mathcal{Y}\) as well as the network capacity (_i.e._, number of layers, hidden Figure 3: **Left:** DPF training algorithm. **Right**: Visual depiction of a training iteration for a field in the image domain. See Sect. 3 for definitions. units per layer, etc.) when needed. We implement the field score network \(\epsilon_{\theta}\) using a PerceiverIO architecture (Jaegle et al., 2022), an efficient transformer that enables us to scale the number of both context pairs and query pairs. Additional architecture hyperparameters and implementation details are provided in the appendix. ### 2D images We present empirical results on two standard image benchmarks: CelebA-HQ (Karras et al., 2018)\(64^{2}\) and CIFAR-10 (Krizhevsky, 2009)\(32^{2}\). All image datasets are mapped to their field representation, where an image \(\mathbf{x}\in\mathbb{R}^{h\times w\times 3}\) is represented as a function \(f:\mathbb{R}^{2}\rightarrow\mathbb{R}^{3}\) defined by coordinate-RGB pairs. In Tab. 1 and Tab. 2 we compare DPF with Functa (Dupont et al., 2022), GEM (Du et al., 2021), and GASP (Dupont et al., 2022), which are domain-agnostic generative models that employ field representations. For completeness, we also report results for the domain-specific VAE (Kingma and Welling, 2014), StyleGAN2 (Karras et al., 2020) and DDPM (Ho et al., 2020). Similarly, on CIFAR-10 data we compare with multiple domain-specific methods including auto-regressive (Ostrovski et al., 2018) and score-based (Song and Ermon, 2020) approaches. 
We report Frechet Inception Distance (FID) (Heusel et al., 2017), Inception Score (IS) (Salimans et al., 2016) and precision/recall metrics (Sajjadi et al., 2018) for different datasets. We observe that DPF obtains compelling generative performance on both CelebA-HQ \(64^{2}\) and CIFAR-10 data. In particular, DPF outperforms recent approaches that aim to unify generative modeling across different data domains such as Functa (Dupont et al., 2022) and GEM (Du et al., 2021). Specifically, we observe that DPF obtains the best Precision score across all domain-agnostic approaches on CelebA-HQ \(64^{2}\) (Karras et al., 2018), as well as very competitive FID scores. One interesting finding is that GASP (Dupont et al., 2022) reports an FID score comparable to that of DPF. However, when inspecting qualitative examples shown in Fig. 5 we observe GASP samples to contain artifacts typically obtained by adversarial approaches. In contrast, samples generated from DPF are more coherent and without artifacts. We believe the texture diversity caused by these artifacts to be the reason for the difference in FID scores between GASP and DPF (similar findings were discussed in (Dupont et al., 2022a)). \begin{table} \begin{tabular}{l c c c} \hline \hline **CelebA-HQ**\(64^{2}\) & FID \(\downarrow\) & Pre. \(\uparrow\) & Rec. \(\uparrow\) \\ \hline VAE (Kingma and Welling, 2014) & 175.33 & 0.799 & 0.001 \\ StyleGAN2 (Karras et al., 2020) & 5.90 & 0.618 & 0.481 \\ \hline Functa (Dupont et al., 2022) & 40.40 & 0.577 & 0.397 \\ GEM (Du et al., 2021) & 30.42 & 0.642 & 0.502 \\ GASP (Dupont et al., 2022) & 13.50 & 0.836 & 0.312 \\ DPF (ours) & 13.21 & 0.866 & 0.347 \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative evaluation of image generation** on CelebA-HQ (Karras et al., 2018). The middle bar separates domain-specific (_top_) from domain-agnostic approaches (_bottom_). \begin{table} \end{table} Table 2: **Quantitative evaluation of image generation** on CIFAR-10 (Krizhevsky, 2009). The middle bar separates domain-specific (_top_) from domain-agnostic approaches (_bottom_). Figure 4: **Left:** DPF sampling algorithm. **Right**: Visual depiction of the sampling process for a field in the image domain. Finally, when studying empirical results on CIFAR-10 shown in Tab. 2, we again see that DPF performs better than GEM (Du et al., 2021) both in terms of FID and IS. Notably, we can observe a gap between domain-agnostic and the most recent domain-specific approaches such as StyleGAN2 (Karras et al., 2020) and DDPM (Ho et al., 2020). This gap is a result of two main factors. First, domain-specific approaches can incorporate design choices tailored to their domain in the score network (_i.e._, translation equivariance in CNNs for images). Second, the training and inference settings are typically different for the domain-agnostic and domain-specific approaches reported in Tab. 1 and Tab. 2. To further study this issue we refer readers to Sect. 
A, where we compare the performance of DPF and DDPM (Ho et al., 2020) using the same training and evaluation settings. Finally, we show qualitative generation results for domain-agnostic approaches on CelebA-HQ \(64^{2}\)(Karras et al., 2018) and CIFAR-10 (Krizhevsky, 2009) in Fig. 5 and Fig. 6, respectively. We note that for CelebA-HQ, DPF generates diverse and globally consistent samples, without the blurriness (particularly in the backgrounds) observed in two-stage approaches (Dupont et al., 2022a; Du et al., 2021) that rely on an initial reconstruction step, or the artifacts of adversarial approaches (Dupont et al., 2022b). When evaluated on CIFAR-10 (Krizhevsky, 2009), we see that DPF generates crisper and more diverse results than GEM (Du et al., 2021). \(\sim 35\)k ShapeNet objects, where each object is represented as a voxel grid at a \(64^{3}\) resolution. Each object is then represented as a function \(f:\mathbb{R}^{3}\rightarrow\mathbb{R}^{1}\). Following the settings of GEM (Du et al., 2021), we report coverage and MMD metrics (Achlioptas et al., 2018) computed from sampling \(2048\) points on the meshes obtained from the ground truth and generated voxel grids. We compare them using the Chamfer distance. In Tab. 3 we compare DPF performance with 3 baselines: Latent GAN (Chen and Zhang, 2019), GASP (Dupont et al., 2022b) and GEM (Du et al., 2021). We train DPF at \(32^{3}\) resolution. During sampling we evaluate \(32^{3}\) query pairs on the score field, then reshape the results to a 3D grid of \(32^{3}\) resolution and tri-linearly up-sample to the final \(64^{3}\) resolution for computing evaluation metrics. DPF outperforms both GEM (Du et al., 2021) and GASP (Dupont et al., 2022b) in learning the multimodal distribution of objects in ShapeNet, as shown by the coverage metric. In addition, while GEM (Du et al., 2021) performs better in terms of MMD, we do not observe this difference when visually comparing the generated samples shown in Fig. 7. We attribute this difference in MMD scores to the fact that MMD over-penalizes fine-grained geometry. ### Data on \(\mathbb{S}^{2}\) Straightforwardly, DPF can also learn fields that are not defined in the Euclidean plane. To demonstrate this, we show results on signals defined over the sphere. In particular, following Cohen et al. (2018), we use a stereographic projection to map image data onto the sphere. Hence, each resulting example is represented as a field \(f_{0}:\mathbb{S}^{2}\rightarrow\mathbb{R}^{d}\). To uniformly sample points in \(\mathbb{S}^{2}\) we use the Driscoll-Healy algorithm (Driscoll and Healy, 1994) and sample points at a resolution of \(32^{2}\) and \(64^{2}\) for spherical MNIST (LeCun et al., 1998) and AFHQ (Choi et al., 2020) data, respectively. In Fig. 8 we show the distribution of real examples for MNIST (LeCun et al., 1998) and AFHQ (Choi et al., 2020) images projected on the sphere as well as the samples generated by our DPF. Unsurprisingly, since DPF is agnostic to the geometry of the metric space \(\mathcal{M}\), it can generate crisp and diverse samples for fields defined on the sphere. ## 5 Related Work Generative modeling has advanced significantly in recent years with generative adversarial nets (GANs) (Goodfellow et al., 2014; Mao et al., 2017; Karras et al., 2020b) and variational auto-encoders (VAEs) (Vahdat and Kautz, 2020) showing impressive performance. 
Even more recently, diffusion-based generative modeling has obtained remarkably compelling results (Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021b). Figure 7: **Qualitative comparison of different domain-agnostic approaches on ShapeNet (Chang et al., 2015).** \begin{table} \begin{tabular}{l c c} \hline \hline **ShapeNet \(64^{3}\)** & Coverage \(\uparrow\) & MMD \(\downarrow\) \\ \hline Latent GAN (Chen and Zhang, 2019) & 0.389 & 0.0017 \\ GASP (Dupont et al., 2022b) & 0.341 & 0.0021 \\ GEM (Du et al., 2021) & 0.409 & 0.0014 \\ DPF (ours) & 0.419 & 0.0016 \\ \hline \hline \end{tabular} \end{table} Table 3: **Quantitative evaluation of 3D geometry generation on ShapeNet (Chang et al., 2015). DPF outperforms prior approaches in terms of the Coverage metric.** The formulation developed in DPF is orthogonal to the body of work on Riemannian generative models (Bortoli et al., 2022; Gemici et al., 2016; Rozen et al., 2021). The goal in Riemannian generative modeling is to explicitly constrain the learned density to a Riemannian manifold structure. For example, a Riemannian generative model can learn a distribution of points \(\mathbf{x}\in\mathbb{S}^{2}\) on the 2D sphere \(\mathbb{S}^{2}\), explicitly enforcing that any generated samples lie on the sphere. In contrast, DPF learns a generative model over a distribution of multiple samples of signals defined on the sphere, or any other metric space. The DPF formulation also differs from the recently introduced Functa (Dupont et al., 2022), GEM (Du et al., 2021) and GASP (Dupont et al., 2022). The first two approaches adopt a latent field parametrization (Park et al., 2019) and a two-stage training paradigm. However, different from our work, the field network in Functa and GEM is parametrized via a hypernetwork (Ha et al., 2017) that takes as input a trainable latent vector. During training, a small latent vector for each field is optimized in an initial auto-decoding (or compression) stage (Park et al., 2019). In the second stage, a probabilistic model is learned on the latent vectors. GASP (Dupont et al., 2022) leverages a GAN whose generator produces field data whereas a point cloud discriminator operates on discretized data and aims to differentiate input source, _i.e._, either real or generated. Two-stage approaches like Functa (Dupont et al., 2022) or GEM (Du et al., 2021) make training the probabilistic model in the second stage more computationally efficient than DPF. This training efficiency of the probabilistic model often comes at the cost of compressing fields into small latent vectors in the first stage, which has a non-negligible computational cost, especially for large datasets of fields. The formulation introduced in DPF is closely related to recent work on Neural Processes (NPs) (Garnelo et al., 2018; Kim et al., 2019; Dutordoir et al., 2022), which also learn distributions over functions via context and query pairs. As opposed to the formulation of Neural Processes, which optimizes an ELBO (Kingma & Welling, 2014), we formulate DPF as a denoising diffusion process in function space, which results in a robust denoising training objective and a powerful iterative inference process. In comparison with concurrent work in Neural Processes (Garnelo et al., 2018; Kim et al., 2019; Dutordoir et al., 2022) we do not explicitly formulate a conditional inference problem and look at the more challenging task of learning an unconditional generative model over fields. 
We extensively test our hypothesis on complex field distributions for 2D images and 3D shapes, and on distribution of fields defined over non-Euclidean geometry. ## 6 Conclusion In this paper we made progress towards modeling distributions over fields by introducing DPF. A diffusion probabilistic model that directly captures a distribution over fields without resorting to a initial reconstruction stage (Dupont et al., 2022; Du et al., 2021) or tweaking unstable adversarial approaches (Dupont et al., 2022). We show that DPF can capture distributions over fields in different domains without making any assumptions about the data. This enables us to use the same denoising architecture across domains obtaining satisfactory results. In addition, we show that DPF can be used to learn distributions over fields defined on non-Euclidean geometry. Figure 8: **Qualitative comparison of empirical and generated samples for spherical versions of MNIST and AFHQ (Choi et al., 2020).** ## 7 Ethical statement When considering ethical impact of generative models like DPF a few aspects that need attention are the use generative models for creating disingenuous data, \(e\)._g_., "DeepFakes" Mirsky & Lee (2021), training data leakage and privacy Tinsley et al. (2021), and amplification of the biases present in training data Jain et al. (2020). For an in-depth review of ethical considerations in generative modeling we refer the reader to Rostamzadeh et al. (2021). In addition, we want to highlight that any face images we show are generated from our model. We do not directly reproduce face images from any dataset. ## 8 Reproducibility statement We take great care in the reproducibility of our results. We provide links to the public implementations that can be used to replicate our results in Sect. A and Sect. B, as well as describing all training parameters in Tab. 6. All of the datasets we report results on are public and can be freely downloaded.
2301.11611
Influence of Information Blocking on the Spread of Virus in Multilayer Networks
In this paper, we present the model of the interaction between the spread of disease and the spread of information about the disease in multilayer networks. Next, based on the characteristics of the SARS-COV-2 virus pandemic, we evaluated the influence of information blocking on the virus spread. Our results show that blocking the spread of information affects the speed at which the epidemic peak appears in our society and the number of infected individuals.
Paulina Wątroba, Piotr Bródka
2023-01-27T09:26:19Z
http://arxiv.org/abs/2301.11611v1
# Influence of Information Blocking on the Spread of Virus in Multilayer Networks + ###### Abstract In this paper, we present the model of the interaction between the spread of disease and the spread of information about the disease in multilayer networks. Next, based on the characteristics of the SARS-COV-2 virus pandemic, we evaluated the influence of information blocking on the virus spread. Our results show that blocking the spread of information affects the speed at which the epidemic peak appears in our society and the number of infected individuals. coexisting spreading processes; epidemics; network science; multilayer networks ## 1 Introduction The increase in human mobility and globalisation have created ideal conditions for the spread of new epidemics [1]. At the turn of 2019 and 2020, a new coronavirus started spreading in Wuhan. According to the University of Toronto Citizen Lab report [2], information about it was not released to the general public for more than three weeks, and multiple other sources report that there was an active campaign to limit the spread of information about the virus [3, 4, 5, 6, 7, 8, 9, 10]. Since spreading the information (awareness that there is a virus circulating in society) is an important tool in limiting the spread of the virus [11, 12, 13, 14, 15, 16, 17] (aware people might take preventive actions like staying at home, wearing face masks, washing hands more often etc.), we asked questions how delaying information spread influence the spread of the virus, how it affects the number of infected individuals and disease dynamic. Unfortunately, we were not able to find the answers to those questions in the related works; thus, we have developed a model for the interaction between virus and information spreading in multilayer networks (section 2), where becoming aware of the virus results in limiting the chance of getting infected. Next, we have adjusted the model using the Covid-19 pandemic data from its early days (section 2.3). Finally, we have performed experiments, (section 3), to analyse and compare the spread of SARS-COV-2 in three scenarios (i) only the virus spreads, (ii) the virus and information spread simultaneously from the beginning, and (iii) the virus and information spread simultaneously, but the information spread is delayed for some period of time. ## 2 Materials and Methods In this section, we briefly introduce the most important concepts and assumptions for our experimental part. ### Multilayer network To evaluate our ideas in a more realistic scenario, we have decided to use the multilayer network [18, 19, 17, 20], where the network is defined as \(M=(N,L,V,E)\)[20], where * \(N\) is a not empty set of actors \(\{n_{1},...,n_{n}\}\), * \(L\) is a not empty set of layers \(\{l_{1},...,l_{l}\}\), * \(V\) is a not empty set of nodes, \(V\subseteq N\times L\), * \(E\) is a set of edges \((v_{1},v_{2}):v_{1},v_{2}\in V\), and if \(v_{1}=(n_{1},l_{1})\) and \(v_{2}=(n_{2},l_{2})\in E\) then \(l_{1}=l_{2}\). The example of a multilayer network is presented in figure 1. 
This network contains: * six actors (\(\{n_{1},n_{2},n_{3},n_{4},n_{5},n_{6}\}\)), * two layers \(\{l_{1},l_{2}\}\), * ten nodes \(\{v_{1}=(n_{1},l_{1}),v_{2}=(n_{2},l_{1}),v_{3}=(n_{3},l_{1}),v_{4}=(n_{4},l_{ 1}),v_{5}=(n_{5},l_{1}),v_{6}=(n_{1},l_{2}),v_{7}=(n_{2},l_{2}),v_{8}=(n_{3},l_ {2}),v_{9}=(n_{4},l_{2}),v_{10}=(n_{6},l_{2})\}\), and * eleven edges \(\{(v_{1},v_{2}),(v_{1},v_{5}),(v_{2},v_{5}),(v_{2},v_{3}),(v_{2},v_{4}),(v_{3},v_{4}),(v_{6},v_{9}),(v_{6},v_{10}),(v_{7},v_{8}),(v_{7},v_{9}),(v_{8},\)\(v_{9})\}\). This network model allows us to have two different networks (layers), the first one for disease spreading, which for obvious reasons needs to be limited to offline world contacts to support virus spread, and the second one for "online" contacts that allow information spreading. Both layers can have a completely different topology, e.g. two people living in two geographically distant cities may never meet, but they can exchange information via phone or social platforms; on the other hand, two people can exchange viruses because they have shared the same shopping cart or used the same bus, but they might never talk and exchange information. ### Spreading Models #### 2.2.1 Epidemic spreading. In the \(SIR\) model, every person who belongs to a population, also called an actor or a node, can be in one of three states: \(S\) (Susceptible), in which a person is susceptible to infection; \(I\) (Infected), which means infected and at the same time spreading the disease, and \(R\) (Recovered) in a person has recovered and acquired immunity or died and can no longer infect or get sick again (e.g., smallpox, mumps, and other diseases for which people can be vaccinated). Figure 1: An example of multilayer networks A susceptible actor can be infected by an infected actor in one cycle with probability \(\beta\), while infected actors can recover in each cycle with probability \(\gamma\). This process can be described by the following equations: \[\frac{ds}{dt}=-\beta is,\ \frac{di}{dt}=\beta is-\gamma i,\ \frac{dr}{dt}=\gamma i,\] where \(i,s,r\) represent the fraction of susceptible, infected, and recovered individuals in the total population, respectively. The state changes are also presented in fig. 2. It must be noted that in a real epidemic spreading, for diseases such as chickenpox or mumps, a person in state \(S\) will be infected only if he or she has direct contact with an infected person. Nevertheless, in a complex network, actors are represented by nodes, and the possibility of contact is determined by connections between them, i.e., edges in the network. In such circumstances, a node in state \(S\) may change its state to \(I\) only if it has at least one infected neighbor. In this way, classical epidemic models can be extended to network representation, and the presented expressions can be considered as a special case where the corresponding network is fully connected. In the absence of some connections in the network, the fraction of susceptible individuals in the total population may be larger, and there may be actors who will not be infected [21]. #### 2.2.2 Information spreading. In the \(SIS\) model, an actor can be in one of two states: Susceptible (\(S\)) or Infected (\(I\)). The person's state change is determined by the relevant probability. If an actor is in the \(S\) state, its switch to \(I\), in any iteration, will depend on probability \(\beta\). Return from the state \(I\) to \(S\) depends on the probability \(\gamma\). 
This reflects the situation where a susceptible person becomes infected by any infected member of the population and then becomes ill but has not acquired immunity. It means that despite being already ill, a person is susceptible to reinfection (e.g., cold or seasonal flu). These processes can be described by the following equations: \[\frac{ds}{dt}=-\beta is+\gamma i,\ \frac{di}{dt}=\beta is-\gamma i,\] where \(i,s\) represent the fraction of susceptible and infected actors in the total population, respectively. The state changes are also presented in fig. 3. This model corresponds to real processes of spreading seasonal diseases such as cold or flu, for which one does not acquire immunity, as in the case of chickenpox or mumps. The representation of this model for the network is similar to the case described for the \(SIR\) model, except that instead of a transition to state \(R\), there will be a return to state \(S\)[21]. Epidemic models can be used to simulate other spreading processes. The most common example would be information spreading, where we have models like \(UAU\)[22] (Unaware-Aware-Unaware) or \(UAF\)[23] (Unaware-Aware-Forgot), which are based on \(SIS\) and \(SIR\) models, respectively. In the case of our research, we have decided to use \(SIS\) based model (i.e. \(UAU\)[22]) to model the spread of information. Figure 3: State changes for \(SIS\) model Figure 2: State changes in \(SIR\) model #### 2.2.3 Interaction between processes. Interactions between multiple processes in a network can take many forms, and most research is centred on one of three categories: supporting, competing and mixed approaches [17]. **Supporting processes** are observed, for example, in the case of opinion formation and decision making, where public opinion about a topic is taken into account during the decision-making process [24]. Epidemics in multilayer networks can take a cooperative form, as one disease can exacerbate or inhibit the development of another [25]. As a result, the dynamics and extent of a disease can be increased by other diseases spreading in the same network. One disease may be a consequence of contracting another. For example, the number of people with tuberculosis increases in the population with HIV [26]. However, the issue of mutual support processes represents a small percentage of all works [17]. **Competing processes**. Competition between processes has been modelled and analysed on a large scale. An example can be competition studies for memes [27] and extended to generalisation for other content [28]. Competitive processes have also been studied in the context of optimal resource allocation in multilayer networks, where a single node may participate in multiple processes at the same time. It has also been shown that the diffusion of resources in the information layer can affect the spread of outbreaks in the physical contact layer and change the phase transition. Studies have shown that the existence of optimal resource diffusion leads to maximum disease suppression [29]. **Mixed approaches**. The third type of interaction is a mixed approach, used in modelling competing and supporting processes spreading simultaneously. For example, the appearance on the market of new technologically advanced products, which are very similar to each other, creates a demand for new services (support) and, at the same time, strengthens the competition on the market (competition) [30]. Researchers also analysed the coexistence of cooperation and competition mechanisms. 
They observed that increased cooperation boosts the ability of content to spread across all layers, whereas without cooperation, the layers are independent, and each virus spreads only within one layer. Due to the competition mechanism, only one viral agent can be assigned to one node [31]. Interesting results can be observed in the field of spreading diseases and information about them. In some research, awareness inhibits the spread of diseases. On the other hand, we can have a situation in which the infected node becomes aware and can spread infection and information about the disease at the same time [12, 15, 16]. A similar scenario in which disease supports the spread of information and awareness reduces disease was also examined for disease and immunisation. The spread in a multilayer structure that contains disease and immunisation that can enhance or dampen the epidemic. While immunisation can compete with the epidemic, it can also enhance its dynamics [32]. Mixed interactions can also be observed in individuals waiting for immunization [33]. ### Adjusting parameters for Covid-19 pandemic Based on previous research in epidemic modelling, we adjusted the model's parameters to the SARS-COV-2 virus and the early days of the COVID-19 pandemic. Out of many existing epidemic models, for virus spread, the \(SIR\) model was selected, and for information spread, the \(UAU\) (\(SIS\)) model was chosen. Values for all parameters can be found in tab. 2. #### 2.3.1 \(Sir\) model. In the beginning, it was necessary to define the initial conditions and assumptions resulting from the specificity of the virus, as well as the research questions posed. That was equal to answering the questions: _where?_, _what?_ and _how?_ it spreads. **Spreading Structure.** To simulate the coronavirus epidemic, it was necessary to select the structure in which it would occur. In the real world, the virus is spread through direct contact between a susceptible person and an infected person or through things/objects/surfaces on which virus particles have settled. Additionally, the situation is complicated because susceptibility to infection is an individual factor. Moreover, in the case of the analyzed virus, this factor is greater for the elderly or people suffering from chronic diseases. **Initial state.** An essential question is _how to initiate an outbreak_. Based on the literature analysis, there are two main approaches to establishing the initial state of the network. In the first one, the epidemic starts with the disease of a certain number of individuals, the most common being the so-called patient zero. This approach, as faithful as possible to the principles of epidemiology, is slightly troublesome from the perspective of comparative studies when we test networks of different sizes [34]. An alternative approach is more sympathetic to comparative analysis, as it assumes that some percentage of nodes in a network or layer is initially infected [35]. Due to the prospect of comparing epidemic progression for networks of different sizes, we used the second approach. In the initial state, one percent of all nodes in the personal contact layer are infected. The calculated number of infected actors is rounded up to an integer value to ensure that for networks with the number of nodes in the contact layer below 100, we have at least a single seed node. Determining the actors who will be infected is the result of random selection. 
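A small sketch of this seeding rule follows; the function name and the use of Python's random module are our own, and the same rule is reused later for the initially aware actors.

```python
import math
import random

def initial_infected(contact_layer_actors, fraction=0.01, seed=None):
    """Select the initially infected actors: one percent of the contact layer,
    rounded up to an integer so that even small networks get at least one seed."""
    actors = list(contact_layer_actors)
    rng = random.Random(seed)
    n_seeds = math.ceil(fraction * len(actors))
    return set(rng.sample(actors, n_seeds))

# e.g. a 250-actor contact layer yields ceil(2.5) = 3 initially infected actors
print(initial_infected(range(250), seed=42))
```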
**State Changes.** An actor in state \(S\) can change its state with probability \(\beta\) to \(I\) if it has an infected neighbour. This model means that for each actor in the state \(I\), all direct neighbours (all nodes connected by an edge to a given infected node) are searched, for each of them a value between (0, 1) indicating the probability of infection is randomised that is compared with the value of the threshold \(\beta\). If the drawn value is less than the \(\beta\) then this actor will change its state to \(I\) on the next epidemic day (iteration of the process). Otherwise, the state of the node will not change. A separate draw determines the change of state of an actor in the state \(I\). When all neighbours of the infected individual are found, a probability value is generated from the interval (0,1) for it and, similarly to the case described above, it is compared with \(\gamma\). The change in state to \(R\) will occur only if the generated value is lower than the \(\gamma\). If this does not happen, the actor remains in the state \(I\) and continues to infect. It should be noted that a node that has changed its state to \(R\) cannot change it to \(I\) again. However, there are indeed reports of reinfection in the literature, but their percentage relative to all cases is so low that they are not included in this model. **Probabilities \(\beta\) and \(\gamma\).** The Coronavirus pandemic led to intensive work in the scientific community on modelling the epidemic. As a result, in the literature, one can find probability values for the \(SIR\) model tailored to the modelling of the SARS-COV-2 virus spreading. Most publications concern Asian countries, especially China, where the pandemic began, and European countries, where the epidemic further developed - causing paralysis of health services, resulting in serious illnesses or deaths of many people. These countries included Italy, Spain, France and, to a lesser extent, Germany and Poland. Based on the analysis of available works, as well as the available social networks and their density, it was decided to adopt four different probability values, the first three for Italy (\(\beta=0.19,\gamma=0.10\)[36], \(\beta=0.22,\gamma=0.02\)[37], \(\beta=0.28,\gamma=0.08\)[38]) and one for Poland (\(\beta=0.31,\gamma=0.10\)[36], ). #### 2.3.2 \(Uau\) model. The spread of the virus is accompanied by the spread of awareness (information) of its existence. However, this is a process, at least partly independent of the spread of the virus. For this reason, it is necessary to have two different models for both processes. For the spread of information about virus, the \(SIS\)-based \(UAU\) model was adapted. Previous research showed that despite the spreading of seasonal diseases such as cold or flu, the \(UAU\) model could be successfully adapted for the spread of different types of information, taking into account the process of forgetting [13]. The states of the model can then be described as \(U\) (Unaware) - unaware of information (\(S\) in \(SIS\) model) and \(A\) (Aware) - spreading information (\(I\) in \(SIS\) model). The change of states is determined by the probabilities \(\beta\) and \(\gamma\). To simplify the understanding of the interactions between the models, the probabilities will be denoted by symbols \(\epsilon\) and \(\mu\), respectively. In the \(UAU\) model, an unaware actor in state \(U\) may learn about the existence of the virus from a conscious member of the population \(A\). 
Over time, the aware person returns to state \(U\), which corresponds to the situation in which someone forgets about the existence of the virus or gets used to it and awareness does not affect its behavior [13], for example, someone, despite knowing about the pandemic stops wearing the mask. Similarly to the adaptation of the \(SIR\) model, for the \(UAU\) model, it was necessary to define the initial state and the assumptions. **Spreading Structure.** Simulating the spread of information requires defining the medium in which it will occur. Information and viruses in the real world coexist within the same population. Therefore, the network for the \(SIS\) and \(SIR\) models is common. However, the specifics of spreading differ. Unlike the virus, access to information is so widespread that receiving it does not require direct contact between two people. Information reaches the recipient through social networks, the Internet, newspapers, etc. However, this does not exclude acquiring information through real interpersonal contacts or travelling by shared means of transport. Furthermore, obtaining information from one source does not prevent encountering the same information again through another medium. Therefore, to simulate a real process, information may spread throughout the network at all layers. For simplicity, the type of interaction does not affect the entire process, regardless of the layer, the assumptions are the same. **Initial state.** Similar to the infection process, information appears in a population through human action. To determine the initial state of the aware population, the same tactics were used as for the virus. Initially, one percent of all actors in the network are aware of the virus. The selection of informed actors results from random selection similar to the \(SIR\) process. **State Changes.** An actor in state U can change its state to \(A\) with probability \(\epsilon\) if it has an aware neighbour. What this model means is that for each actor in state \(U\), all immediate neighbours (all nodes connected by an edge to a given aware vertex) are searched, and for each of them, a value is drawn from the interval (0, 1) denoting the probability of awareness. It is then compared with the value of the threshold \(\epsilon\). If the drawn value is less than the \(\epsilon\), the actor will change its state to \(A\) in the next iteration. Otherwise, the state of the individual will not change. A separate drawing determines the change of state of an actor in state \(A\). When all neighbours of the aware individual are found, then a probability value from the interval (0,1) is drawn for it and, similarly as in the case described above, compared with the threshold, which is the probability \(\mu\). Return to state U will occur only if the drawn value is less than the \(\mu\). If this does not happen, the actor remains in state \(A\) and continues to spread information. A node that has changed its state to U may change it again to \(A\). Although, over time, we forget a given piece of information or consciously downplay the presence of the virus, resulting in a return to state \(U\), this does not preclude a renewed increase in interest or awareness. **Probabilities \(\epsilon\) and \(\mu\).** In previous research, we could not find information on how to determine the \(\epsilon\) and \(\mu\) for the \(UAU\) model during the spread of information about the SARS-COV-2 virus. 
Therefore, it was assumed that the probabilities \(\epsilon\) and \(\mu\) would be equal to the probabilities of the \(SIR\) model, to reflect that the intensity of information spread is related to the intensity of virus spread. Since in real life the spread of information is much faster than the virus itself, it was decided to extend the set of probabilities by multiplying the initial probabilities according to the equations: \[\epsilon=\min(\beta\cdot x,1),\;\mu=\min(\gamma\cdot x,1),\quad\text{where }x\in\{1,2,3,4\}.\] Thus, for each \(\beta\) and \(\gamma\) combination we have four combinations of \(\epsilon\) and \(\mu\). #### 2.3.3 Interaction between virus and information processes. While analysing the impact of the spread of information on the virus, it is necessary to locate both processes in a single medium. For this purpose, a multilayer network was chosen. The virus spread, simulated by the \(SIR\) model, will progress within a direct contact layer. In contrast, awareness will spread in all layers. Therefore, it is necessary to address the interaction between the models. In reality, awareness of the virus causes a range of behaviours designed to avoid infection (social distancing, masks, vaccination, etc.). A representation of this phenomenon will be a reduction in the infection probability, \(\beta\), for actors aware of the virus. Choosing just one number for the reduction of \(\beta\) was difficult since different actions yield different results in infection risk reduction. For example, wearing a mask results in a 65% risk reduction (RR) [39], and one meter of social distancing has a similar effect (RR of 65%) [39, 40], with RR increasing with the distance [39]. Other actions have a lower RR (e.g. face shields) or a higher one (e.g. quarantine and self-isolation have an RR of almost 100%). Additionally, the RR increases when more than one action is combined (e.g. face mask and social distancing). Since various countries decided on different actions, and different actions have different effects, we have decided to assume an RR of 90%. Thus, the primary probability will be reduced by a factor of ten, which the following equation describes: \(\beta^{\prime}=\frac{\beta}{10}\), where \(\beta\) is the probability of infection and \(\beta^{\prime}\) is the probability of infection of an aware node. Similar to how awareness affects the probability of infection, the infection can alter the chance of becoming aware. This corresponds to the situation where a person with COVID-19 becomes aware of the SARS-COV-2 virus by having specific disease symptoms or test results. However, not all cases of infection with the coronavirus are manifested by symptoms [41, 42]. At the same time, symptoms can be similar to those of other upper respiratory diseases, which are easy to confuse. To address the impact of this phenomenon on \(\epsilon^{\prime}\), the percentage of symptomatic patients was used. In previous research on the SARS-COV-2 virus, only a few works addressed the issue of the number of asymptomatic patients. One of the most important estimates is the proportion of patients with asymptomatic COVID-19 based on observations of passengers on the Diamond Princess, a cruise ship quarantined off the coast of Japan. In this case, 17.9% of the ill passengers were asymptomatic [41]. However, it should be noted that the ship's passengers and crew, sharing the quarantine, formed an isolated community, so generalising the results to the whole population might not be correct.
Slightly more general results were obtained in a study of a group of Japanese evacuated from Wuhan by a shared plane. Although the group examined is smaller than that of the ship's passengers, a significant difference is the lack of shared isolation. Researchers, using a binomial distribution, estimated that among evacuees, the proportion of symptomless patients was 30.8% [42]. The characteristics of the virus change over time due to mutations or certain individual attributes in different populations. However, since we are interested in the initial part of the pandemic, we can use published data from the initial period of the spread of the SARS-COV-2 virus. Based on this, we have assumed that the probability that the unaware node becomes aware if it is infected will correspond to the percentage of symptomatically ill people from [42], i.e., \(\epsilon^{\prime}=0.692\). The described changes in probabilities are presented in table 1. In summary, the interaction between information dissemination and virus spread can be classified as a mixed interaction. The virus spread supports information dissemination, and information dissemination can suppress virus spread. An example of support is increasing the chance of getting information about the virus for an infected node. This allows the information to spread faster. Otherwise, knowledge of the existence of the virus reduces the chances of an actor being infected by blocking the development of an epidemic. This is an example of competition. It should be noted that competition is not limited to the layer where both processes occur because awareness of the virus gained in the layer of direct contacts affects the spread of information about the virus in all other layers. ## 3 Results and Discussion Experiments have been performed using _multiinet library_[49] and six multilayer networks (tab. 3). For each network, we have selected a layer acting as a direct contact layer for virus spread. For some networks, the selection was based on the characteristic of interactions between layers (e.g., N1), while for others, the choice was arbitrary (e.g., N3). The rest of the layers acted as communication layers. For the bigger networks, N5 and N6, we have run the experiment a few times, each time with a different layer acting as the direct contact layer. We know that not all networks are classic social networks; however, we also wanted to observe the effects on other complex networks, especially since they reflect human mobility (N3) or information exchange (N5, N6). To evaluate the relationship between information spread and virus spread, three main scenarios were constructed. 1. Worst case scenario: only virus spreads (\(SIR\)). 2. Best-case scenario: virus and awareness spread simultaneously (\(SIR\) and \(UAU\)). 3. Evaluated scenario: the virus spreads, but the information about the virus is blocked for some period. When the blocking time ends, the information (awareness) about the virus also starts to spread (Blocking). \begin{table} \begin{tabular}{|c|c|c|} \hline **State in \(SIR\) model** & **State in \(UAU\) model** & **Probability change** \\ \hline \hline Susceptible & Unaware & - \\ Infectious & Aware & - \\ Recovered & Unaware & - \\ Susceptible & Aware & \(\beta-\beta^{\prime}\) \\ Infectious & Unaware & \(\epsilon-\epsilon^{\prime}\) \\ Recovered & Aware & - \\ \hline \end{tabular} \end{table} Table 1: Probabilities change by spreading processes interactions. 
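The state-dependent changes of probabilities summarised in Table 1 can be encoded as two small helper functions. The sketch below is a minimal illustration under the assumptions stated in the text (\(\beta^{\prime}=\beta/10\), \(\epsilon^{\prime}=0.692\)); the function names are ours and not part of any particular library.

```python
def effective_infection_prob(beta, aware):
    """Infection probability for a susceptible actor.

    Awareness of the virus is assumed to reduce the infection risk by 90%,
    i.e. beta' = beta / 10, as described in the text.
    """
    return beta / 10.0 if aware else beta

def effective_awareness_prob(epsilon, infected, epsilon_prime=0.692):
    """Awareness probability for an unaware actor.

    An infected actor becomes aware with probability epsilon' = 0.692
    (the assumed share of symptomatic cases); otherwise the plain UAU
    probability epsilon applies.
    """
    return epsilon_prime if infected else epsilon
```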
\begin{table} \begin{tabular}{|c|c|c|} \hline **Param.** & **Values** & **Description** \\ \hline \(\beta\) & 0.19, 0.22, 0.28, 0.31 & The probability of getting an infection during contact with an infected individual. \\ \hline \(\gamma\) & 0.10, 0.02, 0.08 & The probability of recovery of an infected individual during each iteration. \\ \hline \(\beta^{\prime}\) & \(\frac{\beta}{10}\) & The probability of infection of an aware person. \\ \hline \(\epsilon\) & \(\min(\beta\cdot x,1);\ x\in\{1,2,3,4\}\) & The probability of an unaware person getting the information from its aware neighbour. \\ \hline \(\mu\) & \(\min(\gamma\cdot x,1);\ x\in\{1,2,3,4\}\) & The probability of an aware person forgetting the information or no longer being influenced by it. \\ \hline \(\epsilon^{\prime}\) & 0.692 & The probability of an unaware infected person becoming aware. \\ \hline time & 150 & We have simulated the first 150 days of the pandemic. \\ \hline repetition & 20 & The simulation was repeated 20 times for each combination of parameters and each network. \\ \hline \end{tabular} \end{table} Table 2: Summary of the experimental setup. **Only virus spreads.** In the direct contact layer, one percent of all actors are infected by drawing lots. Simulations lasted 150 days, where one day is one iteration of the \(SIR\) model. During an epidemic, the probabilities \(\beta\) and \(\gamma\) are constant. The epidemic can end before 150 days if all actors are recovered or, although there are people in the susceptible state, they do not have infectious neighbours, so they cannot become infected. The first case refers to the situation in which the entire society has been infected and has recovered, while the second case refers to the situation in which enough people have been infected to create herd immunity in society. For each set of parameters, the scenario was run at least 20 times. **Virus and information spread simultaneously.** In the direct contact layer, the virus spreads in the same way as when there is no information. At the same time, information about the existence of the virus spreads throughout the network. As a result of a random draw, those who are aware of the existence of the virus are selected, representing one percent of all nodes in the network. This is followed by the awareness-spreading process according to the adapted \(UAU\) model with fixed probabilities \(\epsilon\) and \(\mu\). There are interactions between the processes. If an actor is aware of the virus, then its probability of being infected is changed to the smaller value \(\beta^{\prime}\). In the opposite situation, for an infected actor, the probability of becoming aware is \(\epsilon^{\prime}\). As in the case of the virus alone, the epidemic can end before 150 days have passed when everyone is in state \(R\) or the number of infected nodes is zero. The scenario is repeated at least 20 times for each combination of parameters. **Information blocking.** Similar to the spread of a single process, the virus spreads through the layer of direct contacts. The epidemic lasts 150 days. In the initial phase of the experiment, the probabilities \(\beta\) and \(\gamma\) are constant, and only the virus spreads in the network. The spread of information begins after the blocking time, which is intended to simulate the real situation when the information about SARS-COV-2 was not released to the public. The Citizen Lab report shows that the blocking lasted three weeks (21 days) [2].
Consequently, for the first 21 iterations, the spread of information is blocked. As networks of different sizes were analysed, it was decided to test the effect of changing the blocking time on the outbreak and additionally to test blocking for one week (7 iterations) and two weeks (14 iterations). After that time, the spread of the information begins, and the information spreads simultaneously with the virus as described above (scenario: virus and information spread simultaneously). The epidemic lasts 150 days or until all nodes are in the \(R\) state or no actor is in the \(I\) state. The scenario is repeated at least 20 times for each combination of parameters. ### Effect of information blocking To compare the three scenarios with each other, we have looked at three moments during the epidemic spreading: 1. When the peak of infections occurred, assuming that the later this happened, the better, as it gives the healthcare services more time to prepare. Thus, we took the peak day for case 2 (virus and information spread simultaneously) and calculated how much earlier the peak day occurred in the other two cases. 2. How many people got infected till the peak day? We took the peak day for the second case (virus and information) and compared how many more people got infected until this day in the other two cases (taking into account both infected and recovered nodes). 3. How many people got infected during 150 days? We took the final number of infected and recovered people for the second case (virus and information) and compared how many more people got infected during 150 days in the other two cases (taking into account both infected and recovered nodes); a sketch of how these quantities can be computed from the simulated time series is given after the discussion below. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **Net.** & **Layers** & **Nodes** & **Edges** & **Avg. degree** & **Direct contact layer** & **Description** \\ \hline N1 & 5 & 61 & 620 & 20.33 & work (6.47) & AUCS CS-AARHUS [43] \\ \hline N2 & 3 & 241 & 1 370 & 11.37 & advice (4.18) & Ckm Physicians Innovation [44] \\ \hline N3 & 37 & 417 & 3 588 & 17.21 & Ryanair (9.39) & EU Air Transportation [45] \\ \hline N4 & 3 & 71 & 1 659 & 46.73 & co-work (10.8) & Lazega Law Firm [46] \\ \hline N5 & 3 & 88 804 & 210 250 & 4.64 & RT (2.79) and MT (3.83) & Tweets related to 2013 World Championships in Athletics [47] \\ \hline N6 & 13 & 14 489 & 59 026 & 8.39 & physics.bio-ph (4.13), q-bio.MN (4.64), physics.data-an (5.30), cond-mat.dis-nn (4.19), cs.SI (4.69) & \\ \hline \end{tabular} \end{table} Table 3: Networks used in experiments, their parameters and short description. The average degree of actors was calculated using the degree definition from [49]. For each direct contact layer, the average node degree on that layer is included in brackets. Table 4 presents the summary of our results for the three aspects mentioned above. We can see that in most networks, the results indicate that blocking information for just 21 days results in the peak day being up to 35% (network N2) earlier than in the case where the information can spread together with the virus. A similar situation occurs with the number of infected individuals on the peak day, which can be up to 138% (again for network N2) higher than for the \(SIR\) and \(UAU\) process. Interestingly, while information blocking significantly impacts the "peak time", it has a lower impact on the total number of individuals affected by the disease after 150 days.
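The three comparison quantities above (peak day, nodes affected by the baseline peak day, and nodes affected by day 150) can be read off directly from the simulated time series of infected and recovered counts. The sketch below is illustrative only; the list-based representation of the per-day counts and the function names are our own assumptions.

```python
def peak_day(infected):
    """Day (iteration) with the highest number of simultaneously infected nodes."""
    return max(range(len(infected)), key=lambda t: infected[t])

def affected_by_day(infected, recovered, day):
    """Nodes affected (infected plus recovered) on the given day."""
    return infected[day] + recovered[day]

def compare_to_baseline(base, other):
    """Relative differences of `other` vs. the baseline (virus and information).

    `base` and `other` are dicts holding per-day count lists under keys 'I' and 'R'.
    """
    d = peak_day(base['I'])
    return {
        'peak_day_shift_%': 100.0 * (d - peak_day(other['I'])) / d,
        'extra_affected_at_peak_%': 100.0 * (affected_by_day(other['I'], other['R'], d) /
                                             affected_by_day(base['I'], base['R'], d) - 1.0),
        'extra_affected_at_150_%': 100.0 * (affected_by_day(other['I'], other['R'], -1) /
                                            affected_by_day(base['I'], base['R'], -1) - 1.0),
    }
```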
We have to note that since both \(SIR\) and \(UAU\) are not deterministic processes, we have repeated the simulation at least 20 times for each combination of parameters. However, for bigger networks like N6-dis-nn, N6-MN and N6-SI, this number was too low, and in the case of those networks, all three cases (\(SIR\), \(SIR\) and \(UAU\), Blocking) were very similar. Each one was within the standard deviation of another two, and there was no statistically significant difference between all three cases (tab. 5). Unfortunately, due to the size of the network and the number of combinations of parameters, we could not repeat the simulations more times. \begin{table} \begin{tabular}{|l|c c|c c|c c|} \hline & \multicolumn{2}{c|}{**Peak day**} & \multicolumn{2}{c|}{**1-R at peak day**} & \multicolumn{2}{c|}{**1-R at 150 day**} \\ \hline **Network** & \(SIR\) & **Blocking** & \(SIR\) & **Blocking** & \(SIR\) & **Blocking** \\ \hline \hline N1 & 16.65\% & 15.18\% & 23.78\% & 21.91\% & 10.70\% & 9.80\% \\ \hline N2 & 34.54\% & 35.42\% & 136.78\% & 138.23\% & 91.80\% & 95.83\% \\ \hline N3 & 15.16\% & 10.82\% & 18.60\% & 19.05\% & 10.73\% & 11.26\% \\ \hline N4 & 52.15\% & 50.74\% & 35.77\% & 33.28\% & 8.74\% & 7.14\% \\ \hline \hline N5-RT & 6.51\% & 6.36\% & 7.14\% & 6.24\% & 3.59\% & 2.67\% \\ \hline N5-MT & 1.96\% & 0.78\% & 7.32\% & 6.75\% & 4.56\% & 3.88\% \\ \hline \hline N6-bio-ph & 14.85\% & 14.50\% & 16.03\% & 14.94\% & 8.00\% & 7.45\% \\ \hline N6-data-an & 3.00\% & 2.81\% & 7.42\% & 6.00\% & 3.48\% & 2.78\% \\ \hline N6-dis-nn & 1.02\% & 0.41\% & 0.65\% & -0.68\% & 0.28\% & -0.55\% \\ \hline N6-MN & -0.15\% & 0.02\% & 1.24\% & -1.72\% & 0.44\% & -0.82\% \\ \hline N6-SI & 0.58\% & -0.90\% & 1.57\% & 1.04\% & 0.87\% & 0.63\% \\ \hline \end{tabular} \end{table} Table 4: The results for different scenarios; our baseline is \(SIR\) and \(UAU\) to which we compare two other processes. The value in each cell represents how faster the peak day was or how much more nodes got infected (until the peak day or until 150 day) compared to \(SIR\) and \(UAU\), that is, the scenario where both the virus and the information start to spread at the same time. \begin{table} \begin{tabular}{|l|c c|c c|c c|} \hline & \multicolumn{2}{c|}{**Peak day**} & \multicolumn{2}{c|}{**1-R at peak day**} & \multicolumn{2}{c|}{**1-R at 150 day**} \\ \hline **Network** & \(SIR\) & **Blocking** & \(SIR\) & **Blocking** & \(SIR\) & **Blocking** \\ \hline \hline N1 & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** \\ \hline N2 & **>0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** \\ \hline N3 & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** \\ \hline N4 & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** \\ \hline \hline N5-RT & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** \\ \hline N5-MT & **<0.05** & **>0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** \\ \hline \hline N6-bio-ph & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** \\ \hline N6-data-an & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** \\ \hline N6-dis-nn & **<0.05** & **>0.05** & **<0.05** & **<0.05** & **>0.05** & **<0.05** \\ \hline N6-MN & **>0.05** & **>0.05** & **>0.05** & **<0.05** & **>0.05** & **>0.05** \\ \hline N6-SI & **>0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** & **<0.05** \\ \hline \end{tabular} \end{table} Table 5: The results of the p-value for Wilcoxon signed rank test. 
We compare the results of \(SIR\) and Blocking to \(SIR\) and \(UAU\). Wilcoxon signed rank test is a non–parametric counterpart of the paired t-test and is often used in situations when we cannot ensure normal distribution of samples [50, 51]. ### Duration of the delay The next element we evaluated was the effect of the delay duration on the epidemic spreading. To do so, we ran our experiments again, this time for 7 and 14 days information blocking periods, and compared it with previous results for 21 days blocking period. Due to the network size in this experiment, we have not used the N5 network. The results show that information blocking, regardless of the blocking period, results in very similar results, i.e., although for some individual networks, longer blocking results in a faster epidemic peak and higher number of infected nodes, on average, the results for all blocking periods are very similar (tab. 6), and according to Wilcoxon signed rank test, most of the differences are not statistically significant (tab. 7). This leads to the conclusion that what is important is the fact that we block information about infectious diseases, not the duration of the ban. This emphasises the need to share information with society as soon as possible so that the information can start spreading as soon as possible and prevent as many infections as possible, especially in the first weeks of a pandemic. ## 4 Conclusions The study included an investigation of the influence of information blocking on the spread of infectious diseases. A comparison of the intensity of the epidemic for three different periods of information blocking, as well as an investigation of the impact of the parameters of the information spreading model on the epidemic course, revealed that the spreading of information about the virus reduces the intensity of the epidemic and flattens the disease curve. No impact of shorter blocking periods on the change in epidemic dynamics was found, indicating that even a short period of information blocking will increase the size and speed of the epidemic. \begin{table} \begin{tabular}{|l|c c|c c|c c|} \hline & \multicolumn{2}{c|}{**Peak day**} & \multicolumn{2}{c|}{**1-R at peak day**} & \multicolumn{2}{c|}{**1-R at 150 day**} \\ \hline **Network** & **14 days** & **7 days** & **14 days** & **7 days** & **14 days** & **7 days** \\ \hline N1 & -1.25\% & 1.80\% & 0.00\% & -2.20\% & 0.55\% & -2.03\% \\ \hline N2 & 4.41\% & 1.88\% & -4.07\% & -6.41\% & -4.12\% & -4.34\% \\ \hline N3 & 1.09\% & 2.98\% & 1.44\% & 0.74\% & 1.28\% & 0.67\% \\ \hline N4 & -1.70\% & -2.21\% & 1.85\% & 0.85\% & 1.83\% & 1.10\% \\ \hline \hline N6-bio-ph & 0.08\% & -0.54\% & 0.03\% & -4.76\% & -0.24\% & -2.65\% \\ \hline N6-data-an & -0.04\% & 0.62\% & 0.12\% & -0.12\% & 0.05\% & -0.44\% \\ \hline N6-dis-nn & 0.81\% & 0.42\% & 0.44\% & -3.27\% & 0.34\% & -2.76\% \\ \hline N6-MN & 3.47\% & 0.74\% & 0.21\% & 0.44\% & -0.43\% & -0.20\% \\ \hline N6-SI & 0.76\% & -0.30\% & -0.20\% & -1.31\% & -0.13\% & -0.86\% \\ \hline \end{tabular} \end{table} Table 6: The results for different delay times, the baseline is \(SIR\)**and**\(UAU\)**with **21 days delay**, to which we compare two other delay periods. The value in each cell represents how sooner or later (in case of negative values) was the peak day, or how much more or less (in case of negative values) nodes got infected till peak day or till 150 day. 
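The statistical comparison described above can be reproduced with the Wilcoxon signed-rank test available in SciPy. The sketch below pairs the runs of two scenarios by repetition index, which is an assumption of this illustration rather than a detail stated in the text.

```python
from scipy.stats import wilcoxon

def scenarios_differ(metric_baseline, metric_other, alpha=0.05):
    """Paired Wilcoxon signed-rank test between two scenarios.

    `metric_baseline` and `metric_other` contain the same metric (e.g. the
    peak day) for matched simulation runs of the baseline scenario and the
    compared one. Returns the p-value and whether the difference is
    significant at the given level.
    """
    result = wilcoxon(metric_baseline, metric_other)
    return result.pvalue, result.pvalue < alpha
```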
\begin{table} \begin{tabular}{|l|c c|c c|c c|} \hline & \multicolumn{2}{c|}{**Peak day**} & \multicolumn{2}{c|}{**1-R at peak day**} & \multicolumn{2}{c|}{**1-R at 150 day**} \\ \hline **Network** & **14 days** & **7 days** & **14 days** & **7 days** & **14 days** & **7 days** \\ \hline N1 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & **\textgreater{}0.05** \\ \hline N2 & \textgreater{}0.05 & \textless{}**0.05** & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 \\ \hline N3 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 \\ \hline N4 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 \\ \hline \hline N6-bio-ph & \textgreater{}0.05 & \textgreater{}0.05 & \textless{}**0.05** & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & **\textless{}0.05** \\ \hline N6-data-an & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & **\textless{}0.05** \\ \hline N6-dis-nn & \textgreater{}0.05 & \textgreater{}0.05 & \textless{}**0.05** & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & **\textless{}0.05** \\ \hline N6-MN & \textgreater{}0.05 & \textless{}**0.05** & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 \\ \hline N6-SI & \textgreater{}0.05 & \textgreater{}0.05 & \textgreater{}0.05 & \textless{}**0.05** & \textgreater{}0.05 & **\textless{}0.05** \\ \hline \end{tabular} \end{table} Table 7: The results of the p-value for Wilcoxon signed rank test. We compare the results of \(SIR\) and \(UAU\) with 7 and 14 days delay to \(SIR\) and \(UAU\) with 21 days delay. Wilcoxon signed rank test is a non-parametric counterpart of the paired t-test and is often used in situations when we cannot ensure normal distribution of samples [50, 51]. ### Limitations of our research The problem of spreading information and virus in multilayer networks is very important. This work focused on mapping the most important features of the propagation of the SARS-COV-2 virus and information about it. Research has allowed us to investigate the most important relationships; however, some aspects must be elaborated further. In this paper, the spreading probability thresholds for the \(SIR\) model and the \(UAU\) are assumed to be the same for all actors. They only change due to interactions between processes, but each change results in the same probability value. In real life, the chance of contracting a virus is significantly influenced by age, the burden of additional diseases, and other factors. Therefore, it would be necessary to investigate how the analysed process will shape the individual probability of infection for each actor in the network. One way of studying this dependence would be to randomise the status of individual actors in the network based on available statistics that describe the characteristics of the population of a given country through information such as gender, age or the percentage of patients with specific diseases. 
A broader view of the examined relationship between virus and information spread could be gained by extending the set of tested probabilities for the \(SIR\) model to include probability values for countries other than Italy and Poland. The model of the spread of information about the virus could be improved by making the probability of information spread time-dependent. In this way, it would be possible to represent the real pattern in which new information is more popular and therefore spreads faster, both in direct contacts and through social networks. As the information gets older, it becomes less popular, which means that the spread is slower and sometimes stops completely. Additionally, our model assumes that the information spreads between actors, ignoring external influence on the network, such as government information campaigns that target all nodes in the network simultaneously. With such a mechanism included, the influence of information blocking could be even more profound, since information might reach all nodes at the beginning of the epidemic. An additional interesting issue is an attempt to represent the phenomenon of "forgetting" or ignoring the existence of the virus. It is the result of the fatigue of having to respect the restrictions imposed by the authorities or to be careful and wear personal protective equipment. Therefore, as time passes, more and more people start to ignore the information about the existence of the virus and become less vigilant. This should be expressed as an increased chance of infection, despite awareness of the virus, after a certain period of the epidemic. Finally, we used average-size networks. It would be interesting to use larger networks that better reflect the complexity of interactions between people on various levels and consider real mobility patterns. ## Acknowledgments This work was partially supported by the Polish National Science Centre, under Grants no. 2016/21/D/ST6/02408 and 2022/45/B/ST6/04145.
2310.07732
The tropical polytope is the set of all weighted tropical Fermat-Weber points
Let $v_1,\ldots,v_m$ be points in $\mathbb{R}^n$, and let $w_1,\ldots,w_m$ be positive real weights. The weighted Fermat-Weber points are those points $z$ which minimize $\sum w_i d(v_i, z)$. Following Com\u{a}neci and Joswig, we study the weighted Fermat-Weber points with respect to an asymmetric tropical metric. In the unweighted case (when $w_1 = \cdots = w_m = 1$), Com\u{a}neci and Joswig showed that the set of Fermat-Weber points is the "central" cell of the tropical convex hull of $v_1,\ldots,v_m$. We show that for any fixed data points $v_1, \ldots, v_m$, as the weights $w_i$ vary, the set of all Fermat-Weber points is the entire tropical convex hull of the $v_i$.
Shelby Cox, Mark Curiel
2023-10-04T04:02:01Z
http://arxiv.org/abs/2310.07732v1
# The tropical polytope is the set of all weighted tropical Fermat-Weber points ###### Abstract. Let \(v_{1},\ldots,v_{m}\) be points in \(\mathbb{R}^{n}\), and let \(w_{1},\ldots,w_{m}\) be positive real weights. The weighted Fermat-Weber points are those points \(z\) which minimize \(\sum w_{i}d(v_{i},z)\). Following Comaneci and Joswig, we study the weighted Fermat-Weber points with respect to an asymmetric tropical metric. In the unweighted case (when \(w_{1}=\cdots=w_{m}=1\)), Comaneci and Joswig showed that the set of Fermat-Weber points is the "central" cell of the tropical convex hull of \(v_{1},\ldots,v_{m}\). We show that for any fixed data points \(v_{1},\ldots,v_{m}\), as the weights \(w_{i}\) vary, the set of all Fermat-Weber points is the entire tropical convex hull of the \(v_{i}\). ## 1. Introduction Phylogenetics is a field of computational biology concerned with converting molecular data into trees that capture evolutionary relationships. While there are several, often combinatorial, methods for computing these phylogenetic trees, typically there is much discrepancy among the various resulting trees. Thus, a key hurdle to overcome is to find a tree which best represents the true evolutionary data. A common strategy is to gather information from many trees and compile this information into a single tree. The resulting tree obtained from such a strategy is called a _consensus tree_ and the algorithm used to compute a consensus tree is called a _consensus method_. Still there are many consensus methods for computing a consensus tree. For a survey of consensus methods see [1]. While many consensus methods are combinatorial, in this paper our approach is through tropical geometry. A phylogenetic tree is a metric tree with \(N\) leaves. We embed the set of phylogenetic trees on N leaves into \(\mathbb{R}^{\binom{N}{2}}\), by sending a tree \(T\) to the list of \(\binom{N}{2}\) distances between each of the pairs of leaves of \(T\). This space is known as the _space of phylogenetic trees_[8]. In this paper, we only consider trees up to adding a constant to all pendant (leaf-adjacent) edge lengths. In this setting, the space of phylogenetic trees can be realized as a tropical linear space, the _space of ultrametrics_, in the tropical projective torus \(\mathbb{R}^{\binom{N}{2}}/\mathbb{R}1\)[8]. In particular, this tree space is tropically convex - for more background see [8, SS4.3]. A problem of computing a consensus tree can now be stated as the following optimization problem: for a fixed set of points in phylogenetic tree space, locate a point which minimizes the sum of distances to that point. This type of problem is more broadly known as a _Fermat-Weber problem_. A Fermat-Weber problem is a geometric problem seeking the median of a collection of data points \(v_{1},\ldots,v_{m}\) taken from a metric space \(X\), specifically the problem asks to find a point \(x\) such that sum \(\sum_{i=1}^{m}d(x,v_{i})\) is minimized. Such a median \(x\) is called a _Fermat-Weber point_. This optimization problem arises in many different contexts such as linear programming via transportation problems [2], economics via location theory [4], and of course phylogenetics via tropical geometry [2, 7]. In the Euclidean case, there is a single Fermat-Weber point, and it is contained in the convex hull of the data points. 
In [2], Comaneci and Joswig show that with an asymmetrical tropical metric on phylogenetic tree space, the Fermat-Weber points form a cell in the tropical convex hull of the data points. Since phylogenetic tree space is tropically convex, this means that each Fermat-Weber point can be interpreted as a tree itself. Classically, the Fermat-Weber problem was first posed by Fermat before 1640 to compute the Fermat-Weber point when \(P\) is a triangle in Euclidean space. It was solved geometrically by Evangelista Torricelli in 1645. We note that the sum \(\sum_{i=1}^{m}d(x,v_{i})\) assumes that the distances are weighted equally. Motivated by unequal attracting forces on particles, a generalization of the problem was introduced and solved by Thomas Simpson in 1750, later popularized by Alfred Weber in 1909, by considering weighted distances. In that spirit, this paper is concerned with generalizing the result by Comaneci and Joswig, namely, we are interested in locating the specific cells for which the Fermat-Weber points live in \(P\) by minimizing the sum \(\sum_{i=1}^{m}w_{i}d(x,v_{i})\) for some choice of positive real weights \(w_{i}\). Our main theorem states that the Fermat-Weber set is a cell of the tropical convex hull \(P\) and that, by choosing weights appropriately, it can be any cell of \(P\). **Theorem 1.1**.: _Given data points \(v_{1},\ldots,v_{m}\), the collection of asymmetric tropical weighted Fermat-Weber points over all possible positive real weights \(w_{i}\) is \(P=\operatorname{tconv}\{v_{1},\ldots,v_{m}\}\)._ This paper is organized as follows: in Section 2 we provide background in tropical geometry and polyhedral geometry necessary to understand our approach. Of particular interest we recall the Cayley trick which gives a correspondence between mixed subdivisions of the Minkowski sum of polytopes and subdivisions of the corresponding Cayley polytope. Further, at the end of Section 2 we formulate the Fermat-Weber problem in the language of tropical convexity. In Section 3, we prove our main result Theorem 1.1 as a corollary to Theorem 3.2. ## 2. Background In this section we set up the framework of the Fermat-Weber problem and introduce the algebraic and geometric tools that we use to study it. ### Classical Polytopes In this subsection we recall objects from convex geometry that play keys roles in computing Fermat-Weber sets. A _polyhedron_\(P\subset\mathbb{R}^{n}\) is an intersection of finitely many half spaces. We call \(P\) a _polytope_ if this intersection is bounded. In this case, we write \(P=\operatorname{conv}(A)\) to mean \(P\) is the convex hull of some finite set \(A\subset\mathbb{R}^{n}\). For instance the standard simplex is the convex hull of the \(n\) standard basis vectors in \(\mathbb{R}^{n}\) and we denote it by \(\Delta^{n-1}\). A _face_ of a polytope \(P\) is the collection of points in \(P\) that minimizes the dot product with a fixed vector \(u\), specifically \[\operatorname{face}_{\mathbf{u}}(P):=\{\mathbf{x}\in P\mid\mathbf{u}\cdot \mathbf{x}\leq\mathbf{u}\cdot\mathbf{y},\forall\mathbf{y}\in P\}. \tag{1}\] A _polyhedral complex_ is a collection of polyhedra \(S=\{C_{i}\}\) with the following properties: (1) \(S\) is closed under taking faces, and (2) \(C_{i}\cap C_{j}\) is a face of both \(C_{i}\) and \(C_{j}\), or is empty. The _normal cone to \(F\) in \(P\)_, denoted \(\sigma_{F}\), is the set of all vectors \(u\in\left(\mathbb{R}^{m}\right)^{*}\) such that \(\operatorname{face}_{\mathbf{u}}(P)\supseteq F\). 
Alternatively, \(\sigma_{F}\) is the closure of \(\{u\in\left(\mathbb{R}^{m}\right)^{*}\mid\operatorname{face}_{\mathbf{u}}(P)=F\}\). The _normal fan_ of a polytope \(P\) is the collection of cones \(\{\sigma_{F}\}_{F\text{ a face of }P}\). For the remainder of this section we are concerned with subdivisions of polytopes. Consider \(m\) polytopes \(P_{1},\ldots,P_{m}\subset\mathbb{R}^{n}\) where \(P_{i}=\operatorname{conv}(A_{i})\) for some finite sets \(A_{i}\subset\mathbb{R}^{n}\). A _subdivision_ of a polytope \(P\) is a polyhedral complex \(S=\{C_{i}\}\) such that \(P=\bigcup_{i}C_{i}\). **Definition 2.1**.: Given a polytope \(P=\operatorname{conv}\{\mathbf{v}_{1},\ldots,\mathbf{v}_{k}\}\subset\mathbb{R}^{n}\), and weights \(w_{i}\in\mathbb{R}\) on \(\mathbf{v}_{i}\), the _lift_ of \(P\) with respect to \(\mathbf{w}\) is \[\tilde{P}:=\operatorname{conv}\{(\mathbf{v}_{i},w_{i})\mid i=1,\ldots,k\} \subset\mathbb{R}^{n+1} \tag{2}\] A face \(F=\operatorname{face}_{\mathbf{u}}(\tilde{P})\) of \(\tilde{P}\) is an _upper face_ if \(u_{n+1}<0\). The _regular subdivision of \(P\) with respect to \(\mathbf{w}\)_ is the projection of the upper faces of \(\tilde{P}\) onto \(P\) (forgetting the last coordinate). An example is given in Example 2.24. **Definition 2.2**.: The _normal complex_ of a regular subdivision is the projection of the cones of the normal fan of \(\tilde{P}\) that are normal to upper faces of \(\tilde{P}\). **Notation:** A subdivision of \(P\) will be denoted \(\underline{P}\). Regular subdivisions induced by a piecewise-linear convex function \(\lambda\) will be denoted \(\underline{P_{\lambda}}\). **Definition 2.3**.: Let \(P_{1},\ldots,P_{m}\) be polytopes in \(\mathbb{R}^{n}\). The Cayley polytope, \(\operatorname{Cayley}(P_{1},\ldots,P_{m})\), is the convex hull of \(\bigcup_{i=1}^{m}e_{i}\times P_{i}\) in \(\mathbb{R}^{m}\times\mathbb{R}^{n}\). In the special case where \(P_{1}=P_{2}=\cdots=P_{m}\), the Cayley polytope is \(\Delta^{r-1}\times P\). For an example see Figure 5, **Definition 2.4**.: The _Minkowski sum_ of \(P_{1},\ldots,P_{m}\) is the set \[P=P_{1}+\cdots+P_{m}=\{\mathbf{p}_{1}+\cdots\mathbf{p}_{m}\mid\mathbf{p}_{i} \in P_{i}\}\subset\mathbb{R}^{n}.\] The Minkowski sum of polytopes is indeed a polytope since its vertices are necessarily sums of vertices of the summands, hence the Minkowski sum is a convex hull of a finite set. **Example 2.5** (Minkowski Sum).: Let \(A_{1}=\{a,b,c,d\}\) and \(A_{2}=\{e,f,g\}\) be the vertex sets in \(\mathbb{R}^{2}\) of \(P_{1}\) and \(P_{2}\) respectively. The minkowski sum \(P_{1}+P_{2}\) is the convex hull of \(\{a+e,b+f,c+f,c+g,d+g\}\) and is shown in Figure 1. **Definition 2.6**.: A _cell_ of the minkowski sum \(P=P_{1}+\ldots+P_{m}\) is a tuple \(C=(C_{1},\ldots,C_{m})\) where \(C_{i}\subseteq A_{i}\) for all \(i\). _Remark 2.7_.: Each cell \((C_{1},C_{2},\ldots,C_{m})\) gives a polytope \(\sum C_{i}\subseteq\sum P_{i}\), and we will often abuse notation by identifying \((C_{1},C_{2},\ldots,C_{m})\) with this sum. If \(C\) and \(C^{\prime}\) are two such cells, then \(C\cap C^{\prime}\) refers to the intersection \((\sum_{i}\operatorname{conv}(C_{i}))\cap(\sum_{i}\operatorname{conv}(C^{ \prime}_{i}))\). Additionally, a cell of \(P\) may be the Minkowski sum of two (or more) different ordered \(m\)-tuples, and we would like to consider these as different cells. 
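For small examples such as Example 2.5, the Minkowski sum can be computed directly from the vertex sets: every vertex of the sum is a sum of vertices of the summands, so it suffices to form all such sums and take their convex hull. The sketch below is a minimal illustration using SciPy; the square and triangle in the usage example are our own choice of coordinates, since Example 2.5 does not specify them.

```python
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

def minkowski_sum_vertices(*vertex_sets):
    """Vertices of the Minkowski sum of polytopes given by their vertex sets."""
    candidates = np.array([np.sum(combo, axis=0)
                           for combo in product(*(np.asarray(V, dtype=float)
                                                  for V in vertex_sets))])
    hull = ConvexHull(candidates)
    return candidates[hull.vertices]

# A unit square plus a triangle, in the spirit of Example 2.5.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
triangle = [(0, 0), (1, 0), (0, 1)]
print(minkowski_sum_vertices(square, triangle))
```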
The Cayley trick is a correspondence between mixed subdivisions of the Minkowski sum \(P_{1}+\cdots+P_{m}\) and subdivisions of \(\operatorname{Cayley}(P_{1},\ldots,P_{m})\), illustrated in Figure 5. The top right polytope in Figure 5 is \(\operatorname{Cayley}(\Delta^{2},\Delta^{2})\cong\Delta^{1}\times\Delta^{2}\). Explicitly, a subdivision of the Figure 1. A square (left), a triangle (middle), and their Minkowski sum (right). Cayley polytope gives rise to a subdivision of the Minkowski sum after forgetting the first \(m\) coordinates. For more details, see [11, Section 5] for coherent/regular subdivisions, and [5, Theorem 3.1] for all subdivisions. Although often stated as a theorem, we will use the Cayley trick to define _mixed subdivisions_. **Definition 2.8**.: A _mixed subdivision_ of \(P=(P_{1},\ldots,P_{m})\) is a collection of cells \(C^{1},\ldots,C^{k}\) so that \(\{\operatorname{Cayley}(\operatorname{conv}(C^{j}_{1}),\ldots,\operatorname{ conv}(C^{j}_{m}))\mid j=1,\ldots,k\}\) is a subdivision of \(\operatorname{Cayley}(P_{1},\ldots,P_{m})\). **Example 2.9**.: A subdivision of the polytope \(P_{1}+P_{2}\) of Example 2.5 consists of a collection of cells \(\{C^{1},C^{2},C^{3},C^{4}\}\) with \[\begin{array}{ll}C^{1}=\operatorname{conv}\{a,b,c,d\}+\operatorname{conv}\{e \}&C^{3}=\operatorname{conv}\{b,c\}+\operatorname{conv}\{e,f\}\\ C^{2}=\operatorname{conv}\{c,d\}+\operatorname{conv}\{e,g\}&C^{4}= \operatorname{conv}\{c\}+\operatorname{conv}\{e,f,g\}\end{array}\] It is mixed since the cells \(\operatorname{Cayley}(\{a,b,c,d\},\{e\})\), \(\operatorname{Cayley}(\{b,c\},\{e,f\})\), \(\operatorname{Cayley}(\{c,d\},\{e,g\})\), \(\operatorname{Cayley}(\{c\},\{e,f,g\})\) give a subdivision of \(\operatorname{Cayley}(P_{1},P_{2})\). Another subdivision of \(P_{1}+P_{2}\) can be achieved with the cells \[\begin{array}{l}C^{1}=\operatorname{conv}\{a,b,c,d\}+\operatorname{conv}\{e \}\\ C^{2}=\operatorname{conv}\{c,d\}+\operatorname{conv}\{e,g\}\\ C^{3}=\operatorname{conv}\{c\}+\operatorname{conv}\{e,f,g\}\\ C^{4}=\operatorname{conv}\{a,b,c\}+\operatorname{conv}\{f\}\\ C^{5}=\operatorname{conv}\{a,c,d\}+\operatorname{conv}\{f\}\end{array}\] However, it is not mixed since for example the cells \(\operatorname{Cayley}(\{a,b,c,d\},\{e\})\) and \(\operatorname{Cayley}(\{a,b,c\},\{f\})\) intersect on their interior. We will refer to Definition 2.8 above as the _combinatorial Cayley trick_. In addition to the combinatorial correspondence above, there is also an explicit geometric correspondence between a subdivision of the Cayley polytope and a mixed subdivision, sometimes called the _geometric Cayley trick_. **Theorem 2.10** ([5, Theorem 3.1]).: _Let \(\underline{C}\) be a subdivision of \(\operatorname{Cayley}(P_{1},\ldots,P_{m})\). Then the corresponding mixed subdivision of \(P=\sum_{i=1}^{m}P_{i}\) is \(\underline{P}=n\cdot\underline{C}\cap\{\frac{1}{m}\mathbb{1}_{m}\}\times \mathbb{R}^{n}\)._ Proposition 2.11 is essentially stated in [10, SS1.3]. It allows us to think about mixed subdivisions of a _weighted_ Minkowski sum, \(P^{\mathbf{w}}=\sum_{i=1}^{m}w_{i}P_{i}\), in terms of the Cayley polytope without weights, \(\operatorname{Cayley}(P_{1},\ldots,P_{m})\), by slicing at \(\{\mathbf{w}\}\times\mathbb{R}^{n}\). We provide a proof that explicitly states the map that induces the bijection, which we will use later in the paper. Figure 2. A mixed subdivision of \(P_{1}+P_{2}\) (left) and a subdivision of \(P_{1}+P_{2}\) that is not mixed (right). 
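The construction in Definition 2.3 is also easy to carry out explicitly: each vertex \(v\) of \(P_{i}\) is embedded as \((e_{i},v)\in\mathbb{R}^{m}\times\mathbb{R}^{n}\), and the Cayley polytope is the convex hull of all such points. The following sketch only assembles this list of lifted points (their convex hull can then be passed to any polyhedral software); the function name is our own.

```python
import numpy as np

def cayley_points(vertex_sets):
    """Lifted points (e_i, v), v a vertex of P_i, whose convex hull is Cayley(P_1, ..., P_m)."""
    m = len(vertex_sets)
    lifted = []
    for i, V in enumerate(vertex_sets):
        for v in np.atleast_2d(np.asarray(V, dtype=float)):
            e_i = np.zeros(m)
            e_i[i] = 1.0
            lifted.append(np.concatenate([e_i, v]))
    return np.array(lifted)

# Cayley(D, D) for D the standard triangle Delta^2 is Delta^1 x Delta^2.
triangle = np.eye(3)
print(cayley_points([triangle, triangle]))
```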
**Proposition 2.11**.: _Let \(C=\text{Cayley}(P_{1},\ldots,P_{m})\), and let \(D=\text{cone}(C)\) denote the cone over it. Let \(C^{\mathbf{w}}\), \(D^{\mathbf{w}}\), and \(P^{\mathbf{w}}\) denote the corresponding weighted versions, for \(\mathbf{w}\in\mathbb{R}^{m}\) with \(|\mathbf{w}|=1\). Let \(\lambda\) be a piecewise-linear convex function on \(\text{Cayley}(w_{1}P_{1},\ldots,w_{m}P_{m})\), and let \(g(\mathbf{x},\mathbf{y})=(w_{1}x_{1},\ldots,w_{m}x_{m},\mathbf{y})\). If \(\lambda^{\prime}=\lambda(g^{-1}(x,y))\), then the following diagram commutes._ Proof.: The subdivision \(\underline{C}^{\mathbf{w}}_{\lambda}\) induces a subdivison \(\underline{D}^{\mathbf{w}}_{\lambda}\). The function \(g\) is an invertible linear function. In particular, this means \(g\) preserves convexity, dimension, and the containment relations within a polyhedral subdivision. Thus, \(\underline{D}^{\mathbf{w}}_{\lambda}\) induces a subdivision \(\underline{D}_{\lambda^{\prime}}\) via \(\lambda^{\prime}(z)=\lambda(g^{-1}(z))\). This in turn induces a subdivision \(\underline{C}_{\lambda^{\prime}}\) with the weightings \(\lambda^{\prime}(e_{i},v_{ij})=\lambda(e_{i},w_{i}v_{ij})\), which is combinatorially equivalent to \(\underline{C}^{\mathbf{w}}_{\lambda^{\prime}}\). Moreover, \(n\cdot\underline{C}^{\mathbf{w}}_{\lambda^{\prime}}\cap\left(\{\frac{1}{m}\} \times\mathbb{R}^{n}\right)\), and \(\underline{C}_{\lambda^{\prime}}\cap(\{\mathbf{w}\}\times\mathbb{R}^{n})\) give the same subdivision of \(\underline{P}^{\mathbf{w}}_{\lambda}\). ### Tropical Polynomials and Regular Subdivisions #### 2.2.1. Tropical Arithmetic In the tropical max-plus semi-ring \((\mathbb{R}\cup\{-\infty\},\oplus,\odot)\), tropical addition \(\oplus\) and tropical multiplication \(\odot\) are defined by \[a\oplus b=\max\{a,b\},\quad a\odot b=a+b\quad\text{where }a,b\in\mathbb{R}.\] The multiplicative identity is \(0\), and the additive identity is \(-\infty\). These operations can be extended component-wise to the tropical projective torus \(\mathbb{T}\mathbb{P}^{n-1}\cong\mathbb{R}^{n}/\mathbb{R}1\). For vectors \(\mathbf{v},\mathbf{u}\in\mathbb{T}\mathbb{P}^{n-1}\), the notation \(\mathbf{v}\oplus\mathbf{u}\) and \(\mathbf{v}\odot\mathbf{u}\) denotes component-wise max and addition, respectively. Tropical scalar multiplication of a vector amounts to adding a (classical) multiple of the all ones vector \(\mathbb{1}_{n}\), namely \(\lambda\odot\mathbf{v}=\lambda\mathbb{1}+\mathbf{v}=(\lambda+v_{1},\ldots, \lambda+v_{n})\) for any \(\lambda\in\mathbb{R}\). **Example 2.12**.: If \(\mathbf{v}_{1}=(1,2,-3)\) and \(\mathbf{v}_{2}=(-5,3,2)\) are points in \(\mathbb{T}\mathbb{P}^{2}\), then \[\mathbf{v}_{1}\oplus\mathbf{v}_{2} =(\max\{1,-5\},\max\{2,3\},\max\{-3,2\})\] \[=(1,3,2)=(-1,1,0)\] \[\mathbf{v}_{1}\odot\mathbf{v}_{2} =(1-5,2+3,-3+2)\] \[=(-4,5,-1)\] \[3\odot v_{2} =(3-5,3+3,3+2)\] \[=(-2,6,5)\] Later it will be convenient to fix a specific representative of \(v\in\mathbb{T}\mathbb{P}^{n-1}\), namely the one where the sum of the coordinates is zero. We denote by \(H_{0}\) the hyperplane where these points are located: \[H_{0}:=\left\{\mathbf{z}\in\mathbb{R}^{n}\,\Bigg{|}\,\mathbf{z}\cdot\mathbb{1}= \sum_{i}z_{i}=0\right\}\subset\mathbb{R}^{n}.\] Each point in \(\mathbb{TP}^{n-1}\) has a unique representative in \(H_{0}\). When we draw pictures in \(\mathbb{TP}^{n-1}\), we will tropically scale points to have last coordinate zero, then project away the last coordinate and draw the point in \(\mathbb{R}^{n-1}\). 
For example, \((3,1,2)\equiv(1,-1,0)\) will be drawn in the plane at the location \((1,-1)\). #### 2.2.2. Tropical Polynomials Let \(\mathbf{x},\mathbf{a}\in\mathbb{R}^{n}\), and define the tropical monomial: \(\mathbf{x}^{\mathbf{a}}:=\sum_{i=1}^{n}a_{i}x_{i}\). Note that when the entries of \(\mathbf{a}\) are non-negative integers: \[x_{1}^{a_{1}}\odot\cdots\odot x_{n}^{a_{n}}=\underbrace{x_{1}\odot\cdots \odot x_{1}}_{a_{1}\text{ times}}\odot\cdots\odot\underbrace{x_{n}\odot\cdots \odot x_{n}}_{a_{n}\text{ times}},\] which explains the notation. Note that for \(\mathbf{a}\in\mathbb{R}^{n}\), \(\mathbf{x}^{\mathbf{a}}\) is still a well-defined tropical function, but not a tropical polynomial. **Definition 2.13**.: Let \(\mathbf{x}\in\mathbb{TP}^{n-1}\). A _tropical signomial in \(\mathbf{x}\)_ is a finite linear combination of tropical monomials, i.e. \[f(\mathbf{x})=\bigoplus_{\mathbf{a}\in A}\lambda_{\mathbf{a}}\odot\mathbf{x}^ {\mathbf{a}}\] where \(A\subset\mathbb{R}^{n}_{\geq 0}\) is finite and \(\lambda_{\mathbf{a}}\in\mathbb{R}\) for all \(\mathbf{a}\in A\). If \(A\subset\mathbb{Z}^{n}_{\geq 0}\), then \(f(\mathbf{x})\) is a _tropical polynomial_. **Example 2.14**.: Let \(f(x)=1\oplus 3\odot x\oplus-1\odot x^{\sqrt{2}}\). In terms of classical arithmetic operations: \[f(x)=\max\{1,3+x,-1+\sqrt{2}x\}.\] The graph of \(f(x)\) is depicted in fig. 3; it has three linear pieces: \[f(x)=\begin{cases}1&\text{if }x\leq-2\\ 3+x&\text{if }-2\leq x\leq\frac{4}{\sqrt{2}-1}\\ -1+\sqrt{2}x&\text{if }\frac{4}{\sqrt{2}-1}\leq x.\end{cases}\] #### 2.2.3. Optimization A tropical max-plus signomial is a piecewise-linear, continuous, convex function on \(\mathbb{R}^{n}\). For a convex function, any local minimum is a global minimum. This minimum can be identified by locating the tangent plane with zero slope, which we formalize using subgradients. **Definition 2.15** ([9, SS3.1.5]).: Given a convex function \(f:\mathbb{R}^{n}\to\mathbb{R}\), the _subdifferential_ of \(f\) at \(x\) is: \[\partial_{f}(x):=\{u\in\mathbb{R}^{n}\mid\forall z\in\operatorname{dom}(f),f( z)\geq f(x)+u^{\top}\cdot(z-x)\}. \tag{3}\] A _subgradient_ of \(f\) at \(x\) is any element of \(\partial_{f}(x)\). The subdifferential of any function is a closed convex set. If \(f\) is convex and differentiable at \(x\), then the subdifferential of \(f\) at \(x\) is a singleton. In particular, if \(f\) is linear then the subdifferential contains only the slope of \(f\). And if \(f\) is piecewise-linear, then \(\partial_{f}(x)\) is constant on the linear pieces of \(f\). **Lemma 2.16** ([9, Theorem 3.1.15]).: _For any function \(f\), the subdifferential at \(x\) contains \(\overline{0}\) if and only if \(x\) is a global minimizer for \(f\)._ Proof.: By definition, \(\overline{0}\in\partial_{f}(x)\) if and only if \[\forall z\in\operatorname{dom}(f),f(z)\geq f(x)+\overline{0}^{\top}\cdot(z-x) \iff\forall z\in\operatorname{dom}(f),f(z)\geq f(x),\] which is if and only if \(x\) is a global minimizer for \(f\). **Example 2.17**.: Let \(f(x)=1\oplus 3\odot x\oplus-1\odot x^{\sqrt{2}}\). The subdifferential of \(f(x)\) on each linear piece is the slope of that piece. The subdifferential at \(x=-2\) is \(\partial_{f}(-2)=[0,1]\), and the subdifferential at \(x=4(1+\sqrt{2})\) is \([1,\sqrt{2}]\). Note that \(0\) is in the subdifferential of the constant (left-most) linear piece, and this is where the global minimum of \(f(x)\) is achieved. See fig. 3. #### 2.2.4. 
Tropical Hypersurfaces The results we stated for subgradients hold for any convex function. In this section we recall further results for tropical signomials. For a tropical polynomial \(f:\mathbb{R}^{n}\to\mathbb{R}\), subdifferentials of linear pieces of \(f\) are encoded by a subdivision of \(\operatorname{Newt}(f)\). It is this connection that will allow us to convert the problem of optimizing \(f\) into a polyhedral geometry problem. **Definition 2.18**.: The _tropical vanishing set_ or _tropical hypersurface_ of a tropical signomial \(f=\bigoplus_{i}c_{i}\odot x^{\alpha_{i}}\), denoted \(\operatorname{tropV}(f)\), is the set of \(x\in\mathbb{R}^{n}\) for which the max in \(f(x)\) is achieved at least twice. \[\operatorname{tropV}(f)=\{x\in\mathbb{R}^{n}\mid\max\text{ in $f(x)$ is achieved at least twice}\}. \tag{4}\] Figure 3. The graph of \(f(x)=1\oplus 3\odot x\oplus-1\odot x^{\sqrt{2}}\). The connection to the Newton polytope of \(f\) is explained in section 2.2.4. **Lemma 2.19**.: _The tropical vanishing set of a product of real positive powers of polynomials, \(f_{i}^{w_{i}}\), is (as a set) the union of tropical vanishing sets of the \(f_{i}\). That is,_ \[\text{trop}\,V\!\left(\bigodot_{i=1}^{m}f_{i}^{w_{i}}\right)=\bigcup_{i=1}^{m} \text{trop}\,V\!\left(f_{i}\right),\text{ for }w_{i}>0.\] Proof.: Let \(f_{i}=\bigoplus_{\alpha\in A_{i}}c_{\alpha}\odot\mathbf{x}^{\alpha}\), \(A_{i}\subset\mathbb{R}_{>0}^{n}\). If the maximum in \(f_{j}\) is achieved twice by \(\mathbf{x}\), then \(f_{j}^{w_{j}}(\mathbf{x})=w_{j}(c_{\alpha_{1}}+\alpha_{1}\cdot\mathbf{x})=w_{ j}(c_{\alpha_{2}}+\alpha_{2}\cdot\mathbf{x})\) for some \(\alpha_{1}\neq\alpha_{2}\in A_{j}\). It follows that the maximum in \(\bigodot_{i=1}^{m}f_{i}^{w_{i}}\) is also achieved at least twice: \[\bigodot_{i=1}^{m}f_{i}^{w_{i}}=w_{j}(c_{\alpha_{1}}+\alpha_{1}\cdot\mathbf{x })+\sum_{i\neq j}f_{i}^{w_{i}}(\mathbf{x})=w_{j}(c_{\alpha_{2}}+\alpha_{2} \cdot\mathbf{x})+\sum_{i\neq j}f_{i}^{w_{i}}(\mathbf{x}).\] On the other hand, if the maximum is achieved twice in \(\bigodot_{i=1}^{m}f_{i}^{w_{i}}\) at \(\mathbf{x}\), then we must be able to write the maximum as two distinct sums: \(\sum_{i=1}^{m}w_{i}(c_{\alpha_{i}^{1}}+\alpha_{i}^{1}\cdot\mathbf{x})=\sum_{i= 1}^{m}w_{i}(c_{\alpha_{i}^{2}}+\alpha_{i}^{2}\cdot\mathbf{x})\), where \(\alpha_{i}^{1},\alpha_{i}^{2}\in A_{i}\). It follows that for some \(j\), \(\alpha_{j}^{1}\neq\alpha_{j}^{2}\), so the maximum in \(f_{j}^{w_{j}}\) is achieved at least twice. A tropical hypersurface is a polyhedral complex, and it can be understood combinatorially in terms of the Newton polytope of \(f\), defined below. **Definition 2.20** (Newton polytope).: Suppose \(f(\mathbf{x})=\bigoplus_{\mathbf{a}\in A}c_{\mathbf{a}}\odot\mathbf{x}^{ \mathbf{a}}\) is a multivariate polynomial for some finite \(A\subset\mathbb{R}^{n}\) and \(c_{\mathbf{a}}\in\mathbb{R}\). The _support_ of \(f(\mathbf{x})\) is the set, denoted \(\text{supp}(f)\), containing all \(\mathbf{a}\in A\) such that \(c_{\mathbf{a}}\neq-\infty\). The _Newton polytope_\(\text{Newt}(f)\) of the polynomial \(f(\mathbf{x})\) is the convex hull of its support, i.e. \(\text{Newt}(f)=\text{conv}(\text{supp}(f))\). If \(f,g\) are polynomials, then \(\text{Newt}(fg)=\text{Newt}(f)+\text{Newt}(g)\). **Proposition 2.21** ([8, Proposition 3.1.6]).: _Given a tropical signomial \(f=\sum_{i}c_{i}x^{\alpha_{i}}\), let \(\underline{N}_{f}\) denote the regular subdivision of \(\text{Newt}(f)\) induced by the weighting \(w(a_{i})=c_{i}\). 
Then \(\text{trop}\,V\!\left(f\right)\) is the codimension-1 skeleton of the normal complex of \(\underline{N}_{f}\)._ _Remark 2.22_.: First, although Proposition 3.1.6 in [8] is originally stated for tropical polynomials, the arguments clearly hold for tropical signomials as well. Moreover, the proof in [8] shows that the subdifferential of a linear piece of \(f\) consists of the points in the corresponding cell of \(\underline{N}\). **Definition 2.23**.: Given a tropical polynomial \(f\), let \(\underline{N}\) be the regular subdivision of \(\text{Newt}(f)\) induced by the coefficients of \(f\). The _normal complex of \(f\)_ is the normal complex of \(\underline{N}\); it is a subdivision of \(\mathbb{R}^{n}\). **Example 2.24**.: The following is an example of Proposition 2.21. Let \[f=x^{2}\oplus 4\odot xy\oplus 3\odot xz\oplus y^{2}\oplus 4\odot yz\oplus 3\odot z^{2}. \tag{5}\] Figure 4 depicts \(\text{tropV}(f)\), the subdivision of the Newton polytope dual to it, and the lift of the Newton polytope that induces that subdivision. **Lemma 2.25**.: _The minimum of a max-plus tropical polynomial is achieved on the cell dual to the cell of the Newton polytope containing \(\overline{\mathbf{0}}\)._ Proof.: Let \(f=\sum_{i}c_{i}x^{\alpha_{i}}\) be a max-plus tropical polynomial (so \(f\) is a piecewise-linear convex function). Let \(\underline{N}\) be the regular subdivision of \(\text{Newt}(f)\) induced by the weighting \(w(\alpha_{i})=c_{i}\). Let \(L\) be a linear piece of \(f\), and let \(N_{L}\) be the cell dual to it in \(\underline{N}\). If \(\overline{\mathbf{0}}\in N_{L}\), then by Proposition 2.21 \(\overline{\mathbf{0}}\) is in the subdifferential of \(f\) at any point in \(L\). It then follows from Lemma 2.16 that the minimum of \(f\) is achieved on \(L\). ### Fermat-Weber Problems A Fermat-Weber problem is a geometric problem seeking the median of a collection of data points \(V=\{\mathbf{v}_{1},\ldots,\mathbf{v}_{m}\}\subset X\), where \(X\) is a metric space with distance \(d(\mathbf{x},\mathbf{y})\). We are particularly interested in the Fermat-Weber points for a collection of data points in \(\mathbb{TP}^{n-1}\), where the points could represent phylogenetic trees. The goal of this section is to introduce the Fermat-Weber problem, and reframe a tropical version as a problem on Newton polytopes. In general, the median of a collection of points is not unique and hence we seek the set of all such medians, called the _Fermat-Weber set_. The medians belonging to the Fermat-Weber set are called _Fermat-Weber points_. Formally, the Fermat-Weber points are the points \(\mathbf{x}\in X\) minimizing the sum in (6). **Definition 2.26**.: The _Fermat-Weber_ points on the data \(V=\{\mathbf{v}_{1},\ldots,\mathbf{v}_{m}\}\subset X\) are the points \(\mathbf{x}\in X\) minimizing the following sum \[\operatorname{FW}(V):=\frac{1}{m}\sum_{i=1}^{m}d(\mathbf{x},\mathbf{v}_{i}). \tag{6}\] In this paper, we are interested in a variant of the Fermat-Weber problem, called the _weighted Fermat-Weber problem_. This new problem seeks the points \(\mathbf{x}\) minimizing the sum in Equation (7), where the weights \(w_{i}\) are positive real numbers. \[\operatorname{FW}(V,\mathbf{w}):=\frac{1}{m}\sum_{i=1}^{m}w_{i}d(\mathbf{x},\mathbf{v}_{i}). \tag{7}\] We will use the asymmetric tropical distance first defined by Comaneci and Joswig in [2].
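To make the weighted objective (7) concrete, the following short numerical sketch (added here for illustration; it is not part of the original text) evaluates \(\operatorname{FW}(V,\mathbf{w})\) and searches for one minimizer with a generic solver. The data points and weights are arbitrary sample values, and the distance plugged in anticipates the asymmetric tropical distance \(d_{\Delta}\) of Definition 2.27 below; the use of `scipy.optimize.minimize` is just one convenient choice for a piecewise-linear convex objective.

```python
import numpy as np
from scipy.optimize import minimize

def d_asym(x, y):
    # Asymmetric tropical distance (Definition 2.27): n*max_i(x_i - y_i) + sum_i(y_i - x_i).
    n = len(x)
    return n * np.max(x - y) + np.sum(y - x)

def fw_objective(x, V, w, dist=d_asym):
    # Weighted Fermat-Weber objective, Eq. (7).
    return sum(wi * dist(np.asarray(x), vi) for wi, vi in zip(w, V)) / len(V)

# Sample data in TP^2, represented in H_0 (coordinates summing to zero); weights are positive reals.
V = [np.array([0.0, 0.0, 0.0]), np.array([1.0, -1.0, 0.0]), np.array([2.0, 0.0, -2.0])]
w = [1.0, 1.0, 1.0]

# The objective is convex and piecewise linear, so any local minimizer is global (Lemma 2.16);
# a derivative-free solver returns one point of the (possibly higher-dimensional) Fermat-Weber set.
res = minimize(lambda x: fw_objective(x, V, w), x0=np.zeros(3), method="Nelder-Mead")
print(np.round(res.x - res.x.mean(), 3), round(res.fun, 3))   # report a representative in H_0
```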
**Definition 2.27**.: The asymmetric tropical distance, \(d_{\Delta}(\mathbf{x},\mathbf{y})\) is: \[d_{\Delta}(\mathbf{x},\mathbf{y}):=n\max_{i\in[n]}(x_{i}-y_{i})+\sum_{i\in[n]}(y_{i}-x_{i}). \tag{8}\] Figure 4. Left: A lift of \(2\Delta^{2}=\operatorname{Newt}(f)\) with weights given by the coefficients of \(f\), overlaid with \(\operatorname{tropV}(f)\) in black; right: \(\operatorname{tropV}(f)\). When the points \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}/\mathbb{R}1\) are given by their unique representative in \(H_{0}\) (the subspace where the coordinates sum to zero), the metric \(d_{\Delta}(\mathbf{x},\mathbf{y})\) can be simplified to the following \[d_{\Delta}(\mathbf{x},\mathbf{y}):=n\max_{i\in[n]}(x_{i}-y_{i}),\;\mathbf{x},\mathbf{y}\in H_{0}. \tag{9}\] Note that \(d_{\Delta}(x,y)\) is invariant under independent scalar multiplication of the input vectors, so it is well-defined on \(\mathbb{TP}^{n-1}\). From now on, we will assume that all points in \(\mathbb{TP}^{n-1}\) are given by their representative in \(H_{0}\). With this assumption, the distance to a point \(v_{i}\) can be reinterpreted as a power of a tropical linear function, and the sum in equation 7 can be realized as a tropical product of tropical linear functions (possibly with real exponents). Denoting the tropical linear form associated to \(v_{i}\) by \(f_{v_{i}}(\mathbf{x})\), the distance to \(v_{i}\) is \[f_{v_{i}}^{n}(\mathbf{x}):=d_{\Delta}(x,v_{i})=n\max_{j}(x_{j}-v_{ij})=\left(\bigoplus_{j=1}^{n}-v_{ij}\odot x_{j}\right)^{n}. \tag{10}\] It follows that the sum in eq. (7) for \(d=d_{\Delta}\) is \[\frac{1}{m}\sum_{i=1}^{m}w_{i}d_{\Delta}(\mathbf{x},\mathbf{v}_{i})=\frac{n}{m}\sum_{i=1}^{m}f_{v_{i}}^{w_{i}}(\mathbf{x})=\frac{n}{m}\bigodot_{i=1}^{m}f_{v_{i}}^{w_{i}}. \tag{11}\] **Definition 2.28**.: We define _the tropical signomial associated to data \(V\) with weights \(\mathbf{w}\)_, \(f_{V,\mathbf{w}}\), to be the following tropical function: \[f_{V,\mathbf{w}}(\mathbf{x}):=\bigodot_{i=1}^{m}f_{v_{i}}^{w_{i}}=\bigodot_{i=1}^{m}\left(\bigoplus_{j=1}^{n}-v_{ij}\odot x_{j}\right)^{w_{i}}.\] The tropical hypersurface \(\mathrm{tropV}(f_{v_{i}})\) is a tropical hyperplane centered at \(v_{i}\); it is the codimension-1 skeleton of the normal fan of the standard simplex \(\Delta^{n-1}\). By lemma 2.19, the hypersurface \(\mathrm{tropV}(f_{V,\mathbf{w}})\) is the union of tropical hyperplanes centered at the data points \(v_{i}\). The Newton polytope of \(f_{V,\mathbf{w}}=\bigodot_{i=1}^{m}f_{v_{i}}^{w_{i}}\) is \(\sum_{i=1}^{m}w_{i}\cdot\Delta^{n-1}\). **Example 2.29**.: The polynomial \(f_{V,\mathbf{w}}(\mathbf{x})\) with \(\mathbf{x}\in\mathbb{TP}^{2}\) and \(\mathbf{w}\in\mathbb{R}^{2}\) has nine terms for generic \(V\) and \(\mathbf{w}\).
\[f_{V,\mathbf{w}} =(-v_{11}\odot x_{1}\oplus-v_{12}\odot x_{2}\oplus-v_{13}\odot x _{3})^{w_{1}}\odot(-v_{21}\odot x_{1}\oplus-v_{22}\odot x_{2}\oplus-v_{23} \odot x_{3})^{w_{2}}\] \[=(-v_{11}\odot-v_{21})\odot x_{1}^{w_{1}+w_{2}}\oplus(-v_{11} \odot-v_{22})\odot x_{1}^{w_{1}}x_{2}^{w_{2}}\oplus(-v_{11}\odot-v_{23}) \odot x_{1}^{w_{1}}x_{3}^{w_{2}}\] \[\oplus(-v_{12}\odot-v_{21})\odot x_{1}^{w_{2}}x_{2}^{w_{1}}\oplus (-v_{12}\odot-v_{22})\odot x_{2}^{w_{1}+w_{2}}\oplus(-v_{12}\odot-v_{23}) \odot x_{2}^{w_{1}}x_{3}^{w_{2}}\] \[\oplus(-v_{13}\odot-v_{21})\odot x_{1}^{w_{2}}x_{3}^{w_{1}}\oplus (-v_{13}\odot-v_{22})\odot x_{2}^{w_{2}}x_{3}^{w_{1}}\oplus(-v_{13}\odot-v_{23 })\odot x_{3}^{w_{1}+w_{2}}.\] Its Newton polytope is \((w_{1}+w_{2})\cdot\Delta^{2}\), and it is depicted in the lower right of Figure 5, with \(w_{2}>w_{1}\). The image on the lower left of Figure 5 is the Newton polytope in the special case where \(w_{1}=w_{2}=1\). We now apply the results of the previous subsection to translate the problem of optimizing \(f_{V,\mathbf{w}}\) into a problem on \(\mathrm{Newt}(f_{V,\mathbf{w}})\). Since \(f_{V,\mathbf{w}}\) is a function from \(\mathbb{1}^{\perp}\) rather than \(\mathbb{R}^{n}\), we need the following result to apply the results of the previous subsection. **Proposition 2.30**.: _Given a max-plus tropical polynomial \(f:\mathbb{1}^{\perp}\to\mathbb{R}\), let \(\underline{N}\) be the subdivision of \(\operatorname{Newt}(f)\) induced by the coefficients of \(f\). The minimum of \(f\) is achieved on the cell dual to the cell of the Newton polytope containing \(\lambda\mathbb{1}\) for any \(\lambda\in\mathbb{R}\setminus\{0\}\)._ Proof.: Consider the following isomorphism \(\mathbb{R}^{n}/\mathbb{R}\mathbb{1}\cong\mathbb{1}^{\perp}\). \[\psi(x_{1},\dots,x_{n-1})=\left(x_{1},\dots,x_{n-1},-\sum_{i=1}^{n-1}x_{i}\right) \tag{12}\] The Newton polytope of \(f\) lives in \(\left(\mathbb{1}^{\perp}\right)^{*}\). The dual function of \(\psi\) is below. \[\psi^{*}(x_{1},\dots,x_{n})=\left(x_{1}-x_{n},\dots,x_{n-1}-x_{n}\right) \tag{13}\] Then \(\psi^{*}\left(\lambda\mathbb{1}\right)=\lambda(1-1,\dots,1-1)=\overline{ \mathbf{0}}\). Lemma 2.16 says that for a tropical function \(g:\mathbb{R}^{n}\to\mathbb{R}\), the linear piece of \(g\) whose dual cell contains \(\overline{\mathbf{0}}\) is the linear piece achieving the minimum. Combining this result with the map \(\psi\), it follows that \(f:\mathbb{1}^{\perp}\to\mathbb{R}\) achieves its minimum on the linear piece dual to the cell containing \(\lambda\mathbb{1}_{n}\). ### Tropical Convexity #### 2.4.1. Tropical Factorization Let \(f=\bigodot_{i=1}^{m}f_{i}\) be a tropical signomial that factors into a product of tropical signomials. The following theorem tells us how to compute the tropical hypersurface \(\operatorname{tropV}(f)\) in terms of a subdivision of they Cayley polytope. Let \(P_{i}=\operatorname{Newt}(f_{i})\), \(P=\sum_{i}P_{i}\), and write \(f_{i}=\sum c_{i,\alpha}\mathbf{x}^{\alpha}\). **Theorem 2.31** (Corollary 4.9 in [6]).: _Let \(\underline{C}\) be the regular subdivision of \(\operatorname{Cayley}(P_{1},\dots,P_{m})\) induced by the weights \(w(e_{i},\alpha)=c_{i,\alpha}\). Then the mixed subdivision of \(P\) corresponding to \(\underline{C}\) coincides with the regular subdivusion of \(P\) induced by the coefficients of \(f\)._ Recall that in the weighted tropical Fermat-Weber problem, \(f_{V,\mathbf{w}}\) factors into linear pieces, and so theorem 2.31 applies. 
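Since \(f_{V,\mathbf{w}}\) factors into tropical powers of linear forms, its monomial expansion can also be checked mechanically. The following short sketch (an added illustration with arbitrary sample data, not part of the original text) expands \(f_{V,\mathbf{w}}\) into its \(n^{m}\) tropical monomials, as in Example 2.29, and verifies numerically that the expanded and factored forms agree; here \(\oplus=\max\), \(\odot=+\), and \(x^{a}=a\cdot x\).

```python
import itertools
import numpy as np

def f_factored(x, V, w):
    # f_{V,w}(x) = product_i ( sum_j -v_ij (*) x_j )^{w_i}  =  sum_i w_i * max_j (x_j - v_ij)
    return sum(wi * np.max(x - vi) for wi, vi in zip(w, V))

def f_expanded(x, V, w):
    # One monomial per choice (j_1,...,j_m): coefficient sum_i -w_i*v_{i,j_i},
    # exponent vector sum_i w_i*e_{j_i}; the tropical sum over all choices is a max.
    m, n = len(V), len(V[0])
    terms = []
    for choice in itertools.product(range(n), repeat=m):
        coeff = sum(-w[i] * V[i][j] for i, j in enumerate(choice))
        expo = np.zeros(n)
        for i, j in enumerate(choice):
            expo[j] += w[i]
        terms.append(coeff + expo @ x)      # c (*) x^a  =  c + <a, x>
    return max(terms)

V = [np.array([0.0, 0.0, 0.0]), np.array([1.0, -1.0, 0.0])]   # two points in TP^2
w = [1.0, 2.0]
x = np.array([0.3, -0.7, 0.4])
print(np.isclose(f_factored(x, V, w), f_expanded(x, V, w)))   # True: the 3^2 = 9 monomials agree with the factored form
```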
**Corollary 2.32**.: _Let \(\underline{N}\) be the subdivision of \(\operatorname{Newt}(f_{V,\mathbf{w}})\) induced by the coefficients of \(f_{V,\mathbf{w}}\). Then \(\underline{N}=\underline{C}\cap\{\frac{1}{m}\mathbb{1}_{m}\}\times\mathbb{R}^{n}\). In particular, the cell of \(\underline{N}\) containing \(\lambda\mathbb{1}_{n}\) corresponds to the cell of \(\underline{C}\) containing \((\frac{1}{m}\mathbb{1}_{m},\frac{[\mathbf{w}]}{mn}\mathbb{1}_{n})\)._ Proof.: The point \((\frac{1}{m}\mathbb{1}_{m},\frac{[\mathbf{w}]}{mn}\mathbb{1}_{n})\) is the barycenter of \(\operatorname{Cayley}(P_{1},\dots,P_{m})\), so in particular, it lies in \(\operatorname{Cayley}(P_{1},\dots,P_{m})\cap\{\frac{1}{m}\mathbb{1}_{m}\} \times\mathbb{R}^{n}\). Apply theorem 2.31 and then theorem 2.10. **Proposition 2.33**.: _Let \(P_{i}=\operatorname{Newt}(f_{v_{i}})\), and let \(\underline{C}^{\prime}\) be the subdivision of \(\operatorname{Cayley}(P_{1},\dots,P_{m})\) induced by \(w(\alpha_{j},e_{i})=-v_{ij}\). Then the subdivision of \(P_{\mathbf{w}}=\sum_{i=1}^{m}w_{i}P_{i}\) induced by the coefficients of \(f_{V,\mathbf{w}}\) is \(\underline{C}^{\prime}\cap\{\mathbf{w}\}\times\mathbb{R}^{n}\). In particular, the cell of \(\underline{P}\) containing \(\mathbb{1}_{n}\) corresponds to the cell of \(\underline{C}^{\prime}\) containing \((\mathbf{w},\frac{1}{n}\mathbb{1}_{n})\)._ Proof.: Apply proposition 2.11 to corollary 2.32. #### 2.4.2. Tropical Convex Hull **Definition 2.34**.: The _min-tropical convex hull_ of a set of points \(A\subset\mathbb{T}\mathbb{P}^{n-1}\), denoted \(\operatorname{tconv}^{\min}(A)\) or just \(\operatorname{tconv}(A)\), is the set of all tropical linear combinations of points in \(A\), that is, \[\operatorname{tconv}(A):=\{\lambda_{1}\odot\mathbf{a}_{1}\oplus_{\min}\dots \oplus_{\min}\lambda_{k}\odot\mathbf{a}_{k}\mid\lambda_{i}\in\mathbb{R}, \mathbf{a}_{i}\in A,k\in\mathbb{N}\}. \tag{14}\] If \(A\) is a finite set, then \(\operatorname{tconv}(A)\) is called a _tropical polytope_. Note that the tropical convex hull is independent of the representatives in \(\mathbb{R}^{n}/\mathbb{R}1\) we choose for the points in \(A\). That is, if \(\mathbf{v}_{i}^{\prime}=\alpha_{i}\odot\mathbf{v}_{i}\), \(\lambda_{i}\in\mathbb{R}\), then with \(\lambda_{i}^{\prime}=\lambda_{i}-\alpha_{i}\) \[\lambda_{1}\odot v_{1}\oplus\cdots\oplus\lambda_{m}\odot v_{m}=(\lambda_{1}- \alpha_{1})\odot v_{1}^{\prime}\oplus\cdots\oplus(\lambda_{m}-\alpha_{m}) \odot v_{m}=\lambda_{1}^{\prime}\odot v_{1}^{\prime}\oplus\cdots\oplus\lambda_ {m}^{\prime}\odot v_{m}^{\prime}.\] **Example 2.35**.: Let \(v_{1}=(0,0,0)\), and \(v_{2}=(1,-1,0)\). The tropical polytope \(\operatorname{tconv}(v_{1},v_{2})\) consists of points of the form \(\lambda_{1}\odot v_{1}\oplus\lambda_{2}\odot v_{2}\), and is illustrated in Figure 6. The tropical polytope consists of three points, connected by two classical line segments. It turns out that the tropical convex hull of the data points coincides with the bounded part of the tropical hypersurface \(\operatorname{tropV}(f_{V})\). **Theorem 2.36** (Theorem 5.2.11 in [8]).: _The bounded part of the tropical hypersurface \(f_{V}\) is \(\operatorname{tconv}^{\min}(v_{1},\ldots,v_{m})\)._ Figure 5. Subdivisions with weightings. 
Clockwise starting on top left: Two tropical hyperplanes in \(\mathbb{TR}^{2}\), the corresponding regular subdivision of \(\Delta^{1}\times\Delta^{2}\), the corresponding mixed subdivision of \((w_{1}+w_{2})\Delta^{2}\) (weighted FW problem), and the corresponding mixed subdivision of \(2\Delta^{2}\) (unweighted FW problem). We now see that \(f_{V}\) and \(f_{V,\mathbf{w}}\) define the same tropical hypersurface. Thus, the bounded part \(\operatorname{tropV}(f_{V,\mathbf{w}})\) is the tropical convex hull of the \(v_{i}\)'s. **Lemma 2.37**.: _For any \(\mathbf{w}\in\mathbb{R}^{m}\), \(\text{tropV}(f_{V})=\text{tropV}(f_{V,\mathbf{w}})\)._ Proof.: By lemma 2.19, \(\operatorname{tropV}(f_{v}^{w})=\operatorname{tropV}(f_{v})\) for any \(v\in\mathbb{1}^{\perp}\), and any \(w>0\). Applying \(lemma\) 2.19 to \(f_{V}\) and \(f_{V,\mathbf{w}}\), it follows that \[\operatorname{tropV}(f_{V})=\bigcup_{i=1}^{m}\operatorname{tropV}(f_{v_{i}})= \operatorname{tropV}(f_{V,\mathbf{w}}).\qed\] **Corollary 2.38**.: _The bounded part of the tropical hypersurface \(f_{V,\mathbf{w}}\) is \(\text{tconv}^{\min}(v_{1},\dots,v_{m})\)._ ## 3. Solving the Weighted Tropical Fermat-Weber Problem In this section, we use combinatorics and tropical geometry to solve the weighted Fermat-Weber problem for \(\mathbb{TP}^{n-1}\) equipped with the tropical asymmetric distance. We begin by discussing the extrema of tropical polynomials. ### Containment **Theorem 3.1**.: _Given data points \(v_{1},\dots,v_{m}\in\mathbb{R}^{n}/\mathbb{R}1\), and weights \(w_{1},\dots,w_{m}>0\), the weighted Fermat-Weber points under the tropical asymmetric metric are a cell of \(\text{tconv}(v_{1},\dots,v_{m})\)._ Proof.: According to corollary 2.38, \(\operatorname{tconv}(v_{1},\dots,v_{m})\) is the bounded part of \(\operatorname{tropV}(f_{V,\mathbf{w}})\). The bounded cells of \(\operatorname{tropV}(f_{V,\mathbf{w}})\) are exactly those cells dual to interior cells of the Newton polytope, so by proposition 2.30, it suffices to show that \(\lambda\mathbb{1}\) is in the interior of \(\operatorname{Newt}(f_{V,\mathbf{w}})\). The vertices of \(\operatorname{Newt}(f_{V,\mathbf{w}})\) are \(|\mathbf{w}|e_{i}\), and their average, \(\frac{|\mathbf{w}|}{n}\mathbb{1}\), is in the interior of the Newton polytope. This proves the Fermat-Weber points are achieved on a bounded cell of \(\operatorname{tropV}(f_{V,\mathbf{w}})\), and therefore form a cell of \(\operatorname{tconv}(v_{1},\dots,v_{m})\). ### Any cell can be the weighted Fermat-Weber cell The following result shows that we can pick weights \(w_{i}\) so that the weighted barycenter lies in any interior cell of the subdivision of the Cayley polytope. This finishes the proof of the main theorem. **Theorem 3.2**.: _Given some data points \(\mathbf{v}_{1},\dots,\mathbf{v}_{m}\in\mathbb{R}^{n}/\mathbb{R}1\), and any simplex \(S\) in \(\Delta^{r-1}\times\Delta^{n-1}\), which intersects the relative interior of \(\Delta^{r-1}\times\Delta^{n-1}\), there is a choice of weights \(w_{1},\dots,w_{m}\in[0,1]\) with \(\sum w_{i}=1\), so that \(S\) contains the point \((w_{1},\dots,w_{m},\frac{1}{n}\mathbb{1}_{n})\)._ The proof uses a well-known correspondence between subsets of the vertices of \(\Delta^{m-1}\times\Delta^{n-1}\) and subgraphs of \(K_{m,n}\) (the complete bipartite graph with \(m\) left vertices, and \(n\) right vertices), which we now briefly recall (see [3, SS6.2.2] for more details). 
The vertex \((e_{i},e_{j})\) in a simplex \(S\subseteq\Delta^{r-1}\times\Delta^{n-1}\) corresponds to the edge between left vertex \(i\) and right vertex \(j\) in the bipartite graph. Thus, a subset of vertices \(A\subseteq\Delta^{r-1}\times\Delta^{n-1}\) corresponds to a subgraph of \(K_{m,n}\). **Example 3.3** (Simplex-Forest Correspondence for \(m=n=2\)).: The product of two \(1\)-simplices (i.e. line segments) is a square; the corresponding bipartite graph has two left vertices and two right vertices. Both are illustrated in fig. 7. The vertex \((e_{i},e_{j})\) in the simplex corresponds to the edge \((l_{i},r_{j})\) in the bipartite graph. For example, the top left vertex of the shaded gray simplex, \((e_{1},e_{2})\), corresponds to the edge \((l_{1},r_{2})\). The shaded simplex is full-dimensional, so it corresponds to a spanning tree of \(K_{2,2}\) (see lemma 3.4). **Lemma 3.4** (Lemma 6.2.8 in [3]).: _Let \(A\) be a subset of the vertices of \(\Delta^{m-1}\times\Delta^{n-1}\). Then,_ 1. \(\operatorname{conv}(A)\) _is a simplex if and only if the corresponding subgraph of_ \(K_{m,n}\) _is a forest._ 2. \(\operatorname{conv}(A)\) _is full dimensional if and only if the corresponding subgraph of_ \(K_{m,n}\) _is spanning and connected._ Proof of Theorem 3.2.: Let \(S\) be a simplex in \(\Delta^{r-1}\times\Delta^{n-1}\), and let \(F\) be the corresponding forest in \(K_{r,n}\). Assume that \(S\cap\operatorname{int}\left(\Delta^{r-1}\times\Delta^{n-1}\right)\neq\emptyset\) (so \(F\) is a spanning forest). A point \((\mathbf{p},\mathbf{q})\in\mathbb{R}^{m}\times\mathbb{R}^{n}\) lies in \(S\) if it can be written as a convex combination of the vertices of \(S\). In terms of the forest \(F\), \((\mathbf{p},\mathbf{q})\) lies in \(S\) if there exist \(\lambda(e)>0\) for each edge \(e\in E(F)\) such that the sum of edge weights on any left vertex adds up to the corresponding \(\mathbf{p}\) coordinate, and the sum of edge weights on any right vertex adds up to the corresponding \(\mathbf{q}\) coordinate. Let \(r(e)\) be the node on the right side connected to \(e\), and let \(\ell(e)\) be the node on the left side connected to \(e\). The choice of \(\lambda\)'s in (15) leads to a valid choice of weights \(w_{1},\ldots,w_{m}\) (given in (16)) so that \(S\) contains the weighted barycenter. \[\lambda(e):=\frac{1}{n\cdot\deg r(e)}. \tag{15}\] \[w_{i}:=\sum_{e\text{ s.t. }\ell(e)=i}\lambda(e). \tag{16}\] The equations in (17) show that the weights on any right node sum to \(\frac{1}{n}\) (since \(F\) is spanning, every vertex has at least one edge); by definition, the weights on the \(i\)th left node sum to \(w_{i}\). It follows that \(\mathbf{b}=(w_{1},\ldots,w_{m},\frac{1}{n}1)\) lies in the relative interior of \(S\). \[\sum_{r(e)=j}\frac{1}{n\cdot\deg(j)}=\frac{1}{n}\sum_{r(e)=j}\frac{1}{\deg(j)}=\frac{1}{n}\deg(j)\frac{1}{\deg(j)}=\frac{1}{n}. \tag{17}\] Moreover, \(w_{1},\ldots,w_{m}\) is a valid choice of weights for the Fermat-Weber problem. The weights \(w_{i}\) are positive because \(F\) is spanning, so the sum in (16) is never empty; the equations in (18) show that the \(w_{i}\) sum to one.
\[\sum_{i=1}^{m}w_{i}=\sum_{e}\lambda(e)=\sum_{j=1}^{n}\sum_{r(e)=j}\frac{1}{n \cdot\deg(j)}=\sum_{j=1}^{n}\frac{1}{n}\sum_{r(e)=j}\frac{1}{\deg(j)}=\sum_{ j=1}^{n}\frac{1}{n}=n\frac{1}{n}=1.\qed \tag{18}\] **Corollary 3.5**.: _Given a cell \(T\) in the tropical polytope \(\operatorname{tconv}(\mathbf{v}_{1},\ldots,\mathbf{v}_{m})\), there is a choice of weights \(w_{1},\ldots,w_{m}\) so that \(T\) is the set of weighted tropical Fermat-Weber points for \(\mathbf{v}_{1},\ldots,\mathbf{v}_{m}\) with weights \(w_{1},\ldots,w_{m}\)._ Figure 7. Vertices in the product of simplices (left) correspond to the color-coded edges of the bipartite graph (right). ### Acknowledgements We are grateful to David Speyer and Michael Joswig for helpful conversations. This work was started at the "Algebra of phylogenetic networks" workshop held at the University of Hawai'i at Manoa from May 23 - 27, 2022 which was supported by the National Science Foundation under grant DMS-1945584. The first author was supported by National Science Foundation Graduate Research Fellowship under Grant No. DGE-1841052, and by the National Science Foundation under Grant No. 1855135.
2307.02285
Monolithic atom interferometry
Atom and, more recently, molecule interferometers are used in fundamental research and industrial applications. Most atom interferometers rely on gratings made from laser beams, which can provide high precision but cannot reach very short wavelengths and require complex laser systems to function. Contrary to this, simple monolithic interferometers cut from single crystals offer (sub) nano-meter wavelengths with an extreme level of stability and robustness. Such devices have been conceived and demonstrated several decades ago for neutrons and electrons. Here, we propose a monolithic design for a thermal-beam molecule interferometer based on (quantum) reflection. We show, as an example, how a reflective, monolithic interferometer (Mach-Zehnder type) can be realised for a helium beam using Si(111)-H(1x1) surfaces, which have previously been demonstrated to act as very robust and stable diffractive mirrors for neutral helium atoms.
Johannes Fiedler, Kim Lefmann, Wolf von Klitzing, Bodil Holst
2023-07-05T13:39:09Z
http://arxiv.org/abs/2307.02285v1
# Monolithic atom interferometry ###### Abstract Atom and, more recently, molecule interferometers are used in fundamental research and industrial applications. Most atom interferometers rely on gratings made from laser beams, which can provide high precision but cannot reach very short wavelengths and require complex laser systems to function. Contrary to this, simple monolithic interferometers cut from single crystals offer (sub) nano-meter wavelengths with an extreme level of stability and robustness. Such devices have been conceived and demonstrated several decades ago for neutrons and electrons. Here, we propose a monolithic design for a thermal-beam molecule interferometer based on (quantum) reflection. We show, as an example, how a reflective, monolithic interferometer (Mach-Zehnder type) can be realised for a helium beam using Si(111)-H(1\(\times\)1) surfaces, which have previously been demonstrated to act as very robust and stable diffractive mirrors for neutral helium atoms. ## 1 Introduction The field of atom interferometry has expanded enormously over the last few decades. Atom interferometers are used in various applications, from magnetic and gravity sensing [1, 2], quantum metrology [3] to atomic clocks [4]. They may even be used as dark matter and gravitational wave detectors [5] also in space [6, 7]. Compact, portable atom gravimeters for prospecting, oil survey and geophysical investigations have recently become commercially available [8]. Atom interferometers will also be useful as accelerometers for sub-sea navigation in submarines and, more recently, underwater drones [9, 2]. This, however, will require very compact solutions, which are not presently available. Atom interferometers use either cold atoms (including Bose-Einstein Condensates) [10] or thermal atoms beams [11], and more recently hot thermal vapours [12]. Most optical interferometers have, by now, been realised as atom interferometers, including Young's double slit, Mach-Zehnder, Talbot-Lau, Ramsey-Borde and Sagnac interferometers. Historically, Young's double slit makes the simplest atom interferometer. The beam is split into two paths by passing through a double slit, and the interference pattern is observed on a screen further down the beam path. It was realised for atoms for the first time in 1991 using metastable helium atoms passing through a thin gold foil [13]. The simplest split-path atom interferometer is arguably the Mach-Zehnder interferometer. It exploits the de Broglie wavelength of the atoms in a diffraction grating configuration with split beam paths. The first Mach-Zehnder atom interferometer was realised in 1991 [11] using a sodium beam and solid transmission diffraction gratings. Later in 1995, it was developed further by using metastable neon and argon and transmission diffraction gratings made of standing light waves [14, 15], in 2002 using ground-state lithium also with light-wave gratings [16] and later again using neutral helium with solid gratings. Results from the last mentioned instrument were never published, but it is mentioned in a review paper from 2009 [17]. In the Talbot-Lau interferometer, the self-imaging property of a grating is exploited in near-field diffraction. The atom paths are not truly separated; therefore, this type of interferometer has been used extensively for experiments with heavy molecules where the de-Broglie wavelength is very small. The first Talbot-Lau atom interferometer was realised in 1994 [18]. 
Where the Mach-Zehnder interferometer and the Talbot-Lau interferometers are adapted from light optics, the Ramsey-Borde interferometer, first realised in 1949 by Norman Ramsey [19], can only be used for atoms: the principle is diffraction by absorption of a single photon on a weakly allowed transition to split the wave package. In the 1980th, this interferometer has been further developed by Christian Borde by using atomic recoil to create a beam splitter [20]. This interferometer type is currently the standard for high-precision measurements, such as atomic clocks. In light optics, the Sagnac interferometer, also called ring interferometer, relies on a beamsplitter mirror to create two beams that travel equidistant paths in opposite directions through a ring structure guided by reflective mirrors. The two beams meet at the starting point, where they interfere and are made to exit the ring. The first atom interferometer using the Sagnac effect was realised in 1991 using a Ramsey-Borde configuration of a state-labelled atom interferometer based on single-photon transitions, with a beam of atoms traversing two pairs of travelling wave fields. The laser fields within each pair are separated by a distance \(D\), while the two pairs are separated by \(d\) and are counter-propagating with respect to each other [21]. By rotating the interferometer, the counter-propagating beams collect different phases along their optical paths leading to an interference pattern on the screen. Such a configuration provides an absolute measurement of the rotational speed. The atomic structure of a single crystal offers a simple periodic diffractive grating. Thus, it could produce many different types of interferometers, where the monolithic construction guarantees extreme stability. Interferometers based on transmission through solid slabs of material have been demonstrated, e.g. X-rays [22, 23], neutrons [24] and electrons [25]. Unfortunately, these techniques are inapplicable to atoms, which interact too strongly with any solid material they travel through. Monolithic atom interferometers have been used widely in neutron scattering experiments observing gravitationally induced interference (in transmission) [26] and the quantised states of neutrons in the presence of gravitational fields with perfectly reflecting mirrors [27]. Neutrons are sensitive to external forces and, thus, suitable candidates for quantum sensing. However, such experiments require an extensive, costly infrastructure to create, control and detect the neutron beam. This also applies to cold atom interferometers. Thermal atom beams are easier to create and couple more robust to external fields due to the higher mass of the atoms. A further advantage of thermal atom interferometers is that they can operate continuously, dramatically decreasing the temporal resolutions. Here, we propose a novel interferometer based on the reflection of atoms on monolithic single-crystal structures. The basic operation principle is depicted in Fig. 1: an incident beam of atoms is reflected by the crystal lattice (A) into two components, which impinge onto a second mirror and recombine on the third reflection. In the past, atoms had been neglected, largely because the atoms most commonly used in interferometry (Rubidium, Rb [28]; Caesium, Cs [29]; Argon, Ar [30]; Sodium, Na [31]; Potassium, K [32]) will stick to surfaces under most conditions. 
Similarly, metastable atoms, which have also been used for interferometry (Argon, Ar [33]; Helium, He [13]), will decay upon impingement. A further practical challenge for a reflection-based interferometer is the contamination of the reflecting surface, which distorts the diffraction. For example, all metal surfaces will be covered in physisorbed molecules within hours, even in ultra-high vacuum [34, 35, 36]. Noble gasses, including ground-state helium, H\({}_{2}\), HCl and other molecules, are known to scatter from various surfaces over a broad temperature range without sticking to them [37, 38, 39]. Over the last years, focusing mirrors for neutral, ground-state helium have been developed for neutral helium microscopes [40]. An important requirement for these mirrors is that they must remain stable in a vacuum for months. One of the solutions implemented was Si(111)-H\((1\times 1)\)[41]. Detailed experiments on He and H\({}_{2}\) scattering were performed [42, 43] and the interaction potential between Helium and Si(111)-H\((1\times 1)\) calculated [43]. This interaction potential was then used to obtain the intensity of the different diffraction peaks for a range of conditions [44]. The advantage of the Si(111)-H\((1\times 1)\) surface from an experimental point of view is that it can be prepared chemically by dipping the Si(111) crystal in an HF solution [45]. This means a monolithic configuration with two reflecting surfaces facing each other can be fabricated at any spacing. The additional advantage of using the Si(111)-H\((1\times 1)\) surface is the small lattice constant of \(a_{\rm S}=3.383\) A [42] which, together with the wavelength of, as an example, helium atoms in a room temperature beam: \(\lambda_{\rm dB}=0.55\), ensures a very big wave-package separation. Recent matter-wave interferometers typically split the wave package over a few milliradian [46, 47, 33, 48]. In contrast, using the room-temperature helium beam described above, the proposed new interferometer splits the matter wave over 0.5 radians. The atom interferometer we introduce here uses reflective atom-surface diffraction as a beam splitter. Further reflections from a parallel surface yield the recombination of the wave and thus the interference; see Fig. 1. We present a theoretical model determining the expected interference patterns and apply the model to the interference of helium atoms using Si(111)-H\((1\times 1)\) surfaces, where we concentrate on describing the general principles by describing an ideal system with a perfectly coherent and monochromatic beam and an experimentally based model for the diffraction probabilities. We have chosen an experimentally realisable parameter set providing all possible superpositions occurring in such interferometer: single-path transmission, double-path superposition with vanishing phases and multipath interference. Finally, we discuss how a reflective interferometer based on quantum reflection can be addressed. The paper finishes with a conclusion and outlook on future work. ## 2 The reflective interferometer ### Geometric arrangement The general arrangement of a monolithic reflection interferometer is depicted in Fig. 1. A slab is cut into a U-shaped monolith to form two parallel planar surfaces with a distance \(s\) being sufficiently large to achieve propagating waves inside the interferometer. A particle beam will be diffracted minimally three times at points A, B and C. 
The beam will be split at point A, and each part will be reflected at point B and recombined in point C, where they interfere. In detail: a particle beam is sent via an incidence angle \(\alpha\) towards one surface. It is reflectively split in point A into a range of diffraction orders determined by the incident beam angle \(\alpha\), the periodic surface structure described by the lattice spacing \(a_{\mathrm{S}}\), and the beam wavelength \(\lambda\) through the well-known reciprocal lattice equation [49]. We pick two orders, the first one with the reflection angle \(\beta\) \[\sin\beta=\sin\alpha+\frac{n_{1}\lambda}{a_{\mathrm{S}}}\,, \tag{1}\] with an integer \(n_{i}\in\mathbb{Z}\) (numerating the diffraction order), and the second one with reflection angle \(\gamma\) \[\sin\gamma=\sin\alpha+\frac{n_{1^{\prime}}\lambda}{a_{\mathrm{S}}}\,. \tag{2}\] At point A, the two selected diffraction orders propagate towards points B and B\({}^{\prime}\), where they are reflected towards point C and recombine. Point B denotes the reflection point one diffraction order from point A; thus, the corresponding incidence angle is \(\beta\). To satisfy the recombination of the beam, the reflection angle \(\delta\) has to be of a non-zeroth diffraction order expressed as \[\sin\delta=\sin\beta+\frac{n_{2}\lambda}{a_{\mathrm{S}}}=\sin\alpha+\frac{(n_{1}+n_{2})\lambda}{a_{\mathrm{S}}}\,. \tag{3}\] Analogously, the reflection at point B\({}^{\prime}\) can be determined by \[\sin\varepsilon=\sin\gamma+\frac{n_{2^{\prime}}\lambda}{a_{\mathrm{S}}}=\sin\alpha+\frac{(n_{1^{\prime}}+n_{2^{\prime}})\,\lambda}{a_{\mathrm{S}}}\,. \tag{4}\] Figure 1: Sketch of the optical paths within a monolithic, reflection interferometer: the beam is reflected three times between the surfaces of two parallel slabs (grey area) separated by the distance \(s\). The incoming beam is reflected at point A with an incidence angle \(\alpha\). Two different diffraction orders are selected: Reflection towards point B with diffraction angle \(\beta\) and reflection towards B\({}^{\prime}\) with diffraction angle \(\gamma\). At point B, the incidence angle is the same as the outgoing angle in point A: \(\beta\). Part of this beam is reflected towards point C with diffraction angle \(\delta\). At point B\({}^{\prime}\), the incidence angle is given by \(\gamma\) due to the reflection at point A and reflected at the diffraction angle \(\varepsilon\). In point C, the incoming waves with incidence angles \(\delta\) and \(\varepsilon\) are recombined, leaving the slab with a reflection angle \(\zeta\). To satisfy the recombination of the beam at point C, the diffraction of the incoming beams needs to occur under the same diffraction angle, which can be described mathematically by the relation
Otherwise, they would be reflected without any spatial overlap to interfere directly. If they are reflected into parallel beams from different spots, they will interfere in the far field with a phase shift proportional to the spatial difference between both points. To achieve interference also in the optical near-field regime for the entire interferometer, the condition reads \[\tan\beta+\tan\delta=\tan\gamma+\tan\varepsilon\,. \tag{8}\] Finally, we sum up six parameters characterising a reflective atom interferometer which have to satisfy the conditions (7) and (8). The latter can either be used for determining the incidence angle \(\alpha\) or by rewriting the equation \[\tan\left(c+N_{1}\right)+\tan\left(c+N_{1}+N_{2}\right)-\tan\left(c+N_{1^{ \prime}}\right)-\tan\left(c+N_{1^{\prime}}+N_{2^{\prime}}\right)=0\,, \tag{9}\] with \(c=\cos\alpha\) and \(N_{i}=n_{i}\lambda/a_{\rm S}\), one finds the following conditions leading to an \(\alpha\)-independent solution: \[n_{1}=n_{1^{\prime}}+n_{2^{\prime}}\wedge n_{1^{\prime}}=n_{1}+n_{2}\,. \tag{10}\] The interference pattern is due to the phase shift along the different optical paths ABC and AB\({}^{\prime}\)C. The path lengths can be determined via these angles for the path along point B \[b=s\left(\frac{1}{\cos\beta}+\frac{1}{\cos\delta}\right)\,, \tag{11}\] and along the point B\({}^{\prime}\) \[b^{\prime}=s\left(\frac{1}{\cos\gamma}+\frac{1}{\cos\varepsilon}\right)\,. \tag{12}\] The interference occurs via the superposition of two waves with the same wave vector \(\mathbf{k}\), but are phase shifted with respect to the respective path lengths, \(b-b^{\prime}\). Hence, the phase shifts between the different paths are given by \[\varphi=k(b-b^{\prime})\,. \tag{13}\] It can be observed in Eqs. (11) and (12) that the path lengths are proportional to the slab separation \(s\) and, thus, \(s\) should be tuned with respect to the wave vector to maximise the phase shift between both interfering beams. Figure 2 illustrates the positions of the different diffraction for different incidence angles \(\alpha\) for a particular interferometer configuration. It can be seen that the diffraction orders are strongly separated. All lines are discontinued due to the finite length of the interferometer, which leads to some beams escaping the interferometer. These particles will likely hit the surface and fall into the interferometer; thus, they will not affect the interference patterns. ### Reflection coefficients for the different beam paths inside the interferometer In the last section, the conditions for interference were obtained. We now consider the intensity distribution in the interference signal, described via a reflection function. This reflection function depends on the incidence and diffraction angle \(\vartheta_{1}\) and \(\vartheta_{2}\), respectively. We model the reflected beam via a Gaussian intensity distribution. Consequently, each diffraction order has a Gaussian profile which we normalise to the real-valued probability of each diffraction order \(\rho_{n}\) \[r(\vartheta_{1},\vartheta_{2})=\sum_{n}\rho_{n}\mathrm{e}^{-\frac{\left( \vartheta_{2}+\vartheta_{n}\right)^{2}}{2\sigma_{n}^{2}}}\,, \tag{14}\] with the width of the diffracted signal \(\sigma_{n}\) and the position of the diffracted beam \(\theta_{n}\) determined by Eq. (1). The widths depend on the incidence angle and wavelength, \(\sigma_{n}=\sigma_{n}(\lambda,\vartheta_{1})\). 
These impacts are negligible for the surface diffraction considered in this manuscript due to the overall weak reflection signal [43]. The reflection coefficient (14) only includes elastic scattering, for which the wavelength of the outgoing wave is the same as that of the incoming wave, \(\lambda_{\rm inc}=\lambda_{\rm out}\). Thus, the total reflected signal is smaller than one, \(\int{\rm d}\vartheta_{2}\,r(\vartheta_{1},\vartheta_{2})<1\). In general, there are five lengths involved in such an interferometer: the wavelength (\(\lambda_{\rm dB}\)), the dimensions of the interferometer (length \(d\) and slab separation \(s\)) and the free-space propagation lengths (source to interferometer \(L_{1}\) and interferometer to detector \(L_{2}\)). Typically, these dimensions are on different length scales \(\lambda_{\rm dB}\ll d,s<L_{1},L_{2}\). This consideration allows for the separation of length scales. Consequently, each particle will only interfere with itself inside the same optical path in the interferometer. Thus, we can treat each path inside the interferometer separately, and the collected diffraction image will follow from the Gaussian beam envelope. Figure 3: Optical paths in a monolithic reflective atom interferometer: a slab cut into a monolithic crystal of length \(d\) (50 mm) and width \(s\) (5 mm). A Helium beam with an incidence angle of \(83\) deg (dark blue line) enters the interferometer. It is diffracted at the hydrogen-passivated surfaces with a lattice constant \(a_{\rm S}=3.383\,\)Å. The diffracted orders are reflected two more times until they leave the interferometer. It can be seen that the third-order (\(-3=n_{1}+n_{2}+n_{3}\)) diffraction beam will not show any interference (blue dashed line at \(30.32\) deg), see table 1, the second-order beam at \(41.87\) deg (orange lines) will not show any interference due to equal optical path lengths; the diffraction at \(56.10\) deg (green lines), the zeroth order at \(83.00\) deg (red lines) will be measured separately in the near-field regime, whereas they will interfere in the far-field leading to the interference patterns depicted in Fig. 4. Figure 2: Distribution of the diffraction orders on the screen \(\varphi\) depending on the incidence angle \(\alpha\) for a monolithic atom interferometer built of silicon with hydrogen-passivated surfaces, which are separated by 5 mm and have an extension of 50 mm. The considered wavelength was \(\lambda=0.55\,\)Å. The purple area describes the dark regions where no particle will appear. For each incidence angle \(\alpha\), the maximum population of the diffraction order is marked in yellow. The remaining peak intensities are plotted relative to the maximum intensity according to the colour scale. The partial waves will experience a different phase shift due to the optical path (13). Thus, we can describe the reflection properties of the entire interferometer with a single modified reflection coefficient \[r_{\mathrm{inter}}(\vartheta_{1},\vartheta_{2})=\sum_{n_{1}n_{2}n_{3}}\rho_{n_{1}}\rho_{n_{2}}\rho_{n_{3}}f_{n_{1}n_{2}n_{3}}\mathrm{e}^{\mathrm{i}kb_{n_{1}n_{2}}}\mathrm{e}^{-\frac{[\vartheta_{2}-\vartheta_{n_{1}n_{2}n_{3}}(\vartheta_{1})]^{2}}{2\sigma^{2}}}\,, \tag{15}\] with the wave vector of the matter wave, \(k=2\pi/\lambda\), and the indicator function \(f_{n_{1}n_{2}n_{3}}\) factoring in the interferometer's geometry (which determines whether the beam can pass through the interferometer or not).
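Before specialising to helium on Si(111)-H\((1\times 1)\), it may help to see the geometric ingredients of Eq. (15) evaluated numerically. The sketch below (added for illustration, not part of the original text) composes the grating equation (1) three times to obtain the exit angle, cf. Eq. (16) below, and evaluates the optical path length of Eq. (11) for the parameters used later in the text (\(\lambda=0.55\) Å, \(a_{\rm S}=3.383\) Å, incidence angle 83 deg, slab separation \(s=5\) mm); the function names are illustrative.

```python
import numpy as np

# Parameters quoted in the text: lambda = 0.55 Angstrom, a_S = 3.383 Angstrom,
# incidence angle 83 deg, slab separation s = 5 mm.
lam, a_S = 0.55, 3.383            # Angstrom; only the ratio lam/a_S enters the angles
alpha = np.deg2rad(83.0)
s = 5e-3                          # m

def diffract(theta_in, n):
    """Grating equation (1): sin(theta_out) = sin(theta_in) + n*lam/a_S; None if the order is evanescent."""
    sin_out = np.sin(theta_in) + n * lam / a_S
    return np.arcsin(sin_out) if abs(sin_out) <= 1 else None

def exit_angle_and_path(n1, n2, n3):
    """Angles after the three reflections (Eqs. 1, 3, 5) and the optical path length of Eq. (11)."""
    beta = diffract(alpha, n1)
    delta = diffract(beta, n2) if beta is not None else None
    zeta = diffract(delta, n3) if delta is not None else None
    if zeta is None:
        return None
    b = s * (1.0 / np.cos(beta) + 1.0 / np.cos(delta))   # Eq. (11)
    return np.rad2deg(zeta), b

# Three of the paths that exit in the 83-deg channel (compare the corresponding rows of Table 1).
for orders in [(0, -1, 1), (-1, 1, 0), (-1, 0, 1)]:
    zeta_deg, b = exit_angle_and_path(*orders)
    print(orders, f"exit {zeta_deg:.2f} deg, path b = {100 * b:.2f} cm")
# The relative phase between two such paths is k*(b - b') with k = 2*pi/lambda, cf. Eq. (13).
```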
The beam spread of all diffraction orders will usually be the same for a monochromatic wave, \(\sigma=\sigma_{n}\) for all diffraction orders \(n\). Due to the tilted reflective surfaces with respect to the beam incidence, the detected spots will be slightly asymmetric, which we neglect for consideration in this manuscript. The position of the diffraction order is given by \(\theta_{n_{1}n_{2}n_{3}}(\vartheta_{1})\) which is the three-times composition of Eq. (1) simplifying to \[\theta_{n_{1}n_{2}n_{3}}(\vartheta_{1})=\arcsin\left[\sin\vartheta_{1}+\frac{ \left(n_{1}+n_{2}+n_{3}\right)\lambda}{a_{\mathrm{S}}}\right]\,. \tag{16}\] ### The monolithic interferometer for He and Si(111)-H\((1\times 1)\) Let us consider a helium beam with de-Broglie wavelength \(\lambda_{\mathrm{dB}}=0.55\mathrm{\AA}\) and a beam spread of 1 mrad at a distance of 1 m from the interferometer (propagation length \(L_{1}=1\,\mathrm{m}\)). This corresponds to a 1 mm beam waist (\(w=1\,\mathrm{mm}\)) as the beam enters the interferometer. The lattice spacing of Si(111)-H(1\(\times\)1) is \(a_{\mathrm{S}}=3.383\mathrm{\AA}\)[42]. We consider an incidence angle of 83 deg and the reflection coefficient (14) with the amplitudes \(\rho_{0}=0.06\), \(\rho_{\pm 1}=0.03\) and \(\rho_{\pm 2}=0.015\). These scattering values correspond to the experimentally obtained data for a beam with a wavelength of \(0.6\,\mathrm{\AA}\) and an incidence angle of \(52\) deg reported in Ref. [42]. We changed the incidence angle because the result would be restricted to the zeroth order; see Fig. 2. Table 1 shows the ratio of transmitted atoms into each diffraction channel. The reflection coefficients influence only the amplitude of the interference patterns, not the position of the peaks. Thus, the impact of the correct scattering amplitudes is neglectable. We have chosen the parameters to demonstrate several effects: the single-beam transmission, the two-path superposition and the multi-path interference. Here we restrict our considerations to the zeroth, first and second diffraction orders. Furthermore, we consider the reflecting plates to be 50 mm long and 5 mm separated from each other. The optical paths for this scenario are depicted in Fig. 3. It can be seen that the third-order diffraction beam (at \(30.32\) deg, blue line) consists of a single beam. It thus will not show any interference; the second-order beam (at \(41.87\) deg, orange line) is the superposition of two paths, as described in Sec. 2.1, but with equal optical paths which again will not interfere; and the first and zeroth order will show two separate signals each, that will lead to interference in the far field. Due to the separation of the length scales and the fact that the atoms interfere with themselves and not with each other, we describe each interference pattern via a phase-shifted Gaussian wave in analogy to the Michelson interferometer. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} Angle \(\varphi\) (deg) & \(n_{1}\) & \(\beta(n_{1})\) & \(n_{2}\) & \(n_{3}\) & path \(b\) (cm) & Ampl. 
\(a\) (\%) \\ \hline 30.32 & 0 & 1.4486 & -1 & -2 & 5.00 & 0.0270 \\ 41.87 & -1 & 0.9791 & 1 & -2 & 5.00 & 0.0135 \\ 41.87 & 0 & 1.4486 & -1 & -1 & 5.00 & 0.0540 \\ 56.10 & -2 & 0.7307 & 2 & -1 & 4.77 & 0.0068 \\ 56.10 & 0 & 1.4486 & -2 & 1 & 4.77 & 0.0270 \\ 56.10 & -1 & 0.9791 & 1 & -1 & 5.00 & 0.0270 \\ 56.10 & 0 & 1.4486 & -1 & 0 & 5.00 & 0.1080 \\ 83.00 & -2 & 0.7307 & 1 & 1 & 1.57 & 0.0135 \\ 83.00 & -2 & 0.7307 & 2 & 0 & 4.77 & 0.0135 \\ 83.00 & 0 & 1.4486 & -2 & 2 & 4.77 & 0.0135 \\ 83.00 & -1 & 0.9791 & 0 & 1 & 1.79 & 0.0540 \\ 83.00 & -1 & 0.9791 & 1 & 0 & 5.00 & 0.0540 \\ 83.00 & 0 & 1.4486 & -1 & 1 & 5.00 & 0.0540 \\ 83.00 & -1 & 0.9791 & -1 & 2 & 1.57 & 0.0135 \\ \end{tabular} \end{table} Table 1: Overview of beams expected from an 83 deg incidence angle reflected into the diffraction angle \(\varphi\) with the diffraction orders \(n_{1}\), \(n_{2}\) and \(n_{3}\) with the first diffraction angle \(\beta\) in radians. The last columns describe the optical path length \(b\) and amplitudes \(a\) being the ratio of transmitted atoms into each diffraction channel (%). Thus, the interference pattern is described by the superposition of phase-shifted Gaussian waves \[I(\varphi)\propto\left|\sum_{n}a_{n}\mathrm{e}^{\mathrm{i}\frac{k\sin n}{2}b_{n}} \right|^{2}\mathrm{e}^{-\frac{2L_{\mathrm{B}}^{2}\sin^{2}\varphi}{w^{2}}}\,, \tag{17}\] with the amplitudes \(a_{n}=\rho_{n_{1}}\rho_{n_{2}}\rho_{n_{3}}\) and the optical path lengths \(b_{n}\), which are given in table 1. The widths of the diffraction orders \(\sigma\) are small compared to the width of the Gaussian envelope, \(L_{2}\sin\sigma\ll w\), and, hence, can be neglected. It can be seen in Eq. (17) that the interference fringes are determined by the wave vector, \(k=2\pi/\lambda_{\mathrm{dB}}\). Thus, increasing the wavelength, either by increasing the particle's mass or velocity, will reduce the spacing between the interference fringes. The resulting interference patterns are plotted in Fig. 4. One can see that the diffraction at \(30.32\) deg and \(41.87\) deg will not show any interference features due to the equal optical path lengths of both optical paths. The remaining two spots will show interference effects with a contrast of 48.5% for the spot at 56.10 deg and 84.1% for the spot at 83.00 deg. The transmission rates of all channels can be found in table 1: 0.027% of the atoms will be diffracted under the angle of \(30.32\) deg, 0.0675% under \(41.87\) deg, 0.1688% under \(56.10\) deg, 0.216% under \(83\) deg. The remaining particles will not leave the interferometer. The intensity of a typical helium beam is so big [50] that a signal fraction of \(10^{-4}\) can easily be detected. The velocity spread will be the limiting quantity to measure the interference patterns for the helium atom interferometry configuration depicted in Fig. 4. The velocity spread of a supersonic helium beam depends on the beam temperature, the nozzle diameter and the reservoir pressure. This has been treated extensively in the literature; see, for example, Ref. [51]. A velocity spread causes two different effects: (i) a broadening of the interference fringes and (ii) a spatial movement of the entire interference pattern, as illustrated in Fig. 5. Finally, to observe interference, the velocity spread has to be sufficiently small to not cause a washing out of the interference fringes. Figure 5 illustrates the positions of the diffraction order for different wavelengths of the incoming beam with a fixed incidence angle of \(83\) deg. 
It can be observed that the zeroth order will stay constant. The remaining orders strongly spread out with increasing wavelength. As in Fig. 2, the lines are not continuous due to the finite size of the interferometer. It can be seen in table 1 that the interferometer splits the wave package at the first diffraction point over \(\approx 0.71\) rad. ### Quantum reflection interferometer Quantum reflection occurs on the attractive (outer) part of the atom-surface interaction potential [52] in contrast to surface scattering, where the reflection occurs on the repulsive (inner) part of the interaction potential [53; 54]. It is called quantum reflection because, classically, reflection cannot occur with an attractive force interaction potential. Quantum reflection has the very big advantage that an extensive range of atoms and small, few-atomic molecules that would stick under surface diffraction conditions display quantum reflection. The disadvantage is that quantum reflection requires small perpendicular wave vectors. This means that, for a given wavelength, the spatial extension of a reflective interferometer must be larger than in the surface scattering configuration for the separated beams to recombine. Figure 4: Far-field diffraction patterns of each spot at 30.32 deg (blue line), 41.87 deg (orange line), 56.10 deg (green line) and 83.00 deg (red line) for a Helium beam with wavelength \(\lambda_{\mathrm{dB}}=0.55\) with an incidence angle of 83 deg. The vertical black dashed line marks the regime for the considered scenario. Quantum reflection is less sensitive than surface scattering to defects and surface contamination because it occurs at larger distances from the surface [55]. Very large specular reflection coefficients of the order of 50% [56] up to 90% [57] have been measured. Diffraction via quantum reflection was recently demonstrated experimentally [58] using helium dimers and trimers with periodically striped surfaces with micron-sized structures. The paper includes a comparison of the experimental result with scattering theory based on the diffraction angle distribution (1), reported in Ref. [59]. There is reasonable agreement between theory and experiment. ## 3 Conclusions and Future Work This paper presents the first proposal for a reflective interferometer for atoms and molecules. We present calculations for a monolithic configuration based on experimental scattering results for a room-temperature helium beam from Si(111)-H(1\(\times\)1), showing that a beam splitting of more than 0.5 radians is achievable. Furthermore, we argue that quantum reflection diffraction is a viable option for extending the beams and surfaces that can be used and potentially increase the signal intensity. The interference of larger and complex molecules can be achieved by using different interaction potentials, such as evanescent fields [53]. A reflective atom or molecule interferometer, particularly in a monolithic configuration, opens several possibilities for applications, for instance, as an accelerometer, in investigating the coherence of matters near dielectric surfaces, as a continuous velocity selector etc. The next obvious first step is to do a demonstration experiment of the new interferometer with a helium beam and to do detailed designs of quantum reflection setups. The latter will require the calculation of quantum (diffraction) reflection coefficients for a range of realistic system configurations. ## Acknowledgments J.F. 
gratefully acknowledges support from the European Union (H2020-MSCA-IF-2020, grant number: 101031712).
2303.10376
Barrow entropic Quintessence and Dilation dark energy Models with Generalized HDE cut-off
In the present work, we have analyzed the behaviors of extension of generalized Barrow holographic dark energy(`BHDE'). A ``generalized BHDE model based on the particle and the future horizon using infrared cut-off" was proposed by Nojiri et al. (2022). In this work, we have reviewed the generalized BHDE extension under the assumption of a generalized HDE cut-off. Using a scale factor of the form $a = k t^m$, the dynamics of the cosmos have been discussed through graphic demonstration. By applying the ``open-source emcee Python package", the values of the free parameters $k$ and $m$ are estimated on 57 OHD points by the Markov Chain Monte Carlo (MCMC) technique. We have examined the behavior of the equation of state (EoS) parameter, $( p_{de})$, and dark energy density $(\rho_{de})$. We have also discussed the equivalence of holographic dark energy (DE) with the Barrow entropic DE and its extension. Also, we have explained quintessence and dilation dark energy models in the context of Barrow entropic DE.
Priyanka Garg, Vinod Kumar Bhardwaj, Anirudh Pradhan
2023-03-18T09:52:21Z
http://arxiv.org/abs/2303.10376v1
# **Barrow entropic Quintessence and Dilation dark energy Models with Generalized HDE cut-off** ###### Abstract In the present work, we have analyzed the behavior of the extension of generalized Barrow holographic dark energy ('BHDE'). A "generalized BHDE model based on the particle and the future horizon using infrared cut-off" was proposed by Nojiri et al. (2022). In this work, we have reviewed the generalized BHDE extension under the assumption of a generalized HDE cut-off. Using a scale factor of the form \(a=kt^{m}\), the dynamics of the cosmos have been discussed through graphic demonstration. By applying the "open-source emcee Python package", the values of the free parameters \(k\) and \(m\) are estimated on 57 OHD points by the Markov Chain Monte Carlo (MCMC) technique. We have examined the behavior of the equation of state (EoS) parameter, (\(p_{de}\)), and dark energy density (\(\rho_{de}\)). We have also discussed the equivalence of holographic dark energy (DE) with the Barrow entropic DE and its extension. Also, we have explained quintessence and dilation dark energy models in the context of Barrow entropic DE. _Keywords_ : Generalized HDE cut-off; BHDE model; Quintessence model; Dilation model. PACS number: 98.80.-k, 98.80.Jk ## 1 Introduction During the last two decades, a number of observations, including type Ia supernovae, CMB radiation, large scale structure (LSS), the Sloan Digital Sky Survey (SDSS), the Wilkinson Microwave Anisotropy Probe (WMAP), and Planck observations [1, 2, 3, 4, 5, 6], have suggested that our universe is expanding with acceleration; this is attributed to some unknown exotic fluid known as dark energy (DE). The cosmological constant (\(\Lambda\)) is considered the most effective candidate for DE to explain the accelerated expansion of the universe [5]. Several models have been suggested to explain the nature of the cosmological constant [7, 8, 9, 10, 11]. Due to its direct connection with 'space-time', 'holographic dark energy (HDE)' has received much consideration. The vacuum energy's cosmic behavior is made clear by HDE. The existing cosmic acceleration is not found in the 'HDE models' with the 'Hubble' radius as 'IR cut-off', although it is seen in the models having the event horizon as cut-off [12]. Akhlaghi [13] described the HDE models with Granda-Oliver, Ricci scale, and future horizon cut-offs to explain the evolution and accelerated growth of the universe. The holographic dark energy model with Granda-Oliver cut-off has been examined by Ghaffari [14]. If the mass of the black hole is greater than the vacuum energy, "the horizon length L is considered as IR cutoff" in black hole thermodynamics [15]. In cosmology, the holographic principle is generally adopted to describe the dark energy (DE) epoch [16]. BHDE is one of the alternative forms of dark energy, which is based on the newly suggested Barrow entropy instead of the standard Bekenstein-Hawking (BH) entropy [17, 18, 19, 20]. "Saridakis et al. [21] have studied the generalized second rule of thermodynamics using the Barrow entropy on the horizon". Mamon et al. [22] have investigated the validity of "BHDE models" by taking the dynamical apparent horizon into account as the thermodynamic boundary. The Barrow entropy is also used by Saridakis [23] to present the modified cosmic model. The compatibility of the "BHDE models" with observational data has been shown by Anagnostopoulos et al. [24]. Various researchers [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36] have studied BHDE models in different contexts.
Since HDE models are based on 'holographic principle' instead of introducing a term in Lagrangian, they differ significantly from conventional DE models. Nojiri et al. [37] claimed that the BHDE model is equivalent to the generalized HDE model. The proposed generalized BHDE model depends on future and particle horizons by taking IR as a cut-off. In the direction of generalized entropies, few remarkable studies can be seen in refs. [38, 39, 40, 41]. Inspired by this, the authors in the present manuscript describe an extension of generalized BHDE by assuming a Generalized HDE cut-off. In this study, the authors analyzed the extension of Generalized BHDE by assuming generalized HDE cut-off. By considering the power law \(a=kt^{m}\), the cosmos dynamics have been discussed by graphical depiction. Applying the "open-source emcee Python package", the model's free parameters are estimated with 57 OHD points utilizing the "MCMC technique". The present study is organized as: In section 2 we have presented the Thermodynamics of space-time. We proposed the solution of the field equations with Generalized HDE-cutoff in Section 3. In section 4, we have explained the equivalence of generalized HDE with the extension of barrow entropic DE. In Section 5, we have discussed the power law cosmology. In Section 6, the methodology for estimation of the model's free parameters on the latest 57 OHD data points has been discussed. In Section 7, we discuss with Quintessence field model. We have explained the dilation field in Section 8. The concluding remarks are mentioned in Section 9. ## 2 Thermodynamics of space-time and cosmology Gravity thermodynamics is typically described by the "Bekenstein Hawking (BH) area law \(S_{BH}=A/(4G)\)". It is applicable to both the apparent horizon of the universe and the entropy of black-hole horizons. On the basis of non-extensive generalizations of the statistics of the horizon degrees of freedom or quantum gravitational deformations of the horizon geometry, a number of changes to entropy have been suggested. Tsallis [42], and Kaniadakis [43] entropies are two particular instances among them. The use of such entropies can be seen in [44, 45, 46, 47, 48]. Barrow [17] has developed a new generalized entropy based on a modified horizon supplied with a fractal structure. As "Barrow entropy" was developed for "black holes" but it can be used in a cosmic context according to the gravity-thermodynamic conjecture. In this approach, the Barrow entropy-driven corrections to the Friedmann equations in the Standard Model of Cosmology (SMC) are obtained. Additionally, the holographic principle can be used in conjunction with Barrow entropy to produce Barrow holographic dark energy [21, 23, 31]. As a result, one can apply observational data to the aforementioned structures to derive restrictions on the Barrow exponent \(\Delta\)[24, 53]. All of these investigates find that variations from the BH entropy as predicted are relatively small. The black hole entropy is expressed as \[S=A/4G,\ \ \ \ A=4\pi r_{H}^{2} \tag{1}\] Here, \(S\) stands for the Bekenstein-Hawking (BH) entropy and \(r_{H}\) represents the horizon's radius. The relationship between gravity and thermodynamics is defined in a lot of recent studies [50, 51]. The first law of thermodynamics may also be defined using the FLRW equations when the BH entropy and apparent horizon are considered as the "thermodynamics of space-time". 
"Barrow recently asserted that quantum-gravitational processes, which are inspired by the Covid-19 viral pictures, might be used to introduce the fractal and complicated aspects to the black-hole structure". The "Barrow entropy" is read as [20]: \[S=\frac{A_{0}}{4G}\left(\frac{A}{A_{0}}\right)^{1+\Delta}, \tag{2}\] where, \(A_{0}\) is a constant. For \(\Delta=0\), a quantum gravitational deformation exists and most fractal black hole structure is obtain for \(\Delta=1\). When the Barrow entropy is applied to cosmology, the Friedmann equations also transformed, and these transformations could be seen as a source of dark energy density [25, 26, 27, 28, 53, 55]. ## 3 Barrow Entropy with Generalized HDE cut-off We consider the "flat FLRW space-time metric" as: \[ds^{2}=-dt^{2}+a^{2}(t)\left[(dx^{1})^{2}+(dx^{2})^{2}+(dx^{3})^{2}\right]\, \tag{3}\] here, the scale factor \(a(t)\) is the function of time. The cosmic horizon radius is defined as: \[r_{H}=\frac{1}{\left(\alpha H^{2}+\beta\dot{H}\right)^{\frac{1}{2}}} \tag{4}\] where, \(H=\frac{\dot{a}}{a}\) is the Hubble parameter. The change in heat can be given by \[dQ = -dE=-\frac{4}{3}\pi\dot{\rho}r_{H}^{3}dt \tag{5}\] \[= -\frac{4}{3}\pi\Big{(}\alpha H^{2}+\beta\dot{H}\Big{)}^{\frac{-3}{ 2}}\dot{\rho}dt\] \[= 4\pi\left(\rho+p\right)\left(\alpha H^{2}+\beta\dot{H}\right)^{ \frac{-3}{2}}Hdt\] Utilizing first law of thermodynamics \(TdS=dQ\) and law of conservation \(\dot{\rho}+3\rho H+3pH=0\), we get \[T\frac{dS}{dt}=\frac{4\pi\left(\rho+p\right)}{\left(\alpha H^{2}+\beta\dot{H} \right)^{\frac{3}{2}}}H \tag{6}\] The above expression along with the Hawking temperature defined by [52], \[T=\frac{1}{2\pi r_{H}}=\frac{\left(\alpha H^{2}+\beta\dot{H}\right)^{\frac{1}{ 2}}}{2\pi} \tag{7}\] Second FLRW equation as \[\dot{H}=-4\pi G\rho\left(1+\frac{p}{\rho}\right) \tag{8}\] which on integrating provides the first FLRW equation \[H^{2}=\frac{1}{3}8\pi G\rho+\frac{1}{3}\Lambda \tag{9}\] here, the integration constant is \(\Lambda\), which is considered as a cosmological constant. Similarly, for the Barrow entropy using Eqs. (8), (9), and Eq. (2), we get the expression \[\frac{dS}{dt}=\frac{dS}{dA}\frac{dA}{dt} \tag{10}\] since \[\frac{dA}{dt} = -4\pi\left(\alpha H^{2}+\beta\dot{H}\right)^{-2}\left(2\alpha H \dot{H}+\beta\ddot{H}\right)\] \[\frac{dS}{dt} = -4\pi\left(\frac{1+\Delta}{4G}\right)\frac{\left(2\alpha H\dot{H} +\beta\ddot{H}\right)}{\left(\alpha H^{2}+\beta\dot{H}\right)^{2}}\left( \frac{{H_{1}}^{2}}{\alpha H^{2}+\beta\dot{H}}\right)^{\Delta} \tag{11}\] where \(A_{0}=\frac{4\pi}{H_{1}}\), \(H_{1}\) is constant. The second FLRW equation for the Barrow entropy is obtained as \[\frac{(1+\Delta)}{H}(2\alpha H\dot{H}+\beta\ddot{H})\left(\frac{{H_{1}}^{2}}{ \alpha H^{2}+\beta\ddot{H}}\right)^{\Delta}=-4\pi G(\rho+p) \tag{12}\] On integrating the above equation, we get, \[\frac{(1+\Delta)}{(1-\Delta)}{H_{1}}^{2}\left(\frac{{H_{1}}^{2}}{\alpha H^{2} +\beta\dot{H}}\right)^{\Delta-1}=\frac{8\pi}{3}G\rho+\frac{1}{3}\Lambda \tag{13}\] Now, FLRW Eqs. (8) and (9) can be transformed into, \[\dot{H}=-4\pi G\ \left[(\rho_{B}+\rho)+(p_{B}+p)\right] \tag{14}\] \[H^{2}=\frac{8\pi G}{3}\left(\rho_{\rm B}+\rho\right)+\frac{\Lambda}{3} \tag{15}\] For the Barrow entropy, from Eqs. 
(12)-(15), the effective energy density \(\rho_{B}\) and pressure \(p_{B}\) are expressed as \[\rho_{B}=\frac{3}{8\pi G}\Bigg{[}H^{2}-\frac{(1+\Delta)}{(1-\Delta)}{H_{1}}^{ 2}\left(\frac{{H_{1}}^{2}}{\alpha H^{2}+\beta\ddot{H}}\right)^{\Delta-1}\Bigg{]} \tag{16}\] \[p_{B} = \frac{1}{4\pi G}\Bigg{[}\frac{(1+\Delta)}{H}(2\alpha H\dot{H}+ \beta\ddot{H})\left(\frac{{H_{1}}^{2}}{\alpha H^{2}+\beta\ddot{H}}\right)^{ \Delta}-\dot{H}\Bigg{]} \tag{17}\] \[- \frac{3}{8\pi G}\Bigg{[}H^{2}-\frac{(1+\Delta)}{(1-\Delta)}{H_{1 }}^{2}\left(\frac{{H_{1}}^{2}}{\alpha H^{2}+\beta\ddot{H}}\right)^{\Delta-1} \Bigg{]}\] The "EoS parameter for the Barrow entropy" can be expressed as \[\omega_{B} = \frac{2\left[\frac{(1+\Delta)}{H}\left(\frac{{H_{1}}^{2}}{ \alpha H^{2}+\beta\ddot{H}}\right)^{\Delta}\left(2\alpha H\dot{H}+\beta\ddot{H }\right)-\dot{H}\right]}{3\left[H^{2}-\frac{(1+\Delta)}{(1-\Delta)}{H_{1}}^{2 }\left(\frac{{H_{1}}^{2}}{\alpha H^{2}+\beta\ddot{H}}\right)^{\Delta-1} \right]}-1 \tag{18}\] The EoS parameter \(\omega_{B}\) for the energy density of Barrow entropy follows the "\(\dot{\rho}_{B}+3H\rho_{B}(1+\omega_{B})=0\)". Moreover, Eq. (18), explain that the EoS parameter tremendously depend on the exponent \(\Delta\). For the different values of exponent \(\Delta\), BHDE can exist in a quintessence region, in a phantom era, or may cross the phantom-divide during the cosmic evolution [23]. Equivalence of generalized "Holographic Dark Energy with the Extension of Barrow entropic Dark Energy" We have study the models where the entropy exponent shows an extending behavior particularly, when the universe is expanding. The entropic dark energy models with variable exponent has been discussed in [56, 57]. Here, the authors claimed that the behavior in this case is caused by a physical degree of freedom that corresponds to entropy. The renormalization of a quantum theory also implies that the degrees of freedom depend on the scale. We express a dimensionless variable in cosmology \(x=H_{1}^{2}/H^{2}\), where \(H_{1}^{2}=4\pi/A_{0}\), as the Hubble parameter determines the energy scale. On applying this expanded formalism to the Barrow entropy (where the exponent of each entropy function varies), then the Barrow entropy function can be recasts as, \[S_{B}=\left(\frac{A}{A_{0}}\right)^{1+\Delta(x)}A_{0}\frac{1}{4G}. \tag{19}\] Using \(A=4\pi r_{h}^{2}\), We deduce from equation (19) \[\frac{dS_{B}}{dt}=\frac{\partial S}{\partial A}\frac{dA}{dt}+\frac{ \partial S}{\partial x}\frac{dx}{dt} \tag{20}\] \[\frac{dS_{B}}{dt}=-\frac{1}{4G}\left(\frac{4\pi(2\alpha H\dot{H}+ \beta\ddot{H})}{(\alpha H^{2}+\beta\dot{H})^{2}}\right)\left(\frac{H_{1}^{2}}{ (\alpha H^{2}+\beta\dot{H})}\right)^{\Delta(x)}\] \[\left\{(1+\Delta(x))+\frac{H_{1}^{2}}{(\alpha H^{2}+\beta\dot{H} )}\ln\left(\frac{H_{1}^{2}}{(\alpha H^{2}+\beta\dot{H})}\right)\Delta^{\prime} (x)\right\} \tag{21}\] For extended Barrow entropy scenario, we have obtain the second FLRW equation by using the first law of thermodynamics as, \[(2\alpha H\dot{H}+\beta\ddot{H})\left\{1+\Delta(x)+\frac{H_{1}^{ 2}}{\alpha H^{2}+\beta\dot{H}}\ln\left(\frac{H_{1}^{2}}{\alpha H^{2}+\beta \dot{H}}\right)\Delta^{\prime}(x)\right\} \times\] \[\left(\frac{H_{1}^{2}}{\alpha H^{2}+\beta\dot{H}}\right)^{\Delta (x)} = -4\pi G(p+\rho), \tag{22}\] where, \(p\) and \(\rho\) stand for the energy density and pressure of the matter, respectively. As we have observed that, the FLRW equations is affected by the running behaviour of \(\Delta(x)\) as compared with the constant exponent (see Eq. 
(12)). On integrating the above equation and using conservation law, the first FLRW equation can read as \[-\left.H_{1}^{2}\left\{x^{-1+\Delta(x)}+2\int^{x}x^{-2+\Delta(x)}dx\right\} \right|_{x=\eta}=\frac{8\pi G}{3}\rho+\frac{\Lambda}{3}. \tag{23}\] here, \(\eta=\frac{H_{1}^{2}}{\alpha H^{2}+\beta\dot{H}}\) and \(H\frac{dH}{dx}=-\frac{H_{1}^{2}}{2x^{2}}\). The modified expression of FLRW equations with variable exponent in the context of the Barrow entropic energy can be seen in Eqs. (22) and (23). Barrow entropic energy density \(\rho_{\rm B}\) and pressure \(p_{\rm B}\) with variable exponent can be read as \[\rho_{\rm B}=\frac{3}{8\pi G}\left(H^{2}+\left.H_{1}^{2}\left\{x^{-1+\Delta(x )}+2\int^{x}x^{-2+\Delta(x)}dx\right\}\right|_{x=\eta}\right), \tag{24}\] and \[p_{\rm B} = -\rho_{\rm B}+\frac{1}{4\pi G}\bigg{[}(2\alpha H\dot{H}+\beta \ddot{H}) \tag{25}\] \[\times \left\{1+\Delta(x)+\frac{H_{1}^{2}}{\alpha H^{2}+\beta\dot{H}} \ln\left(\frac{H_{1}^{2}}{\alpha H^{2}+\beta\dot{H}}\right)\Delta^{\prime}(x) \left(\frac{H_{1}^{2}}{\alpha H^{2}+\beta\dot{H}}\right)^{\Delta(x)}\right\}- \dot{H}\bigg{]}\] In order to obtain an explicit expression of energy density and pressure from the above equation and to integrate Eq. (24) analytically, a functional form of \(\Delta\) is required. At the late time, if the exponent \(\Delta(x)\) turns into constant, the outcome of the expended scenario will agree with the Barrow dark energy (BDE) model, where the entropy exponent is assumed to be constant. If the exponent \(\Delta\) is assumed in such a manner that at low and high energy scales the values of the exponent diverge from the regular value 1. But for the transitional scales, the values remain close to unity. It results in a unified scenario with early inflation, late dark energy, and an intermediate deceleration phase. From Eqs. (24) and (25), we define the "EoS parameter for barrow entropy" as \[\omega_{\rm B} = -1+\frac{2}{3} \tag{26}\] \[\times \frac{(2\alpha H\dot{H}+\beta\ddot{H})\left\{1+\Delta(x)+\frac{H_{ 1}^{2}}{\alpha H^{2}+\beta\ddot{H}}\ln\left(\frac{H_{1}^{2}}{\alpha H^{2}+ \beta\ddot{H}}\right)\Delta^{\prime}(x)\left(\frac{H_{1}^{2}}{\alpha H^{2}+ \beta\ddot{H}}\right)^{\Delta(x)}\right\}-\dot{H}}{x^{\Delta(x)}+2x\int^{x}x^{ -2+\Delta(x)}dx+1}\] For the extended Barrow entropy scenario, where the exponent varies with the cosmic evolution of the universe, the efficient EoS parameter can be seen in Eq. (26). Presuming the scenario, the correspondence between the generalized holographic energy density and the extended form of the Barrow energy density can be established. Nojiri-Odintsov [41] have proposed the generalized cut-off for holographic dark energy (HDE). According to the holographic principle, the HDE energy density is inversely proportional to the square of the Generalized HDE cut-off \(L_{GO}\), in particularly, \(\rho_{\rm hol}=\frac{3c^{2}}{\kappa^{2}L_{\rm GO}^{2}}\), where \(\kappa^{2}=8\pi G\), \(G\) is the gravitational constant. Here, we use two different cut-off, particle horizon \(L_{\rm p}\equiv a\int_{0}^{t}\frac{dt}{a}\) and the future event horizon \(L_{\rm f}\equiv a\int_{t}^{\infty}\frac{dt}{a}\). The Hubble parameter can be determined as \(H(L_{p},\dot{L}_{p})=\frac{L_{p}-1}{L_{p}}\) and \(H(L_{f},\dot{L}_{f})=\frac{\dot{L}_{f}+1}{L_{f}}\). 
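To make the running-exponent expressions concrete, the bracketed combination in Eqs. (23)-(24) can be evaluated numerically once a functional form of \(\Delta(x)\) is chosen. The sketch below is illustrative only: the form \(\Delta(x)=\Delta_{0}x/(1+x)\), the lower limit of the indefinite integral, and the values of \(\alpha\), \(\beta\), \(H_{1}\), \(G\) are assumptions, not choices made in the text.

```python
import numpy as np
from scipy.integrate import quad

def Delta_of_x(x, Delta0=0.1):
    """Illustrative running exponent; the text only requires that some
    functional form of Delta(x) be fixed before Eq. (24) can be integrated."""
    return Delta0 * x / (1.0 + x)

def bracket_term(x, x_ref=1.0):
    """The combination x^(Delta(x)-1) + 2 * int x'^(Delta(x')-2) dx' of
    Eqs. (23)-(24).  The lower limit x_ref of the indefinite integral is a
    choice that only shifts the integration constant."""
    integral, _ = quad(lambda s: s**(Delta_of_x(s) - 2.0), x_ref, x)
    return x**(Delta_of_x(x) - 1.0) + 2.0 * integral

def rho_B_running(H, Hdot, alpha=1.0, beta=0.5, H1=1.0, G=1.0):
    """Barrow energy density with a running exponent, Eq. (24),
    evaluated for given H and Hdot."""
    eta = H1**2 / (alpha * H**2 + beta * Hdot)
    return 3.0 / (8.0 * np.pi * G) * (H**2 + H1**2 * bracket_term(eta))

# Power-law background a = k t^m at t = 1 with m = 1.0213: H = m, Hdot = -m.
print(rho_B_running(H=1.0213, Hdot=-1.0213))
```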
Holographic cut-off (denoted by \(L_{B}\)) corresponding to the extended Barrow entropic scenario in terms of \(L_{p}\) and its derivative is given as \[\frac{3c^{2}}{\kappa^{2}L_{\rm B}^{2}} = \frac{3}{8\pi G}\Bigg{[}\left.H_{1}^{2}\left\{x^{-1+\Delta(x)}+ \int^{x}2x^{-2+\Delta(x)}dx\right\}\right|_{x=\frac{H_{1}^{2}}{\alpha\left( \frac{L_{\rm p}}{L_{\rm p}}-\frac{1}{L_{\rm p}}\right)^{2}+\beta\left(\frac{L _{\rm p}}{L_{\rm p}}-\left(\frac{L_{\rm p}}{L_{\rm p}}\right)^{2}+\frac{L_{ \rm p}}{L_{\rm p}^{2}}\right)} \tag{27}\] \[+\Bigg{(}\frac{\dot{L}_{\rm p}}{L_{\rm p}}-\frac{1}{L_{\rm p}} \Bigg{)}^{2}\Bigg{]}\] in term of future event horizon \(L_{f}\) and its derivative \[\frac{3c^{2}}{\kappa^{2}L_{\rm B}^{2}} = \frac{3}{8\pi G}\Bigg{[}\left.H_{1}^{2}\left\{x^{-1+\Delta(x)}+2 \int^{x}x^{-2+\Delta(x)}dx\right\}\right|_{x=\frac{H_{1}^{2}}{\alpha\left( \frac{L_{f}}{L_{\rm f}}-\frac{1}{L_{\rm f}}\right)^{2}+\beta\left(\frac{L_{f}} {L_{f}}-\left(\frac{L_{f}}{L_{f}}\right)^{2}+\frac{L_{f}}{L_{f}^{2}}\right)} \tag{28}\] \[+\Bigg{(}\frac{\dot{L}_{\rm f}}{L_{\rm f}}-\frac{1}{L_{\rm f}} \Bigg{)}^{2}\Bigg{]}\] Along with the initial Friedmann equation, it is also required to establish the correspondence between the "EoS parameters of the generalized HDE and BHDE models". So, we define the EoS parameter corresponds to the cut-off \(L_{B}\). It is equivalent to the energy density of HDE \(\rho_{hol}^{(B)}=\frac{3c^{2}}{\kappa^{2}L_{B}{}^{2}}\). Following the conservation of \(\rho_{hol}^{B}\), the EoS parameter \(W_{hol}^{B}\) can be determined as: \[W_{\rm hol}^{\rm(B)}=-1+\left(\frac{2}{3HL_{\rm B}}\right)\frac{dL_{\rm B}}{dt} \tag{29}\] From Eqs. (26) and (29), the two EoS parameters \(\omega_{B}\) and \(W_{\rm hol}^{\rm(B)}\) are found to be equivalent. Power law cosmology in Barrow entropy The type Ia supernovae observations [1, 2], CMB anisotropies [3] and recently Planck Collaborations [6] have confirmed that the present Universe is in an accelerating phase. Therefore, to explain current accelerated expansion of the Universe, we assume scale factor in the from \[a(t)=kt^{m} \tag{30}\] where, \(k>0\) is constant and \(m>0\) is real which describes the development of scale factor in distinct eras of the evolution of the universe i..e. for \(m=1\) defines the marginal inflation (\(a\propto t\)), \(m=\frac{1}{2}\) for radiation dominated era, \(m=\frac{2}{3}\) shows matter-dominated era and \(m=\frac{4}{3}\) describe the accelerating era of the universe (\(a\propto t^{\frac{4}{3}}\)) [58]. This form of a(t) describes the power law cosmology and resembles the late time acceleration of the universe. Power-law cosmology is an intriguing solution for dealing with some unusual challenges like flatness, horizon problem, etc. Kumar [64] used power-law with \(H(z)\) and SNe Ia data to analyze cosmological parameters. Rani et al., [65] also examined the power-law cosmology with statefinder analysis. Some important applications of power law cosmology are given in the References Kumar [64] and Sharma et al. [66]. According to cosmological observations in cosmology, the Hubble parameter \(H\) and deceleration parameter \(q\) are some of the most important observational quantities. These are defined as \[H=\frac{\dot{a}}{a}=\frac{m}{t} \tag{31}\] \[q=\frac{-a\ddot{a}}{\dot{a}^{2}}=\frac{1}{m}-1 \tag{32}\] The relationship between redshift and scale factor is defined as \(a=\frac{a_{0}}{1+z}\). 
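The EoS parameter of Eq. (18) is straightforward to evaluate on this power-law background, since \(H=m/t\), \(\dot{H}=-m/t^{2}\) and \(\ddot{H}=2m/t^{3}\). The following is a minimal numerical sketch; the values of \(\alpha\), \(\beta\), \(\Delta\) and \(H_{1}\) are illustrative assumptions, the cut-off combination is taken as \(\alpha H^{2}+\beta\dot{H}\) in line with Eq. (4), and \(m\) is set to the best-fit value quoted in the next section.

```python
import numpy as np

def omega_B(t, m=1.0213, alpha=1.0, beta=0.5, Delta=0.1, H1=1.0):
    """EoS parameter of Eq. (18) on the power-law background a = k t^m of
    Eq. (30), for which H = m/t, Hdot = -m/t^2 and Hddot = 2m/t^3."""
    H, Hdot, Hddot = m / t, -m / t**2, 2.0 * m / t**3
    X = alpha * H**2 + beta * Hdot               # generalized cut-off combination
    num = (1.0 + Delta) / H * (2.0 * alpha * H * Hdot + beta * Hddot) * (H1**2 / X)**Delta - Hdot
    den = 3.0 * (H**2 - (1.0 + Delta) / (1.0 - Delta) * H1**2 * (H1**2 / X)**(Delta - 1.0))
    return 2.0 * num / den - 1.0

for t in (1.0, 2.0, 3.0, 4.0):
    print(t, round(omega_B(t), 3))
```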
In terms of the redshift \(z\), the Hubble parameter reads

\[H(z)=-\frac{1}{1+z}\frac{dz}{dt} \tag{33}\]

Combining Eqs. (30) and (31) with \(a=\frac{a_{0}}{1+z}\), we obtain

\[H(z)=m\left(\frac{a_{0}}{k}\right)^{\frac{-1}{m}}(1+z)^{\frac{1}{m}} \tag{34}\]

The Hubble parameter as a function of redshift, written as \(H(z)=H_{0}(1+z)^{1/m}\), describes the expansion history of the universe in power-law cosmology. It depends on the model parameters \(H_{0}\), \(m\), and \(k\), which we constrain with observational \(H(z)\) data in the redshift range \(0\leq z\leq 2.36\).

## 6 Observational constraints on model parameters

Numerous authors have estimated the Hubble constant in the range of 67 to 74 using observational data from the Hubble Space Telescope [59], Cepheid variable observations [60, 61], the WMAP seven-year data [62], and other sources [63, 64, 67, 68, 69]. We take into account the latest 57 OHD data points. To constrain the model parameters, we employ the Markov Chain Monte Carlo (MCMC) approach. By fitting the current model to the latest 57 OHD points in the redshift range 0 to 2.36 with the open-source emcee Python package, the model parameters \(m\) and \(k\) are estimated. The best fits of the model to the \(H(z)\) data at the 68% confidence level are \(k=65.4\pm 1.1\), \(H_{0}=67.3\pm 1.1\), and \(m=1.0213\pm 0.0071\). The fitted value of \(H_{0}\) agrees well with that of the Planck collaboration. Many authors have considered the value of the Barrow exponent \(\Delta\) to be in the range \(0\leq\Delta\leq 1\) (see refs. [22, 25, 26, 30, 53, 54]). Adhikary _et al._ [31] recently considered \(\Delta\) in the range \(0\leq\Delta\leq 0.4\). Capozziello _et al._ [69] recently discussed the Big Bang Nucleosynthesis (BBN) constraints on the Barrow exponent \(\Delta\), which require \(\Delta\lesssim 1.4\times 10^{-4}\) in order not to spoil the BBN epoch, indicating that the deformation from the standard Bekenstein-Hawking expression should be small, as expected. Mamon _et al._ [22], working with values of \(\Delta\) in the range \(0.45\leq\Delta\leq 0.95\), detailed the dynamics of the BHDE model and noted that it can lie in the quintessence regime as well as in the phantom regime. Following the studies above, we have used values of \(\Delta\) in the range \(0.05\leq\Delta\leq 0.25\) to characterize the dynamics of our model. We also intend to investigate BBN using the Barrow entropy and space-time thermodynamics, as stated in [22], within the context of modified cosmology.

Figure 1 shows the corresponding contour plots.

Figure 1: The contour plots of the model parameters \(m\), \(k\), \(H_{0}\) with \(1-\sigma\) and \(2-\sigma\) confidence limits for 57 OHD points.

Fig. 2(a) depicts the behavior of the density parameter \(\Omega_{B}\) of BHDE against redshift \(z\); the energy density parameter remains positive, as is clear from the figure. Figure 2(b) shows the behavior of the pressure \(p_{B}\) for the estimated values of the model parameters \(m=1.0213\), \(k=65.4\), \(H_{0}=67.3\); the pressure is negative throughout the entire evolution of the universe. Figure 3 exhibits the behavior of the EoS parameter \(\omega_{B}\) for the estimated values \(m=1.0213\) and \(k=65.4\).
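A minimal sketch of the fit described above, using the emcee package on the power-law expansion history \(H(z)=H_{0}(1+z)^{1/m}\) of Eq. (34), might look as follows. The data arrays are placeholders standing in for the 57 OHD points (which are not reproduced here), the flat priors are a modeling choice, and the parameter \(k\) follows from \(H_{0}\), \(m\) and \(a_{0}\) via Eq. (34).

```python
import numpy as np
import emcee

# Placeholder arrays standing in for the 57 observational H(z) points
# (redshift, H, sigma_H); the actual OHD compilation is not reproduced here.
z_obs = np.array([0.07, 0.40, 1.30, 2.34])
H_obs = np.array([69.0, 95.0, 168.0, 222.0])
sig_H = np.array([19.6, 17.0, 17.0, 7.0])

def H_model(z, H0, m):
    """Power-law expansion history H(z) = H0 (1+z)^(1/m), Eq. (34)."""
    return H0 * (1.0 + z)**(1.0 / m)

def log_prob(theta):
    H0, m = theta
    if not (50.0 < H0 < 90.0 and 0.5 < m < 2.0):   # flat priors (a modeling choice)
        return -np.inf
    chi2 = np.sum(((H_obs - H_model(z_obs, H0, m)) / sig_H)**2)
    return -0.5 * chi2

ndim, nwalkers = 2, 32
p0 = np.array([67.0, 1.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
samples = sampler.get_chain(discard=500, flat=True)
print(np.percentile(samples, [16, 50, 84], axis=0))  # 68% credible intervals
```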
From Figure 3 it is clear that initially the model lies in the quintessence region, then crosses the \(\Lambda\)CDM line and enters the phantom region (\(\omega_{B}<-1\)) at late times.

## 7 Holographic quintessence model

Various DE models have been investigated in the context of the quintessence field, and a number of different models have been proposed for dark energy. Distance measurements alone cannot discriminate between two dark energy models that yield the same evolution of the scale factor. Because of this, it is important to analyze the growth rate of matter perturbations with the same scale factor for the various dark energy models in order to compare them.

Figure 3: Plot of EoS parameter for \(m=1.0213\) and \(k=65.4\).

Figure 2: (a) Plot of density parameter \(\Omega_{B}\), (b) Plot of pressure for BHDE.

The energy density and pressure for the quintessence scalar field model are given by [70]

\[\rho_{B}=\frac{\dot{\phi}^{2}}{2}+V(\phi);\ \ p_{B}=\frac{\dot{\phi}^{2}}{2}-V(\phi) \tag{35}\]

From the above equations, we obtain

\[\dot{\phi}^{2}=\rho_{B}+p_{B},\ \ V(\phi)=\frac{\rho_{B}-p_{B}}{2}=\frac{(1-\omega_{B})}{2}\rho_{B} \tag{36}\]

For an accelerated expansion we need a flat potential, which is obtained under the condition \(\dot{\phi}^{2}<V\). The EoS parameter of the quintessence scalar field \(\phi\) lies in the range \(-1\leq\omega\leq 1\), where \(\omega_{D}=-1\) corresponds to the slow-roll limit \(\dot{\phi}^{2}\ll V\). The condition \(\dot{\phi}^{2}\geq V(\phi)\) corresponds to stiff matter in the universe, while the region where the equation of state satisfies \(\omega_{D}\leq-1\) is usually attributed to some type of phantom dark energy [71]. Using Eq. (36) together with Eqs. (16) and (17), the scalar field and the scalar potential are obtained as

\[\dot{\phi}^{2}=\frac{1}{4\pi G}\left[\frac{(1+\Delta)}{H}(2\alpha H\dot{H}+\beta\ddot{H})\left(\frac{{H_{1}}^{2}}{\alpha H^{2}+\beta\dot{H}}\right)^{\Delta}-\dot{H}\right] \tag{37}\]

\[V(\phi)=\frac{3}{8\pi G}\left(H^{2}-\frac{(1+\Delta)}{(1-\Delta)}{H_{1}}^{2}\left(\frac{{H_{1}}^{2}}{\alpha H^{2}+\beta\dot{H}}\right)^{\Delta-1}\right)-\frac{1}{8\pi G}\left[\frac{(1+\Delta)}{H}(2\alpha H\dot{H}+\beta\ddot{H})\left(\frac{{H_{1}}^{2}}{\alpha H^{2}+\beta\dot{H}}\right)^{\Delta}-\dot{H}\right] \tag{38}\]

Figures 4(a) and 4(b) depict the evolution of the scalar field \(\phi\) and the potential \(V(\phi)\) of the quintessence model with respect to redshift \(z\), plotted for the estimated parameter values \(m=1.0213\), \(k=65.4\), \(H_{0}=67.3\). For these choices of the parameters the field gets trapped in a local minimum, because the kinetic energy during the scaling regime is small. The field then enters a regime of damped oscillations, leading to an accelerating universe.

## 8 Holographic dilaton field

A dilaton scalar field, originating from the low-energy limit of string theory [72], can also be considered as a source of DE. This model arises from a four-dimensional effective low-energy string action [72] and includes higher-order kinetic corrections to the tree-level action in low-energy effective string theory. The coefficient of the kinetic term of the dilaton can be negative in the Einstein frame, which means that the dilaton behaves as a phantom-like scalar field.

Figure 4: (a) Plot of \(\phi\) vs \(z\), (b) Plot of \(V(\phi)\) versus \(z\), (c) Plot of \(V(\phi)\) vs \(\phi\).
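Before turning to the dilaton equations, the quintessence reconstruction of Eqs. (36)-(38) can be sketched numerically on the power-law background. The parameter values below (\(\alpha\), \(\beta\), \(\Delta\), \(H_{1}\), \(G=1\)) are illustrative assumptions, the cut-off combination is taken as \(\alpha H^{2}+\beta\dot{H}\) as in Eq. (4), and where the model is phantom-like (\(\rho_{B}+p_{B}<0\)) the kinetic term is clipped to zero.

```python
import numpy as np

def background(t, m=1.0213):
    """Power-law background a = k t^m: H, Hdot, Hddot as functions of t."""
    return m / t, -m / t**2, 2.0 * m / t**3

def rho_p_B(t, m=1.0213, alpha=1.0, beta=0.7, Delta=0.1, H1=1.0, G=1.0):
    """Barrow density and pressure, Eqs. (16)-(17), on the power-law background."""
    H, Hdot, Hddot = background(t, m)
    X = alpha * H**2 + beta * Hdot
    rho = 3/(8*np.pi*G) * (H**2 - (1+Delta)/(1-Delta) * H1**2 * (H1**2/X)**(Delta-1))
    p = 1/(4*np.pi*G) * ((1+Delta)/H * (2*alpha*H*Hdot + beta*Hddot) * (H1**2/X)**Delta - Hdot) - rho
    return rho, p

# Quintessence reconstruction of Eq. (36): phi'(t)^2 = rho_B + p_B, V = (rho_B - p_B)/2.
ts = np.linspace(1.0, 20.0, 200)
rho, p = np.vectorize(rho_p_B)(ts)
kinetic = np.clip(rho + p, 0.0, None)          # clipped where the model turns phantom
phi = np.concatenate(([0.0], np.cumsum(np.sqrt(kinetic[:-1]) * np.diff(ts))))
V = 0.5 * (rho - p)
print(phi[-1], V[-1])
```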
The energy density and pressure (Lagrangian) of the dilaton DE model are given by [73]

\[\rho_{B}=-X+3ce^{\lambda\phi}X^{2}=-X+3f(\phi)X^{2} \tag{39}\]

\[p_{B}=-X+ce^{\lambda\phi}X^{2}=-X+f(\phi)X^{2} \tag{40}\]

where \(c\) is a positive constant and \(X=\frac{\dot{\phi}^{2}}{2}\). The equation of state parameter \(\omega_{B}\) for the dilaton scalar field can be obtained from

\[\omega_{B}=\frac{-1+ce^{\lambda\phi}X}{-1+3ce^{\lambda\phi}X} \tag{41}\]

From the above equation we find the value of \(X\),

\[X=\frac{\omega_{B}-1}{(3\omega_{B}-1)ce^{\lambda\phi}} \tag{42}\]

The scalar field then reads

\[\dot{\phi}^{2}=2X=\frac{2(\omega_{B}-1)}{(3\omega_{B}-1)ce^{\lambda\phi}}=\rho_{B}-3p_{B} \tag{43}\]

\[f(\phi)=\frac{\rho_{B}-p_{B}}{2X^{2}} \tag{44}\]

Figures 5(a) and 5(b) show the variation of the kinetic energy with redshift \(z\) for the best-fit values \(m=1.0213\), \(k=65.4\), \(H_{0}=67.3\). They are plotted for three different values of the Barrow exponent, \(\Delta=0.01\), \(\Delta=0.05\), and \(\Delta=0.08\). From the figure, we observe that the scalar field \(\phi(z)\) rises as \(z\) increases.

## 9 Concluding remarks

In this paper, the authors have described the dynamics of the universe via graphical representation, assuming a scale factor of the form \(a=kt^{m}\). The values of the free parameters \(k\) and \(m\) are estimated on 57 OHD points using the MCMC (Markov Chain Monte Carlo) method. Nojiri et al. [37] proposed the extension of the generalized BHDE model. In this manuscript, the authors have revisited the extension of BHDE and its equivalence with generalized holographic dark energy, adopting the generalized HDE cut-off [41] for the model. We have also explained the dynamics of the quintessence and dilaton scalar field models.

* Figure 1 demonstrates the 2-dimensional contour plots and 1-dimensional marginal plots. The best-fit values of the model parameters are determined to be \(m=1.0213\), \(k=65.4\), \(H_{0}=67.3\).
* From Fig. 2, it is clear that the energy density (\(\rho_{B}\)) is positive and the cosmic pressure (\(p_{B}\)) is negative throughout the evolution of the universe for BHDE with the generalized HDE cut-off [41].
* Figure 3 describes the behavior of the EoS parameter \(\omega_{B}\) for the BHDE model with the generalized HDE cut-off. The EoS parameter remains negative throughout the evolution of the universe and lies in the phantom region (\(\omega_{B}\leq-1\)) at late times.
* Figs. 4 & 5 depict the quintessence and dilaton descriptions of the model. To realize the dilaton holographic correspondence, the model admits an attractor solution with accelerated expansion, which also depends on the inverse square of the field. From Fig. 4, we notice that \(\dot{\phi}^{2}<V(\phi)\). The potential of the quintessence model is a decreasing function, which indicates an accelerated expansion of the universe. Similarly, for the dilaton model, \(\dot{\phi}^{2}<f(\phi)\), and \(f(\phi)\) is also a decreasing function.

The solution suggested in this study may therefore be helpful in better understanding the generalization of HDE theories over the history of the cosmos.

## Acknowledgement

A. Pradhan thanks the IUCAA, Pune, India, for providing support and facilities under the associateship program. The authors also express their gratitude to the reviewers for valuable comments and suggestions.
2302.11341
Differentially Private Data Structures under Continual Observation for Histograms and Related Queries
Binary counting under continual observation is a well-studied fundamental problem in differential privacy. A natural extension is maintaining column sums, also known as histogram, over a stream of rows from $\{0,1\}^d$, and answering queries about those sums, e.g. the maximum column sum or the median, while satisfying differential privacy. Jain et al. (2021) showed that computing the maximum column sum under continual observation while satisfying event-level differential privacy requires an error either polynomial in the dimension $d$ or the stream length $T$. On the other hand, no $o(d\log^2 T)$ upper bound for $\epsilon$-differential privacy or $o(\sqrt{d}\log^{3/2} T)$ upper bound for $(\epsilon,\delta)$-differential privacy are known. In this work, we give new parameterized upper bounds for maintaining histogram, maximum column sum, quantiles of the column sums, and any set of at most $d$ low-sensitivity, monotone, real valued queries on the column sums. Our solutions achieve an error of approximately $O(d\log^2 c_{\max}+\log T)$ for $\epsilon$-differential privacy and approximately $O(\sqrt{d}\log^{3/2}c_{\max}+\log T)$ for $(\epsilon,\delta)$-differential privacy, where $c_{\max}$ is the maximum value that the queries we want to answer can assume on the given data set. Furthermore, we show that such an improvement is not possible for a slightly expanded notion of neighboring streams by giving a lower bound of $\Omega(d \log T)$. This explains why our improvement cannot be achieved with the existing mechanisms for differentially private histograms, as they remain differentially private even for this expanded notion of neighboring streams.
Monika Henzinger, A. R. Sricharan, Teresa Anna Steiner
2023-02-22T12:38:02Z
http://arxiv.org/abs/2302.11341v1
Differentially Private Data Structures under Continual Observation for Histograms and Related Queries ###### Abstract Binary counting under continual observation is a well-studied fundamental problem in differential privacy. A natural extension is maintaining column sums, also known as _histogram_, over a stream of rows from \(\{0,1\}^{d}\), and answering queries about those sums, e.g. the maximum column sum or the median, while satisfying differential privacy. Jain et al. (2021) showed that computing the maximum column sum under continual observation while satisfying event-level differential privacy requires an error either polynomial in the dimension \(d\) or the stream length \(T\). On the other hand, no \(o(d\log^{2}T)\) upper bound for \(\epsilon\)-differential privacy or \(o(\sqrt{d}log^{3/2}T)\) upper bound for \((\epsilon,\delta)\)-differential privacy are known. In this work, we give new parameterized upper bounds for maintaining histogram, maximum column sum, quantiles of the column sums, and any set of at most \(d\) low-sensitivity, monotone, real valued queries on the column sums. Our solutions achieve an error of approximately \(O(d\log^{2}c_{\max}+\log T)\) for \(\epsilon\)-differential privacy and approximately \(O(\sqrt{d}\log^{3/2}c_{\max}+\log T)\) for \((\epsilon,\delta)\)-differential privacy, where \(c_{\max}\) is the maximum value that the queries we want to answer can assume on the given data set. Furthermore, we show that such an improvement is not possible for a slightly expanded notion of neighboring streams by giving a lower bound of \(\Omega(d\log T)\). This explains why our improvement cannot be achieved with the existing mechanisms for differentially private histograms, as they remain differentially private even for this expanded notion of neighboring streams. ## 1 Introduction Differential privacy is a well-studied and widely applied privacy standard for data analysis. Its definition is due to Dwork et al. (2006). For any \(\epsilon>0\), a randomized algorithm is _\(\epsilon\)-differentially private_ if the output distributions differ by at most a factor of \(e^{\epsilon}\) for any two _neighboring_ input data sets, i.e. data sets that differ only in at most _one_ data item. A relaxation called _\((\epsilon,\delta)\)-differential privacy_ additionally allows the output distributions to differ in an additive term \(\delta>0\). The classic model of differential privacy considers the data to be static. The dynamic setting, called _differential privacy under continual observation_, was first studied by Dwork et al. (2010). Here the data arrives in a stream of length \(T\), one data row at a time, and the problem is to answer queries about the data at each of the \(T\) time steps. In the most studied model of differential privacy under continual observation, called _event-level privacy_, two streams are considered _neighboring_ if they differ in a single time step. In this work we always consider event-level privacy. In the binary counting problem for differential privacy under continual observation (_continual binary counting_), one data row is either a \(0\) or a \(1\), and the goal is to estimate the total sum at every time step. In a more general setting, every data row is an element of \(\{0,1\}^{d}\) for \(d\in\mathbb{N}\), and we want to be able to answer queries about the column sums, i.e., the sum of the \(i\)th coordinate in each data row up to the current point in time, for \(i\in\{1,\ldots,d\}\). 
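As a point of reference, the (non-private) quantities involved are straightforward to compute from the stream; the sketch below simply maintains the column sums and reads off Histogram, MaxSum, SumSelect and the median column sum at every time step.

```python
import numpy as np

def column_sum_queries(stream):
    """Non-private reference computation of the quantities discussed above:
    at each time step, the column sums (Histogram), the maximum column sum
    (MaxSum), its index (SumSelect), and the median column sum."""
    d = len(stream[0])
    sums = np.zeros(d, dtype=int)
    answers = []
    for row in stream:                    # row is one element of {0,1}^d
        sums += row
        answers.append({
            "histogram": sums.copy(),
            "max_sum": int(sums.max()),
            "sum_select": int(sums.argmax()),
            "median": float(np.median(sums)),
        })
    return answers

stream = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 1]])
print(column_sum_queries(stream)[-1])
```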
Examples of such queries are selecting the top-\(k\) elements and computing quantiles, which are widely used in data analysis (Ilyas et al., 2008), and as subroutines to statistical methods (Krruskal and Wallis, 1952; Huber, 1992). Due to the wide range of applications of these queries in analyzing potentially sensitive data, static versions of these queries have been considered in prior work (Qiao et al., 2021; Carvalho et al., 2020; Durfee and Rogers, 2019; Gillenwater et al., 2021; Kaplan et al., 2022). In recent work, Jain et al. (2021) showed upper and lower bounds on the error for computing the maximum column sum over a stream of rows from \(\{0,1\}^{d}\) (MaxSum), as well as selecting the index of the maximum column sum (SumSelect) under differential privacy. Here and in the following, all stated error bounds hold with constant probability. Clearly, the lower bounds automatically extend to the problem of top-\(k\) selection and computing all column sums (Histogram). However, their bounds leave a gap: For MaxSum and \(\epsilon\)-differential privacy, their lower bound is \(\Omega(\min(\sqrt{\frac{T}{\epsilon}},\frac{d}{\epsilon},T))\), while their upper bound is \(O(\min(\sqrt{\frac{T}{\epsilon}},\frac{d\cdot\log^{2}T}{\epsilon},T))\). Similarly, for SumSelect and \(\epsilon\)-differential privacy, their lower bound is \(\Omega(\min(\sqrt{\frac{T\log(d/\sqrt{\epsilon T})}{\epsilon}},\frac{d}{ \epsilon},T))\), while their upper bound is \(O(\min(\sqrt{\frac{T\log(dT)}{\epsilon}},\frac{d\log d\log^{3}T}{\epsilon},T))\). We focus on the bounds that are subpolynomial in \(T\). Note that the \(\frac{d}{\epsilon}\) term in the lower bound can be strengthened to \(\frac{d}{\epsilon}+\log T\), because of the \(\Omega(\log T)\) lower bound on the error of binary counting by Dwork et al. (2010). The \(O(\frac{d\log d\log^{3}T}{\epsilon})\) upper bound comes from computing a full histogram by composing \(d\) binary tree mechanisms for binary counting under continual observation (Dwork et al., 2010; Chan et al., 2011). Using the result by Dwork et al. (2015) for binary counting, the error for computing a \(d\)-dimensional continuous histogram can be improved to \(\widetilde{O}(\frac{d(\log^{2}n_{\max}+\log T)}{\epsilon})\)1, where \(n_{\max}\) is an upper bound on the number of ones in any column. In Dwork et al. (2015), the value \(n_{\max}\) is considered to be given; however, as pointed out by Qiu and Yi (2022), this result also holds when \(n_{\max}\) is not known beforehand, by combining with the two-level mechanism in Chan et al. (2011). Despite this improvement, a natural question remains: "Is an error of \(\Omega(d\cdot\log~{}T)\) necessary?" Footnote 1: For simplicity, we use \(\widetilde{O}(X)=O(X\mathrm{polylog}(X))\). In this paper, we answer this question in the affirmative for the above algorithms: Recall that the standard notion of differential privacy under continual observation requires that two neighboring streams differ in at most one time step, i.e., all the differing entries are in the _same_ row. However, all the algorithms above work for a more general notion of neighboring, which we call _independently neighboring_: Two streams are independently neighboring if they differ in any single entry for each column, but the differing entries do _not_ have to belong to the same row. 
If differential privacy were defined using independently neighboring streams, a lower bound on the additive error of \(\Omega(d\cdot\log~{}T)\) for estimating all column sums exists as we show in Appendix A. Hence, to achieve a better error bound for the simpler problem of neighboring inputs, we need a new strategy that does not maintain \(d\) independent binary counters. Indeed we show how to break the \(\Omega(d\cdot\log T)\)-barrier for the standard definition of privacy under certain conditions on the input stream by exploiting the interactions of the \(d\) columns. Specifically, we show new parameterized upper bounds for computing Histogram, MaxSum, and SumSelect under continual observation, as well as a larger class of queries with similar properties including the median column sum and the minimum column sum, with error \(\widetilde{O}\left(\frac{d\log^{2}\epsilon_{\max}+\log T}{\epsilon}\right)\), where \(c_{\max}\) is an upper bound on the _maximum query value on the given input at any time step_. Note that there is no dependency on \(d\cdot\log T\), so for queries and streams where \(c_{\max}\) is much smaller than \(T\), our result is better than what can be achieved by the previous algorithms. Our algorithms do not need to be given \(c_{\max}\) at initialization. Also note that there is no hope of removing the dependency on \(d\) altogether, because of the previously stated lower bound by Jain et al. (2021). The following theorem summarizes our main result. To state it, we need the notion of _sensitivity_ of a function, which is the maximum difference between the outputs on any two neighboring data sets. For a formal definition of sensitivity see Definition 3. **Theorem 1**.: _Let \(x=x^{1},\ldots,x^{T}\) be a stream of elements \(x^{t}\in\{0,1\}^{d}\). For any positive integer \(k\), let \(g_{1},\ldots,g_{k}\) be functions \(g_{i}:\{\{0,1\}^{d}\}^{*}\rightarrow\mathbb{R}^{+}\cup\{0\}\) for \(i\in\{1,\ldots,k\}\) with the following properties: For all \(i\in\{1,\ldots,k\}\) each \(g_{i}(x)\) depends only on the column sums of \(x\), is monotonically increasing in \(t\), has sensitivity at most \(1\), and its value at any time step is at most \(c_{\max}\). Further, \(g_{i}(0^{d})=0\) for all \(i\in[k]\). Then there exists_ 1. _an_ \(\epsilon\)_-differentially private algorithm that can answer_ \(g_{1},\ldots,g_{k}\) _at all time steps with error at most_ \(\alpha=\widetilde{O}\left(\left(d\log^{2}c_{\max}+k\log c_{\max}+\log T\right) \epsilon^{-1}\right)\) _with probability at least_ \(2/3\)_,_ 2. _an_ \((\epsilon,\delta)\)_-differentially private algorithm that can answer_ \(g_{1},\ldots,g_{k}\) _at all time steps with error at most_ \(\alpha=\widetilde{O}\left(\left(\sqrt{d}\log^{3/2}c_{\max}+\sqrt{k}\log c_{ \max}+\log T\right)\epsilon^{-1}\log(1/\delta)\right)\) _with probability at least_ \(2/3\)_._ Note that we can also answer \(k\) queries as in Theorem 1 by computing the histogram, since all query answers can be computed once all the column sums are known. 
The error, however, would depend on \(n_{\max}\) instead of \(c_{\max}\), and thus the bounds given in the theorem can be improved to \(\widetilde{O}\left(\left(\min\{d\log^{2}c_{\max}+k\log c_{\max},d\log^{2}n_{ \max}\}+\log T\right)\epsilon^{-1}\right)\) for \(\epsilon\)-differential privacy, and a bound of \(\widetilde{O}\left(\left(\min\left\{\sqrt{d}\log^{3/2}c_{\max}+\sqrt{k}\log c _{\max},\sqrt{d}\log^{3/2}n_{\max}\right\}+\log T\right)\epsilon^{-1}\log(1/ \delta)\right)\) for \((\epsilon,\delta)\)-differential privacy. The queries MaxSum and Histogram fulfill the conditions of Theorem 1 with \(c_{\max}=n_{\max}\). Other such queries are TopK, i.e., outputting the top-\(k\) column sums and \(\textsc{Quantile}_{q}\), i.e. computing the \(q\)-quantile on the column sums for \(q\in[0,1]\). Furthermore, any algorithm for differentially private continuous histogram can answer SumSelect and Top-\(k\)-Select, i.e. releasing the indices of the top-\(k\) column sums, within the same error bounds. Interestingly, our strategy allows us additionally to change the logarithmic dependency on the maximum column sum \(n_{\max}\) to a logarithmic dependency on the _maximum query value_ for any of the \(k\) queries, which can be much smaller as \(n_{\max}\), e.g. for computing the minimum column sum or the median. **Corollary 1**.: _Let \(x=x^{1},\ldots,x^{T}\) be a stream of elements \(x^{t}\in\{0,1\}^{t}\). Let \(n_{\min},n_{\max},n_{\mathrm{median}}\) denote the value of the minimum, maximum, and median column sum of \(x\), respectively. Then there exists_ * _an_ \(\epsilon\)_-differentially private algorithm that can answer_ MaxSum_,_ SumSelect_,_ Histogram_,_ TopK_, and_ Top-_\(k\)_-Select _at all time steps_ \(t\) _with error at most_ \(\alpha=\widetilde{O}\left((d\log^{2}n_{\max}+\log T)\epsilon^{-1}\right)\) _with probability at least_ \(1-\beta\)_._ * _an_ \(\epsilon\)_-differentially private algorithm that can answer_ \(\textsc{Quantile}_{1/2}\) _at all time steps_ \(t\) _with error at most_ \(\alpha=\widetilde{O}\left((d\log^{2}n_{\mathrm{median}}+\log T)\epsilon^{-1}\right)\) _with probability at least_ \(1-\beta\)_._ * _an_ \(\epsilon\)_-differentially private algorithm that can answer_ MinSum _at all time steps_ \(t\) _with error at most_ \(\alpha=\widetilde{O}\left((d\log^{2}n_{\min}+\log T)\epsilon^{-1}\right)\) _with probability at least_ \(1-\beta\)_._ The corresponding results also hold for \((\epsilon,\delta)\)-differential privacy by plugging \(n_{\min},n_{\max},n_{\mathrm{median}}\) into the bound in Theorem 1, 2. The previous best theoretical error bounds known for these problems were either polynomial in \(T\), or had an error of \(\widetilde{O}(d\log^{2}n_{\max}+d\log T)\) for \(\epsilon\)-differential privacy resp. \(\widetilde{O}(\sqrt{d}\log^{3/2}n_{\max}+\sqrt{d}\log T)\) for \((\epsilon,\delta)\)-differential privacy. Thus, for these queries, we reduce the additive term of \(O(d\log T)\) for \(\epsilon\)-differential privacy and the additive term of \(O(\sqrt{d}\log T)\) for \((\epsilon,\delta)\)-differential privacy to an additive term of \(O(\log T)\). ### Technical Overview The basic idea of obtaining an error bound parameterized by an upper bound on the output stems from an approach for continual binary counting by Dwork et al. (2015). They show how to get an \(O\left((\log^{2}n+\log T)/\epsilon\right)\) bound for binary counting with constant failure probability, where \(n\) is an upper bound on the maximum sum of the input to the algorithm. 
They do this using a partitioning algorithm that splits the input stream into intervals such that with probability at least \(1-\beta>0\), every interval in the partition has at least one \(1\), and at most \(O(\log(T/\beta)/\epsilon)\)\(1\)s. This implies that with probability at least \(1-\beta\), the number of intervals is bounded by \(n\). In what follows, we state all bounds with constant failure probability. The counts, i.e., the sum of the values, of every interval are used as input to a continual counting mechanism. Note that the continual counting mechanism now receives inputs from \(\mathbb{N}\) with the property that for two neighboring data streams, the inputs differ by at most 1 for one interval (given that the intervals are the same for the neighboring data streams). The binary tree mechanism is an example of a continual counting mechanism that can answer counting queries with this kind of input while preserving \(\epsilon\)-differential privacy. It has an error of \(O(\log^{2}t/\epsilon)\) with constant probability, where \(t\) is the stream length. The output is only updated at the end of each interval, and for all other time steps, the output is the same as in the previous time step. Since the number of 1s within each interval is bounded by \(O(\log T/\epsilon)\), the total error consists of the error of a binary counting mechanism on a stream of length \(n\) plus \(O(\log T/\epsilon)\). We now give an overview of our strategy for computing one query for a \(d\)-dimensional input stream, e.g. MaxSum, at every time step, while preserving \(\epsilon\)-differential privacy. The ideas for \((\epsilon,\delta)\)-differential privacy are similar. Recall that we require that this query is real-valued, monotonically increasing, and can be computed directly from the histogram. Our strategy is also based on a partitioning of the stream, and we only update the output at the end of each interval in the partition. However, there is a new technical challenge to overcome that does not exist for binary counting: Continual binary counting is a 1-dimensional sum over time, i.e., the output at each time step is simply the sum of the counts for each interval and it suffices to determine the sum of the current interval to decide when to end the interval. This allows for a simple partitioning algorithm where interval \(j\) does not depend on prior data, except for its starting time. On the other hand, we want to partition according to the outputs of a query that can depend on all \(d\) dimensions. In particular, our queries do not necessarily have the property that the output of a query for all time steps is the sum of the outputs for the intervals in a partition. Thus, our partitioning algorithm needs to use information about the data in previous intervals to decide when to end the current interval. If we were to naively use the sparse vector technique introduced in Dwork et al. (2010), the error would grow linearly in the number of produced intervals, which is prohibitive. Instead, our algorithm _computes a differentially private histogram of all the input seen so far at the end of every interval and uses this histogram for future partitioning decisions_. Thus, at the end of each interval, the direct usage of previous data is removed and instead a differentially private histogram is used. The partitioning algorithm we use is a variant of the sparse vector technique by Dwork et al. (2010) that is initialized with our differentially private histogram. 
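A schematic toy version of this partition-then-count idea is sketched below. The noise scales and the threshold rule are illustrative simplifications, not the calibrated constants of Dwork et al. (2015) or Chan et al. (2011), and the `BinaryTreeCounter` is a bare-bones stand-in for the binary tree mechanism.

```python
import numpy as np
rng = np.random.default_rng(0)

class BinaryTreeCounter:
    """Toy binary tree mechanism over a stream of at most n inputs, each of
    which changes by at most 1 on neighboring streams; each dyadic node gets
    fresh Laplace noise of scale ~ log(n)/eps."""
    def __init__(self, n, eps):
        self.n, self.t = n, 0
        self.levels = int(np.ceil(np.log2(max(n, 2)))) + 1
        self.vals = np.zeros(n + 1)
        self.noise = rng.laplace(scale=self.levels / eps, size=(self.levels, n + 1))
    def add(self, v):
        self.t += 1                       # assumes at most n items are added
        self.vals[self.t] = v
    def prefix_sum(self):
        # Decompose [1, t] into dyadic-aligned blocks; add one noise term per block.
        s, lo, t = 0.0, 1, self.t
        while lo <= t:
            level = 0
            while lo % (2 ** (level + 1)) == 1 and lo + 2 ** (level + 1) - 1 <= t:
                level += 1
            s += self.vals[lo:lo + 2 ** level].sum() + self.noise[level, lo]
            lo += 2 ** level
        return s

def private_count_with_partitioning(stream, eps, n_max):
    """Toy version of the partition-then-count idea: close an interval when a
    noisy within-interval count crosses a noisy threshold, and feed each
    interval's count to the tree counter.  Noise calibration is schematic."""
    tree = BinaryTreeCounter(n_max, eps / 2)
    outputs, interval_count, current_output = [], 0, 0.0
    threshold = 1 + rng.laplace(scale=2 / eps)
    for bit in stream:
        interval_count += bit
        if interval_count + rng.laplace(scale=4 / eps) >= threshold:
            tree.add(interval_count)
            current_output = tree.prefix_sum()
            interval_count = 0
            threshold = 1 + rng.laplace(scale=2 / eps)
        outputs.append(current_output)    # output only changes at interval ends
    return outputs

print(private_count_with_partitioning([0, 1, 0, 1, 1, 0, 1], eps=1.0, n_max=8)[-1])
```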
However, we need to _prove privacy of the combined mechanism from scratch,_ and cannot use the privacy result of the sparse vector technique as a black box for the following reason: We use the differentially private histogram together with the updates within the interval to compute an approximate query answer at every time step and compare against a threshold. Once we cross the threshold, we end the interval and increase the threshold. To compute the differentially private histogram, we use a continuous histogram algorithm that receives the column sums for each interval as input. However, now the input to the partitioning algorithm _depends on the outputs of the histogram algorithm for prior intervals_ (unlike in Dwork et al. (2015), where the partitioning was independent of the output of the counting mechanism), and the input to the histogram algorithm depend on the output of the partitioning algorithm, and, hence, on the prior output of the histogram algorithm. Furthermore, the input to the histogram algorithm on two neighboring streams might not necessarily be neighboring on neighboring data streams, since this also depends on the partitioning. Thus, we cannot use a simple composition theorem to show privacy for the combined mechanism. To overcome this difficulty, we use a continuous histogram mechanism that is differentially private _even if the inputs are chosen adaptively_. Recent work by Denisov et al. (2022) shows that many of the mechanisms that are differentially private under continual observation actually fulfill this stronger property, which we call _adaptively differentially private_. We then perform a careful privacy analysis to show that the interaction between the adaptively differentially private continuous histogram mechanism and the partitioning mechanism satisfies privacy. The fact that the continuous histogram mechanism is adaptively differentially private allows us to separate the privacy loss incurred by the partitioning mechanism from that of the histogram mechanism in the analysis. Note that the privacy of the partitioning algorithm relies on the fact that neighboring streams can only differ in a single interval.Thus, the lower bound for independently neighboring inputs mentioned above and proven in Appendix A does not apply. The full mechanism now proceeds in two main steps. First, we show how the strategy described above, initialized with proper thresholds for the partitioning algorithm, gives an \(\epsilon\)-differentially private algorithm with an error bound of \(\widetilde{O}((d\log^{2}c_{\max}+\log T)\epsilon^{-1})\)_if an upper bound \(c_{\max}\) on the query value is given as an input to the algorithm_. We need this information to decide the threshold values, since we want to make sure that (1) in each interval, the query increases by at least one (and thus we can bound the number of inputs to the histogram algorithm) and (2) in each interval, the query does not increase too much (since we do not update the output within an interval). However, assuming that \(c_{\max}\) is known is a strong assumption, as it is a function of what we try to compute. Thus, we remove this assumption using a two-level algorithm, similar to Chan et al. (2011): We divide the stream into segments and run the above algorithm with an estimate of the maximum output on each segment. The estimate of the maximum output for each segment roughly doubles in comparison to the previous segment. Again, and differently from Chan et al. 
(2011), this comes with the additional difficulty that the partitioning into segments is data dependent and it depends at any time step on all previous time steps. Thus, as above, we use a differentially private histogram of all prior segments to initialize the algorithm for the current segment. Hence altogether our algorithm at each time step runs (1) a segment partitioning algorithm and (2) for the current segment a mechanism for computing the query answers given an estimated upper bound provided by the segment partitioning algorithm. The segment partitioning algorithm (1) is initialized with a differentially private histogram at the end of each segment. The mechanism (2) consists of the interval partitioning algorithm together with the adaptively differentially private histogram mechanism and outputs a new query estimate at the end of each interval. To guarantee differential privacy of the algorithm for (1) we use Laplace noise \(O(d/\epsilon)\). As that algorithm does not depend on the output of (2), we use normal composition to guarantee the differential privacy of the combination of the two, i.e., of the two-level algorithm. Next observe that the error accrued by the initialization in (1) of the histograms gives an additive error of \(\widetilde{O}(d\sqrt{\ell})\), where \(\ell\) is the number of segments. Due to the doubling of the estimates, \(\ell\) is approximately \(O(\log c_{\max})\). Within each segment the additive error is \(\widetilde{O}((d\log^{2}c_{\max}+\log T)\epsilon^{-1})\) as stated above, so that our final error bound is \(\widetilde{O}((d\log^{2}c_{\max}+\log T)\epsilon^{-1})\). Finally we show that for answering \(k\) queries fulfilling the conditions of Theorem 1, the additive error only slightly increases to \(\widetilde{O}((d\log^{2}c_{\max}+k\log c_{\max}+\log T)\epsilon^{-1})\) for \(\epsilon\)-differential privacy. Note that if we were to treat each query independently and then use composition, the additive error for the 1-query case would multiply by \(k\). To avoid this, our high-level observation is as follows: We use the sparse vector technique to split the input sequence into segments and split each segment into intervals. The noise needed for this directly influences the additive error of the output. Thus, we try to minimize the amount of noise needed. Specifically, we modify the partitioning algorithms used in both steps of the previous algorithm taking the answers of all \(k\) queries into account. First, we partition the stream into segments where the _maximum query value over all \(k\) queries_ roughly doubles. The maximum query value over all \(k\) queries can in itself be seen as a query that fulfills the properties we require; thus, the error for the segmentation part of the algorithm is the same as in the 1-query case and has the same additive error. Next, we partition each of these segments into intervals in the following way: We keep a separate threshold for each of the \(k\) queries, and at every point in time ask if _any_ of the query values crosses its current threshold. If so, we end an interval, without revealing which of the queries crossed its threshold. In the analysis we show that this allows us to only use a constant amount of noise at every time step in the partitioning algorithm. We then introduce an extra step to decide in a differentially private way which of the thresholds to increase. This way, for deciding whether to end an interval, which we need to do at any time step, it is enough to add a constant amount of noise. 
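The per-time-step check and the separate, more expensive threshold-update step can be illustrated schematically as below. This is only a sketch of the idea just described: the noise scales are placeholders, and the real mechanism additionally feeds the column sums of each closed interval into the adaptively differentially private histogram.

```python
import numpy as np
rng = np.random.default_rng(3)

def which_thresholds_to_raise(query_values, thresholds, eps, k):
    """Schematic step of the k-query mechanism: a cheap noisy check of whether
    *any* query crosses its threshold (run at every time step), followed by a
    per-query test with noise scaled by k, paid only when an interval ends."""
    gap = max(q - th for q, th in zip(query_values, thresholds))
    if gap + rng.laplace(scale=4.0 / eps) < rng.laplace(scale=2.0 / eps):
        return []                                  # no interval ends at this step
    raise_these = []
    for i in range(k):                             # noise scaled with k, per interval only
        if query_values[i] + rng.laplace(scale=4.0 * k / eps) >= thresholds[i]:
            raise_these.append(i)
    return raise_these

print(which_thresholds_to_raise([5.0, 2.0, 7.0], [6.0, 6.0, 6.0], eps=1.0, k=3))
```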
For deciding which of the thresholds to update, we need to scale the noise with \(k\), as we need to check all \(k\) thresholds. However, we can bound how many times this can happen, namely by the number of intervals. Since every time an interval ends, at least one of the query values must have increased with good probability, and since \(c_{\max}\) bounds the maximum query value on the given input, the number of intervals can be then bounded by at most \(k\cdot c_{\max}\) with good probability. Thus, the total amount of noise used for the partitioning into intervals is roughly \(O((k\log(k\cdot c_{\max})+\log T)\epsilon^{-1})=\widetilde{O}((k\log c_{\max}+ \log T)\epsilon^{-1})\). As before, we need to combine this with an adaptive histogram algorithm, which leads to the final error bound. **Paper organization.** This paper is organized as follows: Section 3 contains preliminaries. Then, we give the full algorithm for computing MaxSum while preserving \(\epsilon\)-dp, to give an overview of our techniques. In Section 4 we give an algorithm for MaxSum assuming an upper bound \(c_{\max}\) is given as input to the algorithm. In Section 5 we present the _doubling mechanism_, an \(\epsilon\)-dp mechanism that divides the stream into segments where the value of MaxSum roughly doubles, and gives an approximate histogram at the end of each such segment. In Section 6 we show how to combine the two to give our _two-level mechanism_ to estimate MaxSum under \(\epsilon\)-dp. In Sections 7, 8 and 9 we generalize this strategy to any set of \(k\) queries satisfying the properties of Theorem 1 and prove Theorem 1, 1. In Sections 10, 11 and 12 we consider \((\epsilon,\delta)\)-dp and prove Theorem 1, 2. In Section A we give an \(\Omega(d\log T)\) lower bound for continuous histogram for the independently neighboring definition of \(\epsilon\)-differential privacy, and thus argue that existing continuous histogram algorithms cannot achieve an error of \(o(d\log T)\). In Section B we recall the binary tree mechanism, as it is used as a subroutine by the mechanism from Fact 4. ## 2 Related Work Differential privacy under continual observation was introduced by Dwork et al. (2010) for the problem of binary counting. Additionally, Dwork et al. (2010) introduce the sparse vector technique which can be used as a blackbox algorithm for computing monotone functions under continual observation. The additive error depends on how often the answer to the query significantly changes. Chan et al. (2011) also consider answering TopK queries for the case where the number of \(1\)-entries in each row is limited to one. Note that this is an easier problem for which the lower bound by Jain et al. (2021) does not hold. In terms of Histogram and related queries, we already mentioned the work by Jain et al. (2021), which give upper and lower bounds on computing MaxSum and SumSelect under continual observation under both \(\epsilon\)-differential privacy and \((\epsilon,\delta)\)-differential privacy. Further, Cardoso and Rogers (2022) consider computing Histogram and TopK under continual observation in different settings, depending on whether the domain of items (in our case \(\{1,\ldots,d\}\)) is known or not, and whether or not we bound the \(L_{0}\)-norm (i.e., number of non-zero elements) of the row in which two neighboring streams may differ. The setting we consider in this paper corresponds to their _known domain, unrestricted \(L_{0}\)-sensitivity_ setting. 
They provide two algorithms for this setting, the "Meta Algorithm" which is based on the binary tree mechanism, and runs a static algorithm for each interval of the binary tree; however, it seems that they compute the top-\(k\) elements for each such interval, which does not provide the top-\(k\) elements at every time step of the stream. Their second algorithm is based on the sparse vector technique. The utilitiy of the algorithm, which they only analyze for \(k=1\), depends on a parameter \(s\), which can be seen as a bound on the number of times that there is a "significant" change in the maximum element. The error in that case is \(O(\tau\sqrt{s}\log^{3/2}(dTs))\) for \((k/\tau^{2})\)-zCDP, which corresponds to an error of roughly \(O(\epsilon^{-1}\sqrt{k\cdot s\cdot\ln(1/\delta)}\log^{3/2}(dTs))\) for \((\epsilon,\delta)\)-differential privacy. However, there is no good theoretical bound on \(s\), and it can be as large as \(\Omega(T)\) for worst-case streams. For the _known-domain, restricted \(L_{0}\)-sensitivity_ setting, Fichtenberger et al. (2022) used a different mechanism for the continual setting achieving an additive error of \(C_{\epsilon,\delta}(1+\frac{\ln(T)}{\pi})\sqrt{\ln(6dT)}\), where \(C_{\epsilon,\delta}=\frac{2}{\epsilon}\sqrt{\frac{4}{9}+\ln(\frac{1}{\delta} \sqrt{\frac{2}{\pi}})}\) for \((\epsilon,\delta)\)-differential privacy. Other related works include the papers by Qiu and Yi (2022) and Cummings et al. (2018), which study linear queries under continual observation. Fichtenberger et al. (2021) study graph algorithms under continual observation. They also show how the sparse vector technique can be used for monotone functions to get smaller additive error at the cost of an additional multiplicative error. Chan et al. (2012) and Mir et al. (2011) study computing heavy hitters on a stream. ## 3 Preliminaries Notation.We denote the set \(\{1,\ldots,n\}\) by \([n]\). ### Differential Privacy Preliminaries Data Universe.In all problems considered in this paper, the data universe is \(U=\{0,1\}^{d}\), and a data set is a stream \(x^{1},x^{2},\ldots,x^{T}\) of elements from \(U\). We denote by \(x^{t}_{i}\) the \(i\)th coordinate of \(x^{t}\). Neighbouring streams.We say two streams \(x\) and \(y\) of \(U\) are neighboring, if there exists at most one \(t^{*}\) such that \(x^{t}=y^{t}\) for all \(t\neq t^{*}\). Continual observation.An algorithm \(A\) in the continual observation model takes as input the stream of elements \(x^{1},x^{2},\ldots,x^{T}\), and at every time step \(t\) produces an output \(a^{t}=A(x^{1},\ldots,x^{t})\) which may only rely on \(x^{1}\) to \(x^{t}\). Denote \(A(x)=(a^{1},a^{2},\ldots,a^{T})\) the collection of the outputs at all time steps. **Definition 1** (Differential privacy [13]).: _A randomized algorithm \(A\) on a domain \(U^{T}\) is \((\epsilon,\delta)\)-differentially private (\((\epsilon,\delta)\)-dp) if for all \(S\in\operatorname{range}(A)\) and all neighboring \(x,y\in U^{T}\) we have_ \[\Pr[A(x)\in S]\leq e^{\epsilon}\Pr[A(y)\in S]+\delta.\] _If \(\delta=0\) then \(A\) is \(\epsilon\)-differentially private (\(\epsilon\)-dp)._ In the _adaptive continual release model_ the mechanism \(M\) interacts with a _randomized adversarial process_\(Adv\) that runs for \(T\) time steps and has no restrictions regarding time or space complexity. It knows all input and output of \(M\) up to the current time step as well as \(M\) itself, but _not_\(M\)'s random coin flips. 
Based on this knowledge, at each time step \(t\) the adversary \(Adv\) chooses the input for \(M\) for time \(t\). However, to model neighboring inputs for event-level privacy in the adaptive continual release model, the behavior of \(Adv\) needs to be slightly refined. There are two types of time steps: regular and challenge. The adversary can determine for each \(t\in[T]\) which type a time step is, under the constraint that exactly one time step can be a challenge time step. If a time step is regular, \(Adv\) outputs one value, and if it is a challenge time step, \(Adv\) outputs two values. In the latter setting an external entity, called an oracle, then uses one of the two values and sends it to \(M\). The oracle has decided before the beginning of the game whether it will send the first or the second value to \(M\). Note that this decision is known neither to \(Adv\) nor to \(M\). The goal of the adversary is to determine which decision was made, while the goal of the mechanism is to output the result of the computation, e.g., a histogram, in such a way that \(Adv\) does not find out which decision was made by the oracle. More formally, the relationship between \(Adv\) and \(M\) is modeled as a game between adversary \(Adv\) and algorithm \(M\), given in Game 1.

**Definition 2** (Differential privacy in the adaptive continual release model [14]).: _Given a mechanism \(M\), the view of the adversary \(Adv\) in game \(\Pi_{M,Adv}\) (Game 1) consists of \(Adv\)'s internal randomness, as well as the outputs of both \(Adv\) and \(M\). Let \(V^{(\mathrm{side})}_{M,Adv}\) denote \(Adv\)'s view at the end of the game run with input \(\mathrm{side}\in\{L,R\}\). Let \(\mathcal{V}\) be the set of all possible views. Mechanism \(M\) is \((\epsilon,\delta)\)-differentially private in the adaptive continual release model if, for all adversaries \(Adv\) and any \(S\subseteq\mathcal{V}\),_

\[\Pr(V_{M,Adv}^{(L)}\in S)\leq e^{\epsilon}\Pr(V_{M,Adv}^{(R)}\in S)+\delta\]

_and_

\[\Pr(V_{M,Adv}^{(R)}\in S)\leq e^{\epsilon}\Pr(V_{M,Adv}^{(L)}\in S)+\delta.\]

_We also call such a mechanism_ adaptively \((\epsilon,\delta)\)-differentially private.

We say that two probabilities \(p\) and \(q\) are _\((e^{\epsilon},\delta)\)-close_ if they satisfy \(p\leq e^{\epsilon}q+\delta\) and \(q\leq e^{\epsilon}p+\delta\). For \(\delta=0\) we say \(p\) and \(q\) are _\(e^{\epsilon}\)-close_.

**Definition 3** (\(L_{p}\)-sensitivity).: _Let \(f\) be a function \(f:U^{n}\rightarrow\mathbb{R}^{k}\). The \(L_{p}\)-sensitivity of \(f\) is defined as_

\[\max_{x,y\ \mathrm{neighboring}}||f(x)-f(y)||_{p}. \tag{1}\]

_If \(k=1\), then \(||f(x)-f(y)||_{p}=|f(x)-f(y)|\) for all \(p\). In that case, we also call (1) the sensitivity of \(f\)._

**Definition 4** (Laplace Distribution).: _The Laplace distribution centered at \(0\) with scale \(b\) is the distribution with probability density function_

\[f_{\mathrm{Lap}(b)}(x)=\frac{1}{2b}\exp\left(\frac{-|x|}{b}\right).\]

_We use \(X\sim\mathrm{Lap}(b)\) or sometimes just \(\mathrm{Lap}(b)\) to denote a random variable \(X\) distributed according to \(f_{\mathrm{Lap}(b)}(x)\)._

**Fact 1** (Theorem 3.6 in Dwork and Roth (2014): Laplace Mechanism).: _Let \(f\) be any function \(f:U^{n}\rightarrow\mathbb{R}^{k}\) with \(L_{1}\)-sensitivity \(\Delta_{1}\). Let \(Y_{i}\sim\mathrm{Lap}(\Delta_{1}/\epsilon)\) for \(i\in[k]\).
The mechanism defined as:_

\[A(x)=f(x)+(Y_{1},\ldots,Y_{k})\]

_satisfies \(\epsilon\)-differential privacy._

**Definition 5** (Normal Distribution).: _The normal distribution centered at \(0\) with variance \(\sigma^{2}\) is the distribution with the probability density function_

\[f_{N(0,\sigma^{2})}(x)=\frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)\]

We use \(X\sim N(0,\sigma^{2})\) or sometimes just \(N(0,\sigma^{2})\) to denote a random variable \(X\) distributed according to \(f_{N(0,\sigma^{2})}\).

**Fact 2** (Theorem A.1 in Dwork and Roth (2014): Gaussian mechanism).: _Let \(f\) be any function \(f:U^{n}\rightarrow\mathbb{R}^{k}\) with \(L_{2}\)-sensitivity \(\Delta_{2}\). Let \(\epsilon\in(0,1)\), \(c^{2}>2\ln(1.25/\delta)\), and \(\sigma\geq c\Delta_{2}(f)/\epsilon\). Let \(Y_{i}\sim N(0,\sigma^{2})\) for \(i\in[k]\). Then the mechanism defined as:_

\[A(x)=f(x)+(Y_{1},\ldots,Y_{k})\]

_satisfies \((\epsilon,\delta)\)-differential privacy._

As a subroutine, we use a continuous histogram algorithm that works against an adaptive adversary. The specific continuous histogram algorithm we use is the composition of \(d\) continuous counting mechanisms. We formally define the two problems next.

**Definition 6** (Continuous Counting).: _In the continuous counting problem, the input consists of \(T\) and a stream of \(T\) numbers \(x^{1},\dots,x^{T}\) with \(x^{t}\in\mathbb{N}\) for all \(t\in[T]\). Two streams \(x=x^{1},\dots,x^{T}\) and \(y=y^{1},\dots,y^{T}\) are neighboring if there is a time step \(t^{*}\) such that \(|x^{t^{*}}-y^{t^{*}}|\leq 1\) and \(x^{t}=y^{t}\) for all \(t\neq t^{*}\). The goal is to approximate at every time step \(t\) the sum of all inputs seen so far, i.e. \(\sum_{l=1}^{t}x^{l}\)._

**Definition 7** (Continuous Histogram).: _In the continuous histogram problem, the input consists of \(T\) and a stream of \(T\) vectors \(x^{1},\dots,x^{T}\) with \(x^{t}\in\mathbb{N}^{d}\) for all \(t\in[T]\). Two streams \(x=x^{1},\dots,x^{T}\) and \(y=y^{1},\dots,y^{T}\) are neighboring if there is a time step \(t^{*}\) such that \(\|x^{t^{*}}-y^{t^{*}}\|_{\infty}\leq 1\) and \(x^{t}=y^{t}\) for all \(t\neq t^{*}\). The goal is to approximate at every time step \(t\) the sum of all inputs seen so far, i.e. \(\sum_{l=1}^{t}x^{l}\)._

Denisov et al. (2022) show that \(\epsilon\)-differential privacy under continual observation implies \(\epsilon\)-differential privacy against an adaptive adversary:

**Fact 3** (Proposition 2.1 in Denisov et al. (2022)).: _Every mechanism that is \(\epsilon\)-differentially private in the continual release model is \(\epsilon\)-differentially private in the adaptive continual release model._

To apply this to our definition of continuous histogram, we have to align the neighboring definitions, i.e. require \(||x_{t}^{(L)}-x_{t}^{(R)}||_{\infty}\leq 1\) in Algorithm 1. By this and standard composition of \(\epsilon\)-differentially private algorithms, we get that any \(\epsilon\)-differentially private continuous counting algorithm with error \(O(\alpha/\epsilon)\) gives an \(\epsilon\)-differentially private continuous histogram against an adaptive adversary with error \(O(d\alpha/\epsilon)\). The binary tree mechanism from Dwork et al. (2010) gives an error bound of \(O(\epsilon^{-1}\log(1/\beta)\cdot\log^{2}T)\) for continuous counting with \(\epsilon\)-differential privacy (it is stated as \(O(\epsilon^{-1}\log(1/\beta)\cdot\log^{2.5}T)\) in Chan et al.
(2011) and Dwork and Roth (2014), but the same analysis actually gives \(O(\epsilon^{-1}\log(1/\beta)\cdot\log^{2}T)\), see appendix B). For continuous histogram, we can use \(d\) binary tree mechanisms in parallel, yielding the following fact: **Fact 4** (\(\epsilon\)-differentially private continuous histogram against an adaptive adversary).: _There is an algorithm solving the continuous histogram problem while preserving \(\epsilon\)-differential privacy in the adaptive continual release model, such that with probability \(1-\beta\), the error is bounded by \(O(\epsilon^{-1}d\log(d/\beta)\cdot\log^{2}T)\)._ For \((\epsilon,\delta)\)-dp, Fichtenberger et al. (2022) give an algorithm for continuous histogram achieving an error of \(O(\epsilon^{-1}\log(1/\delta)\log T\sqrt{d\ln(dT)})\). Since their algorithm fulfills the conditions of Theorem 2.1 in Denisov et al. (2022), that theorem yields that the same privacy guarantees hold in the adaptive continual release model. **Fact 5** (\((\epsilon,\delta)\)-differentially private continuous histogram against an adaptive adversary).: _There is an algorithm solving the continuous histogram problem while preserving \((\epsilon,\delta)\)-differential privacy in the adaptive continual release model, such that with probability \(1-\beta\), the error is bounded by \(O(\epsilon^{-1}\log(1/\delta)\log T\sqrt{d\ln(dT/\beta)})\)._ ### Problem Definitions Recall that we are given an integer \(d>0\) and the input is a stream \(x=x^{1},\dots,x^{T}\) with elements \(x^{t}\in\{0,1\}^{d}\). Let the _column sum_\(c_{i}^{t}\) at time step \(t\) be equal to \(\sum_{t^{\prime}=1}^{t}x_{i}^{t^{\prime}}\). Our main result implies new parameterized upper bounds for the following problems in the continual observation setting: * Histogram: Compute at every time step \(t\) all column sums of \(x^{1},\dots,x^{t}\), that is \((c_{i}^{t})_{i\in[d]}\). * MaxSum: Compute at every time step \(t\) the maximum column sum of \(x^{1},\dots,x^{t}\), that is: \[\max_{i\in[d]}c_{i}^{t}.\] * SumSelect: Compute at every time step \(t\) the index \(i\in[d]\) of the maximum column sum of \(x^{1},\dots,x^{t}\), that is: \[\operatorname*{argmax}_{i\in[d]}c_{i}^{t}.\] * \(\textsc{Quantile}_{q}\) for \(q\in(0,1]\): Compute at every time step \(t\) the smallest \(c_{j}^{t}\) such that \(|\{i\in[1,d]:c_{i}^{t}\leq c_{j}^{t}\}|\geq qd\). * TopK: Compute at every time step \(t\) the \(k\) largest column sums. * Top-\(k\)-Select: Compute at every time step \(t\) the indices of the \(k\) largest column sums. Note that in the continual observation setting the Histogram problem for \(d=1\) is also known as the _continual counting problem_. We first show an auxiliary lemma. **Lemma 1**.: _Let \(q\in(0,1]\). Further, let \(s=(s_{1},\ldots,s_{d})\) and \(c=(c_{1}\ldots,c_{d})\) be such that \(\max_{i=1\ldots d}|s_{i}-c_{i}|\leq\alpha\). Then \(|\textsc{Quantile}_{q}(s)-\textsc{Quantile}_{q}(c)|\leq\alpha\)._ Proof.: For a given \(q\), denote \(s^{\star}=\textsc{Quantile}_{q}(s)\) and \(c^{\star}=\textsc{Quantile}_{q}(c)\). * We have \(|\{i\in[1,d]:c_{i}\leq c^{\star}\}|\geq qd\), which implies that \(|\{i\in[1,d]:s_{i}\leq c^{\star}+\alpha\}|\geq qd\). Thus, \(s^{\star}\leq c^{\star}+\alpha\) * Further, \(|\{i\in[1,d]:c_{i}\geq c^{\star}\}|\geq(d-\lceil qd\rceil+1)\), which implies \(|\{i\in[1,d]:s_{i}\geq c^{\star}-\alpha\}|\geq(d-\lceil qd\rceil+1)\). Thus, \(s^{\star}\geq c^{\star}-\alpha\) It follows that \(c^{\star}-\alpha\leq s^{\star}\leq c^{\star}+\alpha\), as desired. 
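To make the definition of \(\textsc{Quantile}_{q}\) and the stability guarantee of Lemma 1 concrete, the following small Python check (purely illustrative; the function name and test values are our own and are not part of any mechanism in this paper) implements the quantile exactly as defined above and verifies that a coordinate-wise perturbation of size at most \(\alpha\) moves the quantile by at most \(\alpha\).

```python
import random

def quantile_q(c, q):
    """Smallest value c_j such that at least q*d of the entries of c are <= c_j."""
    d = len(c)
    for v in sorted(c):
        if sum(1 for x in c if x <= v) >= q * d:
            return v
    return max(c)  # never reached for q <= 1, kept only as a safeguard

random.seed(0)
d, alpha, q = 50, 3, 0.9
c = [random.randint(0, 100) for _ in range(d)]        # "true" column sums
s = [ci + random.randint(-alpha, alpha) for ci in c]  # perturbed by at most alpha per coordinate

# Lemma 1: the q-quantile moves by at most alpha under such a perturbation.
assert abs(quantile_q(s, q) - quantile_q(c, q)) <= alpha
print(quantile_q(c, q), quantile_q(s, q))
```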
Lemma 1 implies that \(\textsc{Quantile}_{q}\) has \(L_{1}\)-sensitivity 1 for all \(q\in(0,1]\). In particular, \(\textsc{MaxSum}=\textsc{Quantile}_{1}\), and more generally every \(\textsc{Quantile}_{i/d}\) with \(i\in[d]\), has sensitivity 1. Note that for any integer \(k>0\) it holds that TopK \(=(f_{1},\ldots,f_{k})\) for \(f_{i}=\textsc{Quantile}_{(d+1-i)/d}\) with \(1\leq i\leq k\). For Histogram, MaxSum, \(\textsc{Quantile}_{q}\), TopK and the class of queries specified in Theorem 1, we use the following error definition:

General error definition. Let \(g_{1},\ldots,g_{k}\) be functions \(g_{i}:\{\{0,1\}^{d}\}^{*}\rightarrow\mathbb{R}\) for \(i\in[k]\). For an algorithm \(A\), let \(a^{t}=A(x^{1},\ldots,x^{t})\). We define the error for algorithm \(A\) as

\[\mathrm{err}(A)=\max_{t\in[T]}\max_{i\in[k]}|g_{i}(x^{1},\ldots,x^{t})-a^{t}_{i}|\]

We say \(A\) is \((\alpha,\beta)\)-accurate for \(g_{1},\ldots,g_{k}\) if \(\mathrm{Pr}[\mathrm{err}(A)>\alpha]<\beta\). We say \(A\) is \(\alpha\)-accurate if it is \((\alpha,\beta)\)-accurate for \(\beta=1/3\). Note that SumSelect and Top-\(k\)-Select require a different error definition, since it does not make sense to compare the output indices to the indices of the maximum column sum or top-\(k\) elements directly. Instead, we compare the corresponding column sum values.

Error definition for Top-\(k\)-Select and SumSelect. Let \(i_{1}^{1},\ldots,i_{k}^{1},\ldots,i_{1}^{T},\ldots,i_{k}^{T}\) be the answers of algorithm \(A\). Let \(c_{j_{l}}^{t}\) be the \(l\)th largest column sum at time \(t\). We define the error for algorithm \(A\) as

\[\mathrm{err}_{\textsc{Top-}k\textsc{-Select}}(A)=\max_{t\in[T]}\max_{l\in[k]}|c_{j_{l}}^{t}-c_{i_{l}^{t}}^{t}|.\]

We say \(A\) is \((\alpha,\beta)\)-accurate for Top-\(k\)-Select if \(\mathrm{Pr}[\mathrm{err}_{\textsc{Top-}k\textsc{-Select}}(A)>\alpha]<\beta\).

### Probability Preliminaries

**Lemma 2**.: _Let \(Y_{1},\ldots,Y_{k}\) be independent variables with distribution \(\mathrm{Lap}(b)\) and let \(Y=\sum_{i=1}^{k}Y_{i}\). Then_

\[P\Big{(}Y>2b\sqrt{2\ln(2/\beta_{S})}\cdot\max\big{(}\sqrt{k},\sqrt{\ln(2/\beta_{S})}\big{)}\Big{)}\leq\beta_{S}.\]

Proof.: Apply Corollary 12.3 in Dwork and Roth (2014) to \(b_{1}=\cdots=b_{k}=b\).

**Fact 6**.: _Let \(Y\) be distributed according to \(\mathrm{Lap}(b)\). Then_

\[P(|Y|\geq t\cdot b)=\exp(-t)\]

**Lemma 3**.: _For a random variable \(X\sim D\), if \(\Pr[|X|>\alpha]\leq\beta\), then for \(X_{1},X_{2},\ldots,X_{k}\sim D\) i.i.d., we have \(\Pr[\max_{i}|X_{i}|>\alpha]\leq k\cdot\beta\)._

We use \(f_{X}(x)\) to denote the probability density function of a continuous random variable \(X\). For our privacy proofs, we repeatedly use the fact that if \(X\) and \(Y\) are independent random variables with joint probability density function \(f_{X,Y}(x,y)\), then \(f_{X,Y}(x,y)=f_{X}(x)\cdot f_{Y}(y)\). Thus for any event \(A(X,Y)\), we have

\[\int_{x,y}\mathds{1}[A(x,y)]f_{X,Y}(x,y)dxdy=\int_{y}\Pr_{X}[A(X,y)]f_{Y}(y)dy\]

## 4 Mechanism BoundedMaxSum for MaxSum

In this section, we will construct a mechanism for MaxSum _which requires that an upper bound \(c_{\max}\) on the maximum column sum is known beforehand._ In the following two sections we develop the necessary tools to remove that requirement.
Our algorithm uses as a black-box a continuous histogram algorithm in the adaptive continual release model and it achieves an additive error of roughly \(O(\operatorname{err}(c_{\max},\beta/3)+\log c_{\max}+\log T)\) with probability at least \(1-\beta\), where \(\operatorname{err}(c_{\max},\beta/3)\) is the error guarantee (that holds with probability \(\geq 1-\beta/3\)) of a continuous histogram mechanism run on a stream of length \(c_{\max}\). Plugging in the error of the best known such histogram algorithm gives an additive error roughly \(O(d\log^{2}c_{\max}+\log T)\). The main idea in this section is a generalization of the idea in Dwork et al. (2015): We want to partition the stream into intervals such that _the change in the value of the maximum column sum (maximum over all time and not just within the interval) between the beginning and the end of an interval is \(\Theta(d\log^{2}c_{\max}+\log T)\)_ with the desired probability. In particular, we get that the maximum column sum increases by at least \(1\) within each interval with the desired probability. This bounds the number of intervals by at most \(c_{\max}\). Note that using Dwork et al. (2015) as a black box would have created intervals where the _sum of the entries of all columns in an interval_ increases by at least \(1\), which would give a looser upper bound of \(d\cdot c_{\max}\) for the number of intervals, resulting in a larger additive error. More importantly, our approach generalizes to a large class of queries with a potentially smaller upper bound on the query value. For determining the output given after each time step we use the following simple rule: All time steps at the beginning and within an interval output the same value, which is initially \(0\). At the end of each interval, the algorithm determines an approximate histogram and its maximum value, and uses it as the new output at this time step and during the next interval. The main contribution in this section is our partitioning algorithm. It applies the sparse vector technique to the current differentially private maximum column sum, i.e., we compare a noisy maximum column sum with a noisy threshold. The reason is as follows: The change of the maximum column sum within one interval depends on the previous data. Thus, if we were to use the _actual_ maximum column sum for the partitioning algorithm, then the error would grow linearly in \(c_{\max}\), since then _all_ of the \(c_{\max}\) instances of the sparse vector technique would have to access _all_ of the data. To circumvent this, we use a differentially private approximation of the maximum column sum to determine the end of each interval. This private approximation is obtained by combining the last output of a differentially private histogram with the updates in the stream of the current interval. Since the continual dp histogram algorithm and the partitioning algorithm depend on each other, we use a histogram algorithm that satisfies differential privacy in the adaptive continual release model to prove that our algorithm is differentially private under continual observation. The full algorithm is given in Algorithm 2. 
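Before turning to the analysis, the following Python sketch illustrates the interplay just described. It is only a simplified rendering of the idea behind Algorithm 2, not the algorithm itself: the function and variable names are ours, and the call that produces a fresh noisy histogram at the end of an interval is a stand-in stub (exact counts plus fresh Laplace noise) for the adaptive continuous histogram mechanism \(H\) of Fact 4.

```python
import numpy as np

def bounded_max_sum_sketch(stream, eps, K, c_max):
    """Sketch of SVT-based partitioning for MaxSum with a known bound c_max.
    Within an interval, exact updates are added on top of the last noisy histogram;
    when the noisy maximum crosses the noisy threshold, the interval ends and a
    fresh noisy histogram (stub for the mechanism H) becomes the new output."""
    rng = np.random.default_rng(0)
    d = len(stream[0])
    true_hist = np.zeros(d)          # exact column sums, never released directly
    s = np.zeros(d)                  # last noisy histogram + exact counts of the open interval
    out, j = 0.0, 1                  # current released value and interval index
    threshold = j * K + rng.laplace(scale=4 / eps)
    outputs = []
    for x in stream:
        x = np.asarray(x, dtype=float)
        true_hist += x
        s += x
        if s.max() + rng.laplace(scale=8 / eps) > threshold and j <= c_max:
            # interval ends: obtain a fresh noisy histogram (stub for H) and a new threshold
            s = true_hist + rng.laplace(scale=2 * d / eps, size=d)
            out = s.max()
            j += 1
            threshold = j * K + rng.laplace(scale=4 / eps)
        outputs.append(out)
    return outputs
```

For instance, `bounded_max_sum_sketch([[1, 0], [0, 1], [1, 0]], eps=1.0, K=50, c_max=10)` returns one estimate per time step; all steps inside an interval repeat the value released when the previous interval closed, exactly as in the output rule above.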
### Privacy We prove privacy of Algorithm 2 by proving privacy of a meta algorithm given in Algorithm 3, and noting that Algorithm 2 is an extension of Algorithm 3 with postprocessing of the histogram to output the noisy column maximum and with setting \(g=\max\), \(s_{i}=0\) for all \(i\in[d]\), \(\Delta=c_{\max}\), and \(K_{j}=j\cdot K\) for all \(j\in[\Delta]\), where \(K\) is a parameter that only depends on the non-private parameters of Algorithm 2. We use the following notation. Let * \(\mu_{t}\) be the \(\operatorname{Lap}(8/\epsilon)\) noise added to the maximum in Line 7 in Algorithm 3 at time \(t\), and * \(\tau_{j}\) be the \(\operatorname{Lap}(4/\epsilon)\) noise added to \(\operatorname{K}_{j}\) in Line 4 of Algorithm 3. To show differential privacy we will first argue that the output of \(H\) is \((\epsilon/2,\delta/e^{\epsilon/2})\)-differential private and then combine it with an analysis of the sparse vector technique to argue that mechanism consisting of the partitioning and the histogram part is \((\epsilon,\delta)\)-differentially private. However there is the following complication to consider: The partitioning of Algorithm 3 interacts with the differentially private histogram mechanism \(H\) in a mutually dependent way. Whenever the current interval \(j\) ends, Algorithm 3 gives one \(d\)-dimensional input consisting of the true counts within interval \(j\) to \(H\). It then uses the output of \(H\) to initialize its noisy histogram vector \((s_{1},...,s_{d})\), which it uses in turn (together with random noise and \(\tilde{K}_{j}\)) to determine the end of the next interval. If we now consider two neighboring streams \(x\) and \(y\), let \(t^{*}\) be the time step where \(x\) and \(y\) differ. Let \(j\) be the interval that contains time step \(t^{*}\). This will be the \(j\)-th input for \(H\), i.e., for \(H\) this is time step \(j\). Now note that \(j\) is a random variable depending on the random choices of \(H\) and Algorithm 3. Thus, the time step at which the inputs for \(H\) differ, depends on the past random choices of \(H\). Note that in non-adaptive differential privacy the time step where the two input streams differ must be fixed before the mechanism is executed (and makes its random choices) and, thus, the input for \(H\) does not follow the definition of neighboring input streams used in non-adaptive differential privacy. Instead we require that \(H\) is an adaptively \((\epsilon/2,\delta)\)-DP continuous histogram mechanism. We show that this condition is enough in the following privacy proof. **Lemma 4**.: _Let \(\epsilon>0\) and \(\delta\in[0,1]\). If \(H\) is an adaptively \((\epsilon/2,\delta/e^{\epsilon/2})\)-DP continuous histogram mechanism, then Algorithm 3 satisfies \((\epsilon,\delta)\)-differential privacy._ Proof.: Let \(x\) and \(y\) be two neighboring streams that differ at time \(t^{*}\). Let \(S\) be a subset of the possible outputs of the algorithm and let \(\mathcal{A}(x)\) be the output stream of Algorithm 3 with input stream \(x\). We show that \[\Pr\left[\mathcal{A}(x)\in S\right]\leq e^{\epsilon}\cdot\Pr\left[\mathcal{A} (y)\in S\right]+\delta\] The arguments also hold when swapping the identities of \(x\) and \(y\) since they are symmetric, which gives us the privacy guarantee. Thus we focus on proving the inequality above. To show differential privacy we will first argue that Algorithm 3 acts like an adversarial process in the adaptive continual release model towards the histogram algorithm \(H\). 
From our assumption on \(H\) it then follows that the output of \(H\) is \((\epsilon/2,\delta/e^{\epsilon/2})\)-differentially private. We will combine this fact with an analysis of the sparse vector technique to argue that the mechanism consisting of both the partitioning and the histogram part is \((\epsilon,\delta)\)-differentially private. Recall that an adversary in the adaptive continual release model presented in Section 3 is given by a privacy game, whose generic form is presented in Game 1. Due to the complicated interaction between the partitioning and \(H\), the specification of such an adversarial process in our setting is intricate and is given in Game 4. As the behavior of our adversary will depend on the input streams \(x\) and \(y\), we denote the adversary by \(Adv(x,y)\). Note that \(Adv(x,y)\) does not equal \(Adv(y,x)\), as the first parameter is the input stream for which the \(s_{i}\) values are computed, and therefore also the stream on which the partition is based. The important observation from this game is that there is only one interval, i.e., only one time step for \(H\), where the adversary outputs two values, and in all other time steps it outputs only one value. Thus the adversarial process that models the interaction between the partitioning algorithm and \(H\) fulfills the condition of the adaptive continual release model. As we assume that \(H\) is \((\epsilon/2,\delta/e^{\epsilon/2})\)-differentially private in that model, it follows that for all possible neighboring input streams \(x\) and \(y\) for \(\Pi_{H,Adv(x,y)}\) and all possible sides \(L\) and \(R\) it holds that

\[\Pr(V_{H,Adv(x,y)}^{(L)}\in S)\leq e^{\epsilon/2}\Pr(V_{H,Adv(x,y)}^{(R)}\in S)+\delta/e^{\epsilon/2}\]

and

\[\Pr(V_{H,Adv(x,y)}^{(R)}\in S)\leq e^{\epsilon/2}\Pr(V_{H,Adv(x,y)}^{(L)}\in S)+\delta/e^{\epsilon/2}\]

The same also holds with the positions of \(x\) and \(y\) switched. Since the choice of side \(L/R\) merely decides whether the counts \(c(x)\) or \(c(y)\) are sent by the game to \(H\), we abuse notation and specify directly which count is sent to \(H\), as \(V_{H,Adv(x,y)}^{(x)}\) or \(V_{H,Adv(x,y)}^{(y)}\). Now, let \(S\) be a subset of all possible outputs of Algorithm 3. Recall that the view of the adversary consists of its internal randomness as well as its outputs and the output of \(H\). The behavior of \(Adv(x,y)\) is completely determined by its inputs \(x\) and \(y\), the thresholds, the function \(g\), and its random coin flips. However, for our analysis only the output of \(H\) matters. Thus, we ignore the other values in the view and say that a view \(V\) of the adversary \(Adv(x,y)\) satisfies \(V\in S\), if the streams of \((s_{1},\dots,s_{d})\) returned from \(H\) for all intervals match the output sequences of Algorithm 3 in \(S\).
We then have \[\Pr(\mathcal{A}(x)\in S)=\Pr(V^{(x)}_{H,Adv(x,y)}\in S)\] and \[\Pr(\mathcal{A}(y)\in S)=\Pr(V^{(y)}_{H,Adv(y,x)}\in S)\] As \(H\) is adaptively \((\epsilon/2,\delta/e^{\epsilon/2})\)-differentially private, we have \[\Pr(V^{(x)}_{H,Adv(y,x)}\in S)\leq e^{\epsilon/2}\Pr(V^{(y)}_{H,Adv(y,x)}\in S )+\delta/e^{\epsilon/2}\] It remains to prove \[\Pr(V^{(x)}_{H,Adv(x,y)}\in S)\leq e^{\epsilon/2}\Pr(V^{(x)}_{H,Adv(y,x)}\in S), \tag{2}\] since then \[\begin{split}\Pr(\mathcal{A}(x)\in S)&=\Pr(V^{(x)} _{H,Adv(x,y)}\in S)\\ &\leq e^{\epsilon/2}\Pr(V^{(x)}_{H,Adv(y,x)}\in S)\\ &\leq e^{\epsilon}\Pr(V^{(y)}_{H,Adv(y,x)}\in S)+e^{\epsilon/2}( \delta/e^{\epsilon/2})\\ &=e^{\epsilon}\Pr(\mathcal{A}(y)\in S)+\delta\end{split} \tag{3}\] Note that when we run \(Adv(x,y)\) on side \(x\), the interval partitioning is created according to \(x\) and according to the outputs of \(H\). Also for each interval, the input given to \(H\) is based on the counts for \(x\). When we run \(Adv(y,x)\) on side \(x\), we partition according to \(y\) and outputs of \(H\), and the input given to \(H\) is based on the counts for \(x\). Thus in both cases the same input is given to \(H\), and hence, to prove Inequality 2, it suffices to show that _when running \(Adv(x,y)\) on side \(x\) and \(Adv(y,x)\) on side \(x\), the probabilities of getting a given partition into intervals are \(e^{\epsilon}\)-close_. To simplify notation, we denote running \(Adv(x,y)\) on side \(x\) as \(\mathrm{run}(x)\), and \(Adv(y,x)\) on side \(x\) as \(\mathrm{run}(y)\). Call the time interval \((p_{j-1},p_{j}]\) the \(j^{th}\)_interval_. If \(\ell\) is the value of \(j\) at the end of processing the input stream, set \(p_{\ell}=T\). Note that the probabilities of computing any fixed sequence of intervals \((p_{0},p_{1}],\ldots,(p_{j-2},p_{j-1}]\) with \(p_{j-1}<t^{*}\) and \(j-1\leq\Delta\) are the same on both \(\mathrm{run}(x)\) and \(\mathrm{run}(y)\), since the streams are equal at all time steps before \(t^{*}\). We want to argue that for any \(\lambda>t^{*}\) the probability of \(p_{j}=\lambda\) is \(e^{\epsilon/2}\)-close on \(\mathrm{run}(x)\) and \(\mathrm{run}(y)\). First note that if \(j>\Delta\), then the stream is ignored after \(p_{j-1}\) for both \(\mathrm{run}(x)\) and \(\mathrm{run}(y)\), so \(e^{\epsilon/2}\)-closeness follows trivially. If \(j\leq\Delta\), we condition on all the noises added in the algorithm before time \(p_{j-1}\) as well as the randomness of \(H\) up until time \(p_{j-1}\). Fixing a particular time \(\lambda>p_{j-1}\), we first show that the probability of interval \(j\) ending at \(\lambda\) (i.e., \(p_{j}=\lambda\)) is \(e^{\epsilon/2}\)-close on \(\mathrm{run}(x)\) and \(\mathrm{run}(y)\). For this to happen, the noisy max sum needs to cross the threshold at time \(\lambda\), and never before that in the time steps \((p_{j-1},\lambda-1]\). We use \(s^{t}(x)\) to denote the value of \(s\) at time \(t\) on \(\mathrm{run}(x)\). We use below \(f_{\tau_{j}}(z)\) as the pdf of the random variable \(\tau_{j}\). 
Since the choice of \(\tau_{j}\) and \(\mu_{\lambda}\) happen independently, \[\Pr[\text{Interval $j$ ends at $\lambda$ on $\mathrm{run}(x)$}]\] \[=\int_{z}\Pr_{\mu_{p_{j-1}},\ldots,\mu_{\lambda}}\big{[}\big{(} g(s^{\lambda}(x))+\mu_{\lambda}\geq\mathrm{K}_{j}+z\big{)}\wedge\big{(}g(s^{t}(x)) +\mu_{t}<\mathrm{K}_{j}+z,\quad\forall p_{j-1}<t<\lambda\big{)}\big{]}\cdot f_ {\tau_{j}}(z)\,dz\] Rearranging to have the \(\mu\) terms on one side, \[=\int\Pr_{\mu_{p_{j-1}},\ldots,\mu_{\lambda}}\big{[}\big{(}\mu_{\lambda}\geq \mathrm{K}_{j}+z-g(s^{\lambda}(x))\big{)}\wedge\big{(}\mu_{t}<\mathrm{K}_{j}+z -g(s^{t}(x)),\quad\forall p_{j-1}<t<\lambda\big{)}\big{]}\cdot f_{\tau_{j}}(z )\,dz\] Since the \(\mu\)'s are independent chosen of each other, we can split the first term as follows. \[=\int\Pr_{\mu_{\lambda}}\big{[}\mu_{\lambda}\geq\mathrm{K}_{j}+z-g(s^{ \lambda}(x))\big{]}\cdot\prod_{p_{j-1}<t<\lambda}\Pr_{\mu_{t}}\big{[}\mu_{t}< \mathrm{K}_{j}+z-g(s^{t}(x))\big{]}\cdot f_{\tau_{j}}(z)\,dz \tag{4}\] Now we make the following observations. Since \(\tau_{j}\sim\mathrm{Lap}(4/\epsilon)\), \[f_{\tau_{j}}(z)\leq e^{\epsilon/4}\cdot f_{\tau_{j}}(z+1) \tag{5}\] Since \(x\) and \(y\) differ in one row, at each time step \(\lambda\) we have that the \(L_{\infty}\) norm of the true histograms of \(x\) and \(y\) can differ by at most \(1\). Since we condition on the noises being the same until \(p_{j-1}\) (which was the last time the algorithm obtained a new output from \(H\)), we get that \(||s^{\lambda}(x)-s^{\lambda}(y)||_{\infty}\leq 1\) and therefore, \(g(s^{\lambda}(x))\leq g(s^{\lambda}(y))+1\). Thus, \[\Pr_{\mu_{\lambda}}[\mu_{\lambda}\geq\mathrm{K}_{j}+z-g(s^{ \lambda}(x))] \leq\Pr_{\mu_{\lambda}}[\mu_{\lambda}\geq\mathrm{K}_{j}+z-g(s^{ \lambda}(y))-1]\] \[\leq e^{\epsilon/4}\cdot\Pr_{\mu_{\lambda}}[\mu_{\lambda}\geq \mathrm{K}_{j}+z-g(s^{\lambda}(y))-1+2]\] (since \[\mu\] is \[\mathrm{Lap}(8/\epsilon)\] ) \[=e^{\epsilon/4}\cdot\Pr_{\mu_{\lambda}}[g(s^{\lambda}(y))+\mu_{\lambda}\geq \mathrm{K}_{j}+(z+1)]\] (6) Finally, since \(g(s^{t}(y))\leq g(s^{t}(x))+1\), for any \(p_{j-1}<t<\lambda\), \[\Pr_{\mu_{t}}[\mu_{t}<\mathrm{K}_{j}+z-g(s^{t}(x))] \leq\Pr_{\mu_{t}}[\mu_{t}<\mathrm{K}_{j}+z+1-g(s^{t}(y))]\] \[=\Pr_{\mu_{t}}[g(s^{t}(y))+\mu_{t}<\mathrm{K}_{j}+(z+1)] \tag{7}\] Putting Equations 5, 6, 7 together into Equation 4, we get that \[\begin{split}&\Pr[\text{Interval $j$ ends at $\lambda$ on $\operatorname{run}(x)$}]\\ &\leq e^{\epsilon/2}\cdot\int_{\mu_{\lambda}}\!\!\Pr[g(s^{ \lambda}(y))+\mu_{\lambda}\geq\operatorname{K}_{j}+(z+1)]\cdot\prod_{p_{j-1}< t<\lambda}\Pr_{\mu_{t}}[g(s^{t}(y))+\mu_{t}<\operatorname{K}_{j}+(z+1)]\cdot f_{ \tau_{j}}(z+1)\,dz\\ &\leq e^{\epsilon/2}\cdot\Pr[\text{Interval $j$ ends at $\lambda$ on $\operatorname{run}(y)$}]\end{split} \tag{8}\] Now we have shown that the probability that interval \(j\) (that contains time \(t^{*}\) where the two input streams differ) ends at time step \(\lambda>p_{j-1}\) on \(\operatorname{run}(x)\) and \(\operatorname{run}(y)\) are \(e^{\epsilon/2}\)-close. Recall that both \(\operatorname{run}(x)\) and \(\operatorname{run}(y)\) both send the counts for \(x\) to \(H\). Thus, we condition next on the segment \((p_{j-1},p_{j}]\) containing \(t^{*}\) starting and ending at the same time step for both runs. With this condition and since the streams are the same anywhere else, the probabilities \(\Pr(V^{(x)}_{H,Adv(x,y)}\in S)\) and \(\Pr(V^{(x)}_{H,Adv(y,x)}\in S)\) are \(e^{\epsilon/2}\)-close for any subset \(S\) of possible outputs. 
Thus, (2) and therefore (3) follow. ### Accuracy In this section, we show that Algorithm 2 is \((\alpha,\beta)\)-accurate for MaxSum, with \(\alpha=O(\operatorname{err}(c_{\max},\beta/3)+\epsilon^{-1}\cdot\log(T/\beta))\). Since the output at any time step within an interval is the noisy output obtained from \(H\) at the end of the most recently closed interval, our error is comprised of two terms: 1) the error in the noisy output obtained from \(H\), and 2) the increase in true maximum since the last interval was closed. Since the first is bounded by \(\alpha_{BMS}\) (defined below), our task reduces to showing a bound on the second. We do this by showing that at the end of any interval, the true maximum is not too far away from the threshold that it crossed. Since it is possible that the final interval was closed without a threshold being crossed, we deal with this case separately. Note that we also need to show that the true maximum strictly increases within an interval. This is because our histogram mechanism \(H\) was initialized for a stream of length \(c_{\max}\), and is only accurate as long as \(\leq c_{\max}\) numbers are given as input. In particular, showing that the true maximum strictly increases within an interval restricts the number of intervals (and hence the length of the input stream to \(H\)) by \(c_{\max}\). Below we use the following variables: 1. \(\alpha_{\mu}=\frac{8}{\epsilon}\cdot\log\Big{(}\frac{3T}{\beta}\Big{)}\), 2. \(\alpha_{\tau}=\frac{4}{\epsilon}\cdot\log\Big{(}\frac{3c_{\max}}{\beta}\Big{)}\), 3. \(\alpha_{BMS}=\operatorname{err}(c_{\max},\beta/3)+\alpha_{\mu}+\alpha_{\tau}\), 4. \(K=3\alpha_{BMS}\), and 5. for a concrete histogram mechanism \(BT[d]\) obtained by composing \(d\) binary tree mechanisms for each column, the error bound of the mechanism is \(\operatorname{err}(c_{\max},\beta/3)=4\epsilon^{-1}d\log(c_{\max})\log(6dc_{ \max}/\beta)\) by Lemma 52 in Appendix B. **Lemma 5**.: _With probability \(\geq 1-\beta\), the following bounds hold simultaneously for all \(t\) and all \(j\):_ \[|\mu_{t}|\leq\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)=\alpha_ {\mu} \tag{9}\] \[|\tau_{j}|\leq\frac{4}{\epsilon}\cdot\log\left(\frac{3c_{\max}}{\beta}\right) =\alpha_{\tau} \tag{10}\] \[\max_{t}\{\|s^{t}-h^{t}\|_{\infty}\}\leq\text{err}(c_{\max},\beta/3) \tag{11}\] _where \(s^{t}=(s_{1}^{t},s_{2}^{t},\ldots,s_{d}^{t})\) is the noisy histogram maintained by Algorithm 2 at time \(t\), and \(h^{t}=(h_{1}^{t},h_{2}^{t},\ldots,h_{d}^{t})\) is the true histogram at time \(t\)._ Proof.: By Fact 6, \(\mu_{t}\sim\mathrm{Lap}(8/\epsilon)\) is at most \(\alpha_{\mu}\) with probability \(1-\beta/(3T)\) for any \(t\). Further, \(\tau_{j}\sim\mathrm{Lap}(4/\epsilon)\) is at most \(\alpha_{\tau}\) with probability \(1-\beta/(3c_{\max})\) for any \(j\). In the algorithm, there are \(T\) instances of \(\mu_{t}\), and \(c_{\max}\) instances of \(\tau_{j}\). Applying Lemma 3 with \(k=T\), resp. \(k=c_{\max}\), we obtain the first two bounds each with probability \(\geq 1-\beta/3\). Since \(s^{t}\) is the sum of the noisy histogram obtained from \(H\) at the end of the last interval and the true segment counts within this interval, the only error comes from the continuous histogram mechanism \(H\). Thus, we get the final inequality with probability \(\geq 1-\beta/3\) by the guarantee given by the continuous histogram mechanism \(H\). A union bound over all three sets of bounds gives us the lemma. 
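For intuition about the size of these quantities, the following snippet evaluates \(\alpha_{\mu}\), \(\alpha_{\tau}\), the binary-tree histogram error from item 5, and the resulting threshold spacing \(K=3\alpha_{BMS}\) for one arbitrary choice of parameters. The parameter values are examples only, and natural logarithms are used throughout, so the printed numbers are indicative rather than exact constants of the analysis.

```python
import math

eps, beta = 1.0, 1 / 3          # privacy and failure parameters (example values)
T, d, c_max = 10**6, 10, 10**4  # stream length, dimension, assumed bound on MaxSum

alpha_mu = (8 / eps) * math.log(3 * T / beta)                                # item 1
alpha_tau = (4 / eps) * math.log(3 * c_max / beta)                           # item 2
err_hist = 4 * d * math.log(c_max) * math.log(6 * d * c_max / beta) / eps    # item 5
alpha_bms = err_hist + alpha_mu + alpha_tau                                  # item 3
K = 3 * alpha_bms                                                            # item 4

print(f"alpha_mu={alpha_mu:.0f}  alpha_tau={alpha_tau:.0f}  "
      f"err={err_hist:.0f}  alpha_BMS={alpha_bms:.0f}  K={K:.0f}")
```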
Finally we show that Algorithm 2 is \(O(\mathrm{err}(c_{\max},\beta/3)+\alpha_{\mu}+\alpha_{\tau})\)-accurate with probability at least \(1-\beta\). **Lemma 6**.: _Algorithm 2 is \((\alpha,\beta)\)-accurate for MaxSum, with \(\alpha=O\left(\mathrm{err}(c_{\max},\beta/3)+\alpha_{\mu}+\alpha_{\tau}\right)\)._ Proof.: We define \(\alpha_{BMS}=\mathrm{err}(c_{\max},\beta/3)+\alpha_{\mu}+\alpha_{\tau}\), and note that by definition of \(K\), \(K=3\alpha_{BMS}\). Assume that all the random variables are bounded as in Lemma 5. Fix any time step \(t\). Let \(j\) be the interval that \(t\) belongs to. Let \(s_{max}^{t}\) be \(\max_{i\in[d]}s_{i}\) and \(M_{t}\) be the true maximum column sum at time \(t\). Further, let \(\mathrm{out}^{t}\) be the output of the algorithm at time \(t\). If \(t<T\) and the interval ends at this timestep, i.e., \(t=p_{j}\), then we can bound the error by just \(\mathrm{err}(c_{\max},\beta/3)\). More generally, for any such \(p_{k}\), we can bound the difference between the noisy maximum and the true maximum by \(\mathrm{err}(c_{\max},\beta/3)\), since the only noise added to each column sum is the continual counting noise. Thus, \[|M_{p_{k}}-s_{max}^{p_{k}}|\leq\mathrm{err}(c_{\max},\beta/3) \tag{12}\] The same also holds for \(p_{j}=T\) if the condition in Line 10 is true. We now deal with the case when the interval does not end at this timestep, i.e., \(t\neq p_{j}\), or \(t=T\) and the condition in Line 10 is false. Then as \(M_{t}\) is monotonically non-decreasing in \(t\) it holds that \[|M_{t}-\mathrm{out}^{t}| =|M_{t}-s_{max}^{p_{j-1}}|\] \[\leq|M_{t}-M_{p_{j-1}}|+|M_{p_{j-1}}-s_{max}^{p_{j-1}}|\] \[\leq|M_{p_{j}}-M_{p_{j-1}}|+|M_{p_{j-1}}-s_{max}^{p_{j-1}}| \tag{13}\] Using \(k=j-1\) in Equation 12, we bound the second term in Equation 13 by \(\mathrm{err}(c_{\max},\beta/3)\). Thus we now focus on bounding \(|M_{p_{j}}-M_{p_{j-1}}|\). We consider two cases, depending on whether or not the condition in Line 10 is true at \(p_{j}\); note that it can only be false if \(p_{j}=T\). _Case 1: The condition in Line 10 is true at \(p_{j}\)._ We have \[|M_{p_{j}}-M_{p_{j-1}}|\leq|M_{p_{j}}-\mathrm{K}_{j}|+|\mathrm{K}_{j}-\mathrm{ K}_{j-1}|+|M_{p_{j-1}}-\mathrm{K}_{j-1}| \tag{14}\] We bound \(|\mathrm{K}_{j}-\mathrm{K}_{j-1}|\) by K, and so we focus on bounding \(|M_{p_{j}}-\mathrm{K}_{j}|\) for all \(j\). We do this by giving an upper and lower bound separately on \(M_{p_{j}}-\mathrm{K}_{j}\). Since the threshold is crossed at time \(p_{j}\), we have that \[s_{max}^{p_{j}}>K_{j}-\alpha_{\mu}-\alpha_{\tau}.\] Combining this and Equation 12 with \(k=j\), we get that \[M_{p_{j}}-K_{j}\geq-\mathrm{err}(c_{\max},\beta/3)-\alpha_{\mu}-\alpha_{\tau}=- \alpha_{BMS} \tag{15}\] Next, we show an upper bound of \(\alpha_{BMS}+1\) on \(M_{p_{j}}-K_{j}\) by induction. **Claim 1**.: _For all segments \(j\) such that the condition in Line 10 is true at time \(p_{j}\), we have \(M_{p_{j}}-K_{j}\leq\alpha_{BMS}+1\) and \(M_{p_{j}}-M_{p_{j-1}}>1\)._ Proof.: For \(p_{0}=0\), defining \(K_{0}:=0\), we trivially have \(M_{p_{0}}-K_{0}\leq 0\leq\alpha_{BMS}+1\). Assume inductively that \(M_{p_{j-1}}-K_{j-1}\leq\alpha_{BMS}+1\). Using this assumption and Equation 15, the increase in the true column maximum between two threshold crossings is \[M_{p_{j}}-M_{p_{j-1}} \geq K_{j}-K_{j-1}-2\alpha_{BMS}-1\] \[=K-2\alpha_{BMS}-1\] \[\geq 3\alpha_{BMS}-2\alpha_{BMS}-1\] (by definition of \[K\] ) \[\geq 2\] Thus, in every interval, the true maximum increases by at least \(2\). 
In particular, there cannot be two _consecutive_ time steps \(t\) and \(t+1\) where the corresponding thresholds are crossed. Thus, \(p_{j}-1\neq p_{j-1}\). Note that \(p_{j}-1\) then necessarily belongs to interval \(j\). Since further \(j\leq c_{\max}\) (because the condition in Line 10 is true at time \(p_{j}\)) it means that the first condition in Line 10 was false at time \(p_{j}-1\). Together with the fact that \(M\) is a sensitivity-\(1\) monotonically non-decreasing function, we get \[M_{p_{j}}-1\leq M_{p_{j}-1}\leq K_{j}+\alpha_{\tau}+\alpha_{\mu} +\operatorname{err}(c_{\max},\beta/3)\leq K_{j}+\alpha_{BMS} \tag{16}\] which ends the inductive proof. Since \(c_{\max}\) is a bound at the maximum column sum at time \(T\), Claim 1 implies that \(j<c_{\max}\) at the end of the stream. Also, equation 15 and Claim 1 together give \[|M_{p_{j}}-K_{j}|\leq\alpha_{BMS}+1\] Substituting this into Equation 14, \[|M_{p_{j}}-M_{p_{j-1}}|\leq K+2\alpha_{BMS}+2\] which, when plugged into Equation 13 along with \(K=3\alpha_{KM}\), gives \[|M_{t}-\operatorname{out}^{t}| \leq 5\alpha_{BMS}+\operatorname{err}(c_{\max},\beta/3)+2\] \[\leq 6\alpha_{BMS}\] \[=O\left(\operatorname{err}(c_{\max},\beta/3)+\alpha_{\mu}+\alpha _{\tau}\right)\] as required. _Case 2: The condition in Line 10 is false at \(p_{j}\)._ We have \(p_{j}=T\). In Case 1, we show that for each \(j^{\prime}<j\), we have \(M_{p_{j^{\prime}}}-M_{p_{j^{\prime}-1}}>1\). Note that this in particular implies \(j<c_{\max}\). Thus, if the condition in Line 10 is false, it means \[s_{\max}^{p_{j}}<K_{j}+\alpha_{\mu}+\alpha_{\tau},\] and thus, \[M_{p_{j}}<K_{j}+\alpha_{\mu}+\alpha_{\tau}+\operatorname{err}(c_ {\max},\beta/3)=K_{j}+\alpha_{BMS}\] for \(\alpha_{BMS}=\alpha_{\mu}+\alpha_{\tau}+\operatorname{err}(c_{\max},\beta/3)\). On the other hand, since the condition in Line 10 was true at time \(p_{j-1}\), we have \[M_{p_{j-1}}\geq K_{j}-K-\alpha_{BMS}.\] We get \[|M_{p_{j}}-M_{p_{j-1}}|=M_{p_{j}}-M_{p_{j-1}}\leq K+2\alpha_{BMS}.\] Thus, by equation 13, we get \[|M_{t}-\operatorname{out}^{t}| \leq K+2\alpha_{BMS}+\operatorname{err}(c_{\max},\beta/3)\] \[\leq 6\alpha_{BMS}\] \[=O\left(\operatorname{err}(c_{\max},\beta/3)+\alpha_{\mu}+\alpha_{ \tau}\right)\] which proves the claimed error bound. **Corollary 2**.: _Algorithm 2 using as mechanism \(H\) the histogram mechanism from Fact 4 is \(\epsilon\) differentially private and \((\alpha,\beta)\)-accurate for MaxSum, with \(\alpha=O\left(\epsilon^{-1}\cdot\left(d\ln(d/\beta)\log^{2}(c_{\max})+\log(T/ \beta)\right)\right)\)._ Proof.: For the histogram mechanism consisting from Fact 4 consisting of \(d\) binary tree counting mechanisms we have \(\operatorname{err}(c_{\max},\beta/3)=4\epsilon^{-1}d\log(c_{\max})\log(6dc_{ \max}/\beta)\) by Lemma 52 in Appendix B. Using Lemma 6 with this value of \(\operatorname{err}(c_{\max},\beta/3)\), \(\alpha_{\mu}=\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)\), and \(\alpha_{\tau}=\frac{4}{\epsilon}\cdot\log\left(\frac{3c_{\max}}{\beta}\right)\)gives the corollary. ## 5 Doubling Mechanism for Segmentation of the Input Stream In this section, we give a mechanism that divides the input stream into _segments_. With probability at least \(1-\beta\), this mechanism ensures that (1) within each segment, the value of the maximum column sum approximately doubles, and (2) the number of segments \(\ell\) is \(\widetilde{O}(\log(dc_{\max}/\epsilon)+\log\log(T/\beta))\). 
Our algorithm additionally outputs a noisy histogram at the end of each segment and we bound the size of the noise by \(O(d\sqrt{\ell}(\log d+\log\log(T/\beta))/\epsilon)\). Note that the mechanism gives no output at time steps that do not end an interval. In the next section we will combine this with the algorithm from the previous section to get an algorithm for estimating MaxSum without the requirement of being given an upper bound. Similar to the previous section, the algorithm works by using the sparse vector technique for computing the partition, and "resetting" privacy at the end of each segment by computing a differentially private histogram. As before the histogram algorithm has to be able to work with an adaptive adversarial process. We use a very simple such \(\epsilon/2\)-differentially private histogram mechanism using the Laplace mechanism, by adding fresh Laplace noise scaled with \(2d/\epsilon\) to the running column sums every time a segment closes. The sparse vector algorithm in this section doubles the threshold whenever the threshold is crossed, while in the previous section the threshold was increased by an additive value. Thus, we call this mechanism the _(threshold) doubling mechanism_. The sparse vector technique with multiplicatively increasing threshold was used previously in Fichtenberger et al. (2021), but for a different problem, namely for maintaining an estimate of a monotonically increasing function. Chan et al. (2011) partition the stream into segments of exponentially growing stream length. However, since this does not depend on private data they could use a non-private algorithm for the partitioning. The full algorithm is given in Algorithm 5. ### Privacy We fix some notation. Let * \(\mu_{t}\) be the \(\operatorname{Lap}(8/\epsilon)\) noise added to the maximum in Line 6, * \(\tau_{j}\) be the \(\operatorname{Lap}(4/\epsilon)\) noise added to \(\operatorname{K}_{j}\), and * \(\gamma_{i}^{j}\) be the \(\operatorname{Lap}(2d/\epsilon)\) noise added to \(s_{i}\) at the end of segment \(j\) in Line 9. **Lemma 7**.: _Algorithm 5 satisfies \(\epsilon\)-differential privacy._ Proof.: Note that Algorithm 5 can be seen as post-processing of Algorithm 3 with \(g=\max\), \(\Delta=\infty\), \(K_{j}=2^{j-1}\), and \(s_{i}=0\) for all \(i\in[d]\). Then we only need to prove that the histogram mechanism \(H\) which computes a running sum of all inputs and adds fresh Laplace noise scaled with \(\operatorname{Lap}(2d/\epsilon)\) to each coordinate for every new input is \((\epsilon/2)\)-differentially private under continual observation. By Fact 3 it then follows that \(H\) is \((\epsilon/2)\)-differentially private in the adaptive continual release model. Now note that \(H\) is \((\epsilon/2)\)-differentially private by parallel composition and the choice of scale \(2d/\epsilon\) for the Laplace mechanism, since the inputs to \(H\) only differ at a single time step, and the \(L_{1}\) norm of the difference is at most \(d\). ``` Input: Stream \(x^{1},x^{2},\ldots,x^{T}\in\{0,1\}^{d}\). 
Output: End of each segment together with an estimate of the histogram at that time step 1\(p_{0}\gets 0\), \(j\gets 1\) 2\(s_{i}\gets 0\) for all \(i\in[d]\) 3\(\mathrm{K}_{j}\gets 1\), \(\tilde{\mathrm{K}}_{j}\leftarrow\mathrm{K}_{j}+\mathrm{Lap}(4/\epsilon)\) 4for\(t\in[T]\)do 5\(s_{i}=s_{i}+x_{i}^{t}\) for all \(i\in[d]\) 6if\(\max s_{i}+\mathrm{Lap}(8/\epsilon)>\tilde{\mathrm{K}}_{j}\)and\(j<\log T\)then 7\(p_{j}\gets t\), \(j\gets j+1\) 8\(\mathrm{K}_{j}\gets 2\cdot\mathrm{K}_{j-1}\), \(\tilde{\mathrm{K}}_{j}\leftarrow\mathrm{K}_{j}+\mathrm{Lap}(4/\epsilon)\) 9\(s_{i}\gets s_{i}+\mathrm{Lap}(2d/\epsilon)\) 10output\((t,s_{1},s_{2},\ldots,s_{d})\) 11 12 end for 13 14 end for 15\(p_{j}=T\) ``` **Algorithm 5**Doubling Mechanism for Segmentation of the Input Stream ### Accuracy Let \(\ell\) be the total number of segments produced by the algorithm, which is a random variable that is upper bounded by \(\log T\), and let \(\Gamma_{i}^{j}=\sum_{k=1}^{j}\gamma_{i}^{k}\). **Lemma 8**.: _With probability \(\geq 1-\beta\), the following bounds hold simultaneously for all \(t\in[T]\), \(j\in[\log T]\), and \(i\in[d]\):_ \[|\mu_{t}|\leq\frac{8}{\epsilon}\cdot\log\bigg{(}\frac{3T}{\beta}\bigg{)}=: \alpha_{\mu} \tag{17}\] \[|\tau_{j}|\leq\frac{4}{\epsilon}\cdot\log\bigg{(}\frac{3\log T}{\beta}\bigg{)} =:\alpha_{\tau} \tag{18}\] \[|\Gamma_{i}^{j}|\leq\frac{4d}{\epsilon}\cdot\sqrt{2}j\cdot\log\bigg{(}\frac{3 d\log T}{\beta}\bigg{)} \tag{19}\] Proof.: In the algorithm, there are \(T\) instances of \(\mu_{t}\sim\mathrm{Lap}(8/\epsilon)\), and at most \(\log T\) instances of \(\tau_{j}\sim\mathrm{Lap}(4/\epsilon)\). Applying Fact 6 and Lemma 3, we obtain the first two bounds each with probability \(\geq 1-\beta/3\). Thus, using the concentration bound for the sum of \(j\) Laplace variables given in Lemma 2 with \(b=2d/\epsilon\), \(k=j\), \(\beta_{S}=\beta/(3d\log T)\) gives us the third bound with probability \(\geq 1-\beta/3\). Union bound over all three sets of bounds gives us the lemma. Below we use the following variables: 1. \(\alpha_{\mu}=\frac{8}{\epsilon}\cdot\log\bigg{(}\frac{3T}{\beta}\bigg{)}\), 2. \(\alpha_{\tau}=\frac{4}{\epsilon}\cdot\log\bigg{(}\frac{3c_{\max}}{\beta} \bigg{)}\), 3. \(L=\min\{\log\big{(}20\epsilon^{-2}dc_{\max}\big{)}+4\log\log(T/\beta)+\log\log( 3d\log T/\beta),\log T\}\), 4. \(\alpha_{\Gamma}=\frac{4d}{\epsilon}\cdot\sqrt{2L}\cdot\log\left(\frac{3d\log T}{ \beta}\right)\), and 5. \(\alpha_{DM}=\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}+L\). Let \(c_{\max}=\max_{i}c_{i}^{T}\) be the maximum column sum in the entire stream. We first show an upper bound of \(L\) which is roughly \(\widetilde{O}(\log c_{\max}+\log d)\) on the number \(\ell\) of segments produced by the algorithm. **Lemma 9**.: _Assume that the random variables are bounded as in Lemma 8. Then Algorithm 5 creates at most \(L=\min\{\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/\beta)+\log\log( 3d\log T/\beta),\log T\}\) segments._ Proof.: We condition on the noises being bounded as stated in Lemma 8. A trivial upper bound of \(\log T\) on \(\ell\) (and thus \(L\)) is obtained from the stopping condition of the algorithm in Line 6. At time \(p_{\ell}\) when the last segment was closed2, we have that for \(i=\operatorname*{argmax}_{k}s_{k}^{p_{\ell}}\), Footnote 2: If the last segment was closed at \(p_{\ell-1}\), then the following holds for \(\ell-1\), which gives the same asymptotic bounds on \(\ell\). 
\[s_{i}^{p_{\ell}}+\mu_{p_{\ell}}\geq 2^{\ell}+\tau_{\ell}.\] Let \(c_{i}^{t}\) be the \(i\)-th true column sum at time \(t\). Taking \(i^{*}=\operatorname*{argmax}_{k}c_{k}^{p_{\ell}}\), we have that \(c_{i}^{p_{\ell}}\leq c_{i^{*}}^{p_{\ell}}\). Thus \[2^{\ell} \leq s_{i}^{p_{\ell}}+\mu_{p_{\ell}}-\tau_{\ell}\] \[=c_{i}^{p_{\ell}}+\Gamma_{i}^{\ell}+\mu_{p_{\ell}}-\tau_{\ell}\] \[\leq c_{i^{*}}^{p_{\ell}}+\Gamma_{i}^{\ell}+\mu_{p_{\ell}}-\tau_{\ell}\] (by definition of \[i^{*}\] ) \[\leq c_{i^{*}}^{p_{\ell}}+\frac{4d}{\epsilon}\cdot\sqrt{2\ell} \cdot\log\left(\frac{3d\log T}{\beta}\right)+\alpha_{\mu}+\alpha_{\tau}. \tag{20}\] We now get that \[\ell \leq\log\left[c_{\max}+\frac{4d}{\epsilon}\cdot\sqrt{2\ell} \cdot\log\left(\frac{3d\log T}{\beta}\right)+\frac{8}{\epsilon}\cdot\log \left(\frac{3T}{\beta}\right)+\frac{4}{\epsilon}\cdot\log\left(\frac{3\log T} {\beta}\right)\right]\] \[\leq\log\left[c_{\max}+\frac{4d}{\epsilon}\cdot\sqrt{2\ell} \cdot\log\left(\frac{3d\log T}{\beta}\right)+\frac{12}{\epsilon}\cdot\log \left(\frac{3T}{\beta}\right)\right]\] \[\leq\log c_{\max}+\log\frac{4\sqrt{2}}{\epsilon}+\log d+\frac{1} {2}\log\log T+\log\log\left(\frac{3d\log T}{\beta}\right)+\log\frac{12}{ \epsilon}+\log\log\left(\frac{3T}{\beta}\right)\] (upper bounding log of sum by sum of logs) \[\leq\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/\beta)+ \log\log(3d\log T/\beta)\] as required, where the third inequality follows from \(\ell\leq\log T\). We use this to show that the \(s_{i}\) values in Algorithm 5 are at most \(\alpha_{\Gamma}=\frac{4d}{\epsilon}\cdot\sqrt{2L}\cdot\log\left(\frac{3d\log T }{\beta}\right)\) away from the true column sums at all times. **Lemma 10**.: _Assume that the random variables are bounded as in Lemma 8. Let \(t\in[T]\) and \(i\in[d]\). Then \(|s_{i}^{t}-c_{i}^{t}|\leq\alpha_{\Gamma}=O\left(\frac{d\sqrt{L}\cdot\log(d\log T /\beta)}{\epsilon}\right)\)._ Proof.: We condition on the noises being bounded as stated in Lemma 8. Thus we get an upper bound of \(L\) on the number of segments from Lemma 9. Let \(j\) be the segment to which time \(t\) belongs. Then \[|s_{i}^{t}-c_{i}^{t}| =|c_{i}^{t}+\Gamma_{i}^{j}-c_{i}^{t}|\] \[\leq\frac{4d}{\epsilon}\cdot\sqrt{2j}\cdot\log\left(\frac{3d\log T }{\beta}\right)\] \[\leq\alpha_{\Gamma}\] as required. We finally bound the true maximum column sum increase in a single segment. **Lemma 11**.: _Assume that the random variables are bounded as in Lemma 8. Then in Algorithm 5, the true maximum column sum for segment \(j\) increases by at most \(2^{j-1}+2\alpha_{DM}\), where \(\alpha_{DM}=\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}+L=O\left(\frac{d\sqrt{ L}\log(d\log T/\beta)+\log(T/\beta)}{\epsilon}\right)\)._ Proof.: We condition on the noises being bounded as in Lemma 8. Recall that the time interval \((p_{j-1},p_{j}]\) is the \(j^{th}\) segment. First, assume that either \(j<\ell\), or \(j=\ell\) and the condition in Line 6 was true at time \(p_{j}\). Let \(M_{t}\) be the true maximum column sum at time \(t\), and \(\mathrm{K}_{j}\) be the \(j^{th}\) threshold value. Then \[|M_{p_{j}}-M_{p_{j-1}}|\leq|\mathrm{K}_{j}-\mathrm{K}_{j-1}|+|M_{p_{j}}- \mathrm{K}_{j}|+|M_{p_{j-1}}-\mathrm{K}_{j-1}|\] The definition of \(\mathrm{K}_{j}\) directly gives us that \(|\mathrm{K}_{j}-\mathrm{K}_{j-1}|=2^{j-1}\). Thus our task reduces to bounding \(|M_{p_{j}}-\mathrm{K}_{j}|\) for all \(j\). We do this in two parts. Let \(s_{max}^{t}=\max_{i}s_{i}^{t}\) be the maximum noisy column sum at time \(t\). 
First, from Lemma 10, we get that for all \(t\), \[|M_{t}-s_{max}^{t}|\leq\alpha_{\Gamma} \tag{22}\] and since at time \(p_{j}\) the threshold was crossed, we have that \[s_{max}^{p_{j}}>\mathrm{K}_{j}-\alpha_{\mu}-\alpha_{\tau}.\] Putting these two equations together, we get that \[M_{p_{j}}-K_{j}>-\left(\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}\right).\] This gives us a lower bound. Now we show an upper bound. Let \(t<p_{j}\) be the last time step in which a segment was not closed. If a segment was closed at every time step until \(p_{j}\) of the algorithm, then let \(t=0\). Since at every time step between \(t\) and \(p_{j}\) a segment must have been closed and the total number of segments is at most \(\ell\), we get that \(t\geq p_{j}-\ell\). Let \(k\) be the segment that \(t\) belonged to. If \(t=0\), we set \(k=s_{max}^{0}=K_{0}=0\) in the following equation. Then at time \(t\), \[s_{max}^{t}\leq K_{k}+\alpha_{\mu}+\alpha_{\tau}\] Using Equation 22 and the above equation, we get \[M_{t}\leq K_{k}+\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma} \tag{23}\] Since \(t\geq p_{j}-\ell\) and the maximum column sum is a sensitivity one function, \[M_{t}\geq M_{p_{j}}-\ell\] Since the thresholds do not decrease with time, \(K_{j}\geq K_{k}\). Note that \(\ell\leq L\) by Lemma 9. Using these two facts, and substituting the above equation into Equation 23, we get that \[M_{p_{j}}-K_{j}\leq\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}+L=\alpha_{DM}\] Thus putting it all together, we get that \[|M_{p_{j}}-M_{p_{j-1}}|\leq 2^{j-1}+2\cdot\alpha_{DM}\] as required. Now, for \(j=\ell\), if the condition in Line 6 was false, we have two cases. First, assume \(\ell=\log T\). Then at time \(p_{l-1}\), we have \[g_{max}(s^{p_{\ell-1}})+\mu_{p_{\ell-1}}>K_{\ell-1}+\tau_{\ell-1}=T/2+\tau_{ \ell-1}\] and therefore \[M_{p_{\ell-1}}>T/2-\alpha_{\mu}-\alpha_{\tau}-\alpha_{\Gamma}.\] Since \(M_{p_{\ell}}\leq M_{T}\leq T\), we have \[M_{p_{\ell}}-M_{p_{\ell-1}}\leq T/2+\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma }=2^{\ell-1}+\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}.\] Second, assume \(\ell<\log T\). Then \[s^{p_{j}}_{\max}\leq K_{j}+\alpha_{\mu}+\alpha_{\tau},\] and thus \[M_{p_{j}}\leq K_{j}+\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}.\] Since the threshold was crossed at time \(p_{j-1}\), we have \[M_{p_{j-1}}\geq K_{j-1}-(\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}).\] Therefore, \[|M_{p_{j}}-M_{p_{j-1}}|=M_{p_{j}}-M_{p_{j-1}} \leq(K_{j}-K_{j-1})+2(\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma})\] \[=2^{j-1}+2(\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma})\] \[\leq 2^{j-1}+2\alpha_{DM}\] which proves the claim. **Theorem 2**.: _With probability at least \(1-\beta\), we simultaneously have the following guarantees from Algorithm 5._ 1. _the total number of segments produced by the algorithm is upper bounded by_ \(L=\min\{\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/\beta)+\log\log(3 d\log T/\beta),\log T\}\)_,_ 2. _the difference between the noisy and the true histogram at all times stored by the algorithm is upper bounded by_ \(\alpha_{\Gamma}=O\left(\frac{d\sqrt{L}\cdot\log(d\log T/\beta)}{\epsilon}\right)\)_, and_ 3. _the true maximum column sum increase within segment_ \(j\) _is at most_ \(2^{j-1}+2\alpha_{DM}\)_, where_ \(\alpha_{DM}=O\left(\frac{d\sqrt{L}\log(d\log T/\beta)+\log(T/\beta)}{\epsilon}\right)\)_._ Proof.: Assume that the Laplace random variables in Algorithm 5 are bounded as in Lemma 8. Then the three points of the theorem follow from Lemmas 9, 10, and 11 respectively. 
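Since the listing of Algorithm 5 above is typeset rather compactly, the following Python transcription of its steps may be easier to follow. It is a sketch that takes the whole stream as a list of 0/1 vectors and returns the segment ends together with the noisy histograms released there; the base-2 logarithm in the stopping condition, the random seed, and the return format are our reading of the pseudocode, not part of the paper's statement.

```python
import numpy as np

def doubling_mechanism(stream, eps):
    """Segment the stream so that the noisy maximum column sum roughly doubles per
    segment, releasing a noisy histogram (with persistent Lap(2d/eps) noise) at each cut."""
    rng = np.random.default_rng(0)
    T, d = len(stream), len(stream[0])
    s = np.zeros(d)                     # noisy running column sums
    j, K = 1, 1.0                       # segment index and current threshold
    K_noisy = K + rng.laplace(scale=4 / eps)
    segments = []                       # (time step, noisy histogram) at each segment end
    for t, x in enumerate(stream, start=1):
        s += np.asarray(x, dtype=float)
        if s.max() + rng.laplace(scale=8 / eps) > K_noisy and j < np.log2(T):
            j += 1
            K *= 2                      # threshold doubles at every crossing
            K_noisy = K + rng.laplace(scale=4 / eps)
            s = s + rng.laplace(scale=2 * d / eps, size=d)   # fresh noise, kept in s
            segments.append((t, s.copy()))
    return segments
```

Note that, as in the pseudocode, the Laplace noise added at a segment end stays inside \(s\), which is exactly why the accumulated noise \(\Gamma_{i}^{j}\) in the analysis is a sum of \(j\) Laplace variables.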
``` Input: Stream \(x^{1},x^{2},\ldots,x^{T}\in\{0,1\}^{d}\). Output: Estimate of the maximum column sum at every time step \(t\in\mathbb{N}\) (no bound on \(c_{\max}\) is required). 1\(L\leftarrow\log c_{\max}+4\log d+5\log\log T+2\log\left(12/\epsilon\right)+2\log \log\left(1/\beta\right)\) 2\(\alpha_{\Gamma}\leftarrow\frac{4\sqrt{2}}{\epsilon}\cdot d\sqrt{L}\cdot\log \left(\frac{2d\log T}{\beta}\right)\) 3\(\alpha_{DM}\leftarrow\frac{12}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right) +\alpha_{\Gamma}+L\) 4\(p_{0}\gets 0\), \(j\gets 1\) 5\(s_{i}\gets 0\) for all \(i\in[d]\) 6\(\mathrm{K}_{j}\gets 1\), \(\tilde{K}_{j}\leftarrow\mathrm{K}_{j}+\mathrm{Lap}(4/\epsilon)\) 7 Initialize Alg 7 with parameters \((0,s_{1},s_{2},\ldots,s_{d},\alpha_{\Gamma},2^{j-1}+2\alpha_{DM},T)\) 8for\(t\in[T]\)do 9\(s_{i}=s_{i}+x_{t}^{t}\) for all \(i\in[d]\) 10if\(\max s_{i}+\mathrm{Lap}(8/\epsilon)>\tilde{\mathrm{K}}_{j}\)and\(j<\log T\)then 11\(p_{j}=t\) 12\(s_{i}\gets s_{i}+\mathrm{Lap}(2d/\epsilon)\) 13\(j\gets j+1\) 14\(\mathrm{K}_{j}\gets 2\cdot\mathrm{K}_{j-1}\), \(\tilde{\mathrm{K}}_{j}\leftarrow\mathrm{K}_{j}+\mathrm{Lap}(4/\epsilon)\) 15 Terminate current Alg 7 16 Initialize new Alg 7 instance with parameters \((t,s_{1},s_{2},\ldots,s_{d},\alpha_{\Gamma},2^{j-1}+2\alpha_{DM},T)\) 17output\(\max_{i}\boldsymbol{s}_{i}\) 18 19 end for 20else 21feed \(x^{t}\) to Alg 7 22output returned value of Alg 7 23 24 end if 25 26 end for 27\(p_{j}=T\) ``` **Algorithm 6**Two-Level Mechanism for MAXSUM ## 6 Two-Level Mechanism for MaxSum In this section, we combine the two mechanisms from the previous two sections to get an algorithm for MaxSum. The first level of the mechanism is the same as Algorithm 5, which partitions the input stream into segments. For each such segment the second level algorithm is called, which is a modified version of Algorithm 2 and is given in Algorithm 7. The main difference to Algorithm 2 is that it does not start each column sum from \(0\), but instead it is given as input (1) a noisy histogram to initialize each column sum, (2) an upper bound on the amount of noise in the histogram, and (3) an upper bound on how much the maximum column sum (with the given initial column sum values) can increase. The error of the input histogram has to be taken into account in the new partitioning algorithm, which results in a slightly more complicated algorithm than Algorithm 2. The full two-level algorithm is given in Algorithm 6. In this section we will refer to Algorithm 6 without the lines referring to Algorithm 7 as the _doubling mechanism_, and to Algorithm 7 as the _modified BoundedMaxSum mechanism_. ### Privacy **Lemma 12**.: _Algorithm 6 satisfies \(2\epsilon\)-differential privacy._ Proof.: We deal with the outputs of the calls to Alg 7 separately from the outputs of Alg 6 in Line 17. Let \(x\) and \(y\) be two neighboring input streams. First, note that since the instantiations of the modified BoundedMaxSum mechanism do not affect the outputs on Line 17, we can use Lemma 7 to prove that the doubling mechanism (since the outputs in Line 17 of the two-level mechanism are exactly the same as run on the doubling mechanism) is \(\epsilon\)-differentially private. Now we condition on all the internal random variables (namely, the Laplace random variables in Lines 6, 10, 12, and 14) of the two-level mechanism being fixed such that both \(x\) and \(y\) lead to the same sequence of segments, and argue about the privacy of the various modified BoundedMaxSum mechanisms. 
Since the segments produced by the doubling mechanism are fixed, all the modified BoundedMaxSum mechanisms operate on disjoint parts of the stream. Each instantiation of the modified BoundedMaxSum mechanism is \(\epsilon\)-DP by Lemma 4. Since they operate on disjoint parts of the stream, by parallel composition, all instantiations of the modified BoundedMaxSum mechanism together satisfy \(\epsilon\)-DP. Naive sequential composition now gives us the \(2\epsilon\)-DP guarantee.

### Accuracy

#### 6.2.1 Algorithm 7

We first analyze the accuracy of Algorithm 7, assuming that the input noisy column sums given to the algorithm are at most an additive error \(\alpha_{\Gamma}\) away from the true column sums, and that the increase in the true maximum column sum is bounded by \(\Delta\). This property is shown to hold for the two-level algorithm with probability \(\geq 1-\beta\) in Lemmas 10 and 11. We use the following notation. Let * \(\mu_{t}\) be the \(\mathrm{Lap}(8/\epsilon)\) noise added to \(\max_{i\in[d]}s_{i}\) in Line 9 in Algorithm 7 at time \(t\), and let * \(\tau_{j}\) be the \(\mathrm{Lap}(4/\epsilon)\) noise added to \(\mathrm{K}_{j}\) in Line 11 in Algorithm 7.

**Lemma 13**.: _With probability \(\geq 1-\beta\), the following bounds hold simultaneously for all \(t\in[T]\) and \(j\in[\Delta]\):_ \[|\mu_{t}|\leq\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)=:\alpha_{\mu} \tag{24}\] \[|\tau_{j}|\leq\frac{4}{\epsilon}\cdot\log\left(\frac{3\Delta}{\beta}\right)=:\alpha_{\tau} \tag{25}\] \[\max_{t}\{\text{continuous histogram error at time }t\}\leq\text{err}(\Delta,\beta/3) \tag{26}\]

Proof.: The first two follow from properties of Laplace random variables and a union bound, and the third by assumption on \(H\).

Below we use the following variables: 1. \(\alpha_{\mu}=\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)\), 2. \(\alpha_{\tau}=\frac{4}{\epsilon}\cdot\log\left(\frac{3\Delta}{\beta}\right)\), 3. \(\alpha_{TL}=\alpha_{\Gamma}+\text{err}(\Delta,\beta/3)+\alpha_{\mu}+\alpha_{\tau}\), and 4. \(K=3\alpha_{TL}\). 5. For a concrete histogram mechanism \(BT[d]\) obtained by composing \(d\) binary tree mechanisms, one for each column, the error bound of the mechanism is \(\text{err}(\Delta,\beta/3)=4\epsilon^{-1}d\log(\Delta)\log(6d\Delta/\beta)\) by Lemma 52 in Appendix B.

**Lemma 14**.: _Suppose Algorithm 7 is initialized with initial column sum estimates \(\{s_{i}\}_{i\in[d]}\) that are guaranteed to be at most an additive factor \(\alpha_{\Gamma}\) away from the true column sums. Further, assume that the true maximum column sum changes by at most \(\Delta\) during the execution of Algorithm 7. Then Algorithm 7 is \((\alpha,\beta)\)-accurate for MaxSum, with \(\alpha=O\left(\alpha_{\Gamma}+\text{err}(\Delta,\beta/3)+\alpha_{\mu}+\alpha_{\tau}\right)=O(\alpha_{\Gamma}+\text{err}(\Delta,\beta/3)+\epsilon^{-1}\cdot\left(\log(T/\beta)+\log(\Delta/\beta)\right))\)._

Proof.: We show that if Algorithm 7 is (a) instantiated with a noisy histogram such that for each \(i\in[d]\) the initial \(i\)-th column sum has an additive error of absolute value at most \(\alpha_{\Gamma}\), and (b) if the true maximum column sum increases by at most \(\Delta\) throughout the stream, then the outputs of the algorithm are \(O\left(\alpha_{\Gamma}+\text{err}(\Delta,\beta/3)+\alpha_{\mu}+\alpha_{\tau}\right)\)-accurate. Assume that all the Laplace random variables are bounded as in Lemma 13. Fix any time step \(t\). Let \(j\) be the segment that \(t\) belongs to.
Let \(s^{t}_{max}\) be \(\max_{i\in[d]}s_{i}\) and \(M_{t}\) be the true maximum column sum at time \(t\). Further, let \(\text{out}^{t}\) be the output of the algorithm at time \(t\). Let \(\text{err}(\Delta,\beta/3)\) be an upper bound on the maximum error accrued in the continuous histogram mechanism. If \(t<t_{\infty}\) and an interval ends at this timestep, i.e., \(t=p_{j}\), then \(\text{out}^{t}=s^{p_{j}}_{max}\) and we can bound the difference between the noisy maximum and the true maximum by \(\alpha_{\Gamma}+\text{err}(\Delta,\beta/3)\), since the only error for each noisy column sum is the error of \(\alpha_{\Gamma}\) at initialization and the error accrued in the continuous histogram mechanism. Note that this is in general true for any time step when an interval is closed. Thus, for every interval \(k\) (except possibly the final interval) it holds that \[|M_{p_{k}}-s^{p_{k}}_{max}|\leq\alpha_{\Gamma}+\text{err}(\Delta,\beta/3). \tag{27}\] Note that the above holds for \(p_{j}=t_{\infty}\) if the condition in Line 9 is true. We now deal with the case when the segment does not end at this timestep, i.e., \(t\neq p_{j}\), or \(t=t_{\infty}\) and the condition in Line 9 is false. Then the value \(\text{out}^{p_{j-1}}\), which equals \(s^{p_{j-1}}_{max}\), is returned. Thus, \[|M_{t}-\text{out}^{t}| =|M_{t}-s^{p_{j-1}}_{max}|\] \[\leq|M_{t}-M_{p_{j-1}}|+|M_{p_{j-1}}-s^{p_{j-1}}_{max}|\] \[\leq|M_{p_{j}}-M_{p_{j-1}}|+|M_{p_{j-1}}-s^{p_{j-1}}_{max}|, \tag{28}\] where the last inequality holds because of monotonicity of the maximum. Using Equation 27 for segment \(j-1\), we bound the second term above by \(\alpha_{\Gamma}+\operatorname{err}(\Delta,\beta/3)\). Thus we now focus on bounding \(|M_{p_{j}}-M_{p_{j-1}}|\). We consider two cases, depending on whether or not the condition at Line 9 is true at \(p_{j}\). Note that it can be false only when \(p_{j}=t_{\infty}\). _Case 1: The condition in Line 9 is true at \(p_{j}\)_ We have as earlier \[|M_{p_{j}}-M_{p_{j-1}}|\leq|M_{p_{j}}-\operatorname{K}_{j}|+| \operatorname{K}_{j}-\operatorname{K}_{j-1}|+|M_{p_{j-1}}-\operatorname{K}_{j- 1}|. \tag{29}\] We bound \(|\operatorname{K}_{j}-\operatorname{K}_{j-1}|\) by \(\operatorname{K}\) and focus on bounding \(|M_{p_{j}}-\operatorname{K}_{j}|\) for all \(j\). We do this by giving an upper and lower bound on \(M_{p_{j}}-\operatorname{K}_{j}\). First, since the threshold is crossed at time \(p_{j}\), we have that \[s_{max}^{p_{j}}>K_{j}-\alpha_{\mu}-\alpha_{\tau}\] Using Equation 27 with \(k=j\), we get that \[M_{p_{j}}-K_{j}\geq-\alpha_{\Gamma}-\operatorname{err}(\Delta, \beta/3)-\alpha_{\mu}-\alpha_{\tau}=-\alpha_{TL} \tag{30}\] Next, we show an upper bound on \(M_{p_{j}}-K_{j}\) by induction. **Claim 2**.: _For all segments \(j\) such that the condition in Line 9 is true at time \(p_{j}\), \(M_{p_{j}}-K_{j}\leq\alpha_{TL}+1\) and \(M_{p_{j}}-M_{p_{j-1}}>1\)._ Proof.: For \(p_{0}=t_{0}\), defining \(K_{0}:=\max_{i}s_{i}^{t_{0}}\), our assumption (a) at the beginning of this proof implies that \(M_{p_{0}}-K_{0}\leq\alpha_{\Gamma}\leq\alpha_{TL}+1\). Assume that \(M_{p_{j-1}}-K_{j-1}\leq\alpha_{TL}+1\). Using this assumption and Equation 30, the increase in true maximum between two threshold crossings is \[M_{p_{j}}-M_{p_{j-1}} \geq K_{j}-K_{j-1}-2\alpha_{TL}-1\] \[\geq K-2\alpha_{TL}-1\] \[\geq 3\alpha_{TL}-2\alpha_{TL}-1\] (by definition of \[K\] ) \[\geq 2\] Thus, in every interval, the true maximum increases by at least 2. 
In particular, this implies that there cannot be two consecutive timesteps \(t\) and \(t+1\) when the corresponding thresholds are crossed. Thus, \(p_{j}-1\neq p_{j-1}\). Note that \(p_{j}-1\) then necessarily belongs to segment \(j\). Since further \(j\leq\Delta\) (because the condition in Line 9 is true at time \(p_{j}\)) it means that the first condition in Line 9 was false at time \(p_{j}-1\). Together with the fact that \(M\) is a sensitivity-1 monotonically non-decreasing function, we get \[M_{p_{j}}-1\leq M_{p_{j}-1}\leq K_{j}+\alpha_{\tau}+\alpha_{\mu}\leq K_{j}+\alpha_{TL}, \tag{31}\] which ends the inductive proof.

Since by assumption \(\Delta\) is a bound on the maximum column sum increase throughout the algorithm, Claim 2 implies that \(j<\Delta\) at the end of the algorithm. Also, Equation 30 and Claim 2 together give \[|M_{p_{j}}-K_{j}|\leq\alpha_{TL}+1\] Substituting this into Equation 29 gives \[|M_{p_{j}}-M_{p_{j-1}}|\leq K+2\alpha_{TL}+2\] Now when plugged into Equation 28 along with \(K=3\alpha_{TL}\), this gives \[|M_{t}-\operatorname{out}^{t}|\leq 5\alpha_{TL}+\alpha_{\Gamma}+\operatorname{err}(\Delta,\beta/3)+2\leq 6\alpha_{TL}=O\left(\alpha_{\Gamma}+\operatorname{err}(\Delta,\beta/3)+\alpha_{\mu}+\alpha_{\tau}\right)\] as required.

_Case 2: The condition in Line 9 is false at \(p_{j}\)._ This in particular implies that \(p_{j}=t_{\infty}\). For any previous interval \(j^{\prime}<j\), we showed above that \(M_{p_{j^{\prime}}}-M_{p_{j^{\prime}-1}}\geq 2\). Note that this in particular implies \(j<\Delta\). Thus, if the condition in Line 9 is false, then \[s_{\max}^{p_{j}}<K_{j}+\alpha_{\mu}+\alpha_{\tau},\] and thus, \[M_{p_{j}}<K_{j}+\alpha_{\Gamma}+\alpha_{\mu}+\alpha_{\tau}+\text{err}(\Delta,\beta/3)=K_{j}+\alpha_{TL}\] On the other hand, since the condition in Line 9 was true at time \(p_{j-1}\), we have \[M_{p_{j-1}}\geq K_{j}-K-\alpha_{TL}\] We get \[|M_{p_{j}}-M_{p_{j-1}}|=M_{p_{j}}-M_{p_{j-1}}\leq K+2\alpha_{TL}.\] Thus, by Equation 28, we get \[|M_{t}-\text{out}^{t}|\leq K+2\alpha_{TL}+\text{err}(\Delta,\beta/3)\leq 6\alpha_{TL}=O\left(\alpha_{\Gamma}+\text{err}(\Delta,\beta/3)+\alpha_{\mu}+\alpha_{\tau}\right)\] which proves the claimed error bound.

#### 6.2.2 Algorithm 6

Below we use the following variables: 1. \(L=\min\{\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/\beta)+\log\log(3d\log T/\beta),\log T\}\), 2. \(\alpha_{\Gamma}=\frac{4d}{\epsilon}\cdot\sqrt{2L}\cdot\log\Big{(}\frac{3d\log T}{\beta}\Big{)}\), 3. \(\alpha_{DM}=\alpha_{\Gamma}+L+8\epsilon^{-1}\log(3T/\beta)+4\epsilon^{-1}\log(3\log T/\beta)\). In what follows, \(O_{\log\log}\) hides \(\log\log(d,T,c_{\max},1/\epsilon,1/\beta)\) terms.

**Lemma 15**.: _Algorithm 6 is \((\alpha,2\beta)\)-accurate for MaxSum, with_ \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\Big{(}\epsilon\cdot\text{err}(2^{L+2},\beta/3L)+d\log(d/\beta)\cdot\sqrt{\log(dc_{\max}/\epsilon)}+\log(dT/\epsilon\beta)\Big{)}\right).\]

Proof.: We first argue about the doubling mechanism. Using Theorem 2, we get that the following guarantees simultaneously hold with probability \(\geq 1-\beta\). 1. the total number of segments produced by the doubling mechanism is upper bounded by \(L\), where \(L=\min\{\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/\beta)+\log\log(3d\log T/\beta),\log T\}\), 2.
for every time step \(t\in[T]\), the difference between the true histogram and the noisy histogram stored by the doubling mechanism is upper bounded by \(\alpha_{\Gamma}=O\left(\frac{d\sqrt{L}\cdot\log(d\log T/\beta)}{\epsilon}\right)\), and 3. the true maximum column sum increase within segment \(j\) is at most \(2^{j-1}+2\alpha_{DM}\), where \(\alpha_{DM}=O\left(\frac{d\sqrt{L}\log(d\log T/\beta)+\log(T/\beta)}{\epsilon}\right)\). We condition on these guarantees. We now argue about the accuracy when the modified BoundedMaxSum mechanism returns an output. For the \(j^{th}\) instantiation of the modified BoundedMaxSum mechanism, the maxsum increase bound \(\Delta\) is defined as \(2^{j-1}+2\alpha_{DM}\). Taking \(\beta^{\prime}=\beta/3L\) in Lemma 14 gives us that the \(j^{th}\) modified BoundedMaxSum mechanism has the following accuracy with probability \(\geq 1-\beta/L\) \[\alpha=O\left(\epsilon^{-1}\cdot d\sqrt{L}\cdot\log(d\log(TL/\beta))+\text{ err}(\Delta,\beta/3L)+\epsilon^{-1}\cdot\log(TL/\beta)+\epsilon^{-1}\cdot\log(L \Delta/\beta)\right) \tag{32}\] We first upper bound \(\Delta\). Since \(j\leq L\), \[\Delta=2^{j-1}+2\alpha_{DM}\leq 2^{L}+2\alpha_{DM}\] We show that \(\alpha_{DM}\leq 2^{L+1}\), which lets us obtain an upper bound of \(\Delta\leq 2^{L+2}\). Recall that \(\alpha_{DM}=L+\alpha_{\Gamma}+\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{ \beta}\right)+\frac{4}{\epsilon}\cdot\log\left(\frac{3\log T}{\beta}\right)\). We bound \(L\) trivially by \(2^{L}\). We now bound the rest of the term by \(2^{L}\). First, we bound \(\alpha_{\Gamma}\) \[\alpha_{\Gamma} =\frac{4d}{\epsilon}\cdot\sqrt{2L}\cdot\log\left(\frac{3d\log T}{ \beta}\right)\] \[\leq\frac{4d}{\epsilon}\cdot\sqrt{2\log T}\cdot\log\left(\frac{3 d\log T}{\beta}\right)\] (since \[L\leq\log T\] ) Thus \[\alpha_{\Gamma}+\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{ \beta}\right)+\frac{4}{\epsilon}\cdot\log\left(\frac{3\log T}{\beta}\right) \leq\frac{4d}{\epsilon}\cdot\sqrt{2\log T}\cdot\log\left(\frac{3 d\log T}{\beta}\right)+\frac{12}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)\] \[\leq c_{\max}+\frac{4d}{\epsilon}\cdot\sqrt{2\log T}\cdot\log \left(\frac{3d\log T}{\beta}\right)+\frac{12}{\epsilon}\cdot\log\left(\frac{3 T}{\beta}\right)\] \[\leq 2^{L}\] (by definition of \[L\] and ( 21 ) ) which gives \(\alpha_{DM}\leq 2^{L+1}\). This gives us the required upper bound on \(\Delta\) of \(2^{L+2}\). Next, we show that \(\log L=O_{\log\log}(1)\). 
This follows, since \[L=\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/\beta)+\log\log(3d\log T /\beta)\] and so \[\log L=\log\log\left(\left(20\epsilon^{-2}dc_{\max}\right)\cdot\log^{4}(T/ \beta)\cdot\log(3d\log T/\beta)\right)=O_{\log\log}(1).\] Plugging in \(\Delta\leq 2^{L+2}\), \(\log L=O_{\log\log}(1)\), and \(\log\log T=O_{\log\log}(1)\) into Equation 32, we get \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(d\sqrt{L}\log(d/\beta)+ \epsilon\cdot\text{err}(2^{L+2},\beta/3L)+\log(T/\beta)+\log(2^{L+2}/\beta) \right)\right) \tag{33}\] For the last term in the summation, \[\log 2^{L+2}=L+2 =O(\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/\beta)+ \log\log(3d\log T/\beta))\] \[=O_{\log\log}(\log(\epsilon^{-2}dc_{\max}))\] \[=O_{\log\log}(\log(\epsilon^{-2}dT))\] In the first term in Equation 33, we bound \(\sqrt{L}\) as follows \[\sqrt{L}\leq\sqrt{\log(20\epsilon^{-2}dc_{\max})}+\sqrt{4\log\log(T/\beta)}+ \sqrt{\log\log(3d\log(T/\beta))}\] Since the final two terms are \(O_{\log\log}(1)\), this reduces to \[\sqrt{L}\leq O_{\log\log}\left(\sqrt{\log(dc_{\max}/\epsilon)}\right)\] Plugging these bounds on \(d\sqrt{L}\log(d/\beta)\) and \(\log 2^{L+2}\) in Equation 33, we get \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(d\log(d/\beta)\cdot\sqrt{ \log(dc_{\max}/\epsilon)}+\epsilon\cdot\operatorname{err}(2^{L+2},\beta/3L)+ \log(dT/\epsilon\beta)\right)\right)\] Since there are at most \(L\) segments in the two-level mechanism, there are at most \(L\) instantiations of Algorithm 7. Thus all the accuracy guarantees hold together with probability at least \(1-\beta\). Combining the guarantees for the two-level mechanism and the modified BoundedMaxSum mechanism, we get that Algorithm 6 is \((\alpha,2\beta)\)-accurate for \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(\epsilon\cdot \operatorname{err}(2^{L+2},\beta/3L)+d\log(d/\beta)\cdot\sqrt{\log(dc_{\max}/ \epsilon)}+\log(dT/\epsilon\beta)\right)\right)\] which proves the claimed accuracy guarantee. **Corollary 3**.: _Algorithm 6 instantiated with the histogram mechanism from Fact 4 is \((\alpha,2\beta)\)-accurate for MaxSum, with \(\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(d\log(d/\beta)\cdot\log^{2}( dc_{\max}/\epsilon)+\log(T/\epsilon\beta)\right)\right)\)_ Proof.: For the histogram mechanism from Fact 4, we have \(\operatorname{err}(2^{L+2},\beta/3L)=O\left(\epsilon^{-1}\cdot\left(d\ln(dL/ \beta)\log^{2}(2^{L})\right)\right)\). We start by bounding \(\log^{2}(2^{L})\). \[\log^{2}2^{L} =L^{2}\] \[\leq\left(\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/ \beta)+\log\log(3d\log T/\beta)\right)^{2}\] \[\leq O_{\log\log}\left((\log(\epsilon^{-2}dc_{\max}))^{2}\right)\] Thus, \[\operatorname{err}(2^{L+2},\beta/3L)\leq O_{\log\log}\left(\epsilon^{-1}\cdot d \log(d/\beta)\cdot\log^{2}(dc_{\max}/\epsilon)\right)\] Plugging this into the guarantee in Lemma 15, the \(d\log d\sqrt{\log c_{\max}}\) term is dominated by \(\operatorname{err}(2^{L+2},\beta/3L)\). This gives \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(d\log(d/\beta)\cdot\log^{2} dc_{\max}/\epsilon+\log(T/\epsilon\beta)\right)\right)\] as required. ## 7 Answering \(k\) queries In this section, we show how to extend the solutions from the previous sections to answering \(k\) queries of a certain query class specified below. This query class includes, for example, \(\textsc{Quantile}_{q}\) for any \(q\in(0,1]\) and any \(1\)-way marginal. 
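To make the query class concrete before the formal definition, the following sketch (illustrative only; the function names and the particular quantile convention are our own assumptions) evaluates a few queries of this kind on the histogram of column sums: the maximum and minimum column sum, a single 1-way marginal, and one natural reading of \(\textsc{Quantile}_{q}\) as the \(q\)-quantile of the column sums. Each is a monotone function of the histogram and changes by at most \(1\) when every column sum changes by at most \(1\).

```
import math

# h is the histogram of column sums at time t, h[i] = sum_{t' <= t} x_i^{t'}.
def max_sum(h):                  # MaxSum
    return max(h)

def min_sum(h):                  # minimum column sum
    return min(h)

def one_way_marginal(h, i):      # a single column sum
    return h[i]

def quantile(h, q):              # q-quantile of the column sums, q in (0, 1]
    ranked = sorted(h)
    return ranked[math.ceil(q * len(h)) - 1]

h = [3, 7, 2, 9]
print(max_sum(h), min_sum(h), one_way_marginal(h, 1), quantile(h, 0.5))
```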
The main adaptation to answer \(k\) queries, instead of just one, lies in the part of the algorithm where we assume an upper bound on the maximum output: Now we want to partition in such a way that we can bound the change in _all_ queries _simultaneously_. This can be done naively using \(k\) instances of the partitioning algorithm in Section 4, resulting in an additive \(O(k\log T+kd\log^{2}c_{\max})\) error. However, we show that we can avoid this increased error by noticing that for the partitioning alone, we can use a variation of the sparse vector technique and always add _the same error_ to all \(k\) query answers and also the same error to all \(k\) thresholds and still preserve differential privacy. This effectively reduces the additive error by a multiplicative factor \(k\). Intuitively, this works because we only want to answer the question: Does there exist at least one query crossing its current threshold? Then, once we decide to close a segment, we introduce an extra step for privately deciding which thresholds to update. The full algorithm is given in Algorithm 8.
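The shared-noise variant of the sparse vector technique described above can be sketched as follows (illustrative only; `queries` stands for the \(k\) query functions, `s` for the current noisy histogram, the Laplace scales mirror those used in Algorithm 8 below, and the function names and helper structure are our own). A single draw \(\mu\) is compared against all \(k\) noisy thresholds, which share one threshold noise \(\tau\); fresh per-query noise of scale \(3k/\epsilon\) is spent only when a segment closes, to decide which thresholds to raise.

```
import numpy as np

rng = np.random.default_rng(0)
lap = lambda scale: rng.laplace(scale=scale)

def segment_should_close(queries, s, K_tilde, mu):
    # Shared-noise sparse-vector test: one draw mu is compared against all k
    # noisy thresholds, which already share a single threshold noise tau.
    return any(q(s) + mu > K_tilde[i] for i, q in enumerate(queries))

def update_thresholds(queries, s, K, K_tilde, C, K_step, eps):
    # After a segment closes, spend fresh Lap(3k/eps) noise per query to decide
    # which thresholds to raise, then re-noise all thresholds with one shared tau
    # (cf. Lines 15-18 and 22-23 of Algorithm 8).
    k = len(queries)
    for i, q in enumerate(queries):
        if q(s) + lap(3 * k / eps) > K_tilde[i] - C:
            K[i] += K_step
    tau = lap(6 / eps)
    for i in range(k):
        K_tilde[i] = K[i] + tau

queries = [max, min]                  # two sensitivity-1 queries on the column sums
s, K, K_tilde = [5, 2, 7], [10.0, 10.0], [10.2, 9.8]
print(segment_should_close(queries, s, K_tilde, mu=lap(12 / 1.0)))
```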
**Definition 8** (monotone histogram query with sensitivity 1).: _Let \(x=x^{1},\ldots,x^{T}\) be a stream of elements \(x^{t}\in\{0,1\}^{d}\) and let the histogram be the function \(\mathrm{h}^{t}(x)=(\sum_{t^{\prime}=1}^{t}x_{i}^{t^{\prime}})_{i\in[d]}\). We say that \(g:\{\{0,1\}^{d}\}^{*}\to\mathbb{R}\) is a monotone histogram query with sensitivity 1 if_ 1. _The function_ \(g\) _is a function of the histogram, i.e. it depends only on_ \(\mathrm{h}^{t}(x)\)_. Abusing notation, we use_ \(g(\mathrm{h}^{t}(x))\) _to denote_ \(g(x^{1},\ldots,x^{t})\) _and consider_ \(g\) _as a function from_ \(\mathbb{N}^{d}\) _to_ \(\mathbb{R}\) _from now on._ 2. _The function_ \(g\) _is monotone in_ \(t\)_, i.e.,_ \(g(\mathrm{h}^{t-1}(x))\leq g(\mathrm{h}^{t}(x))\)_._ 3. _The function_ \(g\) _has sensitivity 1, i.e., for two_ \(d\)_-dimensional vectors_ \(v_{x}\) _and_ \(v_{y}\) _such that_ \(||v_{y}-v_{x}||_{\infty}\leq 1\) _it holds that_ \(|g(v_{y})-g(v_{x})|\leq 1\)_._ 4. _It outputs_ \(0\) _for the zero vector, i.e.,_ \(g(0,\ldots,0)=0\)_._

In this section, similar to Section 4, we assume that an upper bound \(c_{\max}\) on any _query answer_ to the \(k\) queries \(g_{1},\ldots,g_{k}\) is given to the algorithm. Note that \(c_{\max}\) can be much smaller than the value of the maximum column sum, for example, if we want to compute the value of the minimum column sum.

```
Input: Stream \(x^{1},x^{2},\ldots,x^{T}\in\left\{0,1\right\}^{d}\), upper bound \(c_{\max}\) on any \(g_{i}\), an adaptively (\(\epsilon/3\))-DP continuous histogram mechanism \(H\), additive error bound \(\mathrm{err}(n,\beta/3)\) on all outputs of \(H\) that holds with probability \(\geq 1-\beta/3\) when \(H\) is run on a stream of length \(n\).
Output: Estimate of \(g_{i}(\mathrm{h}(t))\) for all \(i\in[k]\) and all \(t\in[T]\)
 1: Initialize an adaptively (\(\epsilon/3\))-differentially private continuous histogram mechanism \(H\) for stream length \(k\cdot c_{\max}\)
 2: \(p_{0}\gets 0\), \(j\gets 1\)
 3: \(c_{i}\gets s_{i}\gets 0\) for all \(i\in[d]\)
 4: \(C\gets 18\cdot\epsilon^{-1}\cdot(k\ln(6k\cdot c_{\max}/\beta)+\ln(6T/\beta))\)
 5: \(\mathrm{K}\gets 3(C+\mathrm{err}(k\cdot c_{\max},\beta/3))\)
 6: \(\tau\leftarrow\mathrm{Lap}(6/\epsilon)\)
 7: \(\mathrm{K}_{(i)}\leftarrow\mathrm{K}\), \(\tilde{\mathrm{K}}_{(i)}\leftarrow\mathrm{K}_{(i)}+\tau\) for all \(i\in[k]\)
 8: out \(\leftarrow(g_{1}(\mathbf{0}),g_{2}(\mathbf{0}),\ldots,g_{k}(\mathbf{0}))\)
 9: for \(t\in\mathbb{N}\) do
10:   \(c_{i}\gets c_{i}+x_{i}^{t}\), \(s_{i}\gets s_{i}+x_{i}^{t}\) for all \(i\in[d]\)
11:   \(\mu\leftarrow\mathrm{Lap}(12/\epsilon)\)
12:   if \(\exists i\in[k]\): \(g_{i}(s)+\mu>\tilde{\mathrm{K}}_{(i)}\) and \(j\leq k\cdot c_{\max}\) then
13:     \(p_{j}\gets t\), \(j\gets j+1\)  \(\triangleright\) Close the current segment
14:     insert \((c_{1},\ldots,c_{d})\) into \(H\), reset \(c_{i}\gets 0\) for all \(i\in[d]\)
15:     for \(i\in[k]\) do
16:       \(\tilde{g}_{i}(s)\gets g_{i}(s)+\mathrm{Lap}(3k/\epsilon)\)
17:       if \(\tilde{g}_{i}(s)>\tilde{\mathrm{K}}_{(i)}-C\) then
18:         \(\mathrm{K}_{(i)}\leftarrow\mathrm{K}_{(i)}+\mathrm{K}\)
19:       end if
21:     end for
22:     \(\tau=\mathrm{Lap}(6/\epsilon)\)
23:     \(\tilde{\mathrm{K}}_{(i)}\leftarrow\mathrm{K}_{(i)}+\tau\) for all \(i\in[k]\)
24:     \((s_{1},\ldots,s_{d})\leftarrow\mathrm{output}(H)\)  \(\triangleright\) Current count from the histogram mechanism
25:     out \(\leftarrow(g_{1}(s),\ldots,g_{k}(s))\)
27:   end if
28:   output out
30: end for
31: \(p_{j}=T\)
```
**Algorithm 8** Mechanism for answering \(k\) queries

**Lemma 16**.: _Let \((g_{1},\ldots,g_{k})\) be \(k\) monotone histogram queries with sensitivity \(1\). Given an upper bound \(c_{\max}\) on the maximum output of any \(g_{i}\) for \(i\in[k]\), and an upper bound \(T\) on the stream length, there exists an \((\alpha,\beta)\)-accurate and \(\epsilon\)-differentially private algorithm for computing \(g_{1},\ldots,g_{k}\) at every time step where \(\alpha=O(\epsilon^{-1}(k\ln(k\cdot c_{\max}/\beta)+\ln(T/\beta))+\text{err}(k\cdot c_{\max},\beta/3))\) and \(\text{err}(n,\beta/3)\) is the additive error bound that holds with probability \(\geq 1-\beta/3\) on all outputs of an \((\epsilon,\delta)\)-dp continuous histogram mechanism \(H\) when \(H\) is run on a stream of length \(n\)._

### Privacy

**Lemma 17**.: _Let \(\epsilon>0\). If \(H\) is an \((\epsilon/3)\)-differentially private continuous histogram mechanism, then Algorithm 8 satisfies \(\epsilon\)-differential privacy. This holds independently of the initial setting of \((s_{1},\ldots,s_{d})\), \(K\) and \(C\)._

Proof.: Let \(x\) and \(y\) be two neighboring streams that differ at time \(t^{*}\). Let \(S\) be a subset of possible outputs of the algorithm and let \(\mathcal{A}(x)\) be the output stream of Algorithm 8 with input stream \(x\). We show that \[\Pr\left[\mathcal{A}(x)\in S\right]\leq e^{\epsilon}\cdot\Pr\left[\mathcal{A}(y)\in S\right]\] The arguments also hold when swapping the identities of \(x\) and \(y\) since they are symmetric, which gives us the privacy guarantee. Thus we focus on proving the inequality above. We use that the histogram mechanism \(H\) is \((\epsilon/3)\)-differentially private in the adaptive continual release model.
To argue privacy, define an adversary to model the interaction between the partitioning algorithm and the counting mechanisms, see Algorithm 9: We define \(Adv(x,y)\) to be Algorithm 9 run with the setting of parameters corresponding to Algorithm 8. It is given as input two neighboring streams \(x\) and \(y\) differing at time \(t^{*}\). It basically runs Algorithm 8 on \(x\), only for the interval including \(t^{*}\), it outputs both the interval counts for \(x\) and the interval counts for \(y\). This is the challenge time step in the game defined in Algorithm 1. If \(\text{side}=L\), the counts for \(x\) are sent to the counting mechanism, if \(\text{side}=R\), the counts for \(y\) are sent. Note that for the partitioning of the stream, only \(x\) and the outputs from \(H\) are taken into account. Now, let \(S\) be a subset of all possible outputs of Algorithm 8. Abusing notation, we say that a view \(V\) of the adversary \(Adv\) satisfies \(V\in S\), if the streams of \((s_{1},\ldots,s_{d})\) received from \(H\) match the output sequences in \(S\). We then have \[\Pr(\mathcal{A}(x)\in S)=\Pr(V^{(L)}_{H,Adv(x,y)}\in S)\] and \[\Pr(\mathcal{A}(y)\in S)=\Pr(V^{(L)}_{H,Adv(y,x)}\in S)\] Since by assumption \(H\) is adaptively \((\epsilon/3)\)-differentially private, we have \[\Pr(V^{(R)}_{H,Adv(y,x)}\in S)\leq e^{\epsilon/3}\Pr(V^{(L)}_{H,Adv(y,x)}\in S)\] It remains to prove \[\Pr(V^{(L)}_{H,Adv(x,y)}\in S)\leq e^{2\epsilon/3}\Pr(V^{(R)}_{H,Adv(y,x)}\in S), \tag{34}\] since then \[\begin{split}\Pr(\mathcal{A}(x)\in S)&=\Pr(V^{(L)}_ {H,Adv(x,y)}\in S)\\ &\leq e^{2\epsilon/3}\Pr(V^{(R)}_{H,Adv(y,x)}\in S)\\ &\leq e^{\epsilon}\Pr(V^{(L)}_{H,Adv(y,x)}\in S)\\ &=e^{\epsilon}\Pr(\mathcal{A}(y)\in S)\end{split} \tag{35}\] Note that when we run \(Adv(x,y)\) on \(\text{side}=L\), we partition according to \(x\) and the outputs of \(H\), and for each interval we give the counts for \(x\) as input to \(H\). When we run \(Adv(y,x)\) on \(\text{side}=R\), we partition according to \(y\) and the outputs of \(H\), and give the counts for \(x\) as input to \(H\) (since outside of the challenge time step, \(c_{i}^{x}\) and \(c_{i}^{y}\) are the same). So in order to prove (34), we argue that on both runs, the probabilities of getting a given partition into intervals are \(e^{\epsilon/3}\)-close. Call the time interval \((p_{j-1},p_{j}]\) the \(j^{th}\)_interval_. If \(\ell\) is the value of \(j\) when the processing of the input stream ends, set \(p_{\ell}=T\). Note that the probabilities of computing any sequence of intervals \((p_{0},p_{1}],\ldots,(p_{j-2},p_{j-1}]\) with \(p_{j-1}<t^{*}\) and \(j-1\leq\Delta\) are the same on both \(x\) and \(y\), since the streams are equal at all time steps before \(t^{*}\). We want to argue two things: first, that the probability of \(p_{j}=\lambda\) is \(e^{\epsilon/3}\)-close on \(x\) and \(y\), for any \(\lambda>t^{*}\), and second, that the probabilities of updating any subset of thresholds is \(e^{\epsilon/3}\)-close on \(x\) and \(y\). First note that if \(j>\Delta\), then the stream is ignored after \(p_{j-1}\) for both \(x\) and \(y\), so \(e^{\epsilon/3}\)-closeness follows trivially. If \(j\leq\Delta\), we condition on all the noises added in the algorithm before time \(p_{j-1}\) as well as the randomness of the counting mechanism up until time \(p_{j-1}\). Fixing a particular time \(\lambda>p_{j-1}\), we first show that the probability of interval \(j\) ending at \(\lambda\) (i.e., \(p_{j}=\lambda\)) is \(e^{\epsilon/3}\)-close on \(x\) and \(y\). 
For this to happen, there must exist an \(i\in[k]\) with \(g_{i}(s)+\mu>\tilde{K}_{(i)}\) at time \(\lambda\), and never before that in the time steps \((p_{j-1},\lambda-1]\). We use \(s^{t}(x)\) to denote the value of \(s\) at time \(t\) on stream \(x\) Let \(\mu_{t}\) be the \(\operatorname{Lap}(12/\epsilon)\) noise defined in line 9 in iteration \(t\), and let \(\tau_{j}\) be the current value of \(\tau\). Further denote \(s^{t}(x)\) the vector of \((s_{i})_{i\in[d]}\) at time \(t\) for stream \(x\). Let \(f_{\operatorname{Lap}(b)}\) be the density of the Laplace distribution with scale \(b\). Note that conditioning on all the noises being the same on \(x\) and \(y\) before \(p_{j-1}\), we have that any \(s_{i}\) at time \(t\leq p_{j}\) can differ by at most \(1\) on \(x\) and \(y\). Therefore \(g_{i}(s^{t}(x))\) and \(g_{i}(s^{t}(y))\) can also differ by at most \(1\). We now have: \[\Pr[p_{j}=\lambda\text{ on }x]=\Pr[g_{i}(s^{t}(x))+\mu_{t}\leq K_{(i )}+\tau_{j}\forall t\in(p_{j-1},\lambda),\forall i\in[k]\wedge\exists i:g_{i}( s^{\lambda}(x))+\mu_{\lambda}>K_{(i)}+\tau_{j}]\] \[=\int_{z}\int_{c}\Pr_{\mu_{p_{j-1}}\cdots\mu_{\lambda-1}}[g_{i}( s^{t}(x))+\mu_{t}\leq K_{(i)}+z\forall t\in(p_{j-1},\lambda)\forall i\in[k] \wedge\exists i:g_{i}(s^{\lambda}(x))+c>K_{(i)}+z]\] \[\cdot f_{\tau_{j}}(z)\cdot f_{\mu_{\lambda}}(c)\ dz\ dc\] \[\leq\int_{z}\int_{c}\Pr_{\mu_{p_{j-1}}\cdots\mu_{\lambda-1}}[g_{i }(s^{t}(y))+\mu_{t}\leq K_{(i)}+(z+1)\forall t\in(p_{j-1},\lambda)\forall i\in [k]\wedge\exists i:g_{i}(s^{\lambda}(y))+(c+2)>K_{(i)}+(z+1)]\] \[\cdot f_{\tau_{j}}(z)\cdot f_{\mu_{\lambda}}(c)\ dz\ dc\] \[\leq(e^{\epsilon/6})e^{2\epsilon/12}\int_{z}\int_{c}\Pr[g_{i}(s^{ t}(y))+\mu_{t}\leq K_{(i)}+z\forall t\in(p_{j-1},\lambda)\forall i\in[k] \wedge\exists i:g_{i}(s^{\lambda}(y))+c>K_{(i)}+z]\] \[\cdot f_{\tau_{j}}(z)\cdot f_{\mu_{\lambda}}(c)\ dz\ dc\] \[=e^{\epsilon/3}\Pr[p_{j}=\lambda\text{ on }y].\] Next, note that conditioned on all previous outputs of \(H\) and \(p_{j}\) being equal, \(g_{i}(s^{p_{j}}(x))\) and \(g_{i}(s^{p_{j}}(y))\) can differ by at most \(1\) for each \(i\in[k]\). Thus, adding Laplace noise scaled with \(3k/\epsilon\) to every \(g_{i}(s^{p_{j}}(y))\) to create \(\tilde{g_{i}}(s^{p_{j}}(y))\) ensures that _all_ distributions of \(\tilde{g_{i}}(s^{p_{j}}(x))\) and \(\tilde{g_{i}}(s^{p_{j}}(y))\) are \(e^{\epsilon/3}\)-close. Since the updating of thresholds only depends on those, this implies that the probabilities of updating any subset of thresholds on \(x\) and \(y\) are \(e^{\epsilon/3}\)-close. Finally, recall that running \(Adv(x,y)\) on \(\operatorname{side}=L\) and \(Adv(y,x)\) on \(\operatorname{side}=R\) both insert the counts for \(x\) into \(H\), and only differ in the partitioning. Since we have shown that the probabilities that \(t^{*}\) is in a segment \((p_{j-1},p_{j}]\) on the two runs are \(e^{\epsilon/3}\)-close, and the probabilities that we update the same thresholds for this interval are \(e^{\epsilon/3}\)-close, and since the rest is identical on both runs we have shown (34) and thus (35). ### Accuracy Proof Next we analyze the additive error of Algorithm 8. Let \(t\in[T]\) be an arbitrary time step, let \(s^{t}\) denote the value of \(s\) at time \(t\), and \(\operatorname{h}^{t}\) be the value of the true histogram at time \(t\). In the following let \(\alpha_{SV:}=12\epsilon^{-1}(\ln(6k\cdot c_{\max}/\beta)+\ln(6T/\beta))\) and \(\alpha_{U}:=\epsilon^{-1}6k\ln(3k\cdot c_{\max}/\beta)\). 
We condition on the following upper bounds on the additive error, which together hold with probability \(1-\beta\): 1. The error of \(H\) is bounded by \(\operatorname{err}(k\cdot c_{\max},\beta/3)\). By assumption, this holds with probability \(1-\beta/3\). 2. Adding the same random variable \(\tau\) with \(\operatorname{Lap}(6/\epsilon)\) distribution to all \(K_{(i)}\) gives an additive error of at most \(\epsilon^{-1}6\ln(6k\cdot c_{\max}/\beta)\) with probability \(1-\beta/6\) by Fact 6 and the union bound, since we sample for that variable at most \(k\cdot c_{\max}\) times. 3. The random variable \(\mu\) with \(\operatorname{Lap}(12/\epsilon)\) distribution drawn at every time steps gives an additive error of at most \(\epsilon^{-1}12\ln(6T/\beta)\) by Fact 6 and the union bound with probability \(1-\beta/6\). Together with the previous condition we have * If the condition in line 12 is true for \(i\) at time \(t\), then \(g_{i}(s^{t})>K_{(i)}-\alpha_{SV}\) * If the condition in line 12 is false for \(i\), then \(g_{i}(s^{t})<K_{(i)}+\alpha_{SV}\). for \(\alpha_{SV}=12\epsilon^{-1}(\ln(6k\cdot c_{\max}/\beta)+\ln(6T/\beta))\) 4. We add \(\operatorname{Lap}(3k/\epsilon)\) noise to \(g_{i}\) in line 16 at most \(k\cdot c_{\max}\) times for each \(i\in[k]\). Thus, by Fact 6 and the union bound, with probability \(1-\beta/3\) at most an additive error of \(\epsilon^{-1}3k\ln(3k^{2}\cdot c_{\max}/\beta)\leq\epsilon^{-1}6k\ln(3k\cdot c _{\max}/\beta)=\alpha_{U}\) is added to \(g_{i}\) for any \(i\) and any time step. We now proceed as follows: Recall that \(p_{j}\) denotes the end of the \(j\)th time interval and that \(p_{0}=0\). To prove accuracy we will first show an auxiliary lemma that says that \(p_{1}>1\). Next fix any \(i\in[k]\) and let \(p_{l}\) and \(p_{r}\) be any two time steps such that \(K_{(i)}\) is updated at \(p_{l}\) and \(p_{r}\) but not between them. Then we show that \(g_{i}\) must have increased by more than \(1\) between \(p_{l}\) and \(p_{r}\). The latter fact implies that \(K_{(i)}\) was not updated at time \(p_{r}-1\), which can be used to get an upper bound on \(g_{i}(h^{p_{r}-1})\) and, by the \(1\)-sensitivity of \(g_{i}\), also on \(g_{i}(h^{p_{r}})\). As \(K_{(i)}\) was updated at time \(p_{l}\), we also have a lower bound on \(g_{i}(h^{p_{l}})\). Combining the two gives an upper bound on \(|g_{i}(h^{p_{r}})-g_{i}(h^{p_{l}})|\) of \(O(K+\alpha_{SV}+\alpha_{U})\), which is the crucial bound needed to upper bound \(|g_{i}(h^{t})-g_{i}(s^{t})|\). In the rest of the section let \(K_{(i)}^{t}\) denote the value of \(K_{(i)}\) at time \(t\) when we reach Line 12 of Algorithm 8. To show that \(p_{1}>1\), we first show that whenever the \(i\)th threshold is updated, the true value of \(g_{i}\) is not much smaller than the threshold that was crossed. **Lemma 18**.: _Suppose the \(i\)th threshold \(K_{(i)}\) is updated at time \(t\). Then_ \[g_{i}(\mathrm{h}^{t})\geq K_{(i)}^{t}-C-\alpha_{U}-\text{err}(k\cdot c_{\max },\beta/3).\] Proof.: This follows from the sensitivity of \(g_{i}\) and the fact that \(K_{(i)}\) was updated at time \(t\). \[g_{i}(\mathrm{h}^{t}) \geq g_{i}(s^{t})-\text{err}(k\cdot c_{\max},\beta/3)\] (since \[g_{i}\] has sensitivity \[1\] ) \[\geq\tilde{g_{i}}(s^{t})-\alpha_{U}-\text{err}(k\cdot c_{\max}, \beta/3)\] \[\geq K_{(i)}^{t}-C-\alpha_{U}-\text{err}(k\cdot c_{\max},\beta/3) \text{(since $K_{(i)}$ is updated at time $t$)}\] as required. 
**Lemma 19**.: _It holds that \(p_{1}>1\)._ Proof.: Note that variable \(C\) in Algorithm 8 is larger than \(\alpha_{SV}+\alpha_{U}\). Thus, if the condition in line 12 is true for some \(i\) at time \(p_{j}\), then the condition in line 17 is also true for the same \(i\). Thus, whenever we close a segment, we also update at least one threshold, say the \(i\)th one. Using Lemma 18 with \(t=p_{j}\) gives us that \[g_{i}(h^{p_{j}})\geq K_{(i)}^{p_{j}}-C-\alpha_{U}-\text{err}(k\cdot c_{\max}, \beta/3)\] Note that since \(K>C+\text{err}(k\cdot c_{\max},\beta/3)+\alpha_{U}\), this implies \(g_{i}(\mathrm{h}^{p_{1}})>1\). As \(g_{i}\) increases by at most \(1\) per time step, it follows that \(p_{1}>1\). Next we show an upper bound on the true query values when a threshold is updated. **Lemma 20**.: _Let \(i\in[k]\). Let \(p_{r}\) be a timestep where threshold \(K_{(i)}\) is updated. Then \(g_{i}(\mathrm{h}^{p_{r}})<K_{(i)}^{p_{r}}+\text{err}(k\cdot c_{\max},\beta/3) +\alpha_{SV}+\alpha_{U}+1\). Further, let \(l\) and \(r\) be integers such that \(p_{l}\) and \(p_{r}\) are two consecutive time steps where threshold \(K_{(i)}\) is updated. Then \(p_{r}-p_{l}>1\) and \(|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})|>1\)._ Proof.: We show the claim by induction over the number of updates of \(K_{(i)}\). As \(p_{r}\) is a time step where \(K_{(i)}\) is updated, the condition in line 12 is true, and, thus \(r\leq k\cdot c_{\max}\). _Case 1:_ If \(p_{r}\) is the first time that threshold \(K_{(i)}\) is updated, then \(p_{r}\geq p_{1}>1\) by Lemma 19. As the threshold was not updated before, it was not updated at time \(p_{r}-1\). This fact and the fact that \(r\leq k\cdot c_{\max}\) imply that either the condition in line 12 was false or the condition in line 17 was false for \(i\) at time \(p_{r}-1\). Thus, either \[g_{i}(s^{p_{r}-1})<K_{(i)}^{p_{r}}+\alpha_{SV}<K_{(i)}^{p_{r}}+\alpha_{SV}+ \alpha_{U}\] or \[g_{i}(s^{p_{r}-1})<K_{(i)}^{p_{r}}-C+\alpha_{U}<K_{(i)}^{p_{r}}+\alpha_{SV}+ \alpha_{U},\] and hence, \[g_{i}(\mathrm{h}^{p_{r}-1})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{err}(k \cdot c_{\max},\beta/3).\] As \(g_{i}\) has sensitivity \(1\) and \(\mathrm{h}^{p_{r}}\) and \(\mathrm{h}^{p_{r}-1}\) differ by at most \(1\) in each coordinate, it holds that \(g_{i}(\mathrm{h}^{p_{r}})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{err}(k \cdot c_{\max},\beta/3)+1\). _Case 2:_ If \(p_{r}\) is not the first time threshold \(K_{(i)}\) is updated, let \(p_{l}\) be the last time at which threshold \(K_{(i)}\) was updated before \(p_{r}\). By induction, we have \(g_{i}(\mathrm{h}^{p_{l}})<K_{(i)}^{p_{l}}+\alpha_{SV}+\alpha_{U}+\mathrm{err}( k\cdot c_{\max},\beta/3)+1=K_{(i)}^{p_{r}}-K+\alpha_{SV}+\alpha_{U}+\mathrm{err}(k \cdot c_{\max},\beta/3)+1\). Since the threshold \(K_{(i)}\) is updated at time \(p_{r}\), we have \(g_{i}(\mathrm{h}^{p_{r}})\geq K_{(i)}^{p_{r}}-C-\alpha_{U}-\mathrm{err}(k \cdot c_{\max},\beta/3)\) by Lemma 18 for \(t=p_{r}\). Thus, \[|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})| >K_{(i)}^{p_{j}}-C-\alpha_{U}-\mathrm{err}(k\cdot c_{\max},\beta/ 3)-(K_{(i)}^{p_{r}}-K+\mathrm{err}(k\cdot c_{\max},\beta/3)+\alpha_{SV}+\alpha _{U}+1)\] \[=K-C-2\mathrm{err}(k\cdot c_{\max},\beta/3)-\alpha_{SV}-2\alpha_{ U}-1>1,\] since \(K=3(C+\mathrm{err}(k\cdot c_{\max},\beta/3))\) and \(C>\alpha_{SV}+\alpha_{U}\). Thus, \(p_{r}-p_{l}>1\), and as \(g_{i}\) has sensitivity \(1\), the threshold \(K_{(i)}\) was not updated at time \(p_{r}-1\). 
Now by the same argument as before, \(g_{i}(\mathrm{h}^{p_{r}-1})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{ err}(k\cdot c_{\max},\beta/3)\) and therefore \(g_{i}(\mathrm{h}^{p_{r}})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{err} (k\cdot c_{\max},\beta/3)+1\). Let \(t(i)\) be the last time in the whole input sequence that \(K_{(i)}\) was updated, let \(p_{l}\) be the last time _before_\(t\) that \(K_{(i)}\) was updated, and let \(p_{r}\) be the first time step _at or after_\(t\) at which \(K_{(i)}\) gets updated, i.e., \(p_{r}\geq t\). _Case A:_\(t\leq t(i)\), i.e., there exists a time step \(\geq t\) at which \(K_{(i)}\) is updated. First, we show by induction an upper bound on \(g_{i}(\mathrm{h}^{p_{r}})\). Now, by Lemma 20 applied to \(p_{r}\) and Lemma 18 applied to \(p_{l}\), we get \[|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})| <K_{(i)}^{p_{r}}+\mathrm{err}(k\cdot c_{\max},\beta/3)+\alpha_{ SV}+\alpha_{U}+1-(K_{(i)}^{p_{l}}-K-C-\alpha_{U}-\mathrm{err}(k\cdot c_{\max}, \beta/3))\] \[=K+C+2\mathrm{err}(k\cdot c_{\max},\beta/3)+\alpha_{SV}+2\alpha_ {U}+1. \tag{36}\] The error for outputting \(g_{i}\) at any time step \(t\) in an interval \([p_{j-1},p_{j})\subseteq[p_{l},p_{r})\) for any \(j\leq k\cdot c_{\max}\) is now bounded by: \[|g_{i}(\mathrm{h}^{t})-g_{i}(s^{p_{j-1}})| \leq|g_{i}(\mathrm{h}^{t})-g_{i}(\mathrm{h}^{p_{j-1}})|+|g_{i}( \mathrm{h}^{p_{j-1}})-g_{i}(s^{p_{j-1}})|\] \[\leq|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})|+ \mathrm{err}(k\cdot c_{\max},\beta/3) \tag{37}\] \[\leq K+C+3\mathrm{err}(k\cdot c_{\max},\beta/3)+\alpha_{SV}+2\alpha _{U}+1\] \[=O(\epsilon^{-1}(k\ln(k\cdot c_{\max}/\beta)+\ln(T/\beta))+ \mathrm{err}(k\cdot c_{\max},\beta/3))\] Further, because of Lemma 20, every time we update \(K_{(i)}\), \(g_{i}(\mathrm{h}^{t})\) grows by at least \(1\). Since all \(g_{i}(\mathrm{h}^{t})\) are bounded by \(c_{\max}\), and every time \(j\) is updated at least one threshold is updated, this implies that \(j<k\cdot c_{\max}\) is always true. _Case B:_\(t>t(i)\). Consider the last time step \(T\). As \(T\geq t>t(i)\), \(K_{(i)}\) was not updated at time \(T\). Since \(j\leq k\cdot c_{\max}\), at time \(T\) either the condition in line 12 was false or the condition in line 17 was false. Therefore \(g_{i}(h^{T})<K_{(i)}^{T}+\mathrm{err}(k\cdot c_{\max},\beta/3)+\alpha_{U}+ \alpha_{SV}\). Then by Lemma 18, \(g_{i}(\mathrm{h}^{t(i)})\geq K_{(i)}^{T}-K-C-\alpha_{U}-\mathrm{err}(k\cdot c _{\max},\beta/3)\). These two equations together give \[g_{i}(\mathrm{h}^{T})-g_{i}(\mathrm{h}^{t(i)})\leq K+C+2\alpha_{U}+\alpha_{SV}+ \mathrm{err}(k\cdot c_{\max},\beta/3)\leq K+3C+\mathrm{err}(k\cdot c_{\max}, \beta/3) \tag{38}\] The error for outputting \(g_{i}\) at any timestep \(t\) in an interval \([p_{j-1},p_{j})\subseteq[t(i),T]\) and at \(t=T\) is now bounded by \[|g_{i}(\mathrm{h}^{t})-g_{i}(s^{p_{j-1}})| \leq|g_{i}(\mathrm{h}^{t})-g_{i}(\mathrm{h}^{p_{j-1}})|+|g_{i}( \mathrm{h}^{p_{j-1}})-g_{i}(s^{p_{j-1}})|\] \[\leq|g_{i}(\mathrm{h}^{T})-g_{i}(\mathrm{h}^{t(i)})|+\mathrm{err} (k\cdot c_{\max},\beta/3)\] \[\leq K+3C+2\mathrm{err}(k\cdot c_{\max},\beta/3)\] (by Equation 38 ) \[=O(\epsilon^{-1}(k\ln(k\cdot c_{\max}/\beta)+\ln(T/\beta))+ \mathrm{err}(k\cdot c_{\max},\beta/3)) \tag{39}\] Thus Equations 37 and 39 together give a bound on \(|g_{i}(h^{t})-g_{i}(s^{p_{j}-1})|\) in both cases. 
For any time \(t\) that belongs to interval \(j\), the values of \(g_{i}\) on the true histogram are \((g_{1}(h^{t}),g_{2}(h^{t}),\ldots,g_{k}(h^{t}))\), and the values output by Algorithm 8 are \((g_{1}(s^{p_{j-1}}),g_{2}(s^{p_{j-1}}),\ldots,g_{k}(s^{p_{j-1}}))\). Thus the error of the algorithm is given by \[\alpha=\max_{t\in[T]}\max_{i\in[k]}|g_{i}(h^{t})-g_{i}(s^{p_{j-1}})|\] (where \(j\) is the interval that \(t\) belongs to) \[\leq\max_{t\in[T]}\max_{i\in[k]}O(\epsilon^{-1}(k\ln(k\cdot c_{\max}/\beta)+\ln(T/\beta))+\text{err}(k\cdot c_{\max},\beta/3))\] (by Equations 37 and 39) \[\leq O(\epsilon^{-1}(k\ln(k\cdot c_{\max}/\beta)+\ln(T/\beta))+\text{err}(k\cdot c_{\max},\beta/3))\] This then gives us the following lemma.

**Lemma 21**.: _Algorithm 8 is \((\alpha,\beta)\)-accurate, with \(\alpha=O(\epsilon^{-1}(k\ln(k\cdot c_{\max}/\beta)+\ln(T/\beta))+\text{err}(k\cdot c_{\max},\beta/3))\)._

## 8 Doubling Mechanism for k-Query

In this section, we deal with \(k\) queries from the same query class as in the last section. We show how to divide the stream into segments, such that within each segment, the value of \(\max_{i}g_{i}(\text{h}(t))\) approximately doubles. Our algorithm outputs approximate answers to all \(k\) queries at the end of each segment. In the next section we will combine this with the algorithm from the previous section to get an algorithm for estimating \(k\) queries without the requirement of being given an upper bound \(c_{\max}\). As earlier, the algorithm works by using the sparse vector technique for computing the partition, and "resetting" privacy at the end of each segment by computing a differentially private histogram. In this algorithm, the \(\epsilon/2\)-differentially private histogram is computed using the Laplace mechanism (Fact 1), by adding fresh Laplace noise scaled with \(2d/\epsilon\) to the running column sums every time a segment closes. The full algorithm is given in Algorithm 10.

```
Input: Stream \(x^{1},x^{2},\ldots,x^{T}\in\{0,1\}^{d}\).
Output: End of each segment together with an estimate of the histogram at that time step.
 1: \(p_{0}\gets 0\), \(j\gets 1\)
 2: \(s_{i}\gets 0\) for all \(i\in[d]\)
 3: \(\text{K}_{j}\gets 1\), \(\tilde{\text{K}}_{j}\leftarrow\text{K}_{j}+\text{Lap}(4/\epsilon)\)
 4: for \(t\in[T]\) do
 5:   \(s_{i}=s_{i}+x_{i}^{t}\) for all \(i\in[d]\)
 6:   \(s\leftarrow(s_{1},s_{2},\ldots,s_{d})\)
 7:   if \(\max_{i}g_{i}(s)+\text{Lap}(8/\epsilon)>\tilde{\text{K}}_{j}\) and \(j<\log T\) then
 8:     \(p_{j}\gets t\), \(j\gets j+1\)
 9:     \(s_{i}\gets s_{i}+\text{Lap}(2d/\epsilon)\)
10:     \(\text{K}_{j}\gets 2\cdot\text{K}_{j-1}\), \(\tilde{\text{K}}_{j}\leftarrow\text{K}_{j}+\text{Lap}(4/\epsilon)\)
11:     output \((t,s_{1},s_{2},\ldots,s_{d})\)
13:   end if
15: end for
16: \(p_{j}=T\)
```
**Algorithm 10** Doubling Mechanism for k-Query

### Privacy

We fix some notation. Let

* \(\mu_{t}\) be the \(\text{Lap}(8/\epsilon)\) noise added to the maximum in Line 7,
* \(\tau_{j}\) be the \(\text{Lap}(4/\epsilon)\) noise added to \(\text{K}_{j}\), and
* \(\gamma_{i}^{j}\) be the \(\operatorname{Lap}(2d/\epsilon)\) noise added to \(s_{i}\) at the end of segment \(j\) in Line 9.
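Before the privacy argument, the following minimal sketch (illustrative only; the function and parameter names are our own) mirrors the control flow of Algorithm 10: the threshold doubles with each segment, the comparison adds noise of scale \(8/\epsilon\) against a threshold noised with scale \(4/\epsilon\), and each released histogram receives fresh per-coordinate noise of scale \(2d/\epsilon\). It is a sketch of the loop structure under these assumptions, not a replacement for the analyzed mechanism.

```
import math
import numpy as np

def doubling_mechanism(stream, queries, eps, T, seed=0):
    # Sketch of Algorithm 10: release (t, noisy histogram) at the end of each segment.
    rng = np.random.default_rng(seed)
    d = len(stream[0])
    s = np.zeros(d)                              # noisy running column sums
    j, K = 1, 1.0                                # segment index, threshold K_j = 2^{j-1}
    K_tilde = K + rng.laplace(scale=4 / eps)
    outputs = []
    for t, x in enumerate(stream, start=1):
        s += np.asarray(x, dtype=float)
        noisy_max = max(q(s) for q in queries) + rng.laplace(scale=8 / eps)
        if noisy_max > K_tilde and j < math.log2(T):
            s += rng.laplace(scale=2 * d / eps, size=d)   # fresh histogram noise
            j, K = j + 1, 2 * K
            K_tilde = K + rng.laplace(scale=4 / eps)
            outputs.append((t, s.copy()))
    return outputs

stream = [[1, 0, 1], [1, 1, 0], [0, 1, 1], [1, 1, 1]] * 8    # toy {0,1}^3 stream
print(doubling_mechanism(stream, queries=[np.max, np.min], eps=1.0, T=len(stream)))
```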
**Lemma 22**.: _Algorithm 10 satisfies \(\epsilon\)-differential privacy._ Proof.: Note that Algorithm 10 can be seen as post-processing of Algorithm 3 with \(g=\max_{i}g_{i}\), \(\Delta=\log T\), \(K_{j}=2^{j-1}\), and \(s_{i}=0\) for all \(i\in[d]\) using as histogram mechanism \(H\) the Laplacian mechanism that computes a running sum of all inputs and adds fresh Laplace noise scaled with \(\operatorname{Lap}(2d/\epsilon)\) to each coordinate. Then we only need to prove that \(H\) is an \((\epsilon/2)\)-differentially private in the adaptive continual observation model. By Fact 3 this follows directly if \(H\) is \((\epsilon/2)\)-differentially private. Now note that \(H\) is \((\epsilon/2)\)-differentially private by parallel composition and the choice of parameter \(2d/\epsilon\) for the Laplace mechanism, since the inputs to \(H\) only differ at a single time step, and the \(L_{1}\) norm of the difference is at most \(d\). ### Accuracy Let \(\ell\) be the total number of segments produced by the algorithm, which is a random variable that is upper bounded by \(\log T\), and let \(\Gamma_{i}^{j}=\sum_{k=1}^{j}\gamma_{i}^{k}\). **Lemma 23**.: _With probability \(\geq 1-\beta\), the following bounds hold simultaneously for all \(t\in[T]\), \(j\in[\log T]\), and \(i\in[d]\):_ \[|\mu_{t}|\leq\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)=:\alpha _{\mu} \tag{40}\] \[|\tau_{j}|\leq\frac{4}{\epsilon}\cdot\log\left(\frac{3\log T}{\beta}\right)=: \alpha_{\tau} \tag{41}\] \[|\Gamma_{i}^{j}|\leq\frac{4d}{\epsilon}\cdot\sqrt{2j}\cdot\log\left(\frac{3d \log T}{\beta}\right) \tag{42}\] Proof.: In the algorithm, there are \(T\) instances of \(\mu_{t}\sim\operatorname{Lap}(8/\epsilon)\), and at most \(\log T\) instances of \(\tau_{j}\sim\operatorname{Lap}(4/\epsilon)\). Applying Fact 6 and Lemma 3, we obtain the first two bounds each with probability \(\geq 1-\beta/3\). Thus, using the concentration bound for the sum of \(j\) Laplace variables given in Lemma 2 with \(b=2d/\epsilon\), \(k=j\), \(\beta_{S}=\beta/(3d\log T)\) gives us the third bound with probability \(\geq 1-\beta/3\). Union bound over all three sets of bounds gives us the lemma. Below we use the following variables: 1. \(\alpha_{\mu}=\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)\), 2. \(\alpha_{\tau}=\frac{4}{\epsilon}\cdot\log\left(\frac{3c_{\max}}{\beta}\right)\), 3. \(L=\min\{\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/\beta)+\log\log (3d\log T/\beta),\log T\}\), 4. \(\alpha_{\Gamma}=\frac{4d}{\epsilon}\cdot\sqrt{2L}\cdot\log\left(\frac{3d \log T}{\beta}\right)\), and 5. \(\alpha_{DM}=\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}+L\). Let \(c_{\max}=\max_{i}g_{i}(h^{T})\) be the maximum query value on the entire stream. We first upper bound the number \(\ell\) of segments generated (and for which an output was generated in Line 11 by the algorithm by \(L\), which is roughly \(\widetilde{O}(\log c_{\max}+\log d)\). Note that there might be one more segment that ends because the stream terminates. We do not count it here as Algorithm 10 does not generate any output for it. **Lemma 24**.: _If the random variables are bounded as in Lemma 23, then Algorithm 10 creates at most \(L\) segments._ Proof.: A trivial upper bound of \(\log T\) on \(\ell\) is obtained from the stopping condition of the algorithm in Line 7. 
At time \(p_{\ell}\) when the last segment was closed we have that for \(i=\operatorname{argmax}_{k}g_{k}(s^{p_{\ell}})\), \[g_{i}(s^{p_{\ell}})+\mu_{p_{\ell}}\geq 2^{\ell}+\tau_{\ell}.\] Let \(h^{t}\) be the true histogram at time \(t\). Assuming that the noises being bounded as stated in Lemma 23 and taking \(i^{*}=\operatorname{argmax}_{k}g_{k}(h^{p_{\ell}})\), we have that \(g_{i}(h^{p_{\ell}})\leq g_{i^{*}}(h^{p_{\ell}})\). Thus \[2^{\ell} \leq g_{i}(s^{p_{\ell}})+\mu_{p_{\ell}}-\tau_{\ell}\] \[=g_{i}(h^{p_{\ell}})+\Gamma_{i}^{\ell}+\mu_{p_{\ell}}-\tau_{\ell}\] \[\leq g_{i^{*}}(h^{p_{\ell}})+\Gamma_{i}^{\ell}+\mu_{p_{\ell}}-\tau _{\ell}\] (by definition of \[i^{*}\] ) \[\leq g_{i^{*}}(h^{p_{\ell}})+\frac{4d}{\epsilon}\cdot\sqrt{2\ell }\cdot\log\left(\frac{3d\log T}{\beta}\right)+\alpha_{\mu}+\alpha_{\tau}. \tag{43}\] We now get that \[\ell \leq\log\left[c_{\max}+\frac{4d}{\epsilon}\cdot\sqrt{2\ell}\cdot \log\left(\frac{3d\log T}{\beta}\right)+\frac{8}{\epsilon}\cdot\log\left( \frac{3T}{\beta}\right)+\frac{4}{\epsilon}\cdot\log\left(\frac{3\log T}{\beta} \right)\right]\] \[\leq\log\left[c_{\max}+\frac{4d}{\epsilon}\cdot\sqrt{2\ell}\cdot \log\left(\frac{3d\log T}{\beta}\right)+\frac{12}{\epsilon}\cdot\log\left( \frac{3T}{\beta}\right)\right]\] \[\leq\log c_{\max}+\log\frac{4\sqrt{2}}{\epsilon}+\log d+\frac{1} {2}\log\log T+\log\log\left(\frac{3d\log T}{\beta}\right)+\log\frac{12}{ \epsilon}+\log\log\left(\frac{3T}{\beta}\right)\] (upper bounding log of sum by sum of logs) \[\leq\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/\beta)+ \log\log(3d\log T/\beta)\] as required, where the third inequality follows from \(\ell\leq\log T\). We use this to show that the \(s_{i}\) values in Algorithm 10 are at most \(\alpha_{\Gamma}=\frac{4d}{\epsilon}\cdot\sqrt{2L}\cdot\log\left(\frac{3d\log T }{\beta}\right)\) away from the true column sums at all times. **Lemma 25**.: _Assume that the random variables are bounded as in Lemma 23. Let \(t\in[T]\) and \(i\in[d]\). Then \(|s_{i}^{t}-h_{i}^{t}|\leq\alpha_{\Gamma}=O\left(\frac{d\sqrt{L}\cdot\log(d \log T/\beta)}{\epsilon}\right)\)._ Proof.: Assuming that the noises being bounded as stated in Lemma 23 we get an upper bound of \(L\) on the number of segments from Lemma 24. Let \(j\) be the segment to which time \(t\) belongs. Then \[|s_{i}^{t}-h_{i}^{t}| =|h_{i}^{t}+\Gamma_{i}^{j}-h_{i}^{t}|\] \[\leq\frac{4d}{\epsilon}\cdot\sqrt{2j}\cdot\log\left(\frac{3d\log T }{\beta}\right)\] \[\leq\alpha_{\Gamma}\] as required. We finally bound the true maximum query value increase in a single segment. **Lemma 26**.: _Assume that the random variables are bounded as in Lemma 23. Then in Algorithm 10, the true maximum query value \(\max_{i}g_{i}(h)\) for segment \(j\) increases by at most \(2^{j-1}+2\alpha_{DM}\), where \(\alpha_{DM}=\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}+L=O\left(\frac{d\sqrt{L }\log(d\log T/\beta)+\log(T/\beta)}{\epsilon}\right)\)._ Proof.: We condition on the noises being bounded as in Lemma 23. Recall that the time interval \((p_{j-1},p_{j}]\) is the \(j^{th}\) segment. First assume that either \(j<\ell\) or otherwise \(j=\ell\)_and_ the condition in Line 7 was true at time \(p_{j}\). Let \(M_{t}=\max_{i}g_{i}(h^{t})\) be the true maximum query value at time \(t\), and \(\mathrm{K}_{j}\) be the \(j^{th}\) threshold value. 
Then \[|M_{p_{j}}-M_{p_{j-1}}|\leq|\mathrm{K}_{j}-\mathrm{K}_{j-1}|+|M_{p_{j}}-\mathrm{ K}_{j}|+|M_{p_{j-1}}-\mathrm{K}_{j-1}|\] The definition of \(\mathrm{K}_{j}\) directly gives us that \(|\mathrm{K}_{j}-\mathrm{K}_{j-1}|=2^{j-1}\). Thus our task reduces to bounding \(|M_{p_{j}}-\mathrm{K}_{j}|\) for all \(j\). We do this in two parts. Let \(g_{max}(s^{t})=\max_{i}g_{i}(s^{t})\) be the maximum query value on the noisy histogram at time \(t\). First, using Lemma 25 and the fact that \(\max_{i}g_{i}\) is a sensitivity \(1\) function, we get that for all \(t\), \[|M_{t}-g_{max}(s^{t})|\leq\alpha_{\Gamma}. \tag{45}\] As the random variables are bounded as in Lemma 23 and since at time \(p_{j}\) the threshold was crossed, we have that \[g_{max}(s^{p_{j}})>\mathrm{K}_{j}-\alpha_{\mu}-\alpha_{\tau}.\] Putting these two equations together, we get that \[M_{p_{j}}-K_{j}>-\left(\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}\right).\] This gives us a lower bound on the value of \(M_{p_{j}}\). Now we show an upper bound. Let \(t<p_{j}\) be the last time step in which a segment was not closed. If a segment was closed at every time step until \(p_{j}\) of the algorithm, then let \(t=0\). Since at every time step between \(t\) and \(p_{j}\) a segment must have been closed and the total number of segments is at most \(\ell\), we get that \(t\geq p_{j}-\ell\). Let \(k\) be the segment that \(t\) belonged to. If \(t=0\), we set \(k=s^{0}_{max}=K_{0}=0\) in the following equation. Then at time \(t\), \[g_{max}(s^{t})\leq K_{k}+\alpha_{\mu}+\alpha_{\tau}\] Using Equation 45 and the above equation, we get \[M_{t}\leq K_{k}+\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma} \tag{46}\] Since \(t\geq p_{j}-\ell\) and the \(\max_{i}g_{i}\) is a sensitivity one function, \[M_{t}\geq M_{p_{j}}-\ell\] Since the thresholds do not decrease with time, \(K_{j}\geq K_{k}\). Note that \(\ell\leq L\) by Lemma 24. Using these two facts, and substituting the above equation into Equation 46, we get that \[M_{p_{j}}-K_{j}\leq\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}+L=\alpha_{DM}\] Thus putting it all together, we get that \[|M_{p_{j}}-M_{p_{j-1}}|\leq 2^{j-1}+2\cdot\alpha_{DM}\] as required. Now, for \(j=\ell\), if the condition in Line 7 was false, we have two cases. First, assume \(\ell=\log T\). Then at time \(p_{l-1}\), we have \[g_{max}(s^{p_{\ell-1}})+\mu_{p_{\ell-1}}>K_{\ell-1}+\tau_{\ell-1}=T/2+\tau_{ \ell-1}\] and therefore \[M_{p_{\ell-1}}>T/2-\alpha_{\mu}-\alpha_{\tau}-\alpha_{\Gamma}.\] Since \(M_{p_{\ell}}\leq M_{T}\leq T\), we have \[M_{p_{\ell}}-M_{p_{\ell-1}}\leq T/2+\alpha_{\mu}+\alpha_{\tau}+ \alpha_{\Gamma}=2^{\ell-1}+\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}.\] Second, assume \(\ell<\log T\). Then \[g_{max}(s^{p_{j}})\leq K_{j}+\alpha_{\mu}+\alpha_{\tau},\] and thus \[M_{p_{j}}\leq K_{j}+\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}.\] Since the threshold was crossed at time \(p_{j-1}\), we have \[M_{p_{j-1}}\geq K_{j-1}-(\alpha_{\mu}+\alpha_{\tau}+\alpha_{ \Gamma}).\] Therefore, \[|M_{p_{j}}-M_{p_{j-1}}|=M_{p_{j}}-M_{p_{j-1}} \leq(K_{j}-K_{j-1})+2(\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma})\] \[=2^{j-1}+2(\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma})\] \[\leq 2^{j-1}+2\alpha_{DM}\] which proves the claim. **Theorem 3**.: _With probability at least \(1-\beta\), we simultaneously have the following guarantees from Algorithm 10._ 1. _The total number of segments produced by the algorithm is upper bounded by_ \(L=\min\{\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/\beta)+\log\log(3 d\log T/\beta),\log T\}\)_,_ 2. 
_The difference between the noisy and the true histogram at all times stored by the algorithm is upper bounded by_ \(\alpha_{\Gamma}=O\left(\frac{d\sqrt{L}\log(d\log T/\beta)}{\epsilon}\right)\)_, and_ 3. _The true maximum query value increase within segment_ \(j\) _is at most_ \(2^{j-1}+2\alpha_{DM}\)_, where_ \(\alpha_{DM}=O\left(\frac{d\sqrt{L}\log(d\log T/\beta)+\log(T/\beta)}{\epsilon}\right)\)_._ Proof.: Assume that the Laplace random variables in Algorithm 10 are bounded as in Lemma 23. Then the three points of the theorem follow from Lemmas 24, 25, and 26 respectively. ## 9 Two-Level Mechanism for k-Query In this section, we combine the two mechanisms from the previous two sections to get an algorithm for k-Query. The first level of the mechanism is the same as Algorithm 10, which partitions the input stream into segments. For each such segment the second level algorithm is called, which is a modified version of Algorithm 8 and is given in Algorithm 12. The main difference to Algorithm 8 is that it does not start each column sum from \(0\), but instead it is given as input (1) a noisy histogram to initialize each column sum, (2) an upper bound on the amount of noise in the histogram, and (3) an upper bound on how much the maximum query value (with the given initial column sum values) can increase. The error of the input histogram has to be taken into account in the new partitioning algorithm, which results in a slightly more complicated algorithm than Algorithm 8. The full two-level algorithm is given in Algorithm 11. In this section we will refer to Algorithm 11 without the lines referring to Algorithm 12 as the _doubling mechanism_, and to Algorithm 12 as the _modified BoundedMaxQuery mechanism_. ### Privacy **Lemma 27**.: _Algorithm 11 satisfies \(2\epsilon\)-differential privacy._ Proof.: We deal with the outputs of the calls to Alg 12 separately from the outputs of Alg 11 in Line 17. Let \(x\) and \(y\) be two neighboring input streams. First, note that since the instantiations of the modified BoundedMaxQuery mechanism do not affect the outputs on Line 17, we can use Lemma 22 to prove that the doubling mechanism (since the outputs in Line 17 of the two-level mechanism are exactly the same as run on the doubling mechanism) is \(\epsilon\)-differentially private. Now we condition on all the internal random variables (namely, the Laplace random variables in Lines 6, 11, 13, and 14) of the two-level mechanism being fixed such that both \(x\) and \(y\) lead to the same sequence of segments, and argue about the privacy of the various modified BoundedMaxQuery mechanisms. Since the segments produced by the doubling mechanism are fixed, all the modified BoundedMaxQuery mechanisms operate on disjoint parts of the stream. Each instantiation of the modified BoundedMaxQuery mechanism is \(\epsilon\)-dp by Lemma 17. Since they operate on disjoint parts of the stream, by parallel composition, all instantiations of the modified BoundedMaxQuery mechanism together satisfy \(\epsilon\)-DP. Naive sequential composition now gives us the \(2\epsilon\)-DP guarantee. ### Accuracy #### 9.2.1 Algorithm 12 We first analyze the accuracy of Algorithm 12, assuming that the input noisy column sums given to the algorithm are at most an additive error \(\alpha_{\Gamma}\) away from the true column sums, and that the increase in the true maximum query value is bounded by \(\Delta\). 
This property is shown to hold for the two-level algorithm with probability \(\geq 1-\beta\) in Lemmas 25 and 26, and with \(\Delta=2^{j}+2\alpha_{DM}\). Let \(t\in[T]\) be an arbitrary time step, let \(s^{t}\) denote the value of \(s\) at time \(t\), and \(\mathrm{h}^{t}\) be the value of the true histogram at time \(t\). In the following let \(\alpha_{SV}:=12\epsilon^{-1}(\ln(6k\cdot\Delta/\beta)+\ln(6T/\beta))\) and \(\alpha_{U}:=\epsilon^{-1}6k\ln(3k\cdot\Delta/\beta)\). We condition on the following upper bounds on the additive error, which together hold with probability \(1-\beta\): 1. The error of \(H\) is bounded by \(\mathrm{err}(k\cdot\Delta,\beta/3)\). By assumption, this holds with probability \(1-\beta/3\). 2. Adding the same random variable \(\tau\) with \(\mathrm{Lap}(6/\epsilon)\) distribution to all \(K_{(i)}\) gives an additive error of at most \(\epsilon^{-1}6\ln(6k\cdot\Delta/\beta)\) with probability \(1-\beta/6\) by Fact 6 and the union bound, since we sample for that variable at most \(k\cdot\Delta\) times. 3. The random variable \(\mu\) with \(\mathrm{Lap}(12/\epsilon)\) distribution drawn at every time steps gives an additive error of at most \(\epsilon^{-1}12\ln(6T/\beta)\) by Fact 6 and the union bound with probability \(1-\beta/6\). Together with the previous condition we have * If the condition in line 12 is true for \(i\) at time \(t\), then \(g_{i}(s^{t})>K_{(i)}-\alpha_{SV}\) * If the condition in line 12 is false for \(i\), then \(g_{i}(s^{t})<K_{(i)}+\alpha_{SV}\). for \(\alpha_{SV}=12\epsilon^{-1}(\ln(6k\cdot\Delta/\beta)+\ln(6T/\beta))\) 4. We add \(\operatorname{\mathrm{Lap}}(3k/\epsilon)\) noise to \(g_{i}\) in line 16 at most \(k\cdot\Delta\) times for each \(i\in[k]\). Thus, by Fact 6 and the union bound, with probability \(1-\beta/3\) at most an additive error of \(\epsilon^{-1}3k\ln(3k^{2}\cdot\Delta/\beta)\leq\epsilon^{-1}6k\ln(3k\cdot \Delta/\beta)=\alpha_{U}\) is added to \(g_{i}\) for any \(i\) and any time step. We now proceed as follows: Recall that \(p_{j}\) denotes the end of the \(j\)th time interval and that \(p_{0}=t_{0}\). To prove accuracy we will first show an auxiliary lemma that says that \(p_{1}>t_{0}+1\). Next fix any \(i\in[k]\) and let \(p_{l}\) and \(p_{r}\) be any two time steps such that \(K_{(i)}\) is updated at \(p_{l}\) and \(p_{r}\) but not between them. Then we show that \(g_{i}\) must have increased by more than \(1\) between \(p_{l}\) and \(p_{r}\). The latter fact implies that \(K_{(i)}\) was not updated at time \(p_{r}-1\), which can be used to get an upper bound on \(g_{i}(h^{p_{r}-1})\) and, by the \(1\)-sensitivity of \(g_{i}\), also on \(g_{i}(h^{p_{r}})\). As \(K_{(i)}\) was updated at time \(p_{l}\), we also have a lower bound on \(g_{i}(h^{p_{l}})\). Combining the two gives an upper bound on \(|g_{i}(h^{p_{r}})-g_{i}(h^{p_{l}})|\) of \(O(K+\alpha_{SV}+\alpha_{U})\), which is the crucial bound needed to upper bound \(|g_{i}(h^{t})-g_{i}(s^{t})|\). In the rest of the section let \(K_{(i)}^{t}\) denote the value of \(K_{(i)}\) at time \(t\) when we reach Line 12 of Algorithm 12. To show that \(p_{1}>t_{0}+1\), we first show that whenever the \(i\)th threshold is updated, the true value of \(g_{i}\) is not much smaller than the threshold that was crossed. **Lemma 28**.: _Suppose the \(i\)th threshold \(K_{(i)}\) is updated at time \(t\). 
Then_ \[g_{i}(\mathrm{h}^{t})\geq K_{(i)}^{t}-C-\alpha_{U}-\text{err}(k\cdot\Delta,\beta/3)-\alpha_{\Gamma}.\] Proof.: This follows from the sensitivity of \(g_{i}\) and the fact that \(K_{(i)}\) was updated at time \(t\). \[g_{i}(\mathrm{h}^{t}) \geq g_{i}(s^{t})-\text{err}(k\Delta,\beta/3)-\alpha_{\Gamma}\] (since \(g_{i}\) has sensitivity \(1\)) \[\geq\tilde{g_{i}}(s^{t})-\alpha_{U}-\text{err}(k\Delta,\beta/3)-\alpha_{\Gamma}\] \[\geq K_{(i)}^{t}-C-\alpha_{U}-\text{err}(k\Delta,\beta/3)-\alpha_{\Gamma}\] (since \(K_{(i)}\) is updated at time \(t\)) as required. **Lemma 29**.: _It holds that \(p_{1}>1\)._ Proof.: Note that variable \(C\) in Algorithm 12 is larger than \(\alpha_{SV}+\alpha_{U}\). Thus, if the condition in line 12 is true for some \(i\) at time \(p_{j}\), then the condition in line 17 is also true for the same \(i\). Thus, whenever we close a segment, we also update at least one threshold, say the \(i\)th one. Using Lemma 28 with \(t=p_{j}\) gives us that \[g_{i}(h^{p_{j}})\geq K_{(i)}^{p_{j}}-C-\alpha_{U}-\text{err}(k\Delta,\beta/3)-\alpha_{\Gamma}\] Note that since \(K>C+\text{err}(k\cdot\Delta,\beta/3)+\alpha_{U}+\alpha_{\Gamma}\), this implies \(g_{i}(\mathrm{h}^{p_{1}})>1\). As \(g_{i}\) increases by at most \(1\) per time step, it follows that \(p_{1}>1\). Next we show an upper bound on the true query values when a threshold is updated. **Lemma 30**.: _Let \(i\in[k]\). Let \(p_{r}\) be a timestep where threshold \(K_{(i)}\) is updated. Then \(g_{i}(\mathrm{h}^{p_{r}})<K_{(i)}^{p_{r}}+\text{err}(k\cdot\Delta,\beta/3)+\alpha_{SV}+\alpha_{U}+\alpha_{\Gamma}+1\). Further, let \(l\) and \(r\) be integers such that \(p_{l}\) and \(p_{r}\) are two consecutive time steps where threshold \(K_{(i)}\) is updated. Then \(p_{r}-p_{l}>1\) and \(|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})|>1\)._ Proof.: We show the claim by induction over the number of updates of \(K_{(i)}\). As \(p_{r}\) is a time step where \(K_{(i)}\) is updated, the condition in line 12 is true, and, thus, \(r\leq k\cdot\Delta\). _Case 1:_ If \(p_{r}\) is the first time that threshold \(K_{(i)}\) is updated, then \(p_{r}\geq p_{1}>1\) by Lemma 29. As the threshold was not updated before, it was not updated at time \(p_{r}-1\). This fact and the fact that \(r\leq k\cdot\Delta\) imply that either the condition in line 12 was false or the condition in line 17 was false for \(i\) at time \(p_{r}-1\). Thus, either \[g_{i}(s^{p_{r}-1})<K_{(i)}^{p_{r}}+\alpha_{SV}<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}\] or \[g_{i}(s^{p_{r}-1})<K_{(i)}^{p_{r}}-C+\alpha_{U}<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U},\] and hence, \[g_{i}(\mathrm{h}^{p_{r}-1})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{err}(k\Delta,\beta/3)+\alpha_{\Gamma}.\] As \(g_{i}\) has sensitivity \(1\) and \(\mathrm{h}^{p_{r}}\) and \(\mathrm{h}^{p_{r}-1}\) differ by at most \(1\) in each coordinate, it holds that \(g_{i}(\mathrm{h}^{p_{r}})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{\Gamma}+1\). _Case 2:_ If \(p_{r}\) is not the first time threshold \(K_{(i)}\) is updated, let \(p_{l}\) be the last time at which threshold \(K_{(i)}\) was updated before \(p_{r}\). By induction, we have \(g_{i}(\mathrm{h}^{p_{l}})<K_{(i)}^{p_{l}}+\alpha_{SV}+\alpha_{U}+\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{\Gamma}+1=K_{(i)}^{p_{r}}-K+\alpha_{SV}+\alpha_{U}+\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{\Gamma}+1\). 
Since the threshold \(K_{(i)}\) is updated at time \(p_{r}\), we have \(g_{i}(\mathrm{h}^{p_{r}})\geq K_{(i)}^{p_{r}}-C-\alpha_{U}-\mathrm{err}(k \cdot\Delta,\beta/3)-\alpha_{\Gamma}\) by Lemma 28 for \(t=p_{r}\). Thus, \[|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})| >K_{(i)}^{p_{j}}-C-\alpha_{U}-\mathrm{err}(k\cdot\Delta,\beta/3)- \alpha_{\Gamma}\] \[\quad-(K_{(i)}^{p_{r}}-K+\mathrm{err}(k\cdot\Delta,\beta/3)+ \alpha_{SV}+\alpha_{U}+\alpha_{\Gamma}+1)\] \[=K-C-2\mathrm{err}(k\cdot\Delta,\beta/3)-\alpha_{SV}-2\alpha_{U}- 2\alpha_{\Gamma}-1>1,\] since \(K=3(C+\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{\Gamma})\) and \(C>\alpha_{SV}+\alpha_{U}\). Thus, \(p_{r}-p_{l}>1\), and as \(g_{i}\) has sensitivity \(1\), the threshold \(K_{(i)}\) was not updated at time \(p_{r}-1\). Now by the same argument as before, \(g_{i}(\mathrm{h}^{p_{r}-1})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{err }(k\cdot\Delta,\beta/3)+\alpha_{\Gamma}\) and therefore \(g_{i}(\mathrm{h}^{p_{r}})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{err} (k\cdot\Delta,\beta/3)+\alpha_{\Gamma}+1\). Let \(t(i)\) be the last time in the whole input sequence that \(K_{(i)}\) was updated, let \(p_{l}\) be the last time _before_\(t\) that \(K_{(i)}\) was updated, and let \(p_{r}\) be the first time step _at or after_\(t\) at which \(K_{(i)}\) gets updated, i.e., \(p_{r}\geq t\). _Case A:_\(t\leq t(i)\), i.e., there exists a time step \(\geq t\) at which \(K_{(i)}\) is updated. First, we show by induction an upper bound on \(g_{i}(\mathrm{h}^{p_{r}})\). Now, by Lemma 30 applied to \(p_{r}\) and Lemma 28 applied to \(p_{l}\), we get \[|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})| <K_{(i)}^{p_{r}}+\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{SV}+ \alpha_{U}+\alpha_{\Gamma}+1\] \[\quad-(K_{(i)}^{p_{l}}-K-C-\alpha_{U}-\mathrm{err}(k\cdot\Delta, \beta/3)-\alpha_{\Gamma})\] \[=K+C+2\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{SV}+2\alpha_{U}+ 2\alpha_{\Gamma}+1. \tag{47}\] The error for outputting \(g_{i}\) at any time step \(t\) in an interval \([p_{j-1},p_{j})\subseteq[p_{l},p_{r})\) for any \(j\leq k\cdot\Delta\) is now bounded by: \[|g_{i}(\mathrm{h}^{t})-g_{i}(s^{p_{j-1}})| \leq|g_{i}(\mathrm{h}^{t})-g_{i}(\mathrm{h}^{p_{j-1}})|+|g_{i}( \mathrm{h}^{p_{j-1}})-g_{i}(s^{p_{j-1}})|\] \[\leq|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})|+ \mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{\Gamma} \tag{48}\] \[\leq K+C+3\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{SV}+2\alpha_{U }+3\alpha_{\Gamma}+1\] \[=O(\epsilon^{-1}(k\ln(k\cdot\Delta/\beta)+\ln(T/\beta))+\mathrm{ err}(k\cdot\Delta,\beta/3)+\alpha_{\Gamma})\] Further, because of Lemma 30, every time we update \(K_{(i)}\), \(g_{i}(\mathrm{h}^{t})\) grows by at least \(1\). Since all \(g_{i}(\mathrm{h}^{t})\) are bounded by \(\Delta\), and every time \(j\) is updated at least one threshold is updated, this implies that \(j<k\cdot\Delta\) is always true. _Case B:_\(t>t(i)\). Consider the last time step \(t_{\infty}\). As \(t_{\infty}\geq t>t(i)\), \(K_{(i)}\) was not updated at time \(t_{\infty}\). Since \(j\leq k\cdot\Delta\), at time \(t_{\infty}\) either the condition in line 12 was false or the condition in line 17 was false. Therefore \(g_{i}(h^{T})<K_{(i)}^{T}+\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{U}+\alpha_{ SV}+\alpha_{\Gamma}\). Then by Lemma 28, \(g_{i}(\mathrm{h}^{t(i)})\geq K_{(i)}^{T}-K-C-\alpha_{U}-\mathrm{err}(k\cdot \Delta,\beta/3)-\alpha_{\Gamma}\). 
These two equations together give \[g_{i}(\mathrm{h}^{T})-g_{i}(\mathrm{h}^{t(i)})\leq K+C+2\alpha_{U}+\alpha_{SV}+\mathrm{err}(k\Delta,\beta/3)+2\alpha_{\Gamma}\leq K+3C+\mathrm{err}(k\Delta,\beta/3)+2\alpha_{\Gamma} \tag{49}\] The error for outputting \(g_{i}\) at any timestep \(t\) in an interval \([p_{j-1},p_{j})\subseteq[t(i),T]\) and at \(t=T\) is now bounded by \[|g_{i}(\mathrm{h}^{t})-g_{i}(s^{p_{j-1}})| \leq|g_{i}(\mathrm{h}^{t})-g_{i}(\mathrm{h}^{p_{j-1}})|+|g_{i}(\mathrm{h}^{p_{j-1}})-g_{i}(s^{p_{j-1}})|\] \[\leq|g_{i}(\mathrm{h}^{T})-g_{i}(\mathrm{h}^{t(i)})|+\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{\Gamma}\] \[\leq K+3C+2\mathrm{err}(k\cdot\Delta,\beta/3)+3\alpha_{\Gamma}\] (by Equation 49) \[=O(\epsilon^{-1}(k\ln(k\cdot\Delta/\beta)+\ln(T/\beta))+\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{\Gamma}) \tag{50}\] 
Thus Equations 48 and 50 together give a bound on \(|g_{i}(\mathrm{h}^{t})-g_{i}(s^{p_{j-1}})|\) in both cases. For any time \(t\) that belongs to interval \(j\), the values output by Algorithm 12 are \((g_{1}(s^{p_{j-1}}),g_{2}(s^{p_{j-1}}),\ldots,g_{k}(s^{p_{j-1}}))\), so the error \(\max_{t}\max_{i\in[k]}|g_{i}(\mathrm{h}^{t})-g_{i}(s^{p_{j-1}})|\) is bounded by the right-hand sides of Equations 48 and 50. This gives us the following lemma. **Lemma 31**.: _Assume the input noisy column sums have additive error at most \(\alpha_{\Gamma}\) and the true maximum query value increases by at most \(\Delta\). Then Algorithm 12 is \((\alpha,\beta)\)-accurate, with \(\alpha=O(\epsilon^{-1}(k\ln(k\cdot\Delta/\beta)+\ln(T/\beta))+\text{err}(k\cdot\Delta,\beta/3)+\alpha_{\Gamma})\)._ #### 9.2.2 Algorithm 11 **Lemma 32**.: _Algorithm 11 is \((\alpha,2\beta)\)-accurate for k-Query, with_ \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(\epsilon\cdot\text{err}(k2^{L+2},\beta/3L)+d\log(d/\beta)\cdot\sqrt{\log(dc_{\max}/\epsilon)}+\log(T/\beta)+k\log(kdc_{\max}/\epsilon\beta)\right)\right)\] Proof.: Consider the \(j\)th segment produced by the doubling mechanism. By Theorem 3, the max query value increase bound \(\Delta\) for this segment is defined as \(2^{j-1}+2\alpha_{DM}\). Taking \(\beta^{\prime}=\beta/3L\) in Lemma 31 gives us that the \(j^{th}\) modified BoundedMaxQuery mechanism has the following accuracy with probability \(\geq 1-\beta/L\) \[\alpha=O\left(\epsilon^{-1}\cdot d\sqrt{L}\cdot\log(d\log(TL/\beta))+\mathrm{err}(k\Delta,\beta/3L)+\epsilon^{-1}\cdot\log(TL/\beta)+\epsilon^{-1}\cdot k\log(Lk\Delta/\beta)\right) \tag{51}\] In what follows, \(O_{\log\log}\) hides \(\log\log(d,k,T,1/\epsilon,1/\beta)\) terms. We first upper bound \(\Delta\). Since \(j\leq L\), \[\Delta=2^{j-1}+2\alpha_{DM}\leq 2^{L}+2\alpha_{DM}\] We show that \(\alpha_{DM}\leq 2^{L+1}\), which lets us obtain an upper bound of \(\Delta\leq 2^{L+2}\). Recall that \(\alpha_{DM}=L+\alpha_{\Gamma}+\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)+\frac{4}{\epsilon}\cdot\log\left(\frac{3\log T}{\beta}\right)\). We bound \(L\) trivially by \(2^{L}\). We now bound the rest of the term by \(2^{L}\). 
First, we bound \(\alpha_{\Gamma}\) \[\alpha_{\Gamma} =\frac{4d}{\epsilon}\cdot\sqrt{2L}\cdot\log\left(\frac{3d\log T} {\beta}\right)\] \[\leq\frac{4d}{\epsilon}\cdot\sqrt{2\log T}\cdot\log\left(\frac{3 d\log T}{\beta}\right)\] (since \[L\leq\log T\] ) Thus \[\alpha_{\Gamma}+\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{ \beta}\right)+\frac{4}{\epsilon}\cdot\log\left(\frac{3\log T}{\beta}\right) \leq\frac{4d}{\epsilon}\cdot\sqrt{2\log T}\cdot\log\left(\frac{3 d\log T}{\beta}\right)+\frac{12}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)\] \[\leq c_{\max}+\frac{4d}{\epsilon}\cdot\sqrt{2\log T}\cdot\log \left(\frac{3d\log T}{\beta}\right)+\frac{12}{\epsilon}\cdot\log\left(\frac{3T }{\beta}\right)\] \[\leq 2^{L}\] (by definition of \[L\] and ( 44 )) which gives \(\alpha_{DM}\leq 2^{L+1}\). This gives us the required upper bound on \(\Delta\) of \(2^{L+2}\). Next, we show that \(\log L=O_{\log\log}(1)\). This follows, since \[L=\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/\beta)+\log\log(3d\log T /\beta)\] and so \[\log L=\log\log\left(\left(20\epsilon^{-2}dc_{\max}\right)\cdot\log^{4}(T/ \beta)\cdot\log(3d\log T/\beta)\right)=\widetilde{O}(1).\] Plugging in \(\Delta\leq 2^{L+2}\), \(\log L=O_{\log\log}(1)\), and \(\log\log T=O_{\log\log}(1)\) into Equation 51, we get \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(d\sqrt{L}\log(d/\beta)+ \epsilon\cdot\mathrm{err}(k2^{L+2},\beta/3L)+\log(T/\beta)+k\log(k2^{L+2}/ \beta)\right)\right) \tag{52}\] For the last term in the summation, \[\log 2^{L+2}=L+2 =O(\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/\beta)+ \log\log(3d\log T/\beta))\] \[=O_{\log\log}(\log(\epsilon^{-2}dc_{\max}))\] In the first term in Equation 52, we bound \(\sqrt{L}\) as follows \[\sqrt{L}\leq\sqrt{\log(20\epsilon^{-2}dc_{\max})}+\sqrt{4\log\log(T/\beta)}+ \sqrt{\log\log(3d\log(T/\beta))}\] Since the final two terms are \(O_{\log\log}(1)\), this reduces to \[\sqrt{L}\leq O_{\log\log}\left(\sqrt{\log(dc_{\max}/\epsilon)}\right)\] Plugging these bounds on \(d\sqrt{L}\log d\) and \(\log 2^{L+2}\) in Equation 52, we get \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(d\log(d/\beta)\cdot\sqrt{\log( dc_{\max}/\epsilon)}+\epsilon\cdot\text{err}(k2^{L+2},\beta/3L)+\log(T/\beta)+k\log( kdc_{\max}/\epsilon\beta)\right)\right)\] Since there are at most \(L\) segments in the two-level mechanism, there are at most \(L\) instantiations of Algorithm 12. Thus all the accuracy guarantees hold together with probability at least \(1-\beta\). Combining the guarantees for the two-level mechanism and the modified BoundedMaxQuery mechanism, we get that Algorithm 11 is \((\alpha,2\beta)\)-accurate for \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(\epsilon\cdot\text{err}(k2^{ L+2},\beta/3L)+d\log(d/\beta)\cdot\sqrt{\log(dc_{\max}/\epsilon)}+\log(T/ \beta)+k\log(kdc_{\max}/\epsilon\beta)\right)\right)\] which proves the claimed accuracy guarantee. **Corollary 4**.: _Algorithm 11 instantiated with the histogram mechanism from Fact 4 is \((\alpha,2\beta)\)-accurate for k-Query, with_ \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(d\log(d/\beta)\cdot\left( \log^{2}(dc_{\max}/\epsilon^{2})+\log^{2}k\right)+\log(T/\beta)+k\log(kdc_{ \max}/\epsilon\beta)\right)\right)\] Proof.: Using the histogram mechanism from Fact 4, we have \(\text{err}(k2^{L+2},\beta/3L)=O\left(\epsilon^{-1}\cdot\left(d\ln(dL/\beta) \log^{2}(k2^{L})\right)\right)\). We start by bounding \(\log^{2}(k2^{L})\). 
\[\log^{2}k2^{L} =\left(L+\log k\right)^{2}\] \[\leq\left(\log\left(20\epsilon^{-2}dc_{\max}\right)+4\log\log(T/ \beta)+\log\log(3d\log T/\beta)\right)^{2}\] \[\quad+\log^{2}k+\left(\log\left(20\epsilon^{-2}dc_{\max}\right)+4 \log\log(T/\beta)+\log\log(3d\log T/\beta)\right)\cdot\log k\] \[\leq O_{\log\log}\left(\log^{2}(dc_{\max}/\epsilon^{2})+\log^{2}k\right)\] Thus \[\text{err}(k2^{L+2},\beta/3L)\leq O_{\log\log}\left(\epsilon^{-1}\cdot d\log (d/\beta)\cdot\left(\log^{2}(dc_{\max}/\epsilon^{2})+\log^{2}k\right)\right)\] Plugging this into the accuracy guarantee in Lemma 32, the \(d\sqrt{\log c_{\max}}\) term is dominated by \(\text{err}(k2^{L+2},\beta/3L)\). This gives \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(d\log(d/\beta)\cdot\left( \log^{2}(dc_{\max}/\epsilon^{2})+\log^{2}k\right)+\log(T/\beta)+k\log(kdc_{ \max}/\epsilon\beta)\right)\right)\] as required. This bound is \(\alpha=\widetilde{O}\left(\left(d\log^{2}c_{\max}+k\log c_{\max}+\log T\right) \epsilon^{-1}\right)\) as stated in Theorem 1. ## 10 \((\epsilon,\delta)\)-Differential Private algorithm for \(k\) queries In this section, we show how to answer \(k\) monotone histogram queries with sensitivity \(1\) with \((\epsilon,\delta)\)-differential privacy, assuming that the maximum query answer on any of them is \(c_{\max}\). **Lemma 33**.: _Let \((g_{1},\ldots,g_{k})\) be \(k\) monotone histogram queries with sensitivity \(1\). Given an upper bound \(c_{\max}\) on the maximum output of any \(g_{i}\) for \(i\in[k]\), and an upper bound \(T\) on the stream length, there exists an \((\alpha,\beta)\)-accurate and \((\epsilon,\delta)\)-differentially private algorithm for computing \(g_{1},\ldots,g_{k}\) at every time step where \(\alpha=O(\epsilon^{-1}(\sqrt{k\ln(e^{2\epsilon/3}kc_{max}/(\beta\delta))}+ \ln(kc_{\max}/\beta)+\ln(T/\beta))+\text{err}(k\cdot c_{\max},\beta/3))\) and \(\text{err}(n,\beta/3)\) is the additive error bound that holds with probability \(\geq 1-\beta/3\) on all outputs of an \((\epsilon,\delta)\)-dp continuous histogram mechanism \(H\) when \(H\) is run on a stream of length \(n\)._ ``` Input: Stream \(x^{1},x^{2},\ldots,x^{T}\in\{0,1\}^{d}\), upper bound \(c_{\max}\) on any \(g_{i}\), an adaptively \((\epsilon/3,\delta/(2e^{2\epsilon/3}))\)-DP continuous histogram mechanism \(H\), additive error bound \(\operatorname{err}(n,\beta/3)\) on all outputs of \(H\) that holds with probability \(\geq 1-\beta/3\) when \(H\) is run on a stream of length \(n\). 
Output: Estimate of \(g_{i}(\operatorname{h}(t))\) for all \(i\in[k]\) and all \(t\in[T]\) 1 Initialize an adaptively \((\epsilon/3,\delta/(2e^{2\epsilon/3}))\)-differentially private continuous histogram mechanism \(H\) for stream length \(k\cdot c_{\max}\) 2\(p_{0}\gets 0\), \(j\gets 1\) 3\(c_{i}\gets s_{i}\gets 0\) for all \(i\in[d]\) 4\(\alpha_{SV}\gets 12e^{-1}(\ln(6k\cdot c_{\max}/\beta)+\ln(6T/\beta))\), \(\alpha_{U}\gets 6\epsilon^{-1}\sqrt{k\ln(12e^{2\epsilon/3}kc_{\max}/(\beta \delta))}\) 5\(C\leftarrow\alpha_{SV}+\alpha_{U}\) 6\(\operatorname{K}\leftarrow 3(C+\operatorname{err}(k\cdot c_{\max},\beta/3))\) 7\(\tau\leftarrow\operatorname{Lap}(6/\epsilon)\) 8\(\operatorname{K}_{(i)}\leftarrow\operatorname{K}\), \(\operatorname{\tilde{K}}_{(i)}\leftarrow\operatorname{K}_{(i)}+\tau\) for all \(i\in[k]\) 9\(\text{out}\leftarrow(g_{1}(\mathbf{0}),g_{2}(\mathbf{0}),\ldots,g_{k}(\mathbf{0}))\) 10for\(t\in\mathbb{N}\)do 11\(c_{i}\gets c_{i}+x_{i}^{t}\), \(s_{i}\gets s_{i}+x_{i}^{t}\) for all \(i\in[d]\) 12\(\mu\leftarrow\operatorname{Lap}(12/\epsilon)\) 13if\(\exists i\in[k]\): \(g_{i}(s)+\mu>\operatorname{\tilde{K}}_{(i)}\)and \(j\leq k\cdot c_{\max}\)then 14\(p_{j}\gets t\), \(j\gets j+1\)\(\triangleright\) Close the current segment 15 insert \((c_{1},\ldots,c_{d})\) into \(H\), reset \(c_{i}\gets 0\) for all \(i\in[d]\) 16for\(i\in[k]\)do 17\(\tilde{g}_{i}(s)\gets g_{i}(s)+N(0,18k\ln(4e^{2\epsilon/3}/\delta)/\epsilon^ {2})\) 18if\(\tilde{g}_{i}(s)>\operatorname{K}_{(i)}-C\)then 19\(\operatorname{K}_{(i)}\leftarrow\operatorname{K}_{(i)}+\operatorname{K}\) 20 21 end if 22 23 end for 24\(\tau=\operatorname{Lap}(6/\epsilon)\) 25\(\operatorname{\tilde{K}}_{(i)}\leftarrow\operatorname{K}_{(i)}+\tau\) for all \(i\in[k]\) 26\((s_{1},\ldots,s_{d})\leftarrow\operatorname{output}(H)\)\(\triangleright\) Current count from the histogram mechanism 27\(\text{out}\leftarrow(g_{1}(s),\ldots,g_{k}(s))\) 28 end for 29output out 30 end for 31\(p_{j}=T\) ``` **Algorithm 13**Mechanism for answering \(k\) queries with \((\epsilon,\delta)\)-DP ``` Input: Streams \(x=x^{1},x^{2},\ldots,x^{T}\in\{0,1\}^{d}\) and \(y=y^{1},y^{2},\ldots,y^{T}\in\{0,1\}^{d}\) such that \(x\) and \(y\) are neighboring and differ in time \(t^{*}\), partition bound \(\Delta\), initial values \(s_{1},\ldots,s_{d}\), \(K\), \(C\) 1\(p_{0}\gets 0\), \(j\gets 1\) 2\(c_{i}^{x}=0\) for all \(i\in[d]\), \(c_{i}^{y}=0\) for all \(i\in[d]\) 3 ChallengeOver = False 4\(\tau\leftarrow\mathrm{Lap}(6/\epsilon)\) 5\(\mathrm{K}_{(i)}\leftarrow\mathrm{K}\), \(\widetilde{\mathrm{K}}_{(i)}\leftarrow\mathrm{K}_{(i)}+\tau\) for all \(i\in[k]\) 6for\(t\in[T]\)do 7\(c_{i}^{x}=c_{i}^{x}+x_{i}^{t}\), \(s_{i}=s_{i}+x_{i}^{t}\) for all \(i\in[d]\) 8\(c_{i}^{y}=c_{i}^{y}+y_{i}^{t}\) for all \(i\in[d]\) 9\(\mu=\mathrm{Lap}(12/\epsilon)\) 10if\(\exists i\in[k]\): \(g_{i}(s)+\mu>\widetilde{\mathrm{K}}_{(i)}\)and\(j\leq\Delta\)then 11\(p_{j}\gets t\), \(j\gets j+1\)\(\triangleright\) Close the current interval 12if\(p_{j}\geq t^{*}\)and ChallengeOver=Falsethen 13\(\mathrm{type}_{j}=\)challenge 14output(\(c^{x},c^{y}\)) 15 ChallengeOver=True 16 17 end if 18else 19\(\mathrm{type}_{j}=\)regular 20output\(c^{x}\) 21 22 end if 23for\(i\in[k]\)do 24\(\widetilde{g}_{i}(s)\gets g_{i}(s)+N(0,18k\ln(4e^{2\epsilon/3}/\delta)/ \epsilon^{2})\) 25if\(\widetilde{g}_{i}(s)>\mathrm{K}_{(i)}-C\)then 26\(\mathrm{K}_{(i)}\leftarrow\mathrm{K}_{(i)}+\mathrm{K}\) 27 end if 28 reset \(c_{i}^{x}\gets 0\), \(c_{i}^{y}\gets 0\) for all \(i\in[d]\) 29\(\tau=\mathrm{Lap}(6/\epsilon)\) 
30\(\widetilde{\mathrm{K}}_{(i)}\leftarrow\mathrm{K}_{(i)}+\tau\) for all \(i\in[k]\) 31 receive \((s_{1},\ldots,s_{d})\gets H\) 32 end if 33 34 end if 35 36 end for 37\(p_{j}\gets T\) ``` **Algorithm 14**Privacy game \(\Pi_{H,Adv(x,y)}\) for the adaptive continual release model and \(k\) queries for histogram mechanism \(H\) ### Privacy **Lemma 34**.: _Let \(\epsilon,\delta>0\). If \(H\) is an \((\epsilon/3,\delta/(2e^{2\epsilon/3}))\)-differentially private continuous histogram mechanism, then Algorithm 13 satisfies \((\epsilon,\delta)\)-differential privacy. This holds independently of the initial setting of \((s_{1},\ldots,s_{d})\), \(K\) and \(C\)._ Proof.: Let \(x\) and \(y\) be two neighboring streams that differ at time \(t^{*}\). Let \(S\) be a subset of possible outputs of the algorithm and let \(\mathcal{A}(x)\) be the output stream of Algorithm 13 with input stream \(x\). We show that \[\Pr\left[\mathcal{A}(x)\in S\right]\leq e^{\epsilon}\cdot\Pr\left[\mathcal{A} (y)\in S\right]+\delta\] The arguments also hold when swapping the identities of \(x\) and \(y\) since they are symmetric, which gives us the privacy guarantee. Thus we focus on proving the inequality above. We use that the histogram mechanisms \(H\) is \((\epsilon/3,\delta/(2e^{2\epsilon/3}))\)-differentially private in the adaptive continual release model. To argue privacy, define an adversary to model the interaction between the partitioning algorithm and the counting mechanisms, see Algorithm 14: We define \(Adv(x,y)\) to be Algorithm 14 run with the setting of parameters corresponding to Algorithm 13. It is given as input two neighboring streams \(x\) and \(y\) differing at time \(t^{*}\). It basically runs Algorithm 13 on \(x\), only for the interval including \(t^{*}\), it outputs both the interval counts for \(x\) and the interval counts for \(y\). This is the challenge time step in the game defined in Algorithm 1. If \(\text{side}=L\), the counts for \(x\) are sent to the counting mechanism, if \(\text{side}=R\), the counts for \(y\) are sent. Note that for the partitioning of the stream, only \(x\) and the outputs from \(H\) are taken into account. Now, let \(S\) be a subset of all possible outputs of Algorithm 13. Abusing notation, we say that a view \(V\) of the adversary \(Adv\) satisfies \(V\in S\), if the streams of \((s_{1},\dots,s_{d})\) received from \(H\) match the output sequences in \(S\). We then have \[\Pr(\mathcal{A}(x)\in S)=\Pr(V^{(L)}_{H,Adv(x,y)}\in S)\] and \[\Pr(\mathcal{A}(y)\in S)=\Pr(V^{(L)}_{H,Adv(y,x)}\in S)\] Since by assumption \(H\) is adaptively \((\epsilon/3,\delta/(2e^{2\epsilon/3}))\)-differentially private, we have \[\Pr(V^{(R)}_{H,Adv(y,x)}\in S)\leq e^{\epsilon/3}\Pr(V^{(L)}_{H,Adv(y,x)}\in S )+\delta/2e^{2\epsilon/3}\] It remains to prove \[\Pr(V^{(L)}_{H,Adv(x,y)}\in S)\leq e^{2\epsilon/3}\Pr(V^{(R)}_{H,Adv(y,x)}\in S )+\delta/2, \tag{53}\] since then \[\begin{split}\Pr(\mathcal{A}(x)\in S)&=\Pr(V^{(L)}_ {H,Adv(x,y)}\in S)\\ &\leq e^{2\epsilon/3}\Pr(V^{(R)}_{H,Adv(y,x)}\in S)+\delta/2\\ &\leq e^{\epsilon}\Pr(V^{(L)}_{H,Adv(y,x)}\in S)+\delta/2+\delta/ 2\\ &=e^{\epsilon}\Pr(\mathcal{A}(y)\in S)+\delta\end{split} \tag{54}\] Note that when we run \(Adv(x,y)\) on \(\text{side}=L\), we partition according to \(x\) and the outputs of \(H\), and for each interval we give the counts for \(x\) as input to \(H\). 
When we run \(Adv(y,x)\) on \(\text{side}=R\), we partition according to \(y\) and the outputs of \(H\), and give the counts for \(x\) as input to \(H\) (since outside of the challenge time step, \(c_{i}^{x}\) and \(c_{i}^{y}\) are the same). So in order to prove (53), we argue first that on both runs, the probabilities of getting a given partition into intervals are \(e^{\epsilon/3}\)-close, and that the probabilities of updating the same thresholds on Line 17 are \((e^{\epsilon/3},\delta/2e^{\epsilon/3})\) close. Call the time interval \((p_{j-1},p_{j}]\) the \(j^{th}\)_interval_. If \(\ell\) is the value of \(j\) when the processing of the input stream ends, set \(p_{\ell}=T\). Note that the probabilities of computing any sequence of intervals \((p_{0},p_{1}],\dots,(p_{j-2},p_{j-1}]\) with \(p_{j-1}<t^{*}\) and \(j-1\leq\Delta\) are the same on both \(x\) and \(y\), since the streams are equal at all time steps before \(t^{*}\). We want to argue two things: first, that the probability of \(p_{j}=\lambda\) is \(e^{\epsilon/3}\)-close on \(x\) and \(y\), for any \(\lambda>t^{*}\), and second, that the probabilities of updating any subset of thresholds is \((e^{\epsilon/3},\delta/2e^{\epsilon/3})\)-close on \(x\) and \(y\). First note that if \(j>\Delta\), then the stream is ignored after \(p_{j-1}\) for both \(x\) and \(y\), so \(e^{\epsilon/3}\)-closeness follows trivially. If \(j\leq\Delta\), we condition on all the noises added in the algorithm before time \(p_{j-1}\) as well as the randomness of the histogram mechanism up until time \(p_{j-1}\). Fixing a particular time \(\lambda>p_{j-1}\), we first show that the probability of interval \(j\) ending at \(\lambda\) (i.e., \(p_{j}=\lambda\)) is \(e^{\epsilon/3}\)-close on \(x\) and \(y\). For this to happen, there must exist an \(i\in[k]\) with \(g_{i}(s)+\mu>\widetilde{K}_{(i)}\) at time \(\lambda\), and never before that in the time steps \((p_{j-1},\lambda-1]\). We use \(s^{t}(x)\) to denote the value of \(s\) at time \(t\) on stream \(x\). Let \(\mu_{t}\) be the \(\text{Lap}(12/\epsilon)\) noise defined in line 9 in iteration \(t\), and let \(\tau_{j}\) be the current value of \(\tau\). Further denote \(s^{t}(x)\) the vector of \((s_{i})_{i\in[d]}\) at time \(t\) for stream \(x\). Let \(f_{X}\) be the density of the random variable \(X\). Note that conditioning on all the noises being the same on \(x\) and \(y\) before \(p_{j-1}\), we have that any \(s_{i}\) at time \(t\leq p_{j}\) can differ by at most \(1\) on \(x\) and \(y\). Therefore \(g_{i}(s^{t}(x))\) and \(g_{i}(s^{t}(y))\) can also differ by at most \(1\). 
We now have: \[\Pr[p_{j}=\lambda\text{ on }x]=\Pr[g_{i}(s^{t}(x))+\mu_{t}\leq K_{(i)} +\tau_{j}\forall t\in(p_{j-1},\lambda),\forall i\in[k]\wedge\exists i:g_{i}(s^{ \lambda}(x))+\mu_{\lambda}>K_{(i)}+\tau_{j}]\] \[=\int_{z}\int_{c}\Pr_{\mu_{p_{j-1}},\ldots,\mu_{\lambda-1}}[g_{i}( s^{t}(x))+\mu_{t}\leq K_{(i)}+z\forall t\in(p_{j-1},\lambda)\forall i\in[k] \wedge\exists i:g_{i}(s^{\lambda}(x))+c>K_{(i)}+z]\] \[\cdot f_{\tau_{j}}(z)\cdot f_{\mu_{t}}(c)\ dz\ dc\] \[\leq\int_{z}\int_{c}\Pr_{\mu_{p_{j-1}},\ldots,\mu_{\lambda-1}}[g_ {i}(s^{t}(y))+\mu_{t}\leq K_{(i)}+(z+1)\forall t\in(p_{j-1},\lambda)\forall i \in[k]\wedge\exists i:g_{i}(s^{\lambda}(y))+(c+2)>K_{(i)}+(z+1)]\] \[\cdot f_{\tau_{j}}(z)\cdot f_{\mu_{t}}(c)\ dz\ dc\] \[\cdot f_{\tau_{j}}(z)\cdot f_{\mu_{t}}(c)\ dz\ dc\] \[=e^{\epsilon/3}\Pr[p_{j}=\lambda\text{ on }y].\] Next, note that conditioned on all previous outputs of \(H\) and \(p_{j}\) being equal, \(g_{i}(s^{p_{j}}(x))\) and \(g_{i}(s^{p_{j}}(y))\) can differ by at most \(1\) for each \(i\in[k]\). Thus the \(L_{2}\) difference between the two vectors is at most \(\sqrt{k}\). By Fact 2, adding \(N(0,18k\ln(4e^{2\epsilon/3}/\delta)/\epsilon^{2})\) noise to each \(g_{i}(s^{p_{j}}(y))\) to create \(\tilde{g_{i}}(s^{p_{j}}(y))\) ensures that _all_ distributions of \(\tilde{g_{i}}(s^{p_{j}}(x))\) and \(\tilde{g_{i}}(s^{p_{j}}(y))\) are \((e^{\epsilon/3},\delta/2e^{\epsilon/3})\)-close. Since the updating of thresholds only depends on those, this implies that the probabilities of updating any subset of thresholds on \(x\) and \(y\) are \((e^{\epsilon/3},\delta/2e^{\epsilon/3})\)-close. Finally, recall that running \(Adv(x,y)\) on \(\text{side}=L\) and \(Adv(y,x)\) on \(\text{side}=R\) both insert the counts for \(x\) into \(H\), and only differ in the partitioning. Since we have shown that the probabilities that \(t^{*}\) is in a segment \((p_{j-1},p_{j}]\) on the two runs are \(e^{\epsilon/3}\)-close, and the probabilities that we update the same thresholds for this interval are \((e^{\epsilon/3},\delta/2e^{\epsilon/3})\)-close, and since the rest is identical on both runs we have shown (53) and thus (54). ### Accuracy Proof We analyze the additive error of Algorithm 13. Let \(t\in[T]\) be an arbitrary time step, let \(s^{t}\) denote the value of \(s\) at time \(t\), and \(\text{h}^{t}\) be the value of the true histogram at time \(t\). In the following let \(\alpha_{SV:}=12\epsilon^{-1}(\ln(6k\cdot c_{\max}/\beta)+\ln(6T/\beta))\) and \(\alpha_{U}:=6\epsilon^{-1}\sqrt{k\ln(12e^{2\epsilon/3}kc_{\max}/(\beta\delta))}\). We condition on the following upper bounds on the additive error, which together hold with probability \(1-\beta\): 1. The error of \(H\) is bounded by \(\text{err}(k\cdot c_{\max},\beta/3)\). By assumption, this holds with probability \(\geq 1-\beta/3\). 2. Adding the same random variable \(\tau\) with \(\text{Lap}(6/\epsilon)\) distribution to all \(K_{(i)}\) gives an additive error of at most \(\epsilon^{-1}6\ln(6k\cdot c_{\max}/\beta)\) with probability \(1-\beta/6\) by Fact 6 and the union bound, since we sample for that variable at most \(k\cdot c_{\max}\) times. 3. The random variable \(\mu\) with \(\text{Lap}(12/\epsilon)\) distribution drawn at every time steps gives an additive error of at most \(\epsilon^{-1}12\ln(6T/\beta)\) by Fact 6 and the union bound with probability \(1-\beta/6\). 
Together with the previous condition we have * If the condition in line 13 is true for \(i\) at time \(t\), then \(g_{i}(s^{t})>K_{(i)}-\alpha_{SV}\) * If the condition in line 13 is false for \(i\), then \(g_{i}(s^{t})<K_{(i)}+\alpha_{SV}\). for \(\alpha_{SV}=12\epsilon^{-1}(\ln(6k\cdot c_{\max}/\beta)+\ln(6T/\beta))\) 4. We add \(N(0,18k\ln(4e^{2\epsilon/3}/\delta)/\epsilon^{2})\) noise to \(g_{i}\) in line 17 at most \(k\cdot c_{\max}\) times for each \(i\in[k]\). Thus, by Fact 9 and the union bound, with probability \(1-\beta/3\) at most an additive error of \(6\epsilon^{-1}\sqrt{k\ln(12e^{2\epsilon/3}kc_{\max}/(\beta\delta))}=\alpha_{U}\) is added to \(g_{i}\) for any \(i\) and any time step. We now proceed as follows: Recall that \(p_{j}\) denotes the end of the \(j\)th time interval and that \(p_{0}=0\). To prove accuracy we will first show an auxiliary lemma that says that \(p_{1}>1\). Next fix any \(i\in[k]\) and let \(p_{l}\) and \(p_{r}\) be any two time steps such that \(K_{(i)}\) is updated at \(p_{l}\) and \(p_{r}\) but not between them. Then we show that \(g_{i}\) must have increased by more than \(1\) between \(p_{l}\) and \(p_{r}\). The latter fact implies that \(K_{(i)}\) was not updated at time \(p_{r}-1\), which can be used to get an upper bound on \(g_{i}(h^{p_{r}-1})\) and, by the \(1\)-sensitivity of \(g_{i}\), also on \(g_{i}(h^{p_{r}})\). As \(K_{(i)}\) was updated at time \(p_{l}\), we also have a lower bound on \(g_{i}(h^{p_{l}})\). Combining the two gives an upper bound on \(|g_{i}(h^{p_{r}})-g_{i}(h^{p_{l}})|\) of \(O(K+\alpha_{SV}+\alpha_{U})\), which is the crucial bound needed to upper bound \(|g_{i}(h^{t})-g_{i}(s^{t})|\). In the rest of the section let \(K_{(i)}^{t}\) denote the value of \(K_{(i)}\) at time \(t\) when we reach Line 13 of Algorithm 13. To show that \(p_{1}>1\), we first show that whenever the \(i\)th threshold is updated, the true value of \(g_{i}\) is not much smaller than the threshold that was crossed. **Lemma 35**.: _Suppose the \(i\)th threshold \(K_{(i)}\) is updated at time \(t\). Then_ \[g_{i}(\mathrm{h}^{t})\geq K_{(i)}^{t}-C-\alpha_{U}-\text{err}(k\cdot c_{\max},\beta/3).\] Proof.: This follows from the sensitivity of \(g_{i}\) and the fact that \(K_{(i)}\) was updated at time \(t\). \[g_{i}(\mathrm{h}^{t}) \geq g_{i}(s^{t})-\text{err}(k\cdot c_{\max},\beta/3)\] (since \[g_{i}\] has sensitivity \[1\] ) \[\geq\tilde{g_{i}}(s^{t})-\alpha_{U}-\text{err}(k\cdot c_{\max}, \beta/3)\] \[\geq K_{(i)}^{t}-C-\alpha_{U}-\text{err}(k\cdot c_{\max},\beta/3)\] (since \[K_{(i)}\] is updated at time \[t\] ) as required. **Lemma 36**.: _It holds that \(p_{1}>1\)._ Proof.: Note that variable \(C\) in Algorithm 13 is larger than \(\alpha_{SV}+\alpha_{U}\). Thus, if the condition in line 13 is true for some \(i\) at time \(p_{j}\), then the condition in line 18 is also true for the same \(i\). Thus, whenever we close a segment, we also update at least one threshold, say the \(i\)th one. Using Lemma 35 with \(t=p_{j}\) gives us that \[g_{i}(h^{p_{j}})\geq K_{(i)}^{p_{j}}-C-\alpha_{U}-\text{err}(k\cdot c_{\max},\beta/3)\] Note that since \(K>C+\text{err}(k\cdot c_{\max},\beta/3)+\alpha_{U}\), this implies \(g_{i}(\mathrm{h}^{p_{i}})>1\). As \(g_{i}\) increases by at most \(1\) per time step, it follows that \(p_{1}>1\). Next we show an upper bound on the true query values when a threshold is updated. **Lemma 37**.: _Let \(i\in[k]\). Let \(p_{r}\) be a timestep where threshold \(K_{(i)}\) is updated. 
Then \(g_{i}(\mathrm{h}^{p_{r}})<K_{(i)}^{p_{r}}+\text{err}(k\cdot c_{\max},\beta/3) +\alpha_{SV}+\alpha_{U}+1\). Further, let \(l\) and \(r\) be integers such that \(p_{l}\) and \(p_{r}\) are two consecutive time steps where threshold \(K_{(i)}\) is updated. Then \(p_{r}-p_{l}>1\) and \(|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})|>1\)._ Proof.: We show the claim by induction over the number of updates of \(K_{(i)}\). As \(p_{r}\) is a time step where \(K_{(i)}\) is updated, the condition in line 13 is true, and, thus \(r\leq k\cdot c_{\max}\). _Case 1:_ If \(p_{r}\) is the first time that threshold \(K_{(i)}\) is updated, then \(p_{r}\geq p_{1}>1\) by Lemma 36. As the threshold was not updated before, it was not updated at time \(p_{r}-1\). This fact and the fact that \(r\leq k\cdot c_{\max}\) imply that either the condition in line 13 was false or the condition in line 18 was false for \(i\) at time \(p_{r}-1\). Thus, either \[g_{i}(s^{p_{r}-1})<K_{(i)}^{p_{r}}+\alpha_{SV}<K_{(i)}^{p_{r}}+\alpha_{SV}+ \alpha_{U}\] or \[g_{i}(s^{p_{r}-1})<K_{(i)}^{p_{r}}-C+\alpha_{U}<K_{(i)}^{p_{r}}+\alpha_{SV}+ \alpha_{U},\] and hence, \[g_{i}(\mathrm{h}^{p_{r}-1})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\text{err}(k \cdot c_{\max},\beta/3).\] As \(g_{i}\) has sensitivity \(1\) and \(\mathrm{h}^{p_{r}}\) and \(\mathrm{h}^{p_{r}-1}\) differ by at most \(1\) in each coordinate, it holds that \(g_{i}(\mathrm{h}^{p_{r}})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{err}(k \cdot c_{\max},\beta/3)+1\). _Case 2:_ If \(p_{r}\) is not the first time threshold \(K_{(i)}\) is updated, let \(p_{l}\) be the last time at which threshold \(K_{(i)}\) was updated before \(p_{r}\). By induction, we have \(g_{i}(\mathrm{h}^{p_{l}})<K_{(i)}^{p_{l}}+\alpha_{SV}+\alpha_{U}+\mathrm{err} (k\cdot c_{\max},\beta/3)+1=K_{(i)}^{p_{r}}-K+\alpha_{SV}+\alpha_{U}+\mathrm{ err}(k\cdot c_{\max},\beta/3)+1\). Since the threshold \(K_{(i)}\) is updated at time \(p_{r}\), we have \(g_{i}(\mathrm{h}^{p_{r}})\geq K_{(i)}^{p_{r}}-C-\alpha_{U}-\mathrm{err}(k \cdot c_{\max},\beta/3)\) by Lemma 35 for \(t=p_{r}\). Thus, \[|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})| >K_{(i)}^{p_{j}}-C-\alpha_{U}-\mathrm{err}(k\cdot c_{\max},\beta/ 3)-(K_{(i)}^{p_{r}}-K+\mathrm{err}(k\cdot c_{\max},\beta/3)+\alpha_{SV}+\alpha _{U}+1)\] \[=K-C-2\mathrm{err}(k\cdot c_{\max},\beta/3)-\alpha_{SV}-2\alpha_ {U}-1>1,\] since \(K=3(C+\mathrm{err}(k\cdot c_{\max},\beta/3))\) and \(C>\alpha_{SV}+\alpha_{U}\). Thus, \(p_{r}-p_{l}>1\), and as \(g_{i}\) has sensitivity \(1\), the threshold \(K_{(i)}\) was not updated at time \(p_{r}-1\). Now by the same argument as before, \(g_{i}(\mathrm{h}^{p_{r}-1})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{err }(k\cdot c_{\max},\beta/3)\) and therefore \(g_{i}(\mathrm{h}^{p_{r}})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{err }(k\cdot c_{\max},\beta/3)+1\). Let \(t(i)\) be the last time in the whole input sequence that \(K_{(i)}\) was updated, let \(p_{l}\) be the last time _before_\(t\) that \(K_{(i)}\) was updated, and let \(p_{r}\) be the first time step _at or after_\(t\) at which \(K_{(i)}\) gets updated, i.e., \(p_{r}\geq t\). _Case A:_\(t\leq t(i)\), i.e., there exists a time step \(\geq t\) at which \(K_{(i)}\) is updated. First, we show by induction an upper bound on \(g_{i}(\mathrm{h}^{p_{r}})\). 
Now, by Lemma 37 applied to \(p_{r}\) and Lemma 35 applied to \(p_{l}\), we get \[|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})| <K_{(i)}^{p_{r}}+\mathrm{err}(k\cdot c_{\max},\beta/3)+\alpha_{SV}+\alpha_{U}+1-(K_{(i)}^{p_{l}}-K-C-\alpha_{U}-\mathrm{err}(k\cdot c_{\max},\beta/3))\] \[=K+C+2\mathrm{err}(k\cdot c_{\max},\beta/3)+\alpha_{SV}+2\alpha_{U}+1. \tag{55}\] The error for outputting \(g_{i}\) at any time step \(t\) in an interval \([p_{j-1},p_{j})\subseteq[p_{l},p_{r})\) for any \(j\leq k\cdot c_{\max}\) is now bounded by: \[|g_{i}(\mathrm{h}^{t})-g_{i}(s^{p_{j-1}})| \leq|g_{i}(\mathrm{h}^{t})-g_{i}(\mathrm{h}^{p_{j-1}})|+|g_{i}(\mathrm{h}^{p_{j-1}})-g_{i}(s^{p_{j-1}})|\] \[\leq|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})|+\mathrm{err}(k\cdot c_{\max},\beta/3)\] \[\leq K+C+3\mathrm{err}(k\cdot c_{\max},\beta/3)+\alpha_{SV}+2\alpha_{U}+1 \tag{56}\] \[=O(\epsilon^{-1}(\sqrt{k\ln(e^{2\epsilon/3}kc_{\max}/(\beta\delta)})+\ln(kc_{\max}/\beta)+\ln(T/\beta))+\mathrm{err}(k\cdot c_{\max},\beta/3))\] Further, because of Lemma 37, every time we update \(K_{(i)}\), \(g_{i}(\mathrm{h}^{t})\) grows by at least \(1\). Since all \(g_{i}(\mathrm{h}^{t})\) are bounded by \(c_{\max}\), and every time \(j\) is updated at least one threshold is updated, this implies that \(j<k\cdot c_{\max}\) is always true. _Case B:_\(t>t(i)\). Consider the last time step \(T\). As \(T\geq t>t(i)\), \(K_{(i)}\) was not updated at time \(T\). Since \(j\leq k\cdot c_{\max}\), at time \(T\) either the condition in line 13 was false or the condition in line 18 was false. Therefore \(g_{i}(h^{T})<K_{(i)}^{T}+\mathrm{err}(k\cdot c_{\max},\beta/3)+\alpha_{U}+\alpha_{SV}\). Then by Lemma 35, \(g_{i}(\mathrm{h}^{t(i)})\geq K_{(i)}^{T}-K-C-\alpha_{U}-\mathrm{err}(k\cdot c_{\max},\beta/3)\). These two equations together give \[g_{i}(\mathrm{h}^{T})-g_{i}(\mathrm{h}^{t(i)})\leq K+C+2\alpha_{U}+\alpha_{SV}+\mathrm{err}(k\cdot c_{\max},\beta/3)\leq K+3C+\mathrm{err}(k\cdot c_{\max},\beta/3) \tag{57}\] The error for outputting \(g_{i}\) at any timestep \(t\) in an interval \([p_{j-1},p_{j})\subseteq[t(i),T]\) and at \(t=T\) is now bounded by \[|g_{i}(\mathrm{h}^{t})-g_{i}(s^{p_{j-1}})| \leq|g_{i}(\mathrm{h}^{t})-g_{i}(\mathrm{h}^{p_{j-1}})|+|g_{i}(\mathrm{h}^{p_{j-1}})-g_{i}(s^{p_{j-1}})|\] \[\leq|g_{i}(\mathrm{h}^{T})-g_{i}(\mathrm{h}^{t(i)})|+\mathrm{err}(k\cdot c_{\max},\beta/3)\] \[\leq K+3C+2\mathrm{err}(k\cdot c_{\max},\beta/3)\] (by Equation 57) \[=O(\epsilon^{-1}(\sqrt{k\ln(e^{2\epsilon/3}kc_{\max}/(\beta\delta)})+\ln(kc_{\max}/\beta)+\ln(T/\beta))+\mathrm{err}(k\cdot c_{\max},\beta/3)) \tag{58}\] 
Thus the error of the algorithm is given by \[\alpha =\max_{t\in T}\max_{i\in[k]}|g_{i}(h^{t})-g_{i}(s^{p_{j-1}})|\] (where \[j\] is the interval that \[t\] belongs to) \[\leq\max_{t\in T}\max_{i\in[k]}O(\epsilon^{-1}(\sqrt{k\ln(e^{2 \epsilon/3}kc_{max}/(\beta\delta))}+\ln(kc_{\max}/\beta)+\ln(T/\beta))+\text{err }(k\cdot c_{\max},\beta/3))\] (by Equations 56 and 58) \[\leq O(\epsilon^{-1}(\sqrt{k\ln(e^{2\epsilon/3}kc_{max}/(\beta \delta))}+\ln(kc_{\max}/\beta)+\ln(T/\beta))+\text{err}(k\cdot c_{\max},\beta/3))\] This then gives us the following lemma. ``` Input: Stream \(x^{1},x^{2},\ldots,x^{T}\in\{0,1\}^{d}\). Output: End of each segment together with an estimate of the histogram at that time step. 1\(p_{0}\gets 0\), \(j\gets 1\) 2\(s_{i}\gets 0\) for all \(i\in[d]\) 3\(\mathrm{K}_{j}\gets 1\), \(\mathrm{\tilde{K}}_{j}\leftarrow\mathrm{K}_{j}+\mathrm{Lap}(4/\epsilon)\) 4for\(t\in[T]\)do 5\(s_{i}=s_{i}+x_{i}^{t}\) for all \(i\in[d]\) 6\(s\leftarrow(s_{1},s_{2},\ldots,s_{d})\) 7if\(\max g_{i}(s)+\mathrm{Lap}(8/\epsilon)>\mathrm{\tilde{K}}_{j}\)and\(j<\log T\)then 8\(p_{j}\gets t\), \(j\gets j+1\) 9\(s_{i}\gets s_{i}+N(0,2d\ln(2e^{\epsilon/2}/\delta)/\epsilon^{2})\) 10\(\mathrm{K}_{j}\gets 2\cdot\mathrm{K}_{j-1}\), \(\mathrm{\tilde{K}}_{j}\leftarrow\mathrm{K}_{j}+\mathrm{Lap}(4/\epsilon)\) 11output\((t,s_{1},s_{2},\ldots,s_{d})\) 12 13 end for 14 15 end for 16\(p_{j}=T\) ``` **Algorithm 15**Doubling Mechanism for k-Query with \((\epsilon,\delta)\)-DP **Lemma 38**.: _Algorithm 13 is \((\alpha,\beta)\)-accurate, with \(\alpha=O(\epsilon^{-1}(\sqrt{k\ln(e^{2\epsilon/3}kc_{max}/(\beta\delta))}+\ln( kc_{\max}/\beta)+\ln(T/\beta))+\text{err}(k\cdot c_{\max},\beta/3))\)._ ## 11 Doubling Mechanism for \((\epsilon,\delta)\)-Differential Privacy ### Privacy We fix some notation. Let * \(\mu_{t}\) be the \(\mathrm{Lap}(8/\epsilon)\) noise added to the maximum in Line 7 of Algorithm 15, * \(\tau_{j}\) be the \(\mathrm{Lap}(4/\epsilon)\) noise added to \(\mathrm{K}_{j}\), and * \(\gamma_{i}^{j}\) be the \(N(0,2d\ln(2e^{\epsilon/2}/\delta)/\epsilon^{2})\) noise added to \(s_{i}\) at the end of segment \(j\) in Line 9. Since the histogram mechanism cumulatively adds Gaussian noise to each coordinate of every input, we get the following lemma. **Lemma 39**.: _The continuous histogram mechanism \(H\) that cumulatively adds noise \(N(0,2d\ln(2e^{\epsilon/2}/\delta)/\epsilon^{2})\) to each coordinate on every input is \((\epsilon,\delta/e^{\epsilon/2})\)-adaptively differentially private._ **Lemma 40**.: _Algorithm 15 satisfies \((\epsilon,\delta)\)-differential privacy._ Proof.: From Lemma 39, the histogram mechanism \(H\) which computes a running sum of all inputs and adds fresh normal noise scaled with \(N(0,2d\ln(2e^{\epsilon/2}/\delta)/\epsilon^{2})\) to each coordinate for every new input is \((\epsilon/2,\delta/e^{\epsilon/2})\)-adaptively differentially private under continual observation. Now the lemma follows, since Algorithm 15 can be seen as post-processing of Algorithm 3 with \(g=\max_{i}g_{i}\), \(\Delta=\log T\), \(K_{j}=2^{j-1}\), and \(s_{i}=0\) for all \(i\in[d]\). ### Accuracy Let \(\ell\) be the total number of segments produced by the algorithm, which is a random variable that is upper bounded by \(\log T\), and let \(\Gamma_{i}^{j}=\sum_{k=1}^{j}\gamma_{i}^{k}\). 
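Before bounding these noise terms, it may help to see the control flow of Algorithm 15 in executable form. The following is a minimal Python sketch, assuming the \(k\) queries are supplied as plain Python functions of the (noisy) histogram; the names `doubling_mechanism`, `g_queries`, and `rng` are illustrative and not part of the formal algorithm, and only the constants that appear in the pseudocode are reproduced.

```python
import math
import numpy as np

def doubling_mechanism(stream, g_queries, eps, delta, rng=None):
    """Minimal sketch of Algorithm 15: release the noisy histogram whenever
    the noisy maximum query value crosses the current (doubled) threshold."""
    rng = rng or np.random.default_rng()
    stream = [np.asarray(x, dtype=float) for x in stream]
    T, d = len(stream), len(stream[0])
    # Variance of the fresh Gaussian noise added at the end of each segment.
    sigma2 = 2 * d * math.log(2 * math.exp(eps / 2) / delta) / eps ** 2
    s = np.zeros(d)                               # noisy running histogram
    j, K = 1, 1.0                                 # segment index, threshold K_j = 2^(j-1)
    K_noisy = K + rng.laplace(scale=4 / eps)
    outputs = []                                  # (segment end, noisy histogram) pairs
    for t, x in enumerate(stream, start=1):
        s += x
        g_max = max(g(s) for g in g_queries)      # maximum of the sensitivity-1 queries
        if g_max + rng.laplace(scale=8 / eps) > K_noisy and j < math.log2(T):
            j += 1
            s += rng.normal(0.0, math.sqrt(sigma2), size=d)  # fresh noise per segment
            K *= 2                                           # double the threshold
            K_noisy = K + rng.laplace(scale=4 / eps)
            outputs.append((t, s.copy()))
    return outputs
```

A caller would supply monotone sensitivity-1 queries such as `g_queries=[lambda h: h[0], lambda h: h.max()]`; the returned pairs correspond to the outputs on Line 11 of Algorithm 15.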
**Lemma 41**.: _With probability \(\geq 1-\beta\), the following bounds hold simultaneously for all \(t\in[T]\), \(j\in[\log T]\), and \(i\in[d]\):_ \[|\mu_{t}|\leq\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)=: \alpha_{\mu} \tag{59}\] \[|\tau_{j}|\leq\frac{4}{\epsilon}\cdot\log\left(\frac{3\log T}{\beta}\right)=: \alpha_{\tau} \tag{60}\] \[|\Gamma_{i}^{j}|\leq\frac{2}{\epsilon}\cdot\sqrt{jd\ln\left(\frac{2e^{ \epsilon/2}}{\delta}\right)\log\left(\frac{3d\log T}{\beta}\right)} \tag{61}\] Proof.: In the algorithm, there are \(T\) instances of \(\mu_{t}\sim\mathrm{Lap}(8/\epsilon)\), and at most \(\log T\) instances of \(\tau_{j}\sim\mathrm{Lap}(4/\epsilon)\). Applying Fact 6 and Lemma 3, we obtain the first two bounds each with probability \(\geq 1-\beta/3\). Sum of \(j\) normal \(N(0,\sigma^{2})\) random variables is \(N(0,j\sigma^{2})\). Thus \(\Gamma_{i}^{j}\sim N(0,2jd\ln(2e^{\epsilon/2}/\delta)/\epsilon^{2})\). Using the concentration bound for normal random variables (Fact 9) with \(\beta_{S}=\beta/3d\log T\), we get the final guarantee with probability \(\geq 1-\beta/3\). Union bound over all three guarantees gives the lemma. Below we use the following variables: 1. \(\alpha_{\mu}=\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)\), 2. \(\alpha_{\tau}=\frac{4}{\epsilon}\cdot\log\left(\frac{3c_{\max}}{\beta}\right)\), 3. \(L=\min\{\log(24\epsilon^{-2}dc_{\max})+8\log\log(T/\beta)+\log\ln(2e^{\epsilon /2}/\delta),\log T\}\), 4. \(\alpha_{\Gamma}=\frac{2}{\epsilon}\cdot\sqrt{Ld\ln\left(\frac{2e^{\epsilon/2 }}{\delta}\right)\log\left(\frac{3d\log T}{\beta}\right)}\), 5. \(\alpha_{DM}=\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}+L\). Let \(c_{\max}=\max_{i}g_{i}(h^{T})\) be the maximum query value on the entire stream. We first show an upper bound of \(L\) which is roughly \(\widetilde{O}(\log c_{\max}+\log d)\) on the number \(\ell\) of segments produced by the algorithm. **Lemma 42**.: _Assume that the random variables are bounded as in Lemma 41. Then Algorithm 15 creates at most \(L=\min\{\log(24\epsilon^{-2}dc_{\max})+8\log\log(T/\beta)+\log\ln(2e^{\epsilon /2}/\delta),\log T\}\) segments._ Proof.: We condition on the noises being bounded as stated in Lemma 41. A trivial upper bound of \(\log T\) on \(\ell\) (and thus \(L\)) is obtained from the stopping condition of the algorithm in Line 7. At time \(p_{\ell}\) when the last segment was closed3, we have that for \(i=\mathrm{argmax}_{k}g_{k}(s^{p_{\ell}})\), Footnote 3: If the last segment was closed at \(p_{\ell-1}\), then the following holds for \(\ell-1\), which gives the same asymptotic bounds on \(\ell\). \[g_{i}(s^{p_{\ell}})+\mu_{p_{\ell}}\geq 2^{\ell}+\tau_{\ell}.\] Let \(h^{t}\) be the true histogram at time \(t\). Taking \(i^{*}=\operatorname*{argmax}_{k}g_{k}(h^{p_{\ell}})\), we have that \(g_{i}(h^{p_{\ell}})\leq g_{i^{*}}(h^{p_{\ell}})\). 
Thus \[2^{\ell} \leq g_{i}(s^{p_{\ell}})+\mu_{p_{\ell}}-\tau_{\ell}\] \[=g_{i}(h^{p_{\ell}})+\Gamma_{i}^{\ell}+\mu_{p_{\ell}}-\tau_{\ell}\] \[\leq g_{i^{*}}(h^{p_{\ell}})+\Gamma_{i}^{\ell}+\mu_{p_{\ell}}- \tau_{\ell}\] (by definition of \[i^{*}\] ) \[\leq g_{i^{*}}(h^{p_{\ell}})+\frac{2}{\epsilon}\cdot\sqrt{\ell d \ln\left(\frac{2e^{\epsilon/2}}{\delta}\right)\log\left(\frac{3d\log T}{\beta }\right)}+\alpha_{\mu}+\alpha_{\tau}.\] (62) We now get that \[\ell \leq\log\left[c_{\max}+\frac{2}{\epsilon}\cdot\sqrt{\ell d\ln \left(\frac{2e^{\epsilon/2}}{\delta}\right)\log\left(\frac{3d\log T}{\beta} \right)}+\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)+\frac{4}{ \epsilon}\cdot\log\left(\frac{3\log T}{\beta}\right)\right]\] \[\leq\log\left[c_{\max}+\frac{2}{\epsilon}\cdot\sqrt{\ell d\ln \left(\frac{2e^{\epsilon/2}}{\delta}\right)\log\left(\frac{3d\log T}{\beta} \right)}+\frac{12}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)\right]\] \[\leq\log\left[c_{\max}+\frac{2}{\epsilon}\cdot\sqrt{d\log T\ln \left(\frac{2e^{\epsilon/2}}{\delta}\right)\log\left(\frac{3d\log T}{\beta} \right)}+\frac{12}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)\right] \tag{63}\] \[<\log c_{\max}+\log\frac{2}{\epsilon}+\frac{1}{2}\left(\log d+ \log\log T+\log\ln\left(\frac{2e^{\epsilon/2}}{\delta}\right)+\log\log\frac{3d \log T}{\beta}\right)+\log\frac{12}{\epsilon}+\log\log\left(\frac{3T}{\beta}\right)\] \[\leq\log(24\epsilon^{-2}dc_{\max})+8\log\log(T/\beta)+\log\ln(2e^ {\epsilon/2}/\delta)\] as required, where the third inequality follows from \(\ell\leq\log T\). We use this to show that the \(s_{i}\) values in Algorithm 15 are at most \(\alpha_{\Gamma}=\frac{2}{\epsilon}\cdot\sqrt{Ld\ln\left(\frac{2e^{\epsilon/2 }}{\delta}\right)\log\left(\frac{3d\log T}{\beta}\right)}\) away from the true column sums at all times. **Lemma 43**.: _Assume that the random variables are bounded as in Lemma 41. Let \(t\in[T]\) and \(i\in[d]\). Then \(|s_{i}^{t}-h_{i}^{t}|\leq\alpha_{\Gamma}=O\left(\epsilon^{-1}\cdot\sqrt{Ld\log \left(\frac{e^{\epsilon/2}d\log T}{\beta\delta}\right)}\right)\)._ Proof.: We condition on the noises being bounded as stated in Lemma 41. Thus we get an upper bound of \(L\) on the number of segments from Lemma 42. Let \(j\) be the segment to which time \(t\) belongs. Then \[|s_{i}^{t}-h_{i}^{t}| =|h_{i}^{t}+\Gamma_{i}^{j}-h_{i}^{t}|\] \[\leq\frac{2}{\epsilon}\cdot\sqrt{jd\ln\left(\frac{2e^{\epsilon/2 }}{\delta}\right)\log\left(\frac{3d\log T}{\beta}\right)}\] \[\leq\alpha_{\Gamma}\] as required. We finally bound the true maximum query value increase in a single segment. **Lemma 44**.: _Assume that the random variables are bounded as in Lemma 41. Then in Algorithm 15, the true maximum column sum for segment \(j\) increases by at most \(2^{j-1}+2\alpha_{DM}\), where \(\alpha_{DM}=\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}+L=O\left(\frac{\sqrt{Ld \log(e^{\epsilon/2}d\log T/\beta\delta)+\log(T/\beta)}}{\epsilon}\right)\)._ Proof.: We condition on the noises being bounded as in Lemma 41. Recall that the time interval \((p_{j-1},p_{j}]\) is the \(j^{th}\) segment. First, assume that either \(j<\ell\), or \(j=\ell\) and the condition in Line 7 was true at time \(p_{j}\). Let \(M_{t}=\max_{i}g_{i}(h^{t})\) be the true maximum query value at time \(t\), and \(\mathrm{K}_{j}\) be the \(j^{th}\) threshold value. 
Then \[|M_{p_{j}}-M_{p_{j-1}}|\leq|\mathrm{K}_{j}-\mathrm{K}_{j-1}|+|M_{p_{j}}- \mathrm{K}_{j}|+|M_{p_{j-1}}-\mathrm{K}_{j-1}|\] The definition of \(\mathrm{K}_{j}\) directly gives us that \(|\mathrm{K}_{j}-\mathrm{K}_{j-1}|=2^{j-1}\). Thus our task reduces to bounding \(|M_{p_{j}}-\mathrm{K}_{j}|\) for all \(j\). We do this in two parts. Let \(g_{max}(s^{t})=\max_{i}g_{i}(s^{t})\) be the maximum query value on the noisy histogram at time \(t\). First, using Lemma 43 and the fact that \(\max_{i}g_{i}\) is a sensitivity \(1\) function, we get that for all \(t\), \[|M_{t}-g_{max}(s^{t})|\leq\alpha_{\Gamma} \tag{64}\] and since at time \(p_{j}\) the threshold was crossed, we have that \[g_{max}(s^{p_{j}})>\mathrm{K}_{j}-\alpha_{\mu}-\alpha_{\tau}.\] Putting these two equations together, we get that \[M_{p_{j}}-K_{j}>-\left(\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}\right).\] This gives us a lower bound. Now we show an upper bound. Let \(t<p_{j}\) be the last time step in which a segment was not closed. If a segment was closed at every time step until \(p_{j}\) of the algorithm, then let \(t=0\). Since at every time step between \(t\) and \(p_{j}\) a segment must have been closed and the total number of segments is at most \(\ell\), we get that \(t\geq p_{j}-\ell\). Let \(k\) be the segment that \(t\) belonged to. If \(t=0\), we set \(k=s^{0}_{max}=K_{0}=0\) in the following equation. Then at time \(t\), \[g_{max}(s^{t})\leq K_{k}+\alpha_{\mu}+\alpha_{\tau}\] Using Equation 64 and the above equation, we get \[M_{t}\leq K_{k}+\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma} \tag{65}\] Since \(t\geq p_{j}-\ell\) and the \(\max_{i}g_{i}\) is a sensitivity one function, \[M_{t}\geq M_{p_{j}}-\ell\] Since the thresholds do not decrease with time, \(K_{j}\geq K_{k}\). Note that \(\ell\leq L\) by Lemma 42. Using these two facts, and substituting the above equation into Equation 65, we get that \[M_{p_{j}}-K_{j}\leq\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}+L=\alpha_{DM}\] Thus putting it all together, we get that \[|M_{p_{j}}-M_{p_{j-1}}|\leq 2^{j-1}+2\cdot\alpha_{DM}\] as required. Now, for \(j=\ell\), if the condition in Line 7 was false, we have two cases. First, assume \(\ell=\log T\). Then at time \(p_{l-1}\), we have \[g_{max}(s^{p_{\ell-1}})+\mu_{p_{\ell-1}}>K_{\ell-1}+\tau_{\ell-1}=T/2+\tau_{ \ell-1}\] and therefore \[M_{p_{\ell-1}}>T/2-\alpha_{\mu}-\alpha_{\tau}-\alpha_{\Gamma}.\] Since \(M_{p_{\ell}}\leq M_{T}\leq T\), we have \[M_{p_{\ell}}-M_{p_{\ell-1}}\leq T/2+\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}=2 ^{\ell-1}+\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}.\] Second, assume \(\ell<\log T\). Then \[g_{max}(s^{p_{j}})\leq K_{j}+\alpha_{\mu}+\alpha_{\tau},\] and thus \[M_{p_{j}}\leq K_{j}+\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}.\] Since the threshold was crossed at time \(p_{j-1}\), we have \[M_{p_{j-1}}\geq K_{j-1}-(\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma}).\] Therefore, \[|M_{p_{j}}-M_{p_{j-1}}|=M_{p_{j}}-M_{p_{j-1}} \leq(K_{j}-K_{j-1})+2(\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma})\] \[=2^{j-1}+2(\alpha_{\mu}+\alpha_{\tau}+\alpha_{\Gamma})\] \[\leq 2^{j-1}+2\alpha_{DM}\] which proves the claim. **Theorem 4**.: _With probability at least \(1-\beta\), we simultaneously have the following guarantees from Algorithm 15._ 1. _the total number of segments produced by the algorithm is upper bounded by_ \(L=\min\{\log(24\epsilon^{-2}dc_{\max})+8\log\log(T/\beta)+\log\ln(2e^{\epsilon/ 2}/\delta),\log T\}\)_,_ 2. 
_the difference between the noisy and the true histogram at all times stored by the algorithm is upper bounded by_ \(\alpha_{\Gamma}=O\left(\epsilon^{-1}\cdot\sqrt{Ld\log\left(\frac{e^{\epsilon/2 }d\log T}{\beta\delta}\right)}\right)\)_, and_ 3. _the true maximum query value increase within segment_ \(j\) _is at most_ \(2^{j-1}+2\alpha_{DM}\)_, where_ \(\alpha_{DM}=O\left(\epsilon^{-1}\cdot(\sqrt{Ld\log(e^{\epsilon/2}d\log T/ \beta\delta)}+\log(T/\beta))\right)\)_._ Proof.: Assume that the Laplace random variables in Algorithm 15 are bounded as in Lemma 41. Then the three points of the theorem follow from Lemmas 42, 43, and 44 respectively. ## 12 \((\epsilon,\delta)\)-Differentially Private Two-Level Mechanism for k-Query In this section, we combine the two mechanisms from the previous two sections to get an algorithm for k-Query. The first level of the mechanism is the same as Algorithm 15, which partitions the input stream into segments. For each such segment the second level algorithm is called, which is a modified version of Algorithm 13 and is given in Algorithm 17. The main difference to Algorithm 13 is that it does not start each column sum from \(0\), but instead it is given as input (1) a noisy histogram to initialize each column sum, (2) an upper bound on the amount of noise in the histogram, and (3) an upper bound on how much the maximum query value (with the given initial column sum values) can increase. The error of the input histogram has to be taken into account in the new partitioning algorithm, which results in a slightly more complicated algorithm than Algorithm 13. The full two-level algorithm is given in Algorithm 16. In this section we will refer to Algorithm 16 without the lines referring to Algorithm 17 as the _doubling mechanism_, and to Algorithm 17 as the _modified BoundedMaxQuery mechanism_. ### Privacy **Lemma 45**.: _Algorithm 16 satisfies \((2\epsilon,2\delta)\)-differential privacy._ Proof.: We deal with the outputs of the calls to Alg 17 separately from the outputs of Alg 16 in Line 17. Let \(x\) and \(y\) be two neighboring input streams. First, note that since the instantiations of the modified BoundedMaxQuery mechanism do not affect the outputs on Line 17, we can use Lemma 40 to prove that the doubling mechanism (since the outputs in Line 17 of the two-level mechanism are exactly the same as run on the doubling mechanism) is \((\epsilon,\delta)\)-differentially private. Now we condition on all the internal random variables (namely, the Laplace random variables in Lines 6, 11, 13, and 14) of the two-level mechanism being fixed such that both \(x\) and \(y\) lead to the same sequence of segments, and argue about the privacy of the various modified BoundedMaxQuery mechanisms. Since the segments produced by the doubling mechanism are fixed, all the modified BoundedMaxQuery mechanisms operate on disjoint parts of the stream. Each instantiation of the modified BoundedMaxQuery mechanism is \((\epsilon,\delta)\)-dp by Lemma 34. Since they operate on disjoint parts of the stream, by parallel composition, all instantiations of the modified BoundedMaxQuery mechanism together satisfy \((\epsilon,\delta)\)-DP. Naive sequential composition now gives us the \((2\epsilon,2\delta)\)-DP guarantee. 
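To make the control flow of the two-level mechanism concrete, the following is a structural sketch in Python (ours, not code from the paper): the first level maintains noisy column sums and doubles its threshold whenever a noisy maximum-query test fires, and each closed segment hands a freshly noised histogram to a new instance of the second-level mechanism. The Laplace scales and the `make_second_level` factory are placeholders standing in for the calibrated constants of Algorithms 15-17.

```python
import numpy as np

def two_level_mechanism(stream, queries, d, eps, rng, make_second_level):
    """Structural sketch of the two-level k-Query mechanism (Algorithm 16).

    stream:  iterable of rows x_t in {0, 1}^d
    queries: sensitivity-1 functions g_i acting on the d column sums
    make_second_level(s0): placeholder factory for the modified
        BoundedMaxQuery mechanism (Algorithm 17), initialised with the
        current noisy histogram s0.
    The noise scales below are illustrative placeholders, not the exact
    calibration used in the paper.
    """
    h = np.zeros(d)                                  # true column sums
    s = rng.laplace(0.0, 2.0 / eps, size=d)          # noisy column sums (Gamma noise)
    level = 1                                        # current threshold is 2 ** level
    tau = rng.laplace(0.0, 4.0 / eps)                # noisy threshold offset
    segment = make_second_level(s.copy())

    for x in stream:
        h += x
        s += x
        segment.step(x)                              # per-time-step answers inside the segment
        mu = rng.laplace(0.0, 8.0 / eps)
        if max(g(s) for g in queries) + mu > 2.0 ** level + tau:
            # Close the segment: add fresh noise to the histogram, double the
            # threshold and start a new second-level mechanism.
            level += 1
            s += rng.laplace(0.0, 2.0 / eps, size=d)
            tau = rng.laplace(0.0, 4.0 / eps)
            segment = make_second_level(s.copy())
```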
``` 0: Stream \(x^{t_{0}+1},x^{t_{0}+2},\ldots,x^{t_{\infty}}\in\{0,1\}^{d}\), start time \(t_{0}\), noisy estimates \(s_{i}^{t_{0}}\) for \(i\in[d]\) with error \(\leq\alpha_{\Gamma}\), error bound \(\alpha_{\Gamma}\), max query value increase bound \(\Delta\), stream length bound \(T\), an adaptively \((\epsilon/3,\delta/(2e^{2\epsilon/3}))\)-DP continuous histogram mechanism \(H\), additive error bound \(\operatorname{err}(n,\beta/3)\) on all outputs of \(H\) that holds with probability \(\geq 1-\beta/3\) when \(H\) is run on a stream of length \(n\) 0: Estimate of the query values \(g_{i}(\operatorname{h}(t))\) for all \(i\in[k]\) at every time step \(t\in[t_{0},t_{\infty}]\). 1: Initialize an adaptively \((\epsilon/3,\delta/(2e^{2\epsilon/3}))\)-differentially private continuous histogram mechanism \(H\) for stream length \(k\Delta\) 2:\(p_{0}\gets 0\), \(j\gets 1\) 3:\(c_{i}=0\), and \(s_{i}=s_{i}^{t_{0}}\) for all \(i\in[d]\)\(\triangleright\)\(s_{i}\)s are initialized with input noisy sums 4:\(C\gets 12\cdot\epsilon^{-1}\cdot\left(\sqrt{k\ln(12e^{2\epsilon/3}k \Delta/\beta\delta)}+\ln(6k\cdot\Delta/\beta)+\ln(6T/\beta)\right)\) 5:\(\operatorname{K}\gets 3(C+\operatorname{err}(k\cdot\Delta,\beta/3)+\alpha_{ \Gamma})\) 6:\(\tau\leftarrow\operatorname{Lap}(6/\epsilon)\) 7:\(\operatorname{K}_{(i)}\gets g_{i}(s^{t_{0}})+\operatorname{K}\), \(\operatorname{\tilde{K}}_{(i)}\leftarrow\operatorname{K}_{(i)}+\tau\) for all \(i\in[k]\) 8:\(\text{out}\leftarrow\left(g_{1}(s^{t_{0}}),g_{2}(s^{t_{0}}),\ldots,g_{k}(s^{t_ {0}})\right)\) 9:for\(t\in[t_{0},T]\)do 10:\(c_{i}\gets c_{i}+x_{i}^{t}\), \(s_{i}\gets s_{i}+x_{i}^{t}\) for all \(i\in[d]\) 11:\(\mu\leftarrow\operatorname{Lap}(12/\epsilon)\) 12:if\(\exists i\in[k]\): \(g_{i}(s)+\mu>\operatorname{\tilde{K}}_{(i)}\)and \(j\leq k\Delta\)then \(p_{j}\gets t\), \(j\gets j+1\)\(\triangleright\) Close the current segment 13: insert \((c_{i},\ldots,c_{d})\) into \(H\), reset \(c_{i}\gets 0\) for all \(i\in[d]\) 14:for\(i\in[k]\)do 15:\(\tilde{g}_{i}(s)\gets g_{i}(s)+N(0,18k\ln(4e^{2\epsilon/3}/\delta)/\epsilon^ {2})\) 16:if\(\tilde{g}_{i}(s)>\operatorname{K}_{(i)}-C\)then 17:\(\operatorname{K}_{(i)}\leftarrow\operatorname{K}_{(i)}+\operatorname{K}\) 18:end 19:end 20:end 21:\(\tau=\operatorname{Lap}(6/\epsilon)\) 22:\(\operatorname{\tilde{K}}_{(i)}\leftarrow\operatorname{K}_{(i)}+\tau\) for all \(i\in[k]\) 23:\((s_{1},\ldots,s_{d})\leftarrow\text{output}(H)\) + \((s_{1}^{t_{0}},s_{2}^{t_{0}},\ldots,s_{d}^{t_{0}})\)\(\triangleright\) Current count from the histogram mechanism 24:\(\text{out}\leftarrow(g_{1}(s),\ldots,g_{k}(s))\) 25:end 26:output out 27:end 28:\(p_{j}=t_{\infty}\) ``` **Algorithm 17**Modified \((\epsilon,\delta)\)-DP Mechanism for k-Query ### 12.2 Accuracy #### 12.2.1 Algorithm 17 We first analyze the accuracy of Algorithm 17, assuming that the input noisy column sums given to the algorithm are at most an additive error \(\alpha_{\Gamma}\) away from the true column sums, and that the increase in the true maximum query value is bounded by \(\Delta\). This property is shown to hold for the two-level algorithm with probability \(\geq 1-\beta\) in Lemmas 43 and 44. Let \(t\in[T]\) be an arbitrary time step, let \(s^{t}\) denote the value of \(s\) at time \(t\), and \(\operatorname{h}^{t}\) be the value of the true histogram at time \(t\). In the following let \(\alpha_{SV}:=12\epsilon^{-1}(\ln(6k\cdot\Delta/\beta)+\ln(6T/\beta))\) and \(\alpha_{U}:=6\epsilon^{-1}\sqrt{k\ln(12e^{2\epsilon/3}k\Delta/\beta\delta)}\). 
We condition on the following upper bounds on the additive error, which together hold with probability \(\geq 1-\beta\): 1. The error of \(H\) is bounded by \(\operatorname{err}(k\cdot\Delta,\beta/3)\). By assumption, this holds with probability \(1-\beta/3\). 2. Adding the same random variable \(\tau\) with \(\operatorname{Lap}(6/\epsilon)\) distribution to all \(K_{(i)}\) gives an additive error of at most \(\epsilon^{-1}6\ln(6k\cdot\Delta/\beta)\) with probability \(1-\beta/6\) by Fact 6 and the union bound, since we sample for that variable at most \(k\cdot\Delta\) times. 3. The random variable \(\mu\) with \(\operatorname{Lap}(12/\epsilon)\) distribution drawn at every time steps gives an additive error of at most \(\epsilon^{-1}12\ln(6T/\beta)\) by Fact 6 and the union bound with probability \(1-\beta/6\). Together with the previous condition we have * If the condition in line 12 is true for \(i\) at time \(t\), then \(g_{i}(s^{t})>K_{(i)}-\alpha_{SV}\) * If the condition in line 12 is false for \(i\), then \(g_{i}(s^{t})<K_{(i)}+\alpha_{SV}\). for \(\alpha_{SV}=12\epsilon^{-1}(\ln(6k\cdot\Delta/\beta)+\ln(6T/\beta))\) 4. We add \(N(0,18k\ln(4e^{2\epsilon/3}/\delta)/\epsilon^{2})\) noise to \(g_{i}\) in line 16 at most \(k\cdot\Delta\) times for each \(i\in[k]\). Thus, by Fact 9 and the union bound, with probability \(1-\beta/3\) at most an additive error of \(6\epsilon^{-1}\sqrt{k\ln(12e^{2\epsilon/3}k\Delta/\beta\delta)}=\alpha_{U}\) is added to \(g_{i}\) for any \(i\) and any time step. We now proceed as follows: Recall that \(p_{j}\) denotes the end of the \(j\)th time interval and that \(p_{0}=t_{0}\). To prove accuracy we will first show an auxiliary lemma that says that \(p_{1}>t_{0}+1\). Next fix any \(i\in[k]\) and let \(p_{l}\) and \(p_{r}\) be any two time steps such that \(K_{(i)}\) is updated at \(p_{l}\) and \(p_{r}\) but not between them. Then we show that \(g_{i}\) must have increased by more than \(1\) between \(p_{l}\) and \(p_{r}\). The latter fact implies that \(K_{(i)}\) was not updated at time \(p_{r}-1\), which can be used to get an upper bound on \(g_{i}(h^{p_{r}-1})\) and, by the \(1\)-sensitivity of \(g_{i}\), also on \(g_{i}(h^{p_{r}})\). As \(K_{(i)}\) was updated at time \(p_{l}\), we also have a lower bound on \(g_{i}(h^{p_{l}})\). Combining the two gives an upper bound on \(|g_{i}(h^{p_{r}})-g_{i}(h^{p_{l}})|\) of \(O(K+\alpha_{SV}+\alpha_{U})\), which is the crucial bound needed to upper bound \(|g_{i}(h^{t})-g_{i}(s^{t})|\). In the rest of the section let \(K_{(i)}^{t}\) denote the value of \(K_{(i)}\) at time \(t\) when we reach Line 12 of Algorithm 17. To show that \(p_{1}>t_{0}+1\), we first show that whenever the \(i\)th threshold is updated, the true value of \(g_{i}\) is not much smaller than the threshold that was crossed. **Lemma 46**.: _Suppose the \(i\)th threshold \(K_{(i)}\) is updated at time \(t\). Then_ \[g_{i}(\mathrm{h}^{t})\geq K_{(i)}^{t}-C-\alpha_{U}-\text{err}(k\cdot\Delta, \beta/3)-\alpha_{\Gamma}.\] Proof.: This follows from the sensitivity of \(g_{i}\) and the fact that \(K_{(i)}\) was updated at time \(t\). \[g_{i}(\mathrm{h}^{t}) \geq g_{i}(s^{t})-\text{err}(k\Delta,\beta/3)-\alpha_{\Gamma}\] (since \[g_{i}\] has sensitivity \[1\] ) \[\geq\tilde{g_{i}}(s^{t})-\alpha_{U}-\text{err}(k\Delta,\beta/3)- \alpha_{\Gamma}\] \[\geq K_{(i)}^{t}-C-\alpha_{U}-\text{err}(k\Delta,\beta/3)-\alpha_{ \Gamma}\] (since \[K_{(i)}\] is updated at time \[t\] ) as required. 
**Lemma 47**.: _It holds that \(p_{1}>1+t_{0}\)._ Proof.: Note that variable \(C\) in Algorithm 17 is larger than \(\alpha_{SV}+\alpha_{U}\). Thus, if the condition in line 12 is true for some \(i\) at time \(p_{j}\), then the condition in line 17 is also true for the same \(i\). Thus, whenever we close a segment, we also update at least one threshold, say the \(i\)th one. Using Lemma 46 with \(t=p_{j}\) gives us that \[g_{i}(h^{p_{j}})\geq K_{(i)}^{p_{j}}-C-\alpha_{U}-\text{err}(k\Delta,\beta/3)- \alpha_{\Gamma}\] Note that since \(K>C+\text{err}(k\cdot\Delta,\beta/3)+\alpha_{U}+\alpha_{\Gamma}\), this implies \(g_{i}(\mathrm{h}^{p_{i}})>1+t_{0}\). As \(g_{i}\) increases by at most \(1\) per time step, it follows that \(p_{1}>1+t_{0}\). Next we show an upper bound on the true query values when a threshold is updated. **Lemma 48**.: _Let \(i\in[k]\). Let \(p_{r}\) be a timestep where threshold \(K_{(i)}\) is updated. Then \(g_{i}(\mathrm{h}^{p_{r}})<K_{(i)}^{p_{r}}+\text{err}(k\cdot\Delta,\beta/3)+ \alpha_{SV}+\alpha_{U}+\alpha_{\Gamma}+1\). Further, let \(l\) and \(r\) be integers such that \(p_{l}\) and \(p_{r}\) are two consecutive time steps where threshold \(K_{(i)}\) is updated. Then \(p_{r}-p_{l}>1\) and \(|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})|>1\)._ Proof.: We show the claim by induction over the number of updates of \(K_{(i)}\). As \(p_{r}\) is a time step where \(K_{(i)}\) is updated, the condition in line 12 is true, and, thus \(r\leq k\cdot\Delta\). _Case 1:_ If \(p_{r}\) is the first time that threshold \(K_{(i)}\) is updated, then \(p_{r}\geq p_{1}>1\) by Lemma 47. As the threshold was not updated before, it was not updated at time \(p_{r}-1\). This fact and the fact that \(r\leq k\cdot\Delta\) imply that either the condition in line 12 was false or the condition in line 17 was false for \(i\) at time \(p_{r}-1\). Thus, either \[g_{i}(s^{p_{r}-1})<K_{(i)}^{p_{r}}+\alpha_{SV}<K_{(i)}^{p_{r}}+\alpha_{SV}+ \alpha_{U}\] or \[g_{i}(s^{p_{r}-1})<K_{(i)}^{p_{r}}-C+\alpha_{U}<K_{(i)}^{p_{r}}+\alpha_{SV}+ \alpha_{U},\] and hence, \[g_{i}(\mathrm{h}^{p_{r}-1})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{ err}(k\Delta,\beta/3)+\alpha_{\Gamma}.\] As \(g_{i}\) has sensitivity \(1\) and \(\mathrm{h}^{p_{r}}\) and \(\mathrm{h}^{p_{r}-1}\) differ by at most \(1\) in each coordinate, it holds that \(g_{i}(\mathrm{h}^{p_{r}})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{err} (k\cdot\Delta,\beta/3)+\alpha_{\Gamma}+1\). _Case 2:_ If \(p_{r}\) is not the first time threshold \(K_{(i)}\) is updated, let \(p_{l}\) be the last time at which threshold \(K_{(i)}\) was updated before \(p_{r}\). By induction, we have \(g_{i}(\mathrm{h}^{p_{l}})<K_{(i)}^{p_{l}}+\alpha_{SV}+\alpha_{U}+\mathrm{err} (k\cdot\Delta,\beta/3)+\alpha_{\Gamma}+1=K_{(i)}^{p_{r}}-K+\alpha_{SV}+\alpha_ {U}+\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{\Gamma}+1\). Since the threshold \(K_{(i)}\) is updated at time \(p_{r}\), we have \(g_{i}(\mathrm{h}^{p_{r}})\geq K_{(i)}^{p_{r}}-C-\alpha_{U}-\mathrm{err}(k \cdot\Delta,\beta/3)-\alpha_{\Gamma}\) by Lemma 46 for \(t=p_{r}\). Thus, \[|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})| >K_{(i)}^{p_{j}}-C-\alpha_{U}-\mathrm{err}(k\cdot\Delta,\beta/3) -\alpha_{\Gamma}\] \[\quad-(K_{(i)}^{p_{r}}-K+\mathrm{err}(k\cdot\Delta,\beta/3)+ \alpha_{SV}+\alpha_{U}+\alpha_{\Gamma}+1)\] \[=K-C-2\mathrm{err}(k\cdot\Delta,\beta/3)-\alpha_{SV}-2\alpha_{U}- 2\alpha_{\Gamma}-1>1,\] since \(K=3(C+\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{\Gamma})\) and \(C>\alpha_{SV}+\alpha_{U}\). 
Thus, \(p_{r}-p_{l}>1\) and the threshold \(K_{(i)}\) was not updated at time \(p_{r}-1\). Now by the same argument as before, \(g_{i}(\mathrm{h}^{p_{r}-1})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{err }(k\cdot\Delta,\beta/3)+\alpha_{\Gamma}\) and therefore \(g_{i}(\mathrm{h}^{p_{r}})<K_{(i)}^{p_{r}}+\alpha_{SV}+\alpha_{U}+\mathrm{err }(k\cdot\Delta,\beta/3)+\alpha_{\Gamma}+1\). Let \(t(i)\) be the last time in the whole input sequence that \(K_{(i)}\) was updated, let \(p_{l}\) be the last time _before_\(t\) that \(K_{(i)}\) was updated, and let \(p_{r}\) be the first time step _at or after_\(t\) at which \(K_{(i)}\) gets updated, i.e., \(p_{r}\geq t\). _Case A:_\(t\leq t(i)\), i.e., there exists a time step \(\geq t\) at which \(K_{(i)}\) is updated. Now, by Lemma 48 applied to \(p_{r}\) and Lemma 46 applied to \(p_{l}\), we get \[|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})| <K_{(i)}^{p_{r}}+\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{SV}+ \alpha_{U}+\alpha_{\Gamma}+1\] \[\quad-(K_{(i)}^{p_{l}}-K-C-\alpha_{U}-\mathrm{err}(k\cdot\Delta, \beta/3)-\alpha_{\Gamma})\] \[=K+C+2\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{SV}+2\alpha_{U} +2\alpha_{\Gamma}+1. \tag{66}\] The error for outputting \(g_{i}\) at any time step \(t\) in an interval \([p_{j-1},p_{j})\subseteq[p_{l},p_{r})\) for any \(j\leq k\cdot\Delta\) is now bounded by: \[|g_{i}(\mathrm{h}^{t})-g_{i}(s^{p_{j-1}})| \leq|g_{i}(\mathrm{h}^{t})-g_{i}(\mathrm{h}^{p_{j-1}})|+|g_{i}( \mathrm{h}^{p_{j-1}})-g_{i}(s^{p_{j-1}})|\] \[\leq|g_{i}(\mathrm{h}^{p_{r}})-g_{i}(\mathrm{h}^{p_{l}})|+\mathrm{ err}(k\cdot\Delta,\beta/3)+\alpha_{\Gamma}\] \[\leq K+C+3\mathrm{err}(k\cdot\Delta,\beta/3)+\alpha_{SV}+2\alpha_{U }+3\alpha_{\Gamma}+1 \tag{67}\] \[=O(\epsilon^{-1}(\sqrt{k\ln(e^{2\epsilon/3}k\Delta/\beta\delta)}+ \ln(k\Delta/\beta)+\ln(T/\beta))+\mathrm{err}(k\Delta,\beta/3)+\alpha_{ \Gamma})\] Further, because of Lemma 48, every time we update \(K_{(i)}\), \(g_{i}(\mathrm{h}^{t})\) grows by at least \(1\). Since all \(g_{i}(\mathrm{h}^{t})\) are bounded by \(\Delta\), and every time \(j\) is updated at least one threshold is updated, this implies that \(j<k\cdot\Delta\) is always true. _Case B:_\(t>t(i)\). Consider the last time step \(t_{\infty}\). As \(t_{\infty}\geq t>t(i)\), \(K_{(i)}\) was not updated at time \(t_{\infty}\). Since \(j\leq k\cdot\Delta\), at time \(t_{\infty}\) either the condition in line 12 was false or the condition in line 17 was false. Therefore \(g_{i}(h^{T})<K_{(i)}^{T}+\text{err}(k\cdot\Delta,\beta/3)+\alpha_{U}+\alpha_{SV}+ \alpha_{\Gamma}\). Then by Lemma 46, \(g_{i}(\text{h}^{t(i)})\geq K_{(i)}^{T}-K-C-\alpha_{U}-\text{err}(k\cdot\Delta, \beta/3)-\alpha_{\Gamma}\). 
These two equations together give \[g_{i}(\text{h}^{T})-g_{i}(\text{h}^{t(i)})\leq K+C+2\alpha_{U}+\alpha_{SV}+ \text{err}(k\Delta,\beta/3)+2\alpha_{\Gamma}\leq K+3C+\text{err}(k\Delta,\beta /3)+2\alpha_{\Gamma} \tag{68}\] The error for outputting \(g_{i}\) at any timestep \(t\) in an interval \([p_{j-1},p_{j})\subseteq[t(i),T]\) and at \(t=T\) is now bounded by \[|g_{i}(\text{h}^{t})-g_{i}(s^{p_{j-1}})| \leq|g_{i}(\text{h}^{t})-g_{i}(\text{h}^{p_{j-1}})|+|g_{i}(\text{ h}^{p_{j-1}})-g_{i}(s^{p_{j-1}})|\] \[\leq|g_{i}(\text{h}^{T})-g_{i}(\text{h}^{t(i)})|+\text{err}(k \cdot\Delta,\beta/3)+\alpha_{\Gamma}\] \[\leq K+3C+2\text{err}(k\cdot\Delta,\beta/3)+3\alpha_{\Gamma}\] (by Equation 68 ) \[=O(\epsilon^{-1}(\sqrt{k\ln(e^{2\epsilon/3}k\Delta/\beta\delta) }+\ln(k\Delta/\beta)+\ln(T/\beta))+\text{err}(k\cdot\Delta,\beta/3)+\alpha_{ \Gamma}) \tag{69}\] Thus Equations 67 and 69 together give a bound on \(|g_{i}(h^{t})-g_{i}(s^{p_{j}-1})|\) in both cases. For any time \(t\) that belongs to interval \(j\), the values of \(g_{i}\) on the true histogram are \((g_{1}(h^{t}),g_{2}(h^{t}),\ldots,g_{k}(h^{t}))\), and the values output by Algorithm 17 are \((g_{1}(s^{p_{j-1}}),g_{2}(s^{p_{j-1}}),\ldots,g_{k}(s^{p_{j-1}}))\). Thus the error of the algorithm is given by \[\alpha =\max_{t\in T}\max_{i\in[k]}|g_{i}(h^{t})-g_{i}(s^{p_{j-1}})|\] (where \[j\] is the interval that \[t\] belongs to) \[\leq\max_{t\in T}\max_{i\in[k]}O(\epsilon^{-1}\cdot(\sqrt{k\ln(e^{2 \epsilon/3}k\Delta/\beta\delta)}+\ln(k\Delta/\beta)+\ln(T/\beta))+\text{err}( k\Delta,\beta/3)+\alpha_{\Gamma})\] (by Equation 69 ) \[\leq O(\epsilon^{-1}\cdot(\sqrt{k\ln(e^{2\epsilon/3}k\Delta/\beta \delta)}+\ln(k\Delta/\beta)+\ln(T/\beta))+\text{err}(k\Delta,\beta/3)+\alpha_ {\Gamma})\] This then gives us the following lemma. **Lemma 49**.: _Algorithm 17 is \((\alpha,\beta)\)-accurate, with \(\alpha=O(\epsilon^{-1}\cdot(\sqrt{k\ln(e^{2\epsilon/3}k\Delta/\beta\delta)}+ \ln(k\Delta/\beta)+\ln(T/\beta))+\text{err}(k\Delta,\beta/3)+\alpha_{\Gamma})\)._ #### 12.2.2 Algorithm 16 Below we use the following variables: 1. \(L=\min\{\log(24\epsilon^{-2}dc_{\max})+8\log\log(T/\beta)+\log\ln(2e^{ \epsilon/2}/\delta),\log T\}\), 2. \(\alpha_{\Gamma}=\frac{2}{\epsilon}\cdot\sqrt{Ld\ln\left(\frac{2\epsilon /2}{\delta}\right)\log\left(\frac{3d\log T}{\beta}\right)}\), 3. \(\alpha_{DM}=\alpha_{\Gamma}+L+8\epsilon^{-1}\log(3T/\beta)+4\epsilon^{-1}\log(3 \log T/\beta)\), In what follows, \(O_{\log\log\log}\) hides \(\log\log(d,k,T,1/\epsilon,1/\beta)\) terms. **Lemma 50**.: _Algorithm 16 is \((\alpha,2\beta)\)-accurate for k-Query, with_ \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(\epsilon\cdot\text{err}(k2^{ L+2},\beta/3L)+\sqrt{d\log(d/\beta\delta)\log(dc_{\max}/\epsilon)}+\log(T/ \beta)+\sqrt{k}\log(kdc_{\max}/\epsilon\beta\delta)\right)\right)\] Proof.: We first argue about the doubling mechanism. Using Theorem 4, we get that the following guarantees simultaneously hold with probability \(\geq 1-\beta\). 1. the total number of segments produced by the doubling mechanism is upper bounded by \(L\), where \(L=\min\{\log(24\epsilon^{-2}dc_{\max})+8\log\log(T/\beta)+\log\ln(2e^{ \epsilon/2}/\delta),\log T\}\), 2. for every time step \(t\in[T]\), the difference between the true histogram and the noisy histogram stored by the doubling mechanism is upper bounded by \(\alpha_{\Gamma}=O\left(\epsilon^{-1}\cdot\sqrt{Ld\log\left(\frac{\epsilon^{ -2}d\log T}{\beta\delta}\right)}\right)\), and 3. 
the true maximum query value increase within segment \(j\) is at most \(2^{j-1}+2\alpha_{DM}\), where \(\alpha_{DM}=O\left(\epsilon^{-1}\cdot(\sqrt{Ld\log(e^{\epsilon/2}d\log T/\beta \delta)}+\log(T/\beta))\right)\) We condition on these guarantees. We now argue about the accuracy when the modified BoundedMaxQuery mechanism returns an output. For the \(j^{th}\) instantiation of the modified BoundedMaxQuery mechanism, the max query value increase bound \(\Delta\) is defined as \(2^{j-1}+2\alpha_{DM}\). Taking \(\beta^{\prime}=\beta/3L\) in Lemma 49 gives us that the \(j^{th}\) modified BoundedMaxSum mechanism has the following accuracy with probability \(\geq 1-\beta/L\) \[\alpha=O\left(\epsilon^{-1}\cdot\left(\sqrt{dL\log\left(Le^{ \epsilon/2}d\log T/\beta\delta\right)}+\log(TL/\beta)+\log(Lk\Delta/\beta)+\right.\right. \tag{70}\] \[\left.\left.\sqrt{k\log(e^{\epsilon/3}Lk\Delta/\beta\delta)} \right)+\operatorname{err}(k\Delta,\beta/3L)\right)\] We first upper bound \(\Delta\). Since \(j\leq L\), \[\Delta=2^{j-1}+2\alpha_{DM}\leq 2^{L}+2\alpha_{DM}\] We show that \(\alpha_{DM}\leq 2^{L+1}\), which lets us obtain an upper bound of \(\Delta\leq 2^{L+2}\). Recall that \(\alpha_{DM}=L+\alpha_{\Gamma}+\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{ \beta}\right)+\frac{4}{\epsilon}\cdot\log\left(\frac{3\log T}{\beta}\right)\). We bound \(L\) trivially by \(2^{L}\). We now bound the rest of the term by \(2^{L}\). First, we bound \(\alpha_{\Gamma}\) \[\alpha_{\Gamma} =\frac{2}{\epsilon}\cdot\sqrt{Ld\ln\left(\frac{2e^{\epsilon/2}}{ \delta}\right)\log\left(\frac{3d\log T}{\beta}\right)}\] \[\leq\frac{2}{\epsilon}\cdot\sqrt{d\log T\ln\left(\frac{2e^{ \epsilon/2}}{\delta}\right)\log\left(\frac{3d\log T}{\beta}\right)} \text{(since $L\leq\log T$)}\] Thus \[\alpha_{\Gamma}+\frac{8}{\epsilon}\cdot\log\left(\frac{3T}{ \beta}\right)+\frac{4}{\epsilon}\cdot\log\left(\frac{3\log T}{\beta}\right) \leq\frac{2}{\epsilon}\cdot\sqrt{d\log T\ln\left(\frac{2e^{ \epsilon/2}}{\delta}\right)\log\left(\frac{3d\log T}{\beta}\right)}+\frac{12} {\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)\] \[\leq c_{\max}+\frac{2}{\epsilon}\cdot\sqrt{d\log T\ln\left(\frac {2e^{\epsilon/2}}{\delta}\right)\log\left(\frac{3d\log T}{\beta}\right)}+ \frac{12}{\epsilon}\cdot\log\left(\frac{3T}{\beta}\right)\] \[\leq 2^{L}\text{(by definition of $L$ and \eqref{eq:2.2.2})}\] which gives \(\alpha_{DM}\leq 2^{L+1}\). This gives us the required upper bound on \(\Delta\) of \(2^{L+2}\). Next, we show that \(\log L=O_{\log\log(1)}\). 
This follows, since \[L=\log(24\epsilon^{-2}dc_{\max})+8\log\log(T/\beta)+\log\ln(2e^{\epsilon/2}/\delta)\] and so \[\log L=\log\log\left(\left(24\epsilon^{-2}dc_{\max}\right)\cdot\log^{8}(T/ \beta)\cdot\ln(2e^{\epsilon/2}/\delta)\right)=O_{\log\log(1)}.\] Plugging in \(\Delta\leq 2^{L+2}\), \(\log L=O_{\log\log(1)}\), and \(\log\log T=O_{\log\log(1)}\) into Equation 70, we get \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(\sqrt{dL\log(d/\beta\delta)} +\epsilon\cdot\operatorname{err}(k2^{L+2},\beta/3L)+\log(T/\beta)+\sqrt{k\log (k2^{L+2}/\beta\delta)}+\log(k2^{L+2}/\beta)\right)\right) \tag{71}\] For the last term in the summation, \[\log 2^{L+2}=L+2 =O(\log(24\epsilon^{-2}dc_{\max})+8\log\log(T/\beta)+\log\ln(2e^{ \epsilon/2}/\delta))\] \[=O_{\log\log}(\log(\epsilon^{-2}dc_{\max}))\] In the first term in Equation 71, we bound \(\sqrt{L}\) as follows \[\sqrt{L}\leq\sqrt{\log(24\epsilon^{-2}dc_{\max})}+\sqrt{8\log\log(T/ \beta)}+\sqrt{\log\ln(3d\log(T/\beta))}\] Since the final two terms are \(O_{\log\log(1)}\), this reduces to \[\sqrt{L}\leq O_{\log\log\left(\sqrt{\log(dc_{\max}/\epsilon)}\right)}\] Plugging these bounds on \(\sqrt{L}\) and \(\log 2^{L+2}\) in Equation 71, we get \[\alpha=O_{\log\log\left(\epsilon^{-1}\cdot\left(\sqrt{d\log(d/ \beta\delta)\log(dc_{\max}/\epsilon)}+\epsilon\cdot\operatorname{err}(k2^{L+2},\beta/3L)+\log(T/\beta)\right.\right.\] \[+\left.\left.\sqrt{k\log(kdc_{\max}/\epsilon\beta\delta)}+\log( kdc_{\max}/\epsilon\beta)\right)\right)\] Since there are at most \(L\) segments in the two-level mechanism, there are at most \(L\) instantiations of Algorithm 17. Thus all the accuracy guarantees hold together with probability at least \(1-\beta\). Combining the guarantees for the two-level mechanism and the modified BoundedMaxQuery mechanism, we get that Algorithm 16 is \((\alpha,2\beta)\)-accurate for \[\alpha=O_{\log\log\left(\epsilon^{-1}\cdot\left(\epsilon\cdot \operatorname{err}(k2^{L+2},\beta/3L)+\sqrt{d\log(d/\beta\delta)\log(dc_{\max }/\epsilon)}+\log(T/\beta)+\sqrt{k}\log(kdc_{\max}/\epsilon\beta\delta) \right)\right)\] which proves the claimed accuracy guarantee. **Corollary 5**.: _Algorithm 16 instantiated with the histogram mechanism from Fact 5 is \((\alpha,2\beta)\)-accurate for k-Query, with_ \[\alpha=O_{\log\log\left(\epsilon^{-1}\cdot\left(\sqrt{d}\log(d/ \beta\delta)\cdot\left(\log^{3/2}(dc_{\max}/\epsilon^{2})+\log^{3/2}k\right)+ \log(T/\beta)+\sqrt{k}\log(kdc_{\max}/\epsilon\beta\delta)\right)\right)\] Proof.: Using the histogram mechanism mentioned in Fact 5, we have \[\operatorname{err}(k2^{L+2},\beta/3L)=O(\epsilon^{-1}\log(1/\delta) \log(k2^{L+2})\sqrt{d\ln(dk2^{L+2}/\beta)})\] We start by bounding \(\log^{3/2}(k2^{L})\). \[\log^{3/2}k2^{L} =(L+\log k)^{3/2}\] \[\leq\left(\log(24\epsilon^{-2}dc_{\max})+8\log\log(T/\beta)+\log \ln(2\epsilon^{\epsilon/2}/\delta)+\log k\right)^{3/2}\] \[\leq O_{\log\log\left(\log^{3/2}(dc_{\max}/\epsilon^{2})+\log^{3 /2}k\right)\] Thus \[\operatorname{err}(k2^{L+2},\beta/3L)\leq O_{\log\log\left( \epsilon^{-1}\cdot\sqrt{d}\cdot\log(d/\beta\delta)\cdot\left(\log^{3/2}(dc_{ \max}/\epsilon^{2})+\log^{3/2}k\right)\right)\] Plugging this into the accuracy guarantee in Lemma 50, the \(\sqrt{d\log d\log dc_{\max}}\) term is dominated by \(\operatorname{err}(k2^{L+2},\beta/3L)\). 
This gives \[\alpha=O_{\log\log}\left(\epsilon^{-1}\cdot\left(\sqrt{d}\log(d/\beta\delta)\cdot\left(\log^{3/2}(dc_{\max}/\epsilon^{2})+\log^{3/2}k\right)+\log(T/\beta)+\sqrt{k}\log(kdc_{\max}/\epsilon\beta\delta)\right)\right)\] as required.
2308.04071
Path Signatures for Diversity in Probabilistic Trajectory Optimisation
Motion planning can be cast as a trajectory optimisation problem where a cost is minimised as a function of the trajectory being generated. In complex environments with several obstacles and complicated geometry, this optimisation problem is usually difficult to solve and prone to local minima. However, recent advancements in computing hardware allow for parallel trajectory optimisation where multiple solutions are obtained simultaneously, each initialised from a different starting point. Unfortunately, without a strategy preventing two solutions to collapse on each other, naive parallel optimisation can suffer from mode collapse diminishing the efficiency of the approach and the likelihood of finding a global solution. In this paper we leverage on recent advances in the theory of rough paths to devise an algorithm for parallel trajectory optimisation that promotes diversity over the range of solutions, therefore avoiding mode collapses and achieving better global properties. Our approach builds on path signatures and Hilbert space representations of trajectories, and connects parallel variational inference for trajectory estimation with diversity promoting kernels. We empirically demonstrate that this strategy achieves lower average costs than competing alternatives on a range of problems, from 2D navigation to robotic manipulators operating in cluttered environments.
Lucas Barcelos, Tin Lai, Rafael Oliveira, Paulo Borges, Fabio Ramos
2023-08-08T06:10:53Z
http://arxiv.org/abs/2308.04071v1
# Path Signatures for Diversity in Probabilistic Trajectory Optimisation ###### Abstract Motion planning can be cast as a trajectory optimisation problem where a cost is minimised as a function of the trajectory being generated. In complex environments with several obstacles and complicated geometry, this optimisation problem is usually difficult to solve and prone to local minima. However, recent advancements in computing hardware allow for parallel trajectory optimisation where multiple solutions are obtained simultaneously, each initialised from a different starting point. Unfortunately, without a strategy preventing two solutions to collapse on each other, naive parallel optimisation can suffer from mode collapse diminishing the efficiency of the approach and the likelihood of finding a global solution. In this paper we leverage on recent advances in the theory of rough paths to devise an algorithm for parallel trajectory optimisation that promotes diversity over the range of solutions, therefore avoiding mode collapses and achieving better global properties. Our approach builds on path signatures and Hilbert space representations of trajectories, and connects parallel variational inference for trajectory estimation with diversity promoting kernels. We empirically demonstrate that this strategy achieves lower average costs than competing alternatives on a range of problems, from 2D navigation to robotic manipulators operating in cluttered environments. ## I Introduction Trajectory optimisation is one of the key tools in robotic motion, used to find control signals or paths in obstacle-cluttered environments that allow the robot to perform desired tasks. These trajectories can represent a variety of applications, such as the motion of autonomous vehicles or robotic manipulators. In most problems, we consider a _state-space model_, where each distinct situation for the world is called a _state_, and the set of all possible states is called the _state space_[1]. When optimising candidate trajectories for planning and control, two criteria are usually considered: _optimality_ and _feasibility_. Although problem dependant, in general, the latter evaluates in a binary fashion whether the paths generated respect the constraints of both the robot and the task, such as physical limits and obstacle avoidance. Conversely, optimality is a way to measure the quality of the generated trajectories with respect to task-specific desired behaviours. For example, if we are interested in smooth paths we will search for trajectories that minimise changes in velocity and/or acceleration. The complexity of most realistic robot planning problems scales exponentially with the dimensionality of the state space and is countably infinite. When focusing on motion planning, a variety of algorithms have been proposed to find optimal and feasible trajectories. These can be roughly divided into two main paradigms: sampling-based and trajectory optimisation algorithms. Sampling-based planning [2] is a class of planners with _probabilistically complete_ and _asymptotically optimal_ guarantees [3]. These approaches decompose the planning problem into a series of sequential decision-making problems with a tree-based [4] or graph-based [5, 6] approach. However, most approaches are limited in their ability to encode kinodynamic cost like trajectory curvature [7] or acceleration torque limits [8]. 
In addition, despite the completeness guarantee, sampling-based planners are often more computationally expensive as the search space grows and can obtain highly varying results due to the random nature of the algorithms. Trajectory optimisation algorithms [9] use different techniques to minimise a cost functional that encourages solutions to be both optimal and feasible. The most direct optimisation procedure relies on a differentiable cost function and uses functional gradient techniques to iteratively improve the trajectory quality [10]. However, many different strategies have been proposed. For example, one may start from a randomly initialised candidate trajectory and proceed by adding random perturbations to explore the search space and generate approximate gradients, allowing any arbitrary form of cost functional to be encoded [11]. The same approach can be used to search for control signals and a local motion plan concurrently [12]. Finally, a locally optimal trajectory can also be obtained by decomposing the planning problem with sequential quadratic programming [13]. A drawback of these methods is that they usually find solutions that are locally optimal and may need to be run with different initial conditions to find solutions that are feasible or with lower costs.

Fig. 1: **An episode of the _Kitchen_ scene.** Depicted is one of the collision-free paths found by SigSVGD on a reaching task using a 7 DOF Franka Panda arm on the MotionBenchMaker planning benchmark.

Our goal with the present work is to propose a new trajectory optimisation method to improve path diversity. More specifically, we focus on a class of algorithms that perform trajectory optimisation through the parallel optimisation of a batch of trajectories. This concurrent optimisation of several paths in itself already alleviates the proneness to local minima, since many initial conditions are evaluated simultaneously. Nonetheless, we show how a proper representation of trajectories when performing functional optimisation leads to increased diversity and solutions with better global properties, either with direct gradients or Monte Carlo-based gradient approximations. As an illustrative example, refer to Fig. 2. Our approach is based on two cornerstones. On one hand, we use a modification of Stein Variational Gradient Descent (SVGD) [14], a variational inference method to approximate a posterior distribution with an empirical distribution of sampled particles, to optimise trajectories directly on a structured Reproducing Kernel Hilbert Space (RKHS). The structure of this space is provided by the second pillar of our approach. We leverage recent advancements in rough path theory to encode the sequential nature of paths in the RKHS using a Path Signature Kernel [15, 16]. Therefore we can approximate the posterior distribution over optimal trajectories with structured particles during the optimisation while still taking into account motion planning and control idiosyncrasies. More concretely, the main contributions of this paper are listed below:

* We introduce the use of path signatures [17] as a canonical feature map to represent trajectories over high-dimensional state spaces;
* Next, we outline a procedure to incorporate the signature kernel into a variational inference framework for motion planning;
* Finally, we demonstrate through experiments in both planning and control that the proposed procedure results in more diverse trajectories, which aid in avoiding local minima and lead to a better optimisation outcome.
The paper is organised as follows. In Section II we review related work, contrasting the proposed method to the existing literature. In Section III we provide background on path signatures and motion planning as variational inference, which are the foundational knowledge for the method outlined in Section IV. Finally, in Section V we present a number of simulated experiments, followed by relevant discussions in Section VI. ## II Related Work Trajectory optimisation refers to a class of algorithms that start from an initial sub-optimal path and find a, possibly local, optimal solution by minimising a cost function. Given its broad definition, there are many seminal works in the area. One influential early work is Covariant Hamiltonian Optimisation for Motion Planning (CHOMP) [10] and related methods [18, 19, 20]. The algorithm leverages the covariance of trajectories coupled with Hamiltonian Monte Carlo to perform annealed functional gradient descent. However, one of the limitations of CHOMP and related approaches is the need for a fully-differentiable cost function. In Stochastic Trajectory Optimisation for Motion Planning (STOMP) [11] the authors address this by approximating the gradient from stochastic samples of noisy trajectories, allowing for non-differentiable costs. Another approach used in motion planning are quality diversity algorithms, at the intersection of optimisation and evolutionary strategies, of which Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is the most prominent [21, 22, 23]. CMA-ES is a derivative-free method that uses a multivariate normal distribution to generate and update a set of candidate solutions, called individuals. The algorithm adapts the covariance matrix of the distribution based on the observed fitness values of the individuals and the search history, balancing exploration and exploitation of the search space. Because of its stochastic nature, it is ergodic and copes well with multi-modal problems. Nonetheless, it may require multiple initialisations and it typically requires more evaluations than gradient-based optimisers [24]. TrajOpt [13], another prominent planner, adopts a different approach solving a sequential quadratic program and performing continuous-time collision checking. Contrary to sampling-based planners, these trajectory optimisation methods are fast, but only find locally optimal solutions and may require reiterations until a feasible solution is found. Another issue common to these approaches is that in practice they require a fixed and fine parametrisation of trajectory waypoints to ensure feasibility and smoothness, which negates the benefit of working on continuous trajectory space. To address this constraint, in [20] the authors restrict the optimisation and trajectory projection to an RKHS with an associated squared-exponential kernel. However, the cost between sparse waypoints is ignored and the search is still restricted to a deterministic trajectory. Another approach was proposed in GPMP [25, 26, 27] by representing trajectories as Gaussian Processes (GP) and looking for a _maximum a posteriori_ (MAP) solution of the inference problem. More closely related to our approach are [28, 29] which frame motion planning as a variational inference problem and try to estimate the posterior distribution represented as a set of trajectories. In [29], the authors modify GPMP with a natural gradient update rule to approximate the posterior. 
On the other hand, in Stein Variational Motion Planning (SVMP) [28] the posterior inference is optimised using Stein variational gradient descent. This method is similar to ours, but the induced RKHS does not take into account the sequential nature of the paths being represented, which leads to a diminished repulsive force and lack of coordination along the dimensions of the projected space. In contrast, our approach--which we will refer to as Kernel Signature Variational Gradient Descent (SigSVGD)--uses the path signature to encode the sequential nature of the functional being optimised. We argue that this approach leads to a better representation of trajectories promoting diversity and finding better local solutions. To empirically corroborate this claim we use the Occam's razor principle and take SVMP as the main baseline of comparison since it more closely approximates our method. We note that the application of trajectory optimisation need not be restricted to motion planning. By removing the constraint of a target state and making the optimisation process iterative over a rolling horizon we retrieve a wide class of Model Predictive Controllers with applications in robotics [12, 30, 31, 32]. Stein Variational MPC (SVMPC) [32] uses variational inference with SVGD optimisation to approximate a posterior over control policies and more closely resembles SigSVGD. However, like SVMP, it too does not take into account the sequential nature of control trajectories and we will illustrate how our approach can improve the sampling of the control space and promote better policies. ## III Background ### _Trajectory Optimisation in Robotics_ Consider a system with state \(\mathbf{x}\in\mathcal{X}\) and let us denote a _trajectory_ of such system as \(X:[a,b]\rightarrow\mathcal{X}\), where \(\mathcal{X}\) is an appropriate Euclidean space or group. We shall use the notation \(X_{t}\) to denote the dependency on time \(t\in[a,b]\). The trajectory \(X\) describes a _path_ in \(\mathcal{X}\) and we shall use the two denominations interchangeably. In trajectory optimisation the goal is to find the optimal path \(X^{*}\) from a given starting state \(\mathbf{x}_{s}\) to a certain goal state \(\mathbf{x}_{g}\). This can be done by minimising a cost functional that codifies our desired behaviour \(\mathcal{C}:\mathcal{P}_{\mathcal{X}}\rightarrow\mathbb{R}^{+}\), where \(\mathcal{P}_{\mathcal{X}}\) is the Hilbert space of trajectories [33]: \[X^{*}:=\operatorname*{arg\,min}_{X}\mathcal{C}(X),\ \text{s.t.}\ X_{a}= \mathbf{x}_{s}\ \text{and}\ X_{b}=\mathbf{x}_{g}. \tag{1}\] Typically, \(\mathcal{C}\) is a bespoke functional that includes penalties for trajectory non-smoothness, total energy, speed and acceleration tracking, as well as length. To ensure that the solution is feasible and collision-free, additional equality and inequality constraints may also be included [13]. Alternatively, we can solve an unconstrained problem and include additional penalties to the cost functional as soft-constraints [18, 10]. Finally, we draw the reader's attention to the fact that the problem stated in Eq. (1) can be viewed as an open-loop optimal control problem. 
If the solution can be found in a timely manner, the same problem can be cast onto a Model Predictive Control [30, 31, 34] framework \[U^{*}:=\operatorname*{arg\,min}_{U}\mathcal{C}(X,U),\ \text{s.t.}\ X_{a}=\mathbf{x}_{s}, \tag{2}\] where \(U:[a,b]\rightarrow\mathcal{U}\) is a path of control inputs on a given Euclidean space and the mapping to \(\mathcal{X}\) is given by the dynamical system \(\mathbf{f}\) such that \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x},\mathbf{u},t)\). That is to say, we now influence the path \(X\) indirectly through input \(U\), and at any time \(t\) the problem is solved for a finite interval. The closed-loop solution arises from applying only the first immediate control action before re-optimising the solution.

### _Path Signature_

A multitude of practical data streams and time series can be regarded as a path, for example, video, sound, financial data, control signals, handwriting, etc. The path signature transforms such multivariate sequential data (which may have missing or irregularly sampled values) into an infinite-length series of real numbers that uniquely represents a trajectory through Euclidean space. Although formally distinct and with notably different properties, one useful intuition is to think of the signature of a path as akin to a Fourier transform, where paths are summarised by an infinite series of feature space coefficients.

Fig. 2: **Qualitative analysis of 2D planning task.** The plot shows the final 20 trajectories found with different optimisation methods. The colour of each path shows its normalised final cost. Note how all batch gradient descent trajectories converge to two modes of similar cost. Paths found by SVMP are already more diverse, but one of the gradient descent modes is lost. Note how when multiple trajectories converge to a single trough, the knots are pushed away by the repulsive force resulting in suboptimal solutions. Conversely, paths found by SigSVGD are diverse and able to find more homotopic solutions, including those found by BGD. Note also how paths are able to converge to the same trough without being repelled by one another since the repulsive force takes into account the entire trajectory and not exclusively the spline knot placement. That also allows for paths that are more direct and coordinated than SVMP.

Consider a path \(X\) traversing space \(\mathcal{X}\subseteq\mathbb{R}^{c}\) as defined in Section III-A. Note that at any time \(t\) such path can be decomposed into \(X_{t}=\left\{X_{t}^{1},X_{t}^{2},\ldots,X_{t}^{c}\right\}\). Now recall that for a one-dimensional path \(X_{t}\) and a function \(f\), the path integral of \(f\) along \(X\) is defined by: \[\int_{a}^{b}f(X_{t})\mathrm{d}X_{t}=\int_{a}^{b}f(X_{t})\dot{X}_{t}\mathrm{d}t. \tag{3}\] In particular, note that the mapping \(t\to f(X_{t})\) is also a path. In fact, Eq. (3) is an instance of the Riemann-Stieltjes integral [35], which computes the integral of one path against another. Let us now define the _1-fold iterated_ integral, which computes the increment of the \(i\)-th coordinate of the path at time \(t\), as: \[\mathrm{S}(X)_{t}^{i}=\int_{a<s<t}\mathrm{d}X_{s}^{i}=X_{t}^{i}-X_{a}^{i}. \tag{4}\] Likewise, the _2-fold iterated_ integral of \(X\) along coordinates \(i,j\in\{1,\ldots,c\}\) is \[\mathrm{S}(X)_{t}^{i,j}=\int_{a<s<t}\mathrm{S}(X)_{s}^{i}\,\mathrm{d}X_{s}^{j}=\int_{a<r<s<t}\mathrm{d}X_{r}^{i}\,\mathrm{d}X_{s}^{j}, \tag{5}\] and, recursively, the _\(k\)-fold iterated_ integral for a multi-index \((i_{1},\ldots,i_{k})\in\{1,\ldots,c\}^{k}\) is \[\mathrm{S}(X)_{t}^{i_{1},\ldots,i_{k}}=\int_{a<s<t}\mathrm{S}(X)_{s}^{i_{1},\ldots,i_{k-1}}\,\mathrm{d}X_{s}^{i_{k}}. \tag{6}\] The _signature_ of the path \(X\) over the interval \([a,b]\) is then the infinite collection of all such iterated integrals, \[\mathrm{S}(X)_{a,b}=\left(1,\mathrm{S}(X)_{b}^{1},\ldots,\mathrm{S}(X)_{b}^{c},\mathrm{S}(X)_{b}^{1,1},\mathrm{S}(X)_{b}^{1,2},\ldots\right), \tag{7}\] indexed by all multi-indices of every length \(k\geq 1\). In practice we work with the _truncated signature_ of degree \(d\), \[\mathrm{S}^{d}(X)_{a,b}=\left(1,\mathrm{S}(X)_{b}^{1},\ldots,\mathrm{S}(X)_{b}^{i_{1},\ldots,i_{k}},\ldots\right),\quad k\leq d, \tag{8}\] which retains only the terms with multi-indices of length at most \(d\) and whose number of entries therefore grows as \(O(c^{d})\). The signature enjoys several properties that make it an attractive feature map for trajectories; the following are particularly relevant to this work.

_Invariance under reparametrisation:_ An important difficulty when vying for diversity in trajectory optimisation is the potential symmetry present in the data. This is particularly true when dealing with sequential data, such as, for instance, trajectories of an autonomous vehicle. In this case, the problem is compounded as there is an infinite group of symmetries given by the reparametrisation of a path (i.e. continuous surjections of the time domain onto itself), each leading to distinct similarity metrics. In contrast, the path signature acts as a filter that is invariant to reparametrisation, removing these troublesome symmetries and resulting in the same features, as shown in Figure 3.

_Dimension is independent of path length:_ The final property we will emphasise is how the dimension of the signature depends on its degree and the intrinsic dimension of the path, but is independent of the path length. In other words, the signature dimension is invariant to the degree of discretisation of the path.
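To make these definitions concrete, the following minimal sketch (our own illustration, not code from the paper) computes the degree-2 truncated signature of a piecewise-linear path by accumulating the iterated sums segment by segment, and checks the reparametrisation invariance discussed above empirically:

```python
import numpy as np

def signature_degree_2(X):
    """Degree-2 truncated signature of a piecewise-linear path X with shape (l, c).

    Level 1 is the total increment X_b - X_a; level 2 accumulates, for each
    segment increment dx, (running level 1) outer dx + 0.5 * dx outer dx,
    which evaluates the iterated integrals exactly on linear segments.
    """
    c = X.shape[1]
    s1 = np.zeros(c)
    s2 = np.zeros((c, c))
    for dx in np.diff(X, axis=0):
        s2 += np.outer(s1, dx) + 0.5 * np.outer(dx, dx)
        s1 += dx
    return s1, s2

# Inserting redundant waypoints along the segments (a reparametrisation of the
# same path) leaves the signature unchanged.
X = np.array([[0.0, 0.0], [1.0, 0.5], [1.5, 2.0]])
X_re = np.array([[0.0, 0.0], [0.5, 0.25], [1.0, 0.5], [1.25, 1.25], [1.5, 2.0]])
for a, b in zip(signature_degree_2(X), signature_degree_2(X_re)):
    assert np.allclose(a, b)
```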
### _Stein Variational Gradient Descent_

Variational inference (VI) [45] is an established and powerful method for approximating challenging posterior distributions in Bayesian Statistics. As opposed to Markov chain Monte Carlo (MCMC) [46] approaches, in VI the inference problem is cast as an optimisation problem in which a candidate distribution \(q^{*}(\mathbf{x})\) within a distribution family \(\mathcal{Q}\) is chosen to best approximate the target distribution \(p(\mathbf{x})\). This is typically obtained by minimising the Kullback-Leibler (KL) divergence: \[q^{*}=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\;D_{\text{KL}}\big{(}q||p\big{)}. \tag{9}\] The solution also maximises the Evidence Lower Bound (ELBO), as expressed by the following objective \[q^{*}=\operatorname*{arg\,max}_{q\in\mathcal{Q}}\mathbb{E}_{q}\big{[}\log p(\mathbf{x})\big{]}-D_{\text{KL}}\big{(}q(\mathbf{x})||p(\mathbf{x})\big{)}. \tag{10}\] The main challenge that arises is defining an appropriate \(\mathcal{Q}\). Stein variational gradient descent (SVGD) [14] addresses this issue while also solving for Eq. (9) by performing Bayesian inference in a non-parametric nature, removing the need for assumptions on restricted parametric families for \(q(\mathbf{x})\). This approach approximates a posterior \(p(\mathbf{x})\) with a set of particles \(\{\mathbf{x}\}_{i=1}^{N_{p}}\), \(\mathbf{x}\in\mathbb{R}^{p}\). These particles are iteratively updated in parallel according to: \[\mathbf{x}^{i}\leftarrow\mathbf{x}^{i}+\epsilon\phi^{*}(\mathbf{x}^{i}), \tag{11}\] given a step size \(\epsilon\).
The function \(\phi(\cdot)\) is known as the score function and defines the velocity field that maximally decreases the KL-divergence: \[\phi^{*}=\operatorname*{arg\,max}_{\phi\in\mathcal{H}}\;\big{\{}-\nabla_{ \epsilon}D_{\text{KL}}\big{(}q_{|\epsilon\phi|}||p\big{)},\text{ s.t. }\|\phi\|_{\mathcal{H}}\leq 1\big{\}}, \tag{12}\] where \(\mathcal{H}\) is a Reproducing Kernel Hilbert Space (RKHS) induced by a positive-definite kernel \(k:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}\), and \(q_{|\epsilon\phi|}\) indicates the particle distribution resulting from taking an update step as in Eq. (11). Recall that an RKHS \(\mathcal{H}\) associated with a kernel \(k\) is a Hilbert space of functions endowed with an inner product \(\langle\cdot,\cdot\rangle\) such that \(f(\mathbf{x})=\langle f,k(\cdot,\mathbf{x})\rangle\) for any \(f\in\mathcal{H}\) and any \(\mathbf{x}\in\mathcal{X}\)[47]. In [14], the problem in (12) has been shown to yield a closed-form solution which can be interpreted as a functional gradient in \(\mathcal{H}\) and approximated with the set of particles: \[\phi^{*}(\mathbf{x})=\mathbb{E}_{\gamma\sim\hat{q}}\big{[}k(\mathbf{y}, \mathbf{x})\,\nabla_{\mathbf{y}}\log p(\mathbf{y})+\nabla_{\mathbf{y}}k( \mathbf{y},\mathbf{x})\big{]}, \tag{13}\] with \(\hat{q}=\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}\delta(\mathbf{x}^{i})\) being an empirical distribution that approximates \(q\) with a set of Dirac delta functions \(\delta(\mathbf{x}^{i})\). For SVGD, \(k\) is typically a translation-invariant kernel, such as the squared-exponential or the Matern kernels [14, 48]. ## IV Method Our main goal is to find a diverse set of solutions to the problem presented in Section III-A. To that end, we begin by reformulating Eq. (1) as a probabilistic inference problem. Next, we show that we can apply SVGD to approximate the posterior distribution of trajectories with a set of sampled paths. Finally, in Section IV-C, we present our main contribution discussing how we can promote diversity among the sample paths by leveraging the Path Signature Kernel. ### _Stein Variational Motion Planning_ To reframe the trajectory optimisation problem described in Eq. (1) as probabilistic inference we introduce a binary optimality criterion, \(\mathcal{O}:\mathcal{P}_{\mathcal{X}}\rightarrow\{0,1\}\), analogously to [31, 49]. Simplifying the notation with \(\mathcal{O}\) indicating \(\mathcal{O}=1\), we can represent the posterior distribution of optimal trajectories as \(p(X\mid\mathcal{O})\propto p(\mathcal{O}\mid X)p(X)\), for a given optimality likelihood \(p(\mathcal{O}\mid X)\) and trajectory prior \(p(X)\). The _maximum a posteriori_ (MAP) solution is given by finding the mode of the negative log posterior: \[\begin{split} X^{*}&=\operatorname*{arg\,min}_{X}- \log p(\mathcal{O}\mid X)-\log p(X)\\ &=\operatorname*{arg\,min}_{X}\lambda\mathcal{C}(X)-\log p(X), \end{split} \tag{14}\] where the last equality arises from the typical choice of the exponential distribution to represent the optimality likelihood, i.e. \(p(\mathcal{O}\mid X)=\exp(-\lambda\mathcal{C}(X))\) with \(\lambda\) being a temperature hyper-parameter. Rather than finding the MAP solution, we are interested in approximating the full posterior distribution, which may be multi-modal, and generating diverse solutions for the planning problem. As discussed in Section III-C, we can apply SVGD to approximate the posterior distribution with a collection of particles. In the case at hand each of such particles is a sampled path, such that Eq. 
(13) can be rewritten as: \[\phi^{*}(X)=\mathbb{E}_{Y\sim\hat{q}}\big{[}k(Y,X)\,\nabla_{Y}\log p(Y\mid\mathcal{O})+\nabla_{Y}k(Y,X)\big{]}. \tag{15}\] The score function presented in Eq. (15) is composed of two competing forces. On one hand, the kernel-smoothed gradient of the log-posterior pushes particles towards regions of higher probability, whereas the second term acts as a repulsive force, pushing particles away from one another. It is worth emphasising that the kernel function is _static_, i.e. it does not consider the sequential nature of the input paths. In effect, for a path of dimension \(c\) and \(s\) discrete time steps, the inputs are projected onto a space \(\mathcal{V}\subset\mathbb{R}^{c\times s}\) in which similarities are evaluated. Finally, the posterior gradient can be computed by applying Bayes' rule, resulting in: \[\nabla_{Y}\log p(Y\mid\mathcal{O})=\nabla_{Y}\log p(Y)-\nabla_{Y}\lambda\mathcal{C}(Y). \tag{16}\]

### _Stein Variational Motion Planning with Smooth Paths_

In previous work [25, 27, 28, 50] the prior distribution in Eq. (16) is defined in a way that promotes smoothness of the generated paths. This typically revolves around defining Gaussian Processes [48] as priors and leveraging factor graphs for efficiency. Although effective, this approach still requires several latent variables to describe a desired trajectory, which implies a higher-dimensional inference problem. Importantly, the problem dimensionality is directly related to the amount of repulsive force exerted by the kernel. In large dimensional problems, the repulsive force of translation-invariant kernels vanishes, allowing particles to concentrate around the posterior modes, which results in an underestimation of the posterior variance [51]. This problem is further accentuated when considering the static nature of the kernel function, as discussed in the previous section. In order to keep the inference problem low-dimensional while still enforcing smooth paths we make use of _natural cubic splines_ and aim to optimise the location of a small number of knots. These knots may be initialised in different ways, such as perturbations around a linear interpolation from the starting state \(\mathbf{x}_{s}\) and goal state \(\mathbf{x}_{g}\), sampled from an initial solution given by a shooting method (e.g. RRT [4]), or drawn randomly from within the limits of \(\mathcal{X}\). For simplicity, in this work we will opt for the latter. Since path smoothness is induced by the splines, the choice of prior is more functionally related to the problem at hand. If one desires some degree of regularisation on the trajectory optimisation, a multivariate Gaussian prior centred at the placement of the initial knots may be used. Conversely, if we only wish to ensure the knots are within certain bounds, a less informative smoothed approximation of the uniform prior may be used. More concretely, for a box \(B=\{x\colon a\leq x\leq b\}\), such a prior would be defined as: \[p(x)\propto\exp\left(-d(x,B)^{2}/\sqrt{2\sigma^{2}}\right) \tag{17}\] where the distance function is given by \(d(x,B)=\min_{x^{\prime}\in B}\|x-x^{\prime}\|\). Finally, we could define both a prior and hyper-prior if we wish to combine both effects (see Appendix -C for details). As discussed in Section -A the cost functional \(\mathcal{C}\) imposes penalties for collisions and defines the relevant performance criteria to be observed.
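As a concrete reference for Eq. (17), here is a minimal sketch of the smoothed box prior over knot positions together with a simple finite-difference gradient; the helper names and the value of \(\sigma\) are our own choices, not specified in the paper:

```python
import numpy as np

def smoothed_box_log_prior(x, lo, hi, sigma=0.1):
    """Unnormalised log-density of the smoothed uniform prior in Eq. (17).

    x:      (..., c) array of knot positions
    lo, hi: (c,) bounds of the box B = {x : lo <= x <= hi}
    sigma:  softness of the boundary (the density is flat inside the box)
    """
    # d(x, B): zero inside the box, distance to the nearest point of B outside.
    outside = np.maximum(lo - x, 0.0) + np.maximum(x - hi, 0.0)
    d = np.linalg.norm(outside, axis=-1)
    return -d**2 / np.sqrt(2.0 * sigma**2)

def smoothed_box_log_prior_grad(x, lo, hi, sigma=0.1, eps=1e-6):
    """Central finite-difference gradient of the log-prior, sufficient for a sketch."""
    grad = np.zeros_like(x)
    for k in range(x.shape[-1]):
        dx = np.zeros_like(x)
        dx[..., k] = eps
        grad[..., k] = (smoothed_box_log_prior(x + dx, lo, hi, sigma)
                        - smoothed_box_log_prior(x - dx, lo, hi, sigma)) / (2 * eps)
    return grad
```

With a Gaussian prior centred at the initial knots, the log-prior gradient is instead available in closed form.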
Since only a small number of knots is used for each path, some of these performance criteria and, in particular, collision checking require that we discretise the resulting spline into a sufficiently dense set of points. It is worth mentioning that \(\mathcal{C}\) is typically non-differentiable and that the gradient in Eq. (16) is usually approximated with Monte Carlo samples [31]. However, as this introduces an extra degree of stochasticity in the benchmark comparison, we will restrict our choice of \(\mathcal{C}\) to be differentiable. We will discuss the performance criteria of each problem in the experimental section.

### _Stein Variational Motion Planning with Path Signature Kernel_

In this section we present our main contribution, which is a new formulation for motion planning in which the path signature can be used to efficiently promote diversity in trajectory optimisation through the use of signature kernels. In Section -C we discussed some desirable properties of the signature transform. The key insight is that the space of linear combinations of signatures forms an algebra, which enables it as a faithful feature map for trajectories [15]. With that in mind, perhaps the most straightforward use of the signature would be to redefine the kernel used in Eqs. (12) and (13) as \(\tilde{k}(X,Y)=k(\mathrm{S}(X)_{t},\mathrm{S}(Y)_{t})\). However, as seen in Section -C this approach would not be scalable given the exponential time and space complexity of the signature w.r.t. its degree. A single evaluation of the Gram kernel matrix for \(\tilde{k}\) would be an operation of order \(O(n^{2}\cdot c^{d})\), where \(n\) is the number of concurrent paths being optimised, \(d\) is the degree of the signature, and \(c\) is the dimensionality of the state space underlying \(\mathcal{P}_{\mathcal{X}}\ni X,Y\). Furthermore, kernel \(\tilde{k}\) is static in the sense that it does not take into account the sequential nature of its domain. Rather than a kernel \(k\colon\mathcal{X}\times\mathcal{X}\to\mathbb{R}\), we want to define a kernel \(k^{+}\colon\mathcal{P}_{\mathcal{X}}\times\mathcal{P}_{\mathcal{X}}\to\mathbb{R}\), which takes into account the structure induced by paths. Hence, we take a different approach and proceed by first projecting paths to an RKHS onto which we will then compute the signature. That is, given a static kernel \(k\colon\mathcal{X}\times\mathcal{X}\to\mathbb{R}\), a path \(X\in\mathcal{P}_{\mathcal{X}}\) can be lifted to a path in the RKHS \(\mathcal{P}_{\mathcal{H}}\) through the map \(k_{X}\colon t\mapsto k(X_{t},\cdot)\), where \(\mathcal{P}_{\mathcal{H}}\) is the set of \(\mathcal{H}\)-valued paths. Finally, we compute the signature of the lifted path \(\mathrm{S}(k_{X})_{t}\) and use it as our final feature map. At first glance, this further deteriorates scalability, since most useful \(\mathcal{P}_{\mathcal{H}}\) are infinite dimensional, rendering this approach infeasible. However, results presented by Kiraly and Oberhauser [15, Corollary 4.9] show that this approach can be completely kernelised. This allows them to define a _truncated signature kernel_, \(k^{+}\colon(X_{t},Y_{t})\mapsto\langle\mathrm{S}^{d}(k_{X})_{t},\mathrm{S}^{d}(k_{Y})_{t}\rangle\), that can be efficiently computed using only evaluations of a static kernel \(k(\mathbf{x},\mathbf{y})\) at discretised timestamps. The number of evaluations depends on the truncation degree \(d\) and the number of discretised steps \(l\).
Several algorithmic approaches are considered in [15] with dynamic programming having complexity \(O(n^{2}\cdot l^{2}\cdot d)\) to compute a \((n\times n)\)-Gram matrix. Otherwise, approximations can be used to reduce the complexity to linear on \(l\) and \(n\). However, even though the importance of the terms in the signature decay factorially [17], the amount of coefficients grows exponentially, which means that for high values of \(d\) the kernel \(k^{+}\) would be restricted to low-dimensional applications. Nonetheless, recent work [16] proved that for two continuously differentiable input paths the complete _signature kernel_, \[k^{\oplus}\colon(X_{t},Y_{t})\mapsto\langle\mathrm{S}(k_{X})_{t},\mathrm{S}(k_{Y} )_{t}\rangle, \tag{18}\] is the solution of a second-order, hyperbolic partial differential equation (PDE) known as Goursat PDE. Solving this PDE is a problem of complexity \(O(l^{2}\cdot c)\), so still restrictive on the discretisation of the path. However, by its intrinsic nature, the PDE can be parallelised, turning the complexity into \(O(l\cdot c)\), as long as the GPU is able to accommodate the required number of threads. Therefore the untruncated signature kernel can be efficiently and parallel computed using state-of-the-art hyperbolic PDE solvers and finite-difference evaluations of the static kernel \(k\). Hence, we can directly apply \(k^{\oplus}\) in Eq. (15) and we now have a way to properly represent sequential data in feature space, resulting in the final gradient update function: \[\phi^{*}(X)=\mathbb{E}\big{[}k^{\oplus}(Y,X)\,\nabla_{Y}\log p(Y\mid\mathcal{O })+\nabla_{Y}k^{\oplus}(Y,X)\big{]}, \tag{19}\] where the expectation is taken by sampling paths \(Y\) from \(\hat{q}\). For convenience, we will use the acronym SigSVGD whether the algorithm is used for planning or control problems. A complete overview of the algorithm is presented in Algorithm 1. ``` Input: A cost function \(\mathcal{C}(X)\) or target distribution \(p(X)\), a prior distribution \(q(X_{t_{0}})\), a signature kernel \(k^{\oplus}\). Output: A set of particles \(\big{\{}X_{t}^{i}\big{\}}_{i=1}^{N_{p}}\) that approximates the posterior distribution over optimal paths. 
1: Sample \(\big\{X_{t_{0}}^{i}\big\}_{i=1}^{N_{p}}\sim q(X_{t_{0}})\);
2: while task not completed do
3:   if using Monte Carlo samples then
4:     generate \(N_{s}\) samples for each path, \(X_{t}^{i,j}\gets X_{t}^{i}+\eta_{j}\);
5:   if using splines then
6:     generate decimated trajectories from the knots \(X_{t}\);
7:   evaluate \(\mathcal{C}(X_{t})\) in parallel;
8:   if the target distribution \(p(X_{t})\) is available then
9:     update the score \(\phi^{*}\leftarrow\frac{1}{N_{p}}\sum_{i}^{N_{p}}[k^{\oplus}(X_{t}^{i},X_{t})\,\nabla_{X_{t}^{i}}\log p(X_{t}^{i})+\nabla_{X_{t}^{i}}k^{\oplus}(X_{t}^{i},X_{t})]\);
10:  else
11:    approximate the log-posterior gradient \(\nabla_{X_{t}^{i}}\log p(X_{t}^{i}\mid\mathcal{O})\approx\nabla_{X_{t}^{i}}\log q(X_{t-1}^{i}\mid\mathcal{O})+\nabla_{X_{t}^{i}}\log\frac{1}{N_{s}}\sum_{j}^{N_{s}}\exp(-\alpha\mathcal{C}(X_{t}^{i,j}))\);
12:    update the score \(\phi^{*}\leftarrow\frac{1}{N_{p}}\sum_{i}^{N_{p}}[k^{\oplus}(X_{t}^{i},X_{t})\,\nabla_{X_{t}^{i}}\log p(X_{t}^{i}\mid\mathcal{O})+\nabla_{X_{t}^{i}}k^{\oplus}(X_{t}^{i},X_{t})]\);
13:  update the paths \(X_{t}\gets X_{t}+\epsilon\phi^{*}\);
14:  update the prior \(q(X_{t}\mid\mathcal{O})\gets p(X_{t}\mid\mathcal{O})\); for details, see [31]
15:  \(t\gets t+1\);

**Algorithm 1** Kernel Signature Stein Variational Gradient Descent (SigSVGD)

## V Results

In this section we present results demonstrating the correctness and applicability of our method in a set of simulated experiments, ranging from simple 2D motion planning to a challenging benchmark for robotic manipulators.

### _Motion Planning on 2D Terrain_

Our first set of experiments consists of trajectory optimisation on a randomised 2D terrain, illustrated in Fig. 2. Regions of higher cost, or hills, are shown in a darker shade, whereas valleys are in a lighter colour. The terrain is parameterised by a series of isotropic multivariate Gaussian distributions placed randomly according to a Halton sequence and aggregated into a Gaussian mixture model denoted by \(p_{\text{map}}\). Paths are parameterised by natural cubic splines with \(N_{k}=2\) intermediary knots, apart from the start and goal states. Our goal is to place these knots so that the resulting paths from origin to goal avoid regions of high cost without becoming too long. We adopt the following cost function in order to balance trajectory length and navigability:

\[\mathcal{C}(\mathbf{x}_{t})=\sum_{t\in[a,b]}\Big(p_{\text{map}}(\mathbf{x}_{t})+75\,\|\mathbf{x}_{t}-\mathbf{x}_{t-1}\|_{2}\Big), \tag{20}\]

where the \(\ell^{2}\)-norm term is a piecewise linear approximation of the trajectory length. To ensure the approximation is valid, each trajectory is decimated into 100 waypoints before being evaluated by Eq. (20). The initial knots are randomly placed, and the plots in Fig. 2 show the final 20 trajectories found with three different optimisation methods, with the colour of each path depicting its normalised final cost. On the left we see the solutions found with Batch Gradient Descent (BGD); note how all trajectories converge to two modes of similar cost. The SVMP results are more diverse, but fail to capture one of the BGD modes. Also note how, when multiple trajectories converge to a single trough, the spline knots are pushed away by the repulsive force, resulting in suboptimal solutions. On the other hand, the trajectories found by SigSVGD are not only more diverse, finding more homotopic solutions, but are also able to coexist in the narrow valleys.
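The signature kernel \(k^{\oplus}\) of Eq. (18) that drives this repulsion can be evaluated by solving the Goursat PDE, as described above. The sketch below is a sequential toy version using a simple explicit finite-difference scheme and a static RBF lift; the kernel parameter and grid handling are illustrative, and practical solvers parallelise the grid updates on GPU and use higher-order discretisations.

```
import numpy as np

def rbf(x, y, gamma=1.0):
    """Static kernel on the state space; any positive-definite kernel can be used for the lift."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def signature_kernel(X, Y, static_kernel=rbf):
    """k^(+)(X, Y) via a first-order finite-difference solution of the Goursat PDE.
    X: (lx, c) and Y: (ly, c) are discretised paths."""
    G = np.array([[static_kernel(x, y) for y in Y] for x in X])   # static Gram matrix on the samples
    # double finite difference of the static kernel acts as the increment term for the lifted paths
    inc = G[1:, 1:] - G[1:, :-1] - G[:-1, 1:] + G[:-1, :-1]
    u = np.ones((len(X), len(Y)))                                 # boundary condition u(0, .) = u(., 0) = 1
    for i in range(len(X) - 1):
        for j in range(len(Y) - 1):
            u[i + 1, j + 1] = u[i + 1, j] + u[i, j + 1] - u[i, j] * (1.0 - inc[i, j])
    return u[-1, -1]
```

The gradient of \(k^{\oplus}\) with respect to the path coordinates, needed in Eq. (19), can then be obtained by running this scheme inside an automatic-differentiation framework.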
This coexistence is possible because the repulsive force is computed in the signature space and not based on the placement of the knots. Furthermore, notice how, for the same reason, the paths are more direct and coordinated when compared to SVMP.

### _Point-mass Navigation on an Obstacle Grid_

Here, our goal is to demonstrate the benefits of applying the signature kernel to Model Predictive Control (MPC). To that end, we reproduce the point-mass planar navigation task presented in [31, 32] and compare SVMPC against a modified implementation using SigSVGD. The objective is to navigate a holonomic point-mass robot from start to goal through an obstacle grid. The system dynamics are represented as a double-integrator model with non-unitary mass \(m\), so the particle acceleration is given by \(\ddot{\mathbf{x}}=m^{-1}\mathbf{u}\) and the control signal is the force applied to the point-mass. We adopt the same cost function as in [31], that is:

\[\mathcal{C}(\mathbf{x}_{t},\mathbf{u}_{t})=0.5\,\mathbf{e}_{t}^{\mathrm{T}}\mathbf{e}_{t}+0.25\,\dot{\mathbf{x}}_{t}^{\mathrm{T}}\dot{\mathbf{x}}_{t}+0.2\,\mathbf{u}_{t}^{\mathrm{T}}\mathbf{u}_{t}+\mathds{1}\{\mathrm{col.}\}\,p\]
\[\mathcal{C}_{\mathrm{term}}(\mathbf{x}_{t},\mathbf{u}_{t})=1000\,\mathbf{e}_{t}^{\mathrm{T}}\mathbf{e}_{t}+0.1\,\mathbf{x}_{t}^{\mathrm{T}}\mathbf{x}_{t}\,,\]

where \(\mathbf{e}_{t}=\mathbf{x}_{t}-\mathbf{x}_{g}\) is the instantaneous position error and \(p=10^{6}\) is the penalty applied when a collision happens. To create a controlled environment with several multi-modal solutions, obstacles are placed equidistantly in a grid (see Fig. 4). The simulator performs a simple collision check based on the particle's state and prevents any future movement if a collision is detected, simulating a crash. Barriers are also placed at the environment boundaries to prevent the robot from easily circumventing the obstacle grid. As the indicator function makes the cost function non-differentiable, we compute approximate gradients using Monte Carlo sampling [32]. Furthermore, since we are using a stochastic controller, we also include CMA-ES and Model Predictive Path Integral (MPPI) [12] in the benchmark. A detailed account of the hyper-parameters used in the experiment is presented in Appendix I.

In this experiment, each particle in the optimisation is a path that represents the mean of a stochastic control policy. Gradients for the policy updates are generated by sampling the control policies and evaluating _rollouts_ via an implicit model of the environment. As CMA-ES only entertains a single solution at any given time, to make the results comparable we increase the number of samples it evaluates at each step to be equivalent to the number of policies times the number of samples in SVMPC. One addition to the algorithm in [32] is the inclusion of particles with predefined primitive control policies which are not optimised; for example, policies which constantly apply the minimum, maximum, or no acceleration are all valid primitives. These primitive policies are also included in every candidate solution set of CMA-ES. The inlay plot in Fig. 4 illustrates how SigSVGD promotes policies that are more diverse, covering more of the state-space on forward rollouts. The outcome can be seen in Table I: SigSVGD finds lower-cost policies and is able to reach the goal in fewer steps than SVMPC.
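For reference, the following is a small sketch of how one rollout could be scored under the cost above; the time step, the mass and the `in_collision` callback are illustrative stand-ins for the simulator used in the experiments, and the terminal term follows Eq. as written in the text.

```
import numpy as np

def rollout_cost(x0, v0, controls, x_goal, in_collision, m=2.0, dt=0.05, penalty=1e6):
    """Accrued cost of one control rollout for the double integrator (running plus terminal cost)."""
    x, v, cost = np.array(x0, float), np.array(v0, float), 0.0
    for u in controls:
        v = v + dt * u / m                       # double-integrator dynamics: xdd = u / m
        x = x + dt * v
        e = x - x_goal
        cost += 0.5 * e @ e + 0.25 * v @ v + 0.2 * u @ u
        if in_collision(x):                      # indicator term: large penalty, and the crash halts motion
            cost += penalty
            break
    e = x - x_goal
    return cost + 1000.0 * e @ e + 0.1 * x @ x   # terminal cost as given in the text
```

Monte Carlo gradients for the policy updates are then obtained by perturbing the mean control sequence, scoring each perturbed rollout and reweighting, as in line 11 of Algorithm 1.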
Due to the dynamical nature of the problem, we are unable to run the optimisation for many iterations during each time-step, as we need to obtain actions from the controller at a fast rate. This poses a challenge to CMA-ES, which crashed on all episodes despite having a much larger number of samples per step.

### _Benchmark Comparison on Robotic Manipulator_

To test our approach on a more complex planning problem, we compare batch gradient descent (i.e. parallel gradient descent on different initialisations), SVMP and SigSVGD on robotic manipulation problems generated using MotionBenchMaker [52]. A problem consists of a scene with randomly placed obstacles and a consistent request to move the manipulator from its starting pose to a target configuration. For each scene in the benchmark, we generate 4 different requests and run the optimisation with 5 random seeds, for a total of 20 episodes per scene. The robot used is a Franka Emika Panda with 7 Degrees of Freedom (DOF). The cost function is designed to generate trajectories that are smooth, collision-free and with a short displacement of the robot's end-effector. We once again resort to a fully-differentiable function to reduce the extraneous influence of approximating gradients with Monte Carlo samples. As is typical in motion planning, the optimisation is performed directly in _configuration space_ (C-space), which simplifies the search for feasible plans. To reduce the sampling space and promote smooth trajectories, we once again parameterise the path of each of the robot joints with natural cubic splines, adopting 3 intermediary knots besides those at the initial and target poses.

\begin{table} \begin{tabular}{l c c} \hline \hline & Cost & Steps \\ \hline \hline SigSVGD & **1056.0 (58.4)** & **189.3 (12.6)** \\ SVMPC & 1396.4 (73.0) & 239.1 (49.4) \\ MPPI & 1740.7 (192.3) & 290.8 (23.7) \\ CMA-ES\({}^{\dagger}\) & — & — \\ \hline \hline \end{tabular} \end{table} TABLE I: **Point-mass navigation results**. The table shows the mean and standard deviation over 20 episodes. _Cost_ indicates the total accrued cost over the episode. _Steps_ indicates the total number of time-steps the controller needed to reach the goal. \({}^{\dagger}\)CMA-ES could not complete the task on any episode, so its results are omitted.

Fig. 4: **Point-mass navigation trajectories**. The plot shows an intermediate time-step of the navigation task for SigSVGD, on the left, and SVMPC, on the right. An inset plot enlarges a patch of the map just ahead of the point-mass. The rollout colours indicate from which of the policies, i.e. paths in the optimisation, they originate, whereas fixed motion primitives are shown in purple. Note how rollouts generated by SigSVGD are more dispersed, providing a better gradient for policy updates.

#### V-C1 Regularising Path Length and Dynamical Motions

Finally, the use of splines to interpolate the trajectories ensures smoothness of the generated trajectories, but that does not necessarily imply smooth dynamics for the manipulator. To visualise this, consider, for example, a trajectory in configuration space parameterised by a natural cubic spline. The configurations \(\mathbf{q}\) in between each knot can be interpolated, resulting in a smooth trajectory of the robot end-effector in Euclidean coordinates in SE(3). However, the same end-effector trajectory could be traversed at a constant linear speed or with a jerky acceleration and deceleration motion.
More specifically, if we use a fixed number of interpolated configurations between knots without imposing dynamical restrictions in the simulator, knots that are further apart will result in motions with greater speed and acceleration, since a larger distance would be covered during the same interval. To avoid these abrupt motions of the robot's joints, we introduce a term \(\mathcal{C}_{\text{dyn}}\) in the cost function which penalises the linear distance between consecutive configurations:

\[\mathcal{C}_{\text{dyn}}=\sum_{i=2}^{p}\mathbf{w}^{\top}\|\mathbf{q}_{i}-\mathbf{q}_{i-1}\|_{2}, \tag{21}\]

where \(p\) is the number of intermediary configurations chosen when discretising the path spline and the weight \(\mathbf{w}\) can be used to assign a higher importance to certain robot joints. We adopt a vector \(\mathbf{w}\) which is a linear interpolation from 1 to 0.7, where the higher value is assigned to the base joint of the manipulator and progressively reduced towards the end-effector. A similar approach to the one presented in Eq. (21) can be used to penalise the length of the robot's trajectory in workspace. We include a final term in our cost function, \(\mathcal{C}_{\text{len}}\), that penalises exclusively the length of the end-effector path. This brings us to our final cost function:

\[\mathcal{C}=2.5\,\mathcal{C}_{\text{len}}+2.5\,\mathcal{C}_{\text{dyn}}+\mathcal{C}_{\text{col}}+10\,\mathcal{C}_{\text{s-col}}, \tag{22}\]

where the terms are, respectively, the costs for path length, path dynamics, collision with the environment and self-collision. The optimisation is carried out for 500 iterations and the kernel repulsive force is scheduled with cosine annealing [53]. By reducing the repulsive force in the last portion of the optimisation, we allow trajectories in the same local minima to converge to the modes and are able to qualitatively measure the diversity of each approach.

The results shown in Fig. 5 demonstrate how SigSVGD achieves better results in almost all metrics for every scenario. The proper representation of paths results in better exploration of the configuration space and leads to better global properties of the solutions found. This can be seen in Fig. 6, which shows the end-effector paths for SigSVGD and SVMP. One such path is also illustrated in Fig. 1. Results found by SigSVGD also show a higher percentage of feasible trajectories and lower contact depths for rollouts in collision (see Table II).

Fig. 5: **Motion planning benchmark**. Results shown are the mean and standard deviation over 5 episodes for 4 distinct requests, totalling 20 iterations per scene. The best result is highlighted with a hatched bar. _Lowest cost_ depicts the cost of the best trajectory found. _Path length_ is the piecewise linear approximation of the end-effector trajectory length for the best trajectory. _NLL_ indicates the negative log likelihood and, since we are using an exponential likelihood, represents the total cost of all sampled trajectories.

#### V-C2 Robot Collision as Continuous Cost

Typically, collision checking is a binary, non-differentiable operation. To generate differentiable collision checking with informative gradients, we resort to continuous occupancy grids.
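Before turning to the collision terms, a sketch of how Eqs. (21) and (22) can be wired together is given below. It reads the weighted norm in Eq. (21) as a per-joint weighting of the absolute displacements, the seven-entry weight vector interpolating from 1 to 0.7 follows the description above, and the collision costs are assumed to come from the occupancy networks introduced next.

```
import numpy as np

def dyn_cost(configs, w):
    """Eq. (21): weighted displacement between consecutive interpolated configurations.
    configs: (p, d) joint configurations along the decimated spline, w: (d,) per-joint weights."""
    return float(np.sum(np.abs(np.diff(configs, axis=0)) @ w))

def length_cost(ee_positions):
    """C_len: piecewise-linear length of the end-effector path, ee_positions: (p, 3)."""
    return float(np.sum(np.linalg.norm(np.diff(ee_positions, axis=0), axis=1)))

def total_cost(configs, ee_positions, col_cost, selfcol_cost, w):
    """Eq. (22) with the weights used in the benchmark."""
    return 2.5 * length_cost(ee_positions) + 2.5 * dyn_cost(configs, w) + col_cost + 10.0 * selfcol_cost

# per-joint weights, from 1.0 at the base joint down to 0.7 at the end-effector (7-DOF arm)
w = np.linspace(1.0, 0.7, 7)
```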
Occupancy grid maps are often generated from noisy and uncertain sensor measurements by discretising the space \(\mathcal{W}\) where the robot operates (known as the _workspace_) into grid cells, where each cell represents an evenly spaced field of binary random variables corresponding to the presence of an obstacle at the given location. However, the discontinuity between cells means these grid maps are non-differentiable and not suitable for optimisation-based planning. A continuous analogue of an occupancy map can be obtained with a kernelised projection to high-dimensional spaces [54] or with distance-based methods [55]. In this work we trade off the extra complexity of the methods previously mentioned for a coarser but simpler approach. Inspired by [56], we learn the occupancy of each scene using a neural network as a universal function approximator. We train the network to approximate a continuous function that returns the likelihood of a robot configuration being occupied. The rationale for this choice is that, since all methods are optimised under the same conditions, the comparative results should not be substantially impacted by the overall quality of the map. Additionally, the trained network is fast to query and fast to differentiate with respect to its inputs, properties that are beneficial when querying large batches of coordinates for motion planning.

Fig. 6: **Visualisation of SigSVGD in the motion planning benchmark.** The _Blue_ and _Grey_ lines denote the end-effector's trajectories, with the former highlighting the trajectory with the lowest cost. The _Orange_ and _Green_ tinted robot poses denote the start and target configuration, respectively. The translucent robot poses denote in-between configurations of the lowest-cost solution.

Concretely, we are given a dataset of \(n\) pairs of coordinates and binary values indicating whether each coordinate is occupied, i.e. \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\), where \(\mathbf{x}_{i}\in\mathcal{W}\subseteq\mathbb{R}^{w}\) and \(y_{i}\in\{0,1\}\), for \(i=1,\ldots,n\). The network then learns a mapping \(f_{\text{col}}\) between a coordinate of interest \(\mathbf{x}\) and the probability of it being occupied, that is, \(f_{\text{col}}(\mathbf{x})=\mathbb{P}(y=1\mid\mathbf{x})\). A dataset of this format can be obtained, for instance, from depth sensors as point clouds. We model \(f_{\text{col}}\) as a fully-connected neural network, with tanh as the activation function between hidden layers and a sigmoid output layer. Training the network amounts to a binary classification problem, which can be learned via a binary cross-entropy loss with gradient-descent optimisers. As such, we can construct a collision cost function \(f_{\text{col}}\colon\mathcal{W}\to\mathbb{R}\) that maps workspace coordinates into cost values associated with the corresponding locations.

A similar problem occurs when ascertaining whether a given configuration of the robot's joints is unfeasible because it leads to a self-collision. We address this issue in a similar manner, by training a separate neural network to approximate a continuous function \(f_{\text{s-col}}\) which maps configurations of the robot to the likelihood of their being in self-collision. More precisely, \(f_{\text{s-col}}\colon\mathcal{Q}\to\mathbb{R}\), where \(f_{\text{s-col}}(\mathbf{q})=\mathbb{P}(y=1\mid\mathbf{q})\), for \(\mathbf{q}\in\mathcal{Q}\subseteq\mathbb{R}^{d}\) and \(y\in\{0,1\}\).
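The occupancy and self-collision approximators described above can be written down in a few lines. The sketch below is a minimal PyTorch version in which the hidden width, depth and training schedule are illustrative choices rather than the settings used in the experiments.

```
import torch
import torch.nn as nn

class OccupancyNet(nn.Module):
    """Continuous occupancy approximator: coordinate (or configuration) -> P(occupied)."""
    def __init__(self, in_dim=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train(model, coords, labels, epochs=200, lr=1e-3):
    """coords: (n, in_dim) float tensor, labels: (n,) float tensor in {0, 1}."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCELoss()                      # binary cross-entropy, as described above
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(coords), labels)
        loss.backward()
        opt.step()
    return model
```

The same class can be reused for \(f_{\text{s-col}}\) by setting `in_dim` to the number of joints and training on labelled configurations.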
The dataset used to train \(f_{\text{s-col}}\) is generated by randomly choosing configurations within the joint limits of the robot and performing a binary self-collision check provided by the robot's API. #### V-C3 Bringing Collision Cost from Workspace to Configuration Space Collision checking requires information about the workspace geometry of the robot to determine whether it overlaps with objects in the environment. On the other hand, we assume that the robot movement is defined and optimised in C-space. The cost functions to shape robot behaviour are often defined in the Cartesian task space. We denote C-space as \(\mathcal{Q}\subseteq\mathbb{R}^{d}\), where there are \(d\) joints in the case of a robotic manipulator. The joint configurations, \(\mathbf{q}\in\mathcal{Q}\), are elements of the C-space, while Cartesian coordinates in task space are denoted as \(\mathbf{x}\in\mathcal{W}\). We now outline the procedure of _pulling_ a cost gradient defined in the workspace to the C-space. We start by defining \(b\) body points on the robot, each with a forward kinematics function \(\psi_{i}\) mapping configurations to the Cartesian coordinates \(\mathbf{x}_{i}\) at the body point, \(\psi_{i}\colon\mathcal{Q}\to\mathcal{W}\), for each \(i=1,\ldots,b\). Let the Jacobian of the forward kinematics functions w.r.t. the joint configurations be denoted as \[\mathbf{J}(\cdot)^{i}_{\psi}=\frac{\mathbf{d}\psi_{i}}{\mathbf{d}\mathbf{q}}( \cdot). \tag{23}\] The derivative of a cost potential \(\mathcal{C}_{\text{col}}\) which operates on the body points, such as the occupancy cost potential, can then be _pulled_ into the C-space with: \[\nabla_{\mathbf{q}}\mathcal{C}=\sum_{i=1}^{b}\mathbf{J}(\mathbf{q})^{i}_{ \psi}\nabla_{\mathbf{x}}\mathcal{C}, \tag{24}\] which allows us to update trajectory in the C-space \(\mathcal{Q}\) with cost in the Cartesian space \(\mathcal{W}\). ## VI Conclusion This work, to the best of our knowledge, is the first to introduce the use of path signatures for trajectory optimisation in robotics. We discuss how this transformation can be used as a canonical _linear_ feature map to represent trajectories and how it possesses many desirable properties, such as invariance under time reparametrisation. We use these ideas to construct SigSVGD, a kernel method to solve control and motion planning problems in a variational inference setting. It approximates the posterior distribution over optimal paths with an empirical distribution comprised of a set of vector-valued particles which are all optimised in parallel. In previous work it has been shown that approaching the optimisation from the variational perspective alleviates the \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{SigSVGD} & \multicolumn{2}{c}{SVMP} & \multicolumn{2}{c}{Batch Gradient Descent} \\ \cline{2-7} Scene & Contact Depth & Feasible Pct. & Contact Depth & Feasible Pct. & Contact Depth & Feasible Pct. 
\\ \hline \hline Box & 3.74 (2.30) & **94.99 (3.78)** & **3.62 (1.95)** & 94.96 (3.32) & 3.63 (1.95) & 94.97 (3.31) \\ Bookshelf Small & **1.32 (2.50)** & **96.63 (5.48)** & 1.55 (2.19) & 96.20 (4.68) & 1.56 (2.20) & 96.18 (4.71) \\ Bookshelf Tall & 0.56 (1.78) & 98.30 (4.65) & 0.27 (0.60) & 99.02 (1.76) & **0.27 (0.59)** & **99.03 (1.74)** \\ Bookshelf Thin & **2.78 (3.11)** & **94.59 (4.94)** & 3.14 (3.50) & 93.54 (5.57) & 31.4 (3.50) & 93.54 (5.57) \\ Cage & 2.13 (1.82) & **96.12 (2.92)** & **2.00 (1.67)** & 96.11 (2.89) & **2.00 (1.67)** & 96.11 (2.89) \\ Kitchen & **9.82 (6.95)** & 88.04 (9.85) & 10.61 (6.45) & 88.59 (6.21) & 10.62 (6.71) & **88.61 (6.21)** \\ Table Bars & **9.46 (7.43)** & **92.42 (5.89)** & 9.52 (8.05) & 92.09 (6.69) & 9.70 (8.44) & 92.05 (6.85) \\ Table Pick & **0.22 (0.67)** & **99.56 (1.67)** & 0.83 (1.04) & 98.06 (2.62) & 0.83 (1.02) & 98.08 (2.43) \\ Table Under & **3.33 (2.60)** & **93.63 (5.36)** & 5.16 (4.75) & 90.19 (8.21) & 5.18 (4.77) & 90.06 (8.30) \\ \hline \hline \end{tabular} \end{table} TABLE II: **Motion planning benchmark**. Results shown are the mean and standard deviation over 5 episodes for 4 distinct requests, totalling 20 iterations per scene. _Contact Depth_ indicates the average collision depth of the trajectories found (in millimetres), if a collision happens. _Feasible Pct._ is the average percentage of the trajectory that is collision-free. problem of local optimality, providing a more diverse set of solutions. We argue that the use of signatures improves on previous work and can lead to even better global properties. Despite the signature poor scalability, we show how we can construct fast and parallelisable signature kernels by leveraging recent results in rough path theory. The RKHS induced by this kernel creates a structured space that captures the sequential nature of paths. This is demonstrated through an extensive set of experiments that the structure provided helps the functional optimisation, leading to better global solutions than equivalent methods without it. We hope the ideas herein presented will serve an inspiration for further research and stimulate a groundswell of new work capitalising on the benefits of signatures in many other fields within the robotics community.
2308.14262
Experimental simulation of quantum superchannels
Simulating quantum physical processes has been one of the major motivations for quantum information science. Quantum channels, which are completely positive and trace-preserving processes, are the standard mathematical language to describe quantum evolution, while in recent years quantum superchannels have emerged as a substantial extension. Superchannels capture effects of quantum memory and non-Markovianity more precisely, and have found broad applications in universal models, algorithms, metrology, and discrimination tasks, for example. Here, we report an experimental simulation of qubit superchannels in a nuclear magnetic resonance (NMR) system with high accuracy, based on a recently developed quantum algorithm for superchannel simulation. Our algorithm applies to arbitrary target superchannels, and our experiment shows the high quality of NMR simulators for near-term usage. Our approach can also be adapted to other experimental systems and demonstrates prospects for more applications of superchannels.
Hang Li, Kai Wang, Shijie Wei, Fan Yang, Xinyu Chen, Barry C. Sanders, Dong-Sheng Wang, Gui-Lu Long
2023-08-28T02:37:28Z
http://arxiv.org/abs/2308.14262v2
# Experimental simulation of quantum superchannels ###### Abstract Simulating quantum physical processes has been one of the major motivations for quantum information science. Quantum channels, which are completely positive and trace preserving processes, are the standard mathematical language to describe quantum evolution, while in recent years quantum superchannels have emerged as the substantial extension. Superchannels capture effects of quantum memory and non-Markovianity more precisely, and have found broad applications in universal models, algorithm, metrology, discrimination tasks, as examples. Here, we report an experimental simulation of qubit superchannels in a nuclear magnetic resonance (NMR) system with high accuracy, based on a recently developed quantum algorithm for superchannel simulation. Our algorithm applies to arbitrary target superchannels, and our experiment shows the high quality of NMR simulators for near-term usage. Our approach can also be adapted to other experimental systems and demonstrates prospects for more applications of superchannels. ## I Introduction Quantum simulation is one of the original motivations for quantum computing [1]. General quantum evolution is described as completely positive mappings [2], which can describe both unitary and non-unitary processes, including measurements. For exploring non-unitary, dissipative processes are important to understand the physics of decoherence [3], quantum error correction [4], and so on. Quantum simulation of channels, including open-system dynamics, have been studied both theoretically and experimentally [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Similar to quantum channels [2] which describe the relationship of input-output states for a quantum system, quantum superchannels [21; 22; 23], also known as supermaps or combs, describe the relationships between input and output quantum channels. Although a superchannel can also be treated as a channel, it captures some peculiar features more precisely, such as quantum non-Markovianity [24] and quantum resources [25]. In recent years, superchannel theory has been widely used for studying channel discrimination and quantum metrology [26; 27], ebit-assisted quantum communication and error correction [28], a computing model without definite causal order [29], quantum von Neumann architecture [30; 31] and quantum algorithms like quantum machine learning and quantum optimization [32; 33; 34; 35]. Experimental quantum simulation is indispensable for some applications especially when the simulated target is hard to obtain. Nuclear magnetic resonance (NMR) has been well developed as a sophisticated technology in recent decades, and is often exploited as a reliable quantum information processing and quantum simulation platform. Due to its computer-aided high-fidelity pulse-engineering technology and controlling in full range of the system dynamics, NMR has exceptional advantages in simulating quantum many-body systems of small-to-medium size with complex or time-dependent Hamiltonians, such as open-system dynamics [16], quantum phase transition [36], gate characterization [37], measuring correlation functions [38], quantum imaginary evolution [39], and heat conduction [40]. In this work, we implement quantum superchannels based on a recent simulation algorithm [41]. Our theory applies to arbitrary form of superchannels, and it employs a convex-sum decomposition to reduce the circuit simulation cost. 
Our 4-qubit NMR simulator, assisted by the simulation algorithm, is able to realize any qubit superchannel with high fidelity (see Fig. 1). Its circuit contains a pair of pre- and post-unitary operators on the input channel with ancillary qubits serving as quantum memory. We experimentally carried out a few tasks, including randomly generating so-called extreme superchannels, a convex-decomposition of random non-extreme superchannels, and also random dephasing superchannels. We also theoretically demonstrate the application of superchannels for noise-adapted quantum error correction of the amplitude damping channel in the appendix B. The remainder of our paper is organised as follows. Section II introduces the algorithm we use simulating superchannels. Section III presents the experimental method and results. We summarise our paper in Section IV. Further numerical details are reported in the Appendix A and some other examples of superchannel in Appendix B. ## II The algorithm Our goal is to experimentally simulate arbitrary superchannels within a good accuracy. Usually, quantum evolutions are in general described by completely positive trace-preserving (CPTP) maps, also known as quantum channels [4] \[\mathcal{E}(\rho)=\sum_{i=1}^{r}K_{i}\rho K_{i}^{\dagger},\,\rho\in\mathcal{D} (\mathcal{H}),\,\sum_{i}K_{i}^{\dagger}K_{i}=\mathds{1} \tag{1}\] for \(\{K_{i}\}\) the Kraus operators [42]. As an example, a unitary 'qudit' evolution \(U\rho U^{\dagger}\), \(U\in SU(d)\)[43] satisfies \(U^{\dagger}U=UU^{\dagger}=\mathds{1}\), with \(d=\dim(\mathcal{H})\). Channel-state duality [44; 2] maps a channel \(\mathcal{E}\in\mathcal{L}(\mathcal{D})\) into a Choi state \[\omega_{\mathcal{E}}:=\mathcal{E}\otimes\mathds{1}(|\omega\rangle\langle \omega|), \tag{2}\] with \(|\omega\rangle:=\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|i,i\rangle\) a maximally entangled state, also known as a (generalised) Bell state. The rank of the Choi state equals the rank of the channel, which is the minimal number of Kraus operators. As channels can be viewed as states, the operations on them are further defined as superchannels [21; 22; 23]. Similar with channels, superchannels can also be well represented by quantum circuits, Kraus operators, and also Choi states. Given a channel with a set of Kraus operators \(\mathcal{E}=\{K_{i}\}\) and Choi state \(\omega_{\mathcal{E}}\), it is changed by a superchannel \(\hat{\mathcal{S}}\) according to \[\hat{\mathcal{S}}(\omega_{\mathcal{E}})=\sum_{a}S_{a}\omega_{\mathcal{E}}S_{ a}^{\dagger} \tag{3}\] with \[S_{a}=\sum_{m}K_{w}^{ma}\otimes K_{v}^{m} \tag{4}\] and \(\sum_{a}S_{a}^{\dagger}S_{a}=\mathds{1}\) for trace preserving. We place a hat on the symbol for superchannels to avoid confusion. Subscripts \(v\) and \(w\) (4) are pre- and post-unitary operators on channels. Kraus operators of the output channel are represented as \[F_{i}^{a}=\sum_{m}K_{w}^{ma}K_{i}K_{v}^{m,t}, \tag{5}\] with \(K_{v}^{m}=\langle m|\,V\,|0\rangle\), \(K_{w}^{ma}=\langle m|\,W\,|a\rangle\), and \(t\) stands for transposition. Given an arbitrary superchannel, an algorithm has been developed recently for the task of circuit simulation of the superchannel [41]. The algorithm also explores the convexity of the set of superchannels. Being convex, there are extreme points that cannot be written as any convex combination of others [45]. Choi proved that a channel \(\mathcal{E}\) is extreme iff \(\{K_{i}^{\dagger}K_{j}\}\) is linearly independent [2], which has been extended to superchannels [46]. 
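The objects in Eqs. (1)-(3) are straightforward to compute numerically. The NumPy sketch below builds the Choi state of a channel from its Kraus operators and applies a superchannel given its Kraus operators \(\{S_a\}\); the amplitude-damping channel with decay probability 0.3 is only an illustrative input. Linear independence of \(\{K_i^{\dagger}K_j\}\), the extremality criterion just stated, can likewise be checked numerically from the Kraus operators.

```
import numpy as np

def basis(d, i):
    e = np.zeros(d, dtype=complex)
    e[i] = 1.0
    return e

def apply_channel(kraus, rho):
    """Eq. (1): E(rho) = sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def choi_state(kraus, d=2):
    """Eq. (2): omega_E = (E (x) 1)(|omega><omega|), with |omega> = sum_i |ii> / sqrt(d)."""
    omega = sum(np.kron(basis(d, i), basis(d, i)) for i in range(d)) / np.sqrt(d)
    rho = np.outer(omega, omega.conj())
    return sum(np.kron(K, np.eye(d)) @ rho @ np.kron(K, np.eye(d)).conj().T for K in kraus)

def apply_superchannel(superkraus, choi):
    """Eq. (3): S_hat(omega_E) = sum_a S_a omega_E S_a^dagger, with S_a acting on the Choi space."""
    return sum(S @ choi @ S.conj().T for S in superkraus)

# illustrative input channel: amplitude damping with decay probability gamma
gamma = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
omega_E = choi_state([K0, K1])
assert np.isclose(np.trace(omega_E).real, 1.0)   # Choi states of trace-preserving channels have unit trace
```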
This yields an upper bound on the rank which is a necessary condition, leading to the notion of generalized extreme points [47], or gen-extreme points [30]. Previous studies [7; 8; 9; 30; 41; 47] shows that decomposition via convex combination of gen-extreme points is a viable approach. In this work, we adapt this algorithm to our NMR simulator to realize arbitrary qubit superchannels to high accuracy. The circuit form of a qubit gen-extreme superchannel is shown in Fig.1. It contains 4 qubits and 3 steps for the evolution: a pre- and post-unitary operation, and the input channel in the middle. The top register is the ancilla to realize any qubit channel, and it is proven that a single qubit is enough [7; 48]. The 2nd register is the qubit system, and the remaining two are the ancilla to realize the superchannel. A direct application of Stinespring dilation would require 3 more qubits, hence a lot more gates, with one (two) for the channel (superchannel), which can be easily verified from their ranks. Our algorithm \(\mathcal{A}\) accepts an arbitrary target superchannel \(\hat{\mathcal{S}}\) as the input, in the form of its Choi state \(\omega_{\hat{\mathcal{S}}}\), for instance, and uses an optimization scheme from a built-in package of MATLAB [41] to minimize the trace distance \(d(\omega_{\hat{\mathcal{S}}},\omega_{\hat{\mathcal{S}}^{\prime}})\). This trace distance represents simulation accuracy for \[\hat{\mathcal{S}}^{\prime}=\sum_{i=1}^{4}p_{i}\mathcal{S}_{i}^{g}\,, \tag{6}\] which is a convex combination of gen-extreme superchannels with \(p_{i}\) as a probability and \(\hat{\mathcal{S}}_{i}^{g}\) as a gen-extreme superchannel. Our numerical simulation can guarantee the accuracy in the order of \(10^{-3}\) to \(10^{-4}\)[41]. A gen-extreme superchannel is parameterized based on the cosine-sine decomposition of unitary operator [9]. The gradient ascent pulse-engineering (GRAPE) technique [49; 50] helps design the pulse sequence and achieve the optimal control of radio-frequency field of NMR spectrometer. For a given unitary operation \(\mathcal{U}\), the principle of GRAPE is to calculate the gradient of the fitness function corresponding to the forward and backward unitary propagators, and the obtained gradient indicates the direction that the control pulses should be optimized to improve the fitness function. In general, the GRAPE algorithm is implemented fully on a classical computer. Li et al. [51; 52] proposed a hybrid quantum-classical approach to gradient-based optimal control algorithm, where the fitness function and its gradient are obtained on a quantum processor while the classical computer updates the control parameters. This completes the description of our algorithm. In the next section, we present the details for the implementation of a few simulation tasks: random gen-extreme superchannel, random dephasing superchannel, and also a demonstration of the convex decomposition method. ## III Experimental simulation Several superchannels are demonstrated experimentally in the NMR system. We firstly achieve a random gen-extreme superchannel in experiment, displaying the accuracy of superchannel simulation in the NMR system. Then we achieve a dephasing superchannel [53], which is the analog of dephasing channels [4] that only affect phase information without the loss of energy. Finally, we demonstrate the decomposition of a random superchannel, showing a great agreement with the theory [41]. 
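Before describing the experiments, it is worth making the decomposition objective of the algorithm explicit: given the Choi state of the target superchannel and the Choi states of candidate gen-extreme superchannels, the simulation accuracy is the trace distance to their convex combination in Eq. (6). The sketch below is a schematic version of that objective; the full optimisation in [41] additionally parameterises the gen-extreme circuits (via the cosine-sine decomposition) and optimises over those parameters as well.

```
import numpy as np

def trace_distance(A, B):
    """d(A, B) = ||A - B||_1 / 2 for Hermitian matrices such as Choi states."""
    eig = np.linalg.eigvalsh(A - B)
    return 0.5 * float(np.sum(np.abs(eig)))

def mixture_choi(p, choi_list):
    """Choi state of the convex combination S' = sum_i p_i S_i^g  (Eq. 6)."""
    return sum(pi * C for pi, C in zip(p, choi_list))

def simulation_error(p, choi_target, choi_list):
    """Objective minimised over the weights p (and, in the full algorithm, the gen-extreme parameters)."""
    return trace_distance(choi_target, mixture_choi(p, choi_list))
```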
Figure 1: Circuit form of a qubit gen-extreme superchannel.

All experiments are conducted in a liquid NMR system, where the sample, \({}^{13}\)C-labeled _trans-crotonic acid_ molecules dissolved in _Acetone-d6_, is placed into a Bruker Avance III 400 MHz spectrometer at a temperature of 303 K. The molecule contains four carbons, acting as a 4-qubit quantum simulator, whose internal Hamiltonian in the rotating frame is

\[H_{\text{int}}=\sum_{i=1}^{4}\pi\upsilon_{i}\sigma_{z}^{i}+\sum_{1\leq i<j\leq 4}\frac{\pi}{2}J_{i,j}\sigma_{z}^{i}\sigma_{z}^{j}, \tag{7}\]

where \(\sigma_{z}^{i}\) and \(\upsilon_{i}\) are the Pauli \(z\)-operator and chemical shift of the \(i\)-th nuclear spin, respectively, and \(J_{i,j}\) is the J-coupling strength between the \(i\)-th and \(j\)-th nuclear spins. The structure and parameters of the _trans-crotonic acid_ molecule are illustrated in Fig. 2.

A general procedure for simulating the qubit gen-extreme superchannel in Fig. 1 with our 4-qubit NMR quantum processor is as follows.

1. Preparing \(\rho_{\text{in}}\). The whole system is first initialized into a pseudo-pure state by using the spatial average technique [54], \(\rho_{0000}\simeq|0000\rangle\), starting from the thermal equilibrium state. An arbitrary \(\rho_{\text{in}}\) can then be prepared by applying a single-qubit rotation \(R_{\phi}(\theta)\) to the work qubit.

2. Constructing the superchannel \(\hat{\mathcal{S}}\). This mainly consists of applying the pre-operator \(V\), the unitary \(U\) for an input channel, and the post-operator \(W\) sequentially. In general, the three multi-qubit gates can be decomposed into a sequence of single-qubit gates and two-qubit controlled gates with the cosine-sine decomposition (CSD) scheme [9; 55]. Here we packed each operator into an individual GRAPE pulse for higher fidelities in experiment.

3. Measuring \(\rho_{\text{out}}\). Here \(\rho_{\text{out}}\) can be reconstructed through \(\rho_{\text{out}}=I/2+\sum_{i\in\{x,y,z\}}c_{i}\sigma_{i}\) by performing standard quantum state tomography (QST), where the Pauli components \(\sigma_{x}\) and \(\sigma_{y}\) of \(\rho_{\text{out}}\) can be directly obtained from the spectrum of the work qubit by tracing out the remaining qubits, while the \(\sigma_{z}\) component can be obtained in the same way by applying a \(\nicefrac{{\pi}}{{2}}\) rotation readout pulse along the \(X\) axis to the work qubit before the measurement.

In the following, we show how to select \(\rho_{\text{in}}\) in the first stage of the above procedure, how to achieve the three operators in the second stage, and how to characterize the performance of the simulated superchannel with the measured \(\rho_{\text{out}}\) in the last stage of the experiment. In general, to experimentally determine the action of a channel \(\mathcal{E}\) on an arbitrary single-qubit quantum state \(\rho_{\text{in}}=\left[\begin{smallmatrix}0.5+a&b-ic\\ b+ic&0.5-a\end{smallmatrix}\right]\) (\(a\), \(b\) and \(c\) are all real numbers), preparation and measurement of a quantum state set \(\mathcal{B}\) composed of four states are sufficient [56], such that the output state of \(\rho_{\text{in}}\) under \(\mathcal{E}\), \(\rho_{\text{out}}\), can be constructed from the measurement results, \(\mathcal{E}(\mathcal{B})\).
In our scheme, we select the quantum state set as \(\mathcal{B}=\{\left|z\right\rangle,\left|\bar{z}\right\rangle,\left|x\right\rangle,\left|y\right\rangle\}\), where \(\left|z\right\rangle=\left|0\right\rangle\), \(\left|\bar{z}\right\rangle=\left|1\right\rangle\), \(\left|x\right\rangle=(\left|0\right\rangle+\left|1\right\rangle)/\sqrt{2}\) and \(\left|y\right\rangle=(\left|0\right\rangle+i\left|1\right\rangle)/\sqrt{2}\), which can form an arbitrary quantum state, pure or mixed, by linear combination. In this case, the output state of an arbitrary quantum state \(\rho_{\text{in}}\) under the quantum channel \(\mathcal{E}\) is \[\rho_{\text{out}}=\mathcal{E}(\rho_{\text{in}}) = (0.5+a-b-c)\mathcal{E}(\left|z\right\rangle\left\langle z\right| )+(0.5-a-b-c)\mathcal{E}(\left|\bar{z}\right\rangle\left\langle\bar{z}\right|) \tag{8}\] \[+2b\mathcal{E}(\left|x\right\rangle\left\langle x\right|)+2c \mathcal{E}(\left|y\right\rangle\left\langle y\right|).\] At the stage of constructing superchannel \(\hat{\mathcal{S}}\), to achieve the optimal control, GRAPE is utilized to pack each of the three unitary operators into one shaped pulse. All shaped pulses are calculated with their fidelities reaching 99.5% and are guaranteed to be robust to the inhomogeneity of radio-frequency pulses. Figure 2: **Structure and parameters of trans-crotonic acid**. The diagonal and off-diagonal elements in the table are the chemical shifts of spins and J-coupling strengths between spins, respectively. At the last stage, we exploit a measure of state fidelity for an arbitrary input state \(\rho\) under an ideal channel \(\mathcal{E}\) and its experimentally achieved channel \(\mathcal{E}^{\prime}\). The unattenuated state fidelity \(F_{s}\) is \[F_{s}:=\frac{\operatorname{Tr}\left[\mathcal{E}(\rho)\mathcal{E}^{\prime}(\rho) \right]}{\sqrt{\operatorname{Tr}\left[\mathcal{E}(\rho)^{2}\right] \operatorname{Tr}\left[\mathcal{E}^{\prime}(\rho)^{2}\right]}}, \tag{9}\] where \(\mathcal{E}(\rho)\) and \(\mathcal{E}^{\prime}(\rho)\) are the ideal and experimental output density matrices corresponding to input \(\rho\), and \(\mathcal{E}^{\prime}(\rho)\). Thus, \(\rho_{\text{out}}\) in experiment, can be obtained through QST. \(F_{s}\) quantifies the similarity of \(\mathcal{E}(\rho)\) and \(\mathcal{E}^{\prime}(\rho)\) in 'direction' [57], and is mostly used for measuring the experimental result in the NMR quantum information processing which mitigates the attenuated magnetization issue. Similar to the state fidelity definition, the process fidelity between the ideal and experimental realized channel is denoted as \[F_{p}=\frac{\left|\operatorname{Tr}[\chi_{\text{exp}}\chi_{\text{th}}^{\dagger }]\right|}{\sqrt{\operatorname{Tr}[\chi_{\text{th}}\chi_{\text{th}}^{\dagger }]\operatorname{Tr}[\chi_{\text{exp}}\chi_{\text{exp}}^{\dagger}]}}, \tag{10}\] in which \(\chi_{\text{th}}\) and \(\chi_{\text{exp}}\) are the respective ideal and experimentally reconstructed \(\chi\) matrix of an arbitrary quantum channel [58], which are equivalent to their Choi states. Following the above procedure, to change from one experiment to another, next we need to select \(\phi\) and \(\theta\) of \(R_{\phi}(\theta)\) for a specific \(\rho_{\text{in}}\) in set \(\mathcal{B}\), calculate GRAPE pulses of pre-\(U\) operator, \(U\) and post-\(U\) operator for the simulated target superchannel, and finally implement the quantum circuit, obtaining the experimental results. 
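The reconstruction of Eq. (8), the single-qubit QST readout and the fidelity measures of Eqs. (9) and (10) translate directly into code. The NumPy sketch below assumes the parametrisation \(\rho_{\text{in}}=\left[\begin{smallmatrix}0.5+a&b-ic\\ b+ic&0.5-a\end{smallmatrix}\right]\) used above; the dictionary keys are illustrative labels for the four basis states.

```
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def reconstruct_output(rho_in, measured):
    """Eq. (8): output of an arbitrary rho_in from the measured outputs of the four states in B.
    measured maps the labels 'z', 'zbar', 'x', 'y' to the corresponding E(|.><.|)."""
    a = rho_in[0, 0].real - 0.5
    b = rho_in[1, 0].real
    c = rho_in[1, 0].imag
    return ((0.5 + a - b - c) * measured['z'] + (0.5 - a - b - c) * measured['zbar']
            + 2 * b * measured['x'] + 2 * c * measured['y'])

def state_from_pauli(cx, cy, cz):
    """Single-qubit QST readout: rho_out = I/2 + sum_i c_i sigma_i."""
    return np.eye(2, dtype=complex) / 2 + cx * SX + cy * SY + cz * SZ

def state_fidelity(rho_th, rho_exp):
    """Eq. (9): unattenuated state fidelity."""
    num = np.trace(rho_th @ rho_exp).real
    return num / np.sqrt(np.trace(rho_th @ rho_th).real * np.trace(rho_exp @ rho_exp).real)

def process_fidelity(chi_exp, chi_th):
    """Eq. (10): process fidelity between the experimental and ideal chi matrices."""
    num = abs(np.trace(chi_exp @ chi_th.conj().T))
    den = np.sqrt(abs(np.trace(chi_th @ chi_th.conj().T)) * abs(np.trace(chi_exp @ chi_exp.conj().T)))
    return num / den
```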
### Simulation of extreme superchannel We experimentally achieve a randomly chosen extreme superchannel \(\hat{\mathcal{S}}\) in the NMR system, as the circuit shown in Fig. 3(a), where \(\hat{C}_{2}\) act as the work qubit with \(C_{1}\), \(C_{3}\) and \(C_{4}\) serving as ancillae. An original random channel \(\mathcal{E}\) on the work qubit is achieved through a random two-qubit unitary operator \(U\) with an ancilla (\(C_{1}\)). By performing pre-\(U\) and post-\(U\) operators \(V\) and \(W\), we successfully convert the original channel \(\mathcal{E}\) into \(\hat{\mathcal{S}}(\mathcal{E})\). In our experiment, the length of GRAPE pulse for implementing \(V\), \(U\) and \(W\) in experiment are 30 ms, 20 ms and 30 ms. More details of the original random channel \(\mathcal{E}\), unitary operators \(V\) and \(W\) can be found in Appendix A. The theoretical and experimental output density matrices under the quantum channel \(\hat{\mathcal{S}}(\mathcal{E})\), into which the original quantum channel \(\mathcal{E}\) is converted by the randomly chosen extreme superchannel \(\hat{\mathcal{S}}\), are presented in Fig. 3(b), where the top (bottom) panel are the output density matrices \(\rho_{\text{out}}\) in theory (experiment) corresponding to the four input bases of \(\mathcal{B}\). The fidelities \(F_{s}\) between the theoretical and experimental output density matrices of \(|z\rangle\), \(|z\rangle\), \(|x\rangle\) and \(|y\rangle\) under the converted channel \(\hat{\mathcal{S}}(\mathcal{E})\) are 99.94%, 99.04%, 99.40% and 99.77%, respectively, indicating a very good simulation of the converted channel \(\hat{\mathcal{S}}(\mathcal{E})\) in our experiment. For comparison, we also reconstruct the original channel \(\mathcal{E}\), the theoretical and experimental output density matrices under which can be found in Appendix A. To generalize our simulation result of the randomly chosen superchannel \(\hat{\mathcal{S}}\), 1000 input states are sampled on the Bloch sphere surface (green dots in Fig. 4(a)) based on the spherical Fibonacci lattice method, of which the output states can be reconstructed by the measured output states of the four states (8). For comparison, the theoretical and experimental output states of the original random channel \(\mathcal{E}\) are presented as the blue dots in Fig. 4(a), while that of \(\hat{\mathcal{S}}(\mathcal{E})\) are plotted as the red dots. The experimental results of \(\mathcal{E}\) and \(\hat{\mathcal{S}}(\mathcal{E})\) are in good agreement with their theoretical ones, and the significant difference between the output results of \(\mathcal{E}\) and \(\hat{\mathcal{S}}(\mathcal{E})\) indicates the success of converting the original channel \(\mathcal{E}\) into \(\hat{\mathcal{S}}(\mathcal{E})\) by superchannel \(\hat{\mathcal{S}}\). To fully characterize the original channel \(\mathcal{E}\) and the converted channel \(\hat{\mathcal{S}}(\mathcal{E})\), quantum process tomography (QPT) [4] was conducted, and \(\chi_{\text{exp}}\) matrices of both channels were reconstructed from the experimental QST results of our selected input state set \(\mathcal{B}\). Thereafter they were transformed into the standard basis set \(\{\)I, X, Y, Z\(\}\), as shown in Fig. 4(b), which reveals further evidence of successfully converting the original channel \(\mathcal{E}\) into \(\hat{\mathcal{S}}(\mathcal{E})\) by superchannel \(\hat{\mathcal{S}}\) - dramatic difference in the amplitude and phase of entries of \(\mathcal{E}\) and \(\hat{\mathcal{S}}(\mathcal{E})\). 
The process fidelities of the experimentally achieved channels \(\mathcal{E}\) and \(\hat{\mathcal{S}}(\mathcal{E})\) relative to their ideal counterparts are 99.82% and 99.02%, respectively, which verifies the accurate simulation of the channels \(\mathcal{E}\) and \(\hat{\mathcal{S}}(\mathcal{E})\) and, in turn, the accurate simulation of the superchannel \(\hat{\mathcal{S}}\).

Figure 3: (a) Quantum circuit for simulating a random extreme superchannel \(\hat{\mathcal{S}}(\mathcal{E})\) in experiment. \(\rho_{\text{in}}\) is prepared by applying a rotation \(R_{\phi}(\theta)\) to \(C_{2}\), and all components of \(\rho_{\text{out}}\) are obtained by direct measurement or by applying a readout pulse \(R_{x}(\frac{\pi}{2})\) before it. (b) Theoretical (top panel) and experimental (bottom panel) output density matrices of input states \(|z\rangle\), \(|\bar{z}\rangle\), \(|x\rangle\) and \(|y\rangle\) under \(\hat{\mathcal{S}}(\mathcal{E})\). The amplitude and phase of each entry of the density matrices are represented by the height and color of the \(3D\) bars, respectively.

### Simulation of dephasing superchannel

The dephasing superchannel is also gen-extreme [53]. In a fixed basis, it is defined to preserve the diagonal elements of the input Choi states while suffering a phase noise which alters the off-diagonal elements. In fact, if we consider dephasing channels acting on \(d^{2}\)-dimensional systems, it is easy to see that a dephasing superchannel takes the form of a dephasing channel. The experimental circuit is shown in Fig. 5(a), and the original random channel \(\mathcal{E}\) is achieved as before (\(C_{1}\) as the ancilla and \(C_{2}\) as the work qubit). We use controlled-\(V_{i}\) and controlled-\(W_{i}\) gates (acting on \(C_{3}\) and \(C_{4}\), with \(i\in\{1,2\}\)) to achieve different entanglement relationships among the control and controlled qubits, so that the relative phases between the eigenstates of \(\mathcal{E}(\rho_{\text{in}})\) are affected. In other words, we achieve a dephasing superchannel \(\hat{\mathcal{S}}_{d}\), which successfully converts the original channel \(\mathcal{E}\) into \(\hat{\mathcal{S}}_{d}(\mathcal{E})\). Since the input channel only acts on the control unit, the diagonal elements of the input Choi states stay the same. More details of the unitary operators \(V_{i}\) and \(W_{i}\) (\(i\in\{1,2\}\)) can be found in Appendix A.

The whole experiment proceeds as before: the random channel \(\mathcal{E}\) is preserved, i.e., \(U\) is kept unchanged, while the controlled-\(V_{i}\)s and controlled-\(W_{i}\)s are packed into two individual GRAPE pulses with lengths of 22 ms and 25 ms, respectively. The same state set \(\mathcal{B}\) is selected as the input of our simulated channel, and the output density matrices are presented in Fig. 5(b). The state fidelities \(F_{s}\) between the theoretical and experimentally reconstructed density matrices corresponding to the four input states under the channel \(\hat{\mathcal{S}}_{d}(\mathcal{E})\) are 99.89%, 99.90%, 99.94% and 99.42%, respectively. Besides, to characterize the function of \(\hat{\mathcal{S}}_{d}\), one way is to compare the output state of an arbitrary input state before and after the application of the dephasing superchannel \(\hat{\mathcal{S}}_{d}\). The sampled theoretical and experimental output states of the converted channel \(\hat{\mathcal{S}}_{d}(\mathcal{E})\) are presented as the purple dots in Fig.
4(a), which shows the experimental results of \(\hat{\mathcal{S}}_{d}(\mathcal{E})\) are in good agreement with their theoretical ones, and successfully converting the original channel \(\mathcal{E}\) (blue dots) into another channel \(\hat{\mathcal{S}}_{d}(\mathcal{E})\) by a random dephasing superchannel \(\hat{\mathcal{S}}_{d}\). An alternative way is to reconstruct the process matrices \(\chi\) before and after applying \(\hat{\mathcal{S}}_{d}\), as shown in Fig. 4(b). The process fidelity \(F_{p}\) between the ideal and experimentally reconstructed \(\chi\) matrices of the converted channel \(\hat{\mathcal{S}}_{d}(\mathcal{E})\) achieves 99.60%. In a nutshell, both facts indicate a very good simulation of the dephasing superchannel \(\hat{\mathcal{S}}_{d}\) in our experiment. For comparison, the ideal and experimental Choi states of channels \(\mathcal{E}\), \(\hat{S}(\mathcal{E})\) and \(\hat{\mathcal{S}}_{d}(\mathcal{E})\) are reconstructed based on the \(\chi\) matrices, and presented in Fig. 6. It shows the dephasing superchannel \(\hat{\mathcal{S}}_{d}\) causes the dephasing of Choi state \(\omega_{\mathcal{E}}\) of the random channel \(\mathcal{E}\) -- compressing the amplitude of non-diagonal elements while leaving the diagonal elements unchanged. However, a random superchannel \(\hat{S}\) (see subfigures in the middle of Fig. 6) does not have that feature in general. ### Demonstration of superchannel decomposition We experimentally demonstrate the decomposition of a general superchannel \(\hat{\mathcal{S}}_{g}\). Restricted by our experimental apparatus, we choose the input channel as an unitary operator \(U\) and design our quantum circuit by constructing a random non-extreme superchannel with randomly chosen \(V\) and \(W\), as shown in the left panel of Fig. 7(a). We demonstrate that this 4-qubit-composed superchannel can be decomposed into two 3-qubit-composed extreme superchannels. Here, we choose \(V_{i}\) and \(W_{i}\) using composition with equal \(p_{i}\) (6). Therefore, for an arbitrary input channel \(U\) and an arbitrary input state \(\rho_{\text{in}}\), the output state under the general superchannel \(\hat{\mathcal{S}}_{g}\), which is \(\rho_{\text{out}}\) can be approximated as the average of the output states \(\rho_{\text{out}}^{1}\) and \(\rho_{\text{out}}^{2}\) under the two extreme superchannels \(\hat{\mathcal{S}}_{g}^{1}\) and \(\hat{\mathcal{S}}_{g}^{2}\), i.e., \(\rho_{\text{out}}\approx(\rho_{\text{out}}^{1}+\rho_{\text{out}}^{2})/2\). Figure 4: (a) Input and output states sampled in the Bloch sphere. Theoretical (top panel) and experimental (bottom panel) output states corresponding to the input states (green dots) after a random channel (blue dots), a gen-extreme superchannel (red dots) and a dephasing superchannel (purple dots). (b) Theoretical (top panel) and experimental (bottom panel) \(\chi\) matrices of the random channel \(\mathcal{E}\), converted channels \(\hat{\mathcal{S}}(\mathcal{E})\) and \(\hat{\mathcal{S}}_{d}(\mathcal{E})\). Figure 5: (a) Quantum circuit for simulating a random extreme superchannel \(\hat{\mathcal{S}}_{d}(\mathcal{E})\) in experiment. (b) Theoretical (top panel) and experimental (bottom panel) output density matrices of input states \(|z\rangle\), \(|z\rangle\), \(|x\rangle\) and \(|y\rangle\) under \(\hat{\mathcal{S}}_{d}(\mathcal{E})\). Figure 6: Choi state matrices of channel \(\mathcal{E}\), \(\hat{\mathcal{S}}(\mathcal{E})\) and \(\hat{\mathcal{S}}_{d}(\mathcal{E})\). 
The top (bottom) panel shows the respective Choi state matrix reconstructed from the theoretical (experimental) Choi state matrices of the random channel \(\mathcal{E}\), \(\hat{\mathcal{S}}(\mathcal{E})\) and \(\hat{\mathcal{S}}_{d}(\mathcal{E})\). In our scheme, three groups of experiments corresponding to the circuits in Fig. 7(a) were conducted, where \(U\), \(V\) and \(W\) are generated randomly, while two pairs of \(V_{i}\) and \(W_{i}\) (\(i\in\{1,2\}\)) are calculated based on \(V\) and \(W\). See Appendix A for more details of unitary operators \(U\), \(V\), \(W\), \(V_{i}\) and \(W_{i}\) (\(i\in\{1,2\}\)). We pack each unitary operator of the three circuits into an individual GRAPE pulse. The input states are selected from the input state set \(\mathcal{B}\), after which the QST process is performed to reconstruct the output state. The theoretical and experimental output states are illustrated in Appendix A. The state fidelities \(F_{s}\) obtained range from 97.28% to 99.96%, where the lower fidelities mainly come from the 4-qubit-composed superchannel circuit whose unitary operators \(V\) and \(W\) are more complicated, thus the total pulse length of the circuit from preparing pseudo-pure state to making measurements reaches 165 ms, causing larger decoherence effect of \(T_{2}\). As before, the theoretical and experimental output states of 1000 input states, which are sampled based on the spherical Fibonacci lattice, under the original superchannel \(\mathcal{S}_{g}\) and the two extreme superchannels \(\hat{\mathcal{S}}_{g}^{1}\) and \(\hat{\mathcal{S}}_{g}^{2}\) presented in Fig. 7(b), separately. The output states of the sampled input states under the general superchannel \(\hat{\mathcal{S}}_{g}\) are illustrated in blue dots, while that under extreme superchannels \(\hat{\mathcal{S}}_{g}^{1}\) and \(\hat{\mathcal{S}}_{g}^{2}\) are illustrated in red and purple dots. The theoretical and experimental output states are in good agreement (the comparatively bad agreement of blue dots corresponding to simulating \(\hat{\mathcal{S}}_{g}\) can be attributed to the decoherence effect of \(T_{2}\)), indicating a well simulation of each superchannel in experiment. To demonstrate the implementation of the convex-decomposition of a random superchannel \(\hat{\mathcal{S}}_{g}\) in experiment, we reconstruct the averaged states of the experimental outputs of our selected inputs under the two extreme superchannels \(\hat{\mathcal{S}}_{g}^{1}\) and \(\hat{\mathcal{S}}_{g}^{2}\), and compare them with their corresponding experimental output states under the original superchannel \(\hat{\mathcal{S}}_{g}\), as shown in Fig. 7(c). Besides, we calculated the fidelities between the corresponding states in the top panel and the bottom panel of Fig. 7(c), which are 98.86%, 97.29%, 98.45% and 98.99%, respectively. Fur Figure 7: (a) Superchannel convex-decomposition scheme. A random superchannel simulated in 4 qubits can be decomposed into two 3-qubit extreme superchannels. (b) Theoretical (top panel) and experimental (bottom panel) output states in the Bloch sphere. The green, blue, red, purple and orange dots represent the sampled input states \(\rho_{\text{in}}\), corresponding output states \(\rho_{\text{out}}\), \(\rho_{\text{out}}^{1}\), \(\rho_{\text{out}}^{2}\) and \((\rho_{\text{out}}^{1}+\rho_{\text{out}}^{2})/2\), respectively. (c) Experimental output density matrices in our superchannel convex-decomposition scheme. 
(Top panel) The output states \(\rho_{\text{out}}\) of \(\hat{\mathcal{S}}_{g}\) in the \(|z\rangle\), \(|\bar{z}\rangle\), \(|x\rangle\) and \(|y\rangle\) basis; the bottom panel shows the average \((\rho_{\text{out}}^{1}+\rho_{\text{out}}^{2})/2\) of the output states under its two decomposed superchannels. Furthermore, the theoretical and experimental averaged output states of the 1000 sampled input states corresponding to the two extreme superchannels \(\hat{\mathcal{S}}_{g}^{1}\) and \(\hat{\mathcal{S}}_{g}^{2}\) are shown as orange dots in Fig. 7(b), as a comparison with the original output states (blue dots). The mismatch between them in experiment (blue and orange dots in the bottom panel of Fig. 7(b)) is mainly concentrated on the data in the Bloch sphere along the positive \(x\)-axis and negative \(z\)-axis, which is dominated by our imperfect experimental realization of the original superchannel \(\hat{\mathcal{S}}_{g}\). ## IV Summary In this paper, we experimentally realized an algorithm-assisted NMR simulator of qubit superchannels. We demonstrated our simulator with three simulation tasks, namely a random extreme superchannel, a dephasing superchannel and a superchannel convex-decomposition scheme. Furthermore, our experimental results also show that the superchannel achieved by convex decomposition has a higher fidelity than that achieved directly, owing to the smaller number of qubits involved in the circuit realization. The experimentally simulated superchannels are in very good agreement with their theoretical counterparts. Our experiment verifies the feasibility of the convex channel-decomposition algorithm, suggesting its promise for higher-dimensional cases and other tasks. ## Appendix A Details of experiments ### Random channel In our scheme, a random channel \(\mathcal{E}\) is implemented by a two-qubit unitary with one qubit as the work qubit and the other as an ancilla; a randomly chosen \(U\) is given in Eq. (1). A circuit for simulating a random channel with our 4-qubit NMR quantum processor can be found in Fig. 8(a), where \(C_{2}\) serves as the work qubit and \(C_{1}\) as the ancilla. The entire system is first initialized in the pseudo-pure state \(|0000\rangle\) from the thermal equilibrium state with the spatial averaging technique, and \(U\) is decomposed into a sequence of single-qubit gates and two-qubit controlled gates with the cosine-sine decomposition (CSD) scheme [55]. \[U=\begin{bmatrix}-0.0109+0.1787i&-0.2558-0.1492i&-0.2519+0.2656i&-0.4561-0.7336i\\ -0.6709-0.2262i&0.1270-0.1671i&0.4623-0.3549i&-0.3017-0.1551i\\ -0.1406-0.5049i&-0.2872-0.5841i&-0.4014+0.1515i&-0.0718+0.3353i\\ 0.3717-0.2321i&-0.6624+0.0766i&0.1922-0.5526i&0.0420-0.13889i\end{bmatrix}. \tag{1}\] The output density matrices of the input state bases under the random channel \(\mathcal{E}\) in theory and experiment are presented in Fig. 8(b). The state fidelities between the theoretical and experimental density matrices of \(|z\rangle\), \(|\bar{z}\rangle\), \(|x\rangle\) and \(|y\rangle\) under the channel \(\mathcal{E}\) are 99.98%, 99.83%, 99.94% and 99.91%, respectively.
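To make the ancilla-based construction above concrete, the following minimal NumPy sketch (ours, not part of the experimental code) builds the single-qubit channel \(\mathcal{E}(\rho)=\mathrm{Tr}_{a}[U(\rho\otimes|0\rangle\langle 0|_{a})U^{\dagger}]\) induced by a two-qubit unitary \(U\) acting on the work qubit and an ancilla prepared in \(|0\rangle\). The tensor-product ordering (work \(\otimes\) ancilla) and the QR-based random unitary are illustrative assumptions; the matrix of Eq. (1) could be substituted for \(U\).

```python
import numpy as np

def kraus_from_dilation(U):
    """Kraus operators of E(rho) = Tr_a[ U (rho x |0><0|_a) U^dagger ], assuming
    the 4x4 unitary U acts on (work qubit x ancilla) in that tensor order."""
    ket0 = np.array([[1.0], [0.0]], dtype=complex)
    kraus = []
    for i in range(2):
        bra_i = np.zeros((1, 2), dtype=complex)
        bra_i[0, i] = 1.0
        K_i = np.kron(np.eye(2), bra_i) @ U @ np.kron(np.eye(2), ket0)
        kraus.append(K_i)
    return kraus

def apply_channel(kraus, rho):
    """rho -> sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

# Illustrative random unitary (QR of a complex Gaussian matrix); replace with Eq. (1) if desired.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(A)

kraus = kraus_from_dilation(U)
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(2))  # trace preservation
rho_z = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)          # |z> input state
print(apply_channel(kraus, rho_z))
```

The trace-preservation check reflects the completeness relation \(\sum_{i}K_{i}^{\dagger}K_{i}=\mathbb{1}\) that any such dilation must satisfy.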
### Extreme superchannel To simulate a random extreme superchannel, two random unitary pre-\(U\) (V) and post-\(U\) (W) operations of the superchannel \(\hat{\mathcal{S}}\) are generated as follows, \[V=\begin{bmatrix}0.1310+0.0006&0.1140-0.2325&-0.00486+0.1050i&-0.0856-0.1994 &-0.1458i&0.0073-0.3811i&0.2214-0.0094i&0.3471-0.0486i\\ -0.1640i&-0.2807i&-0.2805&-0.0083&-0.1076i&-0.2529+0.118i&-0.0709-0.7160i&0.112 2-0.0094i&0.3491-0.0261i\\ -0.0723-0.385i&-0.4630i&-0.40267i&0.0940i&0.4045&0.237+0.228i&0.1648-0.0260i&0.5 0856-0.2133i&-0.0708-0.3280i&-0.4028-0.2614i\\ -0.0093-0.1185i&0.1771+0.0104i&0.4845i&-0.19746i&-0.2656i&-0.08898i&0.2056-0.2 0488i&-0.2291i&0.0120i&-0.4020-0.2380i&0.2271-0.0083i\\ -0.0828-0.2195i&-0.4632-0.2845i&0.5166-0.0226i&0.2166-0.0202i&0.2166-0.0202i&0.2 166-0.0202i&0.2166-0.0202i&0.2166-0.0202i&0.2166-0.0202i&0.2166-0.0202i&0.2166-0.0 1427i\\ -0.158-0.4899i&0.2062i&0.1040i&0.0025i&-0.2152i&-0.0253i&-0.1625i&0.1261-0.02 024i&0.0844-0.2291i&-0.1569-0.0208i&-0.1625-0.1635i&-0.1640-0.1666i\\ -0.2245+0.4915i&0.3890-0.1167i&0.4806+0.1123i&0.2361-0.2361i&-0.2361-0.2361i&-0. 23675i&0.1299-0.1106i&0.1360+0.4275i&-0.2195-0.1139i\\ -0.2088+0.5161i&0.0955+0.2051i&0.2211+0.1569i&-0.3256-0.1228i&-0.2580-0.2259i&0.1189+0.1203i& 0.1400+0.4275i&-0.2195-0.1139i\end{bmatrix}, \tag{2}\] \[W=\begin{bmatrix}0.4488+0.3583i&-0.1516-0.19240i&0.1703-0.2470i&0.1753-0.1283i&0.0 741-0.4186i&-0.0073-0.2280i&-0.5173+0.0333i&-0.4073-0.0107i\\ 0.2915+0.1024i&0.1703+0.0168i&0.3982i&0.0036+0.0003i&-0.1106+0.3509i&0.2300+0.2 0361i&-0.1741+0.0093i&0.3075-0.2435i\\ -0.0802+0.2970i&0.0088i&0.0165-0.1057i&0.1762i&-0.0262-0.0023i&0.1195+0.1724i&-0. 05175i&-0.1080-0.1457i&0.3081-0.0271i\\ -0.0329+0.1596i&0.1014+0.0449i&-0.0277-0.1276i&0.0088-0.2170i&0.0064i&0.30113+0.1864i&-0. 03064i&-0.03045+0.1228i&-0.21016i\\ 0.19313i&0.3048i&-0.1178-0.5217i&0.2078-0.0788i&0.2238-0.3370i&0.3159+0.2880i&-0.0898 +0.1534i&0.2083-0.2025i&-0.1557i\\ 0.3286+0.0032i&-0.10400i&0.3865-0.0756i&-0.30556i&-0.02702i&0.20706+0.0031i&-0.1032 -0.2259&0.3600i&-0.20703i\\ -0.0113-0.0913i&0.6914+0.1059i&0.2504-0.3436i&0.0275-0.1592i&0.2433-0.3037i&-0.134 -0.0581i&-0.0886i&0.1058i\\ -0.0723+0.3255i&-0.1613-0.0762i&-0.0262-0.0413i&0.5026-0.0413i&0.5026-0.2036i&0.22 655+0.2233i\end{bmatrix}. \tag{3}\] ### Dephasing superchannel To simulate a random dephasing superchannel, random unitary pre-\(U\) and post-\(U\) operations of the superchannel \(\mathcal{S}_{d}\), controlled-\(V_{i}\) and controlled-\(W_{i}\) (\(i\in\{1,2\}\)), are generated. The generated \(V_{i}\) and \(W_{i}\) are as follows, \[V_{1}=\begin{bmatrix}0.2987+0.2302i&0.4874-0.2877i&0.1190+0.2665i&-0.6694+0.0 645i\\ 0.0898+0.2401i&-0.5247-0.5788i&-0.3600+0.2082i&-0.0291-0.3876i\\ 0.7734-0.1597i&0.1368+0.1863i&-0.5275-0.1207i&0.1707+0.0293i\\ -0.4084+0.0403i&-0.0644+0.1087i&-0.6699+0.0164i&-0.3253+0.5107i\end{bmatrix}, \tag{10}\] \[V_{2}=\begin{bmatrix}-0.5089+0.0961i&-0.3067+0.4412i&0.0754+0.5513i&-0.2788-0. 2359i\\ -0.1870-0.2108i&0.2685+0.6255i&-0.5304-0.2773i&0.0053+0.3145i\\ 0.5061+0.0705i&-0.0278-0.1007i&-0.4042+0.3316i&-0.6458+0.1939i\\ 0.5896-0.2091i&0.1698+0.4562i&0.0943+0.2234i&0.2692-0.4904i\end{bmatrix}, \tag{11}\] \[W_{1}=\begin{bmatrix}-0.2919+0.0605i&-0.6294-0.6274i&0.1046-0.0290i&0.3088+0. 
1191i\\ 0.2956-0.0233i&-0.0973-0.4107i&-0.3067-0.1554i&-0.3989-0.6758i\\ -0.3639+0.7948i&0.1587-0.0780i&-0.2993+0.2414i&-0.2318+0.0552i\\ -0.0806-0.2295i&0.0100+0.0259i&-0.1371+0.8387i&0.3037-0.3545i\end{bmatrix}, \tag{12}\] Figure 8: (a) Quantum circuit for simulating a random extreme superchannel \(\mathcal{E}\) in experiment. (b) Theoretical (top panel) and experimental (bottom panel) output density matrices of input states \(|z\rangle\), \(|\bar{z}\rangle\), \(|x\rangle\) and \(|y\rangle\) under \(\mathcal{E}\). \[W_{2}=\begin{bmatrix}-0.6174+0.3184i&0.4179-0.3318i&0.2918+0.0792i&0.3384+0.1635i \\ -0.5162-0.4553i&-0.5212-0.1814i&0.2151-0.1841i&-0.0083-0.3761i\\ -0.1403+0.0197i&-0.4111-0.4270i&-0.5477+0.1773i&-0.0179+0.5448i\\ 0.1005+0.1163i&-0.1773-0.1668i&0.6157+0.3434i&-0.6002+0.2446i\end{bmatrix}\,. \tag{10}\] ### Superchannel convex-decomposition To simulate a general superchannel \(\hat{\mathcal{S}}_{g}\) and its decomposed superchannels \(\hat{\mathcal{S}}_{g}^{1}\) and \(\hat{\mathcal{S}}_{g}^{2}\) in Fig. 7(a), the input unitary channel \(U\), pre-\(U\) operator V, and post-\(W\) are generated randomly as follows. Then two pairs of pre-\(U\) operator \(V_{i}\) and post-\(U\) operator \(W_{i}\) corresponding to each decomposed superchannel are calculated (\(i\in\{1,2\}\)). \[U=\begin{bmatrix}0.6196+0.2891i&0.5199-0.5120i\\ -0.7191+0.1236i&0.1266-0.6720i\end{bmatrix}, \tag{11}\] \[V=\begin{bmatrix}-0.3084-0.1730i&-0.1958-0.01514i&-0.1071+0.4419i&-0.23065-0.0894i&0.0582-0.0890i&0.3907+0.4084i&0.129-0.0737i&0.2368+0.0826i\\ -0.2444+0.2554i&0.3602-0.3986i&-0.38-0.0044i&-0.0070+0.1590i&-0.3934-0.1004i&-0.25 65-0.1723i&0.2516+0.1200i&0.2637-0.1417i\\ -0.2047+0.3984i&-0.2028i&-0.1016i&0.3096-0.2214i&0.4099-0.1271i&-0.0161i&-0.02 099-0.0219i&-0.0276-0.3607i\\ -0.3144i&-0.2029+0.4203i&-0.2084i&-0.18054i&0.2044i&-0.3302+0.2253i&-0.1036 0-0.4984i&-0.2484-0.1617i\\ 0.147+0.3024i&0.209+0.2021i&-0.18386i&-0.0384i&0.1726-0.2488i&-0.1282i&0.04 28-0.1334i&0.2998-0.2385i&0.3802-0.2979i\\ -0.3514i&-0.0089-0.0604i&-0.23045i&0.375-0.0066i&0.225-0.1991i&0.0744i&-0.1929 -0.1365i&0.4313i&0.2998-0.2276i&0.458-0.0710i\\ -0.0091+0.5352i&0.3516i&-0.3919i&0.1069i&-0.0112i&-0.00836i&0.436-0.2723i&0.2 949-0.1494i&-0.1909-0.1884i\\ \end{bmatrix}, \tag{12}\] \[W=\begin{bmatrix}0.2496+0.0484i&0.2583+0.0494i&-0.142-0.1233i&0.0349-0.243i&-0.059 0-0.240i&0.1259+0.1864i&0.2037-0.0825i&-0.0705+0.048i\\ -0.2257+0.0388i&-0.3507+0.124i&-0.0042i&-0.0776i&-0.0758i&-0.23516i&0.1486i&0.1 249-0.1850i&-0.2876i\\ -0.2288+0.1095i&0.1179-0.1236i&0.3825i&0.0684i&0.1249-0.1523i&0.1529+0.1523i& 0.1529+0.1523i&0.1529+0.2004i\\ -0.1801+0.1423i&0.3123i&0.0884i&0.166i&-0.1484i&0.0043i&-0.0083-0.1529i&-0.1529+0.15 29+0.0054i&-0.083-0.1044i\\ -0.0086+0.214i&0.0096i&0.2486i&0.2666i&0.26666i&0. 
\[W_{2}=\begin{bmatrix}-0.2271+0.271&-0.1472+0.254&0.321+0.7788&0.1722-0.0898&-0.162-0.4995&-0.1554+0.4909&0.1569-0.0884&-0.3007-0.00857\\ -3.0455+0.0631&0.3686+0.004&0.1588-0.004&-0.2571+0.0059&-0.209-0.117&0.1751-0.4028&0.1468-0.1054&-0.0042-0.57357\\ -0.2736-0.0384&-0.1468+0.257&0.2583+0.184&-0.2539+0.177&-0.42059+0.0066&0.11582-0.133&-0.3399+0.3594&-0.2526-0.14480\\ 0.3288+0.1419&-0.1412+0.3430&-0.2912+0.2163&-0.0038&0.2087&-0.3527&-0.00572&-0.0076-0.3130&-0.3292+0.1002&-0.0044+0.00404\\ 0.3455+0.0591&-0.0420&-0.4017&-0.4162&-0.4027&0.0181+0.015&-0.0044-0.1127&0.3086-0.3051&0.0177&-0.004611\\ 0.2699+0.2838&-0.3590+0.0575&0.2628&-0.5372&0.1758+0.1070&-0.2688&-0.038&-0.7846&0.1909+0.0368&0.1392-0.03838\\ -0.3406-0.2795&-0.1548&-0.2692&0.1115&-0.2248&0.0133-0.006&-0.1492-0.105&-0.209-0.0522&-0.00413-0.0089&-0.0042-0.43223\\ -0.1051+0.3537&-0.2949&-0.2444&-0.2565&-0.2551&0.1947&+0.0089&-0.4443&0.15484&0.0274-0.0227&-0.4566+0.0088&0.2568-0.2688\\ \end{bmatrix} \tag{100}\] The output density matrices of the input state bases in the general superchannel \(\hat{\mathcal{S}}_{g}\) scheme in theory and experiment are presented in Fig. 9. The state fidelities between the theoretical and experimental density matrices of \(|z\rangle\), \(|\bar{z}\rangle\), \(|x\rangle\) and \(|y\rangle\) in this scheme are 98.12%, 97.95%, 97.28% and 98.76%, respectively. Those of its decomposed superchannels \(\hat{\mathcal{S}}_{g}^{1}\) and \(\hat{\mathcal{S}}_{g}^{2}\) are presented in Fig. 10 and Fig. 11, respectively. The corresponding fidelities in the \(\hat{\mathcal{S}}_{g}^{1}\) scheme are 99.74%, 99.77%, 99.28% and 99.16%, and those in the \(\hat{\mathcal{S}}_{g}^{2}\) scheme are 99.65%, 99.17%, 99.96% and 99.72%. Figure 10: **Output density matrices of the input state set \(\mathcal{B}\) in the decomposed superchannel \(\hat{\mathcal{S}}_{g}^{1}\) scheme.** Figure 9: **Output density matrices of the input state set \(\mathcal{B}\) in the general superchannel \(\hat{\mathcal{S}}_{g}\) scheme.** ## Appendix B More examples of quantum superchannels ### Entanglement-assisted quantum communication A notable protocol developed before the emergence of superchannel theory is entanglement-assisted quantum communication [59]. A circuit for it is shown in Fig. 12, where Ent represents the pre-existing entangled state shared by Alice and Bob. This circuit takes the form of a superchannel, with the noise in the communication process as the input channel. The Ent and Encoding operations together form the pre-operation of the superchannel, while the Decoding operation is the post-operation, which may consume additional ancillary qubits. The entangled state \(|\text{Ent}\rangle\) is used as a resource and is assumed to be free from noise. For instance, a common resource is the ebit, which can be generated and distributed via a specific protocol; Alice and Bob then each need to store their qubits for later use. The flying qubits that are subsequently transmitted suffer from noise, hence requiring quantum error correction. As is well known, a large class of quantum codes is the entanglement-assisted error-correction codes, which possess some interesting features compared with codes without entanglement assistance [60; 61]. ### Noise-adapted quantum error correction Using superchannel theory, we here show a construction of error-correction codes that are noise-adapted.
A common noisy channel that exists in many experimental systems is the amplitude-damping (AD) channel, defined by the two Kraus operators \[K_{0}=\begin{pmatrix}1&0\\ 0&\sqrt{1-\lambda}\end{pmatrix},\ K_{1}=\begin{pmatrix}0&\sqrt{\lambda}\\ 0&0\end{pmatrix} \tag{47}\] with \(\lambda\in[0,1]\) the damping parameter that encodes the evolution time. Figure 11: **Output density matrices of the input state set \(\mathcal{B}\) in the decomposed superchannel \(\hat{\mathcal{S}}_{g}^{2}\) scheme.** Figure 12: Circuit for entanglement-assisted quantum communication. Our codes are \(\lambda\)-dependent and approximate, similar to some other codes in the literature [62, 63, 64, 65, 66, 67]. In particular, we have an optimization algorithm that can find a code which improves the quality of a noisy qubit. Adapted to our NMR simulator, we use one qubit as the ancilla to generate the AD noise and three qubits as the code. We assume that only one qubit suffers the AD noise, so that we can approximately treat our codes as distance-3 against the AD noise. Under this assumption, we theoretically consider three cases, as in Fig. 13: the 1st qubit is noisy, the 2nd qubit is noisy, or with equal probability one of them is noisy; the 3rd qubit, as part of the ebit, is assumed to be noise-free. In Fig. 14, we show the case when either one of the first two qubits is noisy. The black dashed line is the optimal result we find numerically, which is the average of the green and red lines. The fidelity is the entanglement fidelity between the error-corrected noisy channel and the perfect identity channel. Note that when the averaged fidelity is optimal, the fidelity for a single noise location (green or red) need not be optimal. For each given \(\lambda\), a code defined by the pair of encoding \(V\) and decoding \(W\) operations is found. We can see that our codes indeed suppress the AD noise quite well, and the fidelity is high especially when \(\lambda\) is small. The example above shows that superchannels could be a promising framework for designing error-correction codes. For better and more practical encodings that construct real logical qubits, we need to consider larger systems, which is left for future investigation. Figure 14: The entanglement fidelity with (black line) and without (blue dashed line) error correction as a function of the damping rate \(\lambda\). Figure 13: The ebit-assisted communication when AD noise occurs in the 1st qubit (a) and the 2nd qubit (b). ## Acknowledgements H. L., K. W., and S. W. contributed equally to this work. We acknowledge the National Natural Science Foundation of China under Grants No. 12047503 and 12105343 (K. W., D.-S. W.), 12005015 (S. W.), 11974205 (H. L., S. W., F. Y., X. C., G.-L. L.), and the National Key Research and Development Program of China (2017YFA0303700), the Key Research and Development Program of Guangdong province (2018B030325002), and the Beijing Advanced Innovation Center for Future Chip (ICFC) (H. L., S. W., F. Y., X. C., G.-L. L.).
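As a small numerical companion to the noise-adapted construction in Appendix B (this sketch is ours and is not part of the experiment; the function names are arbitrary), the basic properties of the AD channel can be checked directly from the Kraus operators given above.

```python
import numpy as np

def ad_kraus(lam):
    """Kraus operators of the amplitude-damping channel defined above."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - lam)]], dtype=complex)
    K1 = np.array([[0.0, np.sqrt(lam)], [0.0, 0.0]], dtype=complex)
    return K0, K1

def apply_channel(kraus, rho):
    """rho -> sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

lam = 0.3
K0, K1 = ad_kraus(lam)
# Trace preservation: K0^dagger K0 + K1^dagger K1 equals the identity.
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

# Action on the |x> = (|0> + |1>)/sqrt(2) input state: the damping shrinks the coherences.
rho_x = 0.5 * np.ones((2, 2), dtype=complex)
print(apply_channel((K0, K1), rho_x))
```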
2305.05393
CaseEncoder: A Knowledge-enhanced Pre-trained Model for Legal Case Encoding
Legal case retrieval is a critical process for modern legal information systems. While recent studies have utilized pre-trained language models (PLMs) based on the general domain self-supervised pre-training paradigm to build models for legal case retrieval, there are limitations in using general domain PLMs as backbones. Specifically, these models may not fully capture the underlying legal features in legal case documents. To address this issue, we propose CaseEncoder, a legal document encoder that leverages fine-grained legal knowledge in both the data sampling and pre-training phases. In the data sampling phase, we enhance the quality of the training data by utilizing fine-grained law article information to guide the selection of positive and negative examples. In the pre-training phase, we design legal-specific pre-training tasks that align with the judging criteria of relevant legal cases. Based on these tasks, we introduce an innovative loss function called Biased Circle Loss to enhance the model's ability to recognize case relevance in fine grains. Experimental results on multiple benchmarks demonstrate that CaseEncoder significantly outperforms both existing general pre-training models and legal-specific pre-training models in zero-shot legal case retrieval.
Yixiao Ma, Yueyue Wu, Weihang Su, Qingyao Ai, Yiqun Liu
2023-05-09T12:40:19Z
http://arxiv.org/abs/2305.05393v1
# CaseEncoder: A Knowledge-enhanced Pre-trained Model for Legal Case Encoding ###### Abstract. Legal case retrieval is a critical process for modern legal information systems. While recent studies have utilized pre-trained language models (PLMs) based on the general domain self-supervised pre-training paradigm to build models for legal case retrieval, there are limitations in using general domain PLMs as backbones. Specifically, these models may not fully capture the underlying legal features in legal case documents. To address this issue, we propose CaseEncoder, a legal document encoder that leverages fine-grained legal knowledge in both the data sampling and pre-training phases. In the data sampling phase, we enhance the quality of the training data by utilizing fine-grained law article information to guide the selection of positive and negative examples. In the pre-training phase, we design legal-specific pre-training tasks that align with the judging criteria of relevant legal cases. Based on these tasks, we introduce an innovative loss function called _Biased Circle Loss_ to enhance the model's ability to recognize case relevance in fine grains. Experimental results on multiple benchmarks demonstrate that CaseEncoder significantly outperforms both existing general pre-training models and legal-specific pre-training models in zero-shot legal case retrieval. The source code of CaseEncoder will be released when the paper is published.
Experimental results on multiple legal case retrieval datasets
demonstrate that CaseEncoder significantly outperforms the existing general pre-training models and legal-specific pre-training models. We also present case document embedding visualizations to showcase the potential of CaseEncoder in downstream tasks such as charge prediction and article prediction. The source code of CaseEncoder will be released when the paper is published. ## 2. Task Definition Given a query case \(q\), the task aims to retrieve relevant cases from a candidate list \(L=\{c_{1},c_{2},...,c_{M}\}\), where \(M\) is the size of \(L\), and rank them by their relevance to \(q\). Each candidate case document in the list has three main components:
1. _Facts_ are objective fact statements confirmed by the court based on the evidence provided by the defendant and plaintiff. These statements typically answer questions such as where, when, and how the case occurred.
2. _Holding_ is the judge's opinion on the key arguments of the case. It explains the reasoning behind the judge's decision.
3. _Decision_ contains the final judgment on the defendant, including the charge, sentence, and articles involved. This component is the official outcome of a case.
In most legal case retrieval scenarios, \(q\) only contains the _Facts_ component, while each candidate case includes title, meta information, _Facts_, _Holding_, _Decision_, and related law articles. In this paper, we focus primarily on retrieving cases under criminal law. ## 3. Method This section outlines the design and implementation of CaseEncoder. Figure 1 illustrates the overall framework of the model. We begin by introducing the fine-grained case sampling method used for data preparation in Section 3.1. Then, in Section 3.2, we describe the pre-training tasks proposed in CaseEncoder. ### Fine-grained Case Sampling Recent studies on legal-oriented PLMs do not focus on understanding legal texts in a bottom-up manner. In other words, most PLMs simply replace the general-domain training corpus with legal texts without considering the legal correlation between these texts. This is mainly because the annotation of legal cases is time-consuming and requires much expertise, making it challenging to collect large-scale labeled data. On the other hand, in contrastive learning, which has proven to be effective in the pre-training phase, data needs to be sampled in advance as positive and negative cases. Unlike the general domain, in the legal domain it is not sufficient to sample positive and negative legal cases simply based on the raw information in documents (e.g., charges, law articles, etc.). This is because, in a real legal scenario, a judge decides a case based on how well the _key circumstances_ and _key elements_ of the case match the constituent elements (the fine-grained interpretation of the law article). Therefore, this paper proposes a fine-grained sampling method for legal case documents with reference to the specific process by which judges decide cases. By doing so, the sampled positive and negative cases can match the manually labeled relevance as much as possible for the subsequent contrastive learning task. Generally, a law article covers multiple branches. Each branch describes a certain situation applicable to this article. For example, the article shown in Figure 2 has seven branches in total. In the third act of this article, 'engaging in school bus service' and 'engaging in passenger transportation service' belong to different branches even if they appear in the same sentence.
That is to say, phrases in one article have not only a sequential relationship but also a parallel relationship. The example phrases above are in a parallel relationship. Phrases are combined in a permutation way to generate branches without ambiguity. We call the branches of an article _unambiguous articles_. Any case that matches any of the unambiguous articles belongs to this article. Obviously, cases under the same article may belong to different unambiguous articles. Therefore, the relevance between two cases cannot be directly determined by a reason like 'they all belong to Article 133-1'. In this paper, we aim to identify more fine-grained article information (i.e., unambiguous articles) for each case as a preliminary step for finding positive and negative cases in our case sampling algorithm. To this end, we divide all law articles into fine-grained information as illustrated in Figure 2: First, we split an original article into unambiguous articles. Then, legal keywords are manually annotated in each unambiguous article under the guidance of legal experts. Finally, we extract the annotated keywords to form a word-level sequence representing the corresponding unambiguous article. All such sequences constitute an unambiguous article corpus \(C_{a}=\{seq_{1},seq_{2},...,seq_{T}\}\), where \(T\in\mathbb{N}^{+}\) is the number of sequences extracted from one article. Then, for cases involving the same crime, our case sampling strategy can recognize their relevance to each other via the term-level similarity to the sequences in \(C_{a}\). Specifically, given a case, the first step is to extract its _Holding_ from the case document, because _Holding_ contains the reasons for the final judgment, which is highly related to the articles of the case. Let the extracted _Holding_ be denoted as \(h\); we can compute a similarity vector by: \[v=[\text{BM25}(seq_{1},h),\text{BM25}(seq_{2},h),...,\text{BM25}(seq_{T},h)]\in\mathbb{R}^{T} \tag{1}\] where BM25 is the traditional BM25 (Kumar et al., 2017) model initialized with the corpus \(C_{a}\), and BM25(seq\({}_{i}\), \(h\)) denotes the BM25 score between the keyword sequence of the \(i\)-th unambiguous article and \(h\). \(v\) can be intuitively interpreted as the feature of a case at the law-article level. Next, given two cases \(c_{i}\) and \(c_{j}\), the legal-specific relevance weight \(w_{ij}=\text{rel}(c_{i},c_{j})\) can be computed as: \[\text{rel}(c_{i},c_{j})=\begin{cases}1&\text{argmax}(v_{i})=\text{argmax}(v_{j})\\ \text{max}(\cos(v_{ik},v_{jk}))&Otherwise\end{cases} \tag{2}\] where \(A_{i}\) is the set of articles involved in case \(c_{i}\), \(\cos\) is the cosine similarity score, and \(k\) indexes the articles in \(A_{i}\). Figure 1. The overall framework of CaseEncoder. In other words, the relevance weight between two cases is mainly determined by the extent to which their related law articles overlap. If two cases \(c_{i}\) and \(c_{j}\) are both most similar to the same unambiguous article, then they are considered the most relevant cases. Otherwise, the relevance weight is decayed, depending on the cosine similarity between their similarity vectors \(v_{i}\) and \(v_{j}\). Note that a maximum function is used in Equation 2, as \(\text{rel}(c_{i},c_{j})\) takes the value of the most relevant scenario over all articles in \(A_{i}\cap A_{j}\). Finally, for each case \(c_{i}\) in the legal case corpus, we can sample a legally explainable positive case \(c_{i+}\) according to \(w\).
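To illustrate the sampling computation just described, here is a minimal Python sketch (ours, not the authors' implementation) of Eqs. (1) and (2): a plain BM25 scorer stands in for the Gensim BM25 model, whitespace tokenisation and the toy keyword sequences are assumptions, and the relevance weight is shown for the simplified case in which each case involves a single law article, so the per-article maximum in Eq. (2) reduces to one cosine similarity.

```python
import math
from collections import Counter
import numpy as np

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """BM25 scores of the tokenised Holding h against every keyword
    sequence in the unambiguous-article corpus C_a (Eq. (1))."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    df = Counter(t for d in docs_tokens for t in set(d))
    idf = {t: math.log(1 + (N - n + 0.5) / (n + 0.5)) for t, n in df.items()}
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if tf[t] == 0:
                continue
            s += idf[t] * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return np.array(scores)

def relevance(v_i, v_j):
    """Relevance weight of Eq. (2), simplified to the single-article case:
    1 if both cases best match the same unambiguous article, otherwise
    the cosine similarity between their similarity vectors."""
    if int(np.argmax(v_i)) == int(np.argmax(v_j)):
        return 1.0
    denom = np.linalg.norm(v_i) * np.linalg.norm(v_j) + 1e-12
    return float(np.dot(v_i, v_j) / denom)

# Toy example with hypothetical keyword sequences and Holdings.
corpus = [["school", "bus", "overload"], ["passenger", "transport", "overload"]]
h_i = "driver overload school bus".split()
h_j = "overload passenger transport vehicle".split()
v_i, v_j = bm25_scores(h_i, corpus), bm25_scores(h_j, corpus)
print(relevance(v_i, v_j))
```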
The input data for the pre-training phase are in the form of quadruples \((c_{i},c_{i+},v_{i},v_{i+})\). ### Legal-specific Pre-training Task Since CaseEncoder is a law-oriented pre-trained model, we aim to integrate legal knowledge into the design of the pre-training tasks, so that after training the model can understand case documents not only at the semantic level but also at the legal-concept level. To this end, in this paper we refer to the judging criteria of relevant cases to design our pre-training tasks. As demonstrated by Ma et al. (Ma et al., 2018), two cases are relevant if they satisfy two requirements: high similarity between their _key circumstances_, and high similarity between their _key elements_. Specifically, _key circumstances_ refer to significant case descriptions, while _key elements_ focus more on the consistency with law articles and represent the legal-level abstraction of _key circumstances_. In summary, a case is considered relevant to another when the case description and the abstracted legal concept are both relevant. Following the idea of the judging criteria, two pre-training tasks are adopted in this paper. The first pre-training task is the masked language modeling (MLM) task, which enables the model to capture the regular semantic-level meaning of case descriptions. As discussed in Devlin et al. (Devlin et al., 2017), Liu et al. (Liu et al., 2017), and Ma et al. (Ma et al., 2018), MLM contributes to producing embeddings with contextual information. Such embeddings are beneficial to the representation of _key circumstances_ in legal cases. In detail, we only select _Facts_ in a case document and randomly mask 15% of the tokens for MLM, because _key circumstances_ are all included in _Facts_. The masked text is then fed into CaseEncoder to predict the masked tokens based on the surrounding unmasked tokens. The MLM loss function is defined as: \[\mathcal{L}_{MLM}=-\sum_{x^{\prime}\in m(\mathbf{x})}\log p\left(x^{\prime} \mid\mathbf{x}_{\mid m(\mathbf{x})}\right) \tag{4}\] where \(\mathbf{x}\) is the text in _Facts_, \(m(\mathbf{x})\) is the set of masked tokens, and \(\mathbf{x}_{\mid m(\mathbf{x})}\) is the set of unmasked tokens. The second pre-training task is a fine-grained contrastive learning task that utilizes the information from the quadruples \((c_{i},c_{i+},v_{i},v_{i+})\) obtained in Section 3.1. The contrastive learning task in previous work (Chen et al., 2017) trains a model using augmented positive cases and regards the rest of the cases in the same batch as negatives. However, in the legal domain, the relevance scale is more fine-grained. One legal case can be partially relevant to another, and the extent of relevance is mostly determined by the previously mentioned _key elements_. Therefore, a fine-grained contrastive learning task is proposed to enhance the recognition of _key elements_. Specifically, suppose the batch size is \(N\); since each quadruple contains two cases, the total number of cases in a batch is \(2N\). First, a multi-layer Transformer is adopted to obtain the representations of the \(2N\) cases. Then, we take the output of the [CLS] token in the last hidden layer of the Transformer as the case embedding: \(e_{1},e_{2},...,e_{2N}\), \(e_{i}\in\mathbb{R}^{H}\), where \(H\) is the hidden size.
Finally, the training objective of this fine-grained contrastive learning task, Biased Circle Loss (BCL), is defined as: \[\mathcal{L}_{\text{BCL}}=\log\Big[1+\sum_{j=1}^{L}\exp\big(\gamma\alpha_{n}^{j}(s_{n}^{j}-\Delta_{n})\big)\sum_{i=1}^{K}\exp\big(-\gamma\alpha_{p}^{i}(s_{p}^{i}-\Delta_{p})\big)\Big] \tag{5}\] \[\alpha_{p}^{i}=|e^{\gamma_{p}-1}\cdot O_{p}-s_{p}^{i}|,\quad\alpha_{n}^{j}=[s_{n}^{j}-O_{n}]_{+} \tag{6}\] where \(s_{p}\) and \(s_{n}\) are the cosine similarity scores of within-class and between-class case embeddings, respectively. \(\alpha_{p}\) and \(\alpha_{n}\) are parameters controlling the speed of convergence, where \(\alpha_{p}\) is determined by the legal-specific relevance weight in Equation 2. \(\gamma\), \(O_{p}\), \(O_{n}\), \(\Delta_{p}\), and \(\Delta_{n}\) are hyper-parameters: the scale factor, the optimum for \(s_{p}\), the optimum for \(s_{n}\), the between-class margin, and the within-class margin, respectively. In this way, CaseEncoder is trained to pull case embeddings in the same class closer and push case embeddings in different classes apart. The distance between case embeddings in the vector space depends on the values of \(s_{p}\) and \(s_{n}\). There are two main differences between our proposed loss function and Circle Loss (Dong et al., 2017): First, we expand the original loss function from a binary scenario to a multi-class scenario, since legal cases within a batch can be classified into multiple classes. Figure 2. An illustration of the process of collecting a fine-grained unambiguous article corpus. In detail, we consider any two cases to be in the same class if their legal-specific relevance weight is larger than a particular threshold \(W_{T}\), and such a rule is transitive across all cases in a batch. Therefore, the actual implementation of Equation 5 is more complicated because cases fall into multiple classes. Second, we add a weight parameter \(\alpha\) to \(\mathcal{L}_{\text{BCL}}\) to account for the extent of relevance between cases. By taking the legal-specific relevance weight into consideration, CaseEncoder is trained to discriminate between relevant cases in fine grains. Finally, CaseEncoder is optimized by a linear combination of the MLM loss and the BCL loss, where \(\lambda\) is a hyper-parameter: \[\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{MLM}}+\lambda\mathcal{L}_{\text{BCL}} \tag{7}\] ## 4. Experiments ### Experimental Settings **Evaluation.** As a retrieval task, all models in this paper are evaluated with three metrics: NDCG@10, NDCG@20, and NDCG@30. Additionally, to match the real legal scenario in which the query is a case to be judged without any associated information, all models are evaluated under the zero-shot setting. For a fair comparison, all models in this paper adopt a commonly used dual-encoder paradigm to retrieve legal cases. That is, both queries and candidate cases generate document-level embeddings with the same pre-trained encoder, and the final retrieved ranking list is sorted by the cosine similarity between embeddings. **Datasets and baselines.** This paper adopts three publicly available datasets: LeCaRD, CAIL2021-LCR, and CAIL2022-LCR. LeCaRD (Chen et al., 2021) is the first Chinese legal case retrieval benchmark and is widely used for the evaluation of retrieval models. CAIL2021-LCR 1 and CAIL2022-LCR 2 are two competition datasets. Since the experiment setting is zero-shot, both the training set and the test set are included for the evaluation in this paper.
CaseEncoder is compared to four PLMs: BERT-XS (Liu et al., 2021), Lawformer (Liu et al., 2021), RoBERTa (Chen et al., 2021), and RoBERTa-Legal. RoBERTa-Legal is a legal version of RoBERTa that conducts a secondary pre-training on legal data using the MLM task. Footnote 1: [http://cail.cipse.org/cn/task3.html?racedID=36call_tag=2022](http://cail.cipse.org/cn/task3.html?racedID=36call_tag=2022) **Parameter Settings.** In this paper, all PLM backbones are imported from Huggingface (Liu et al., 2021), with the learning rate set to \(1\times10^{-5}\). The BM25 algorithm is implemented with Gensim (Gensim, 2018). The hyper-parameters for CaseEncoder are: \(\gamma=16\), \(O_{p}=1.25\), \(O_{n}=0.25\), \(\delta_{p}=0.75\), \(\delta_{n}=0.25\), \(W_{T}=0.25\), and \(\lambda=\mathrm{e}\times10^{-6}\). All training and experiments are conducted on eight 32G NVIDIA V100 GPUs. ### Experimental Results The overall results are shown in Table 1. CaseEncoder outperforms the baselines in terms of all metrics on the three datasets, and most of the improvement is statistically significant. RoBERTa-Legal has the second-best overall performance and outperforms the original RoBERTa, which supports the finding of Gururangan et al. (Gururangan et al., 2019) that a secondary pre-training on domain-specific data is beneficial to the overall performance in the target domain. Lawformer is not as effective as reported in Xiao et al. (2021). One possible explanation is that the case documents adopted in this paper are relatively short, while Lawformer is trained specifically for long documents (4096 tokens). Besides, the effectiveness of BERT-XS (Liu et al., 2021) is limited, because it utilizes the Next Sentence Prediction (NSP) task for pre-training and its [CLS] token is not trained to represent document-level embeddings. These results demonstrate that our proposed CaseEncoder is effective for the retrieval task. To investigate the effectiveness of our proposed fine-grained sampling method and contrastive learning task, we further conduct a series of ablation studies. As shown in Table 2, removing the sampling method, the loss function, or the contrastive learning task leads to a performance decline. Therefore, all of these innovations contribute to the effectiveness of CaseEncoder. In addition, Table 2 also indicates that BCL contributes most to the improvement of CaseEncoder, while the effect of adding a traditional binary contrastive learning task is limited. ### Application in Downstream Tasks CaseEncoder is designed to effectively model case documents in the legal domain. In addition to the retrieval task, the document-level case embedding can also be utilized in other downstream tasks such as charge prediction. Figure 3 is an example of how CaseEncoder improves the quality of case embeddings for charge prediction. We randomly select 2500 cases for each criminal charge and generate their corresponding case embeddings. Then, we use t-SNE (Liu et al., 2021) to reduce the dimension of the case embeddings for visualization. Among all PLMs, CaseEncoder has the best ability to divide case embeddings into six clusters based on their charges, with only one pair of similar charges (Provocation and Public Brawl) having some overlap. By comparison, RoBERTa partially distinguishes between the six charges, but with more overlap than CaseEncoder. The performance of BERT-XS and Lawformer is limited, which is consistent with the retrieval results and the explanation in Section 4.2. Figure 3. The visualization of case embeddings generated by four PLMs in the zero-shot manner.
These visualizations demonstrate how the fine-grained legal knowledge embedded in CaseEncoder can be leveraged for a range of legal applications beyond case retrieval. ## 5. Conclusion This paper proposes CaseEncoder, a pre-trained encoder that utilizes fine-grained legal knowledge to enhance the representation of case document embeddings. Experiments and visual analysis demonstrate the effectiveness of case embeddings generated by CaseEncoder in zero-shot legal case retrieval and other downstream legal tasks such as charge prediction.
2307.01954
FEMDA: Une méthode de classification robuste et flexible
Linear and Quadratic Discriminant Analysis (LDA and QDA) are well-known classical methods but can heavily suffer from non-Gaussian distributions and/or contaminated datasets, mainly because of the underlying Gaussian assumption that is not robust. This paper studies the robustness to scale changes in the data of a new discriminant analysis technique where each data point is drawn by its own arbitrary Elliptically Symmetrical (ES) distribution and its own arbitrary scale parameter. Such a model allows for possibly very heterogeneous, independent but non-identically distributed samples. The new decision rule derived is simple, fast, and robust to scale changes in the data compared to other state-of-the-art method
Pierre Houdouin, Matthieu Jonckheere, Frederic Pascal
2023-07-04T23:15:31Z
http://arxiv.org/abs/2307.01954v1
# FEMDA: a robust and flexible classification method ###### Abstract Linear and Quadratic Discriminant Analysis (LDA and QDA) are well-known classical methods but can heavily suffer from non-Gaussian distributions and/or contaminated datasets, mainly because of the underlying Gaussian assumption that is not robust. This paper studies the robustness to scale changes in the data of a new discriminant analysis technique where each data point is drawn by its own arbitrary Elliptically Symmetrical (ES) distribution and its own arbitrary scale parameter. Such a model allows for possibly very heterogeneous, independent but non-identically distributed samples. The new decision rule derived is simple, fast and robust to scale changes in the data compared to other state-of-the-art methods. ## 1 Introduction Discriminant analysis is a widely used tool for classification tasks. The historical method [1] assumes that the data come from Gaussian distributions, and the decision rule consists in choosing the cluster that maximizes the likelihood of the data point. In the early 1980s, [2] and [3] studied the impact of contamination and _mislabelling_ on performance and concluded that the sensitivity is high. To address this problem, [4] suggested the use of M-estimators, which are robust to noise. More recently, [5] proposed modelling the data with a multivariate Student distribution, which is more flexible. In 2015, [6] even generalized this to Elliptically Symmetric (ES) distributions. This new method, called _Generalized_ QDA (GQDA), relies on the estimation of a threshold whose value varies with the shape of the distribution. Finally, [7] completed GQDA with the use of robust estimators, obtaining RGQDA. All these methods assume that the points of a given cluster come from the same distribution, an assumption that is not always valid. [8], inspired by [9], proposed an alternative method that makes no prior assumption on the distributions and allows each point to come from its own elliptically symmetric distribution. The points of a given cluster are not necessarily identically distributed, only drawn independently. The price of such flexibility lies in the characteristics of the clusters: within a given cluster, the points only share the same mean and the same scatter matrix. In this paper, we study the robustness of this new method to scale changes in the data. The model is presented in Section 2, Section 3 contains experiments on simulated data, Section 4 the experiments on real data, and conclusions and remarks are given in Section 5. ## 2 FEMDA: Flexible EM-inspired Discriminant Analysis **Statistical model:** Each data point \(\mathbf{x}_{i}\in\mathbb{R}^{m}\), \(i\in[1,n]\), is assumed to be drawn from an ES distribution that does not depend on the cluster. The mean and the scatter matrix depend on the cluster to which the point belongs, while the scale factor \(\tau_{i,k}\) may also depend on the observation.
The data point \(\mathbf{x}_{i}\) of cluster \(\mathcal{C}_{k}\), \(k\in[1,K]\), is drawn according to the following probability density: \[f(\mathbf{x}_{i})=A_{i}\left|\mathbf{\Sigma}_{k}\right|^{-\frac{1}{2}}\tau_{i,k}^{-\frac{m}{2}}g_{i}\left(\frac{(\mathbf{x}_{i}-\boldsymbol{\mu}_{k})^{T}\mathbf{\Sigma}_{k}^{-1}(\mathbf{x}_{i}-\boldsymbol{\mu}_{k})}{\tau_{i,k}}\right)\] **Log-likelihood and maximum-likelihood estimators:** Let \(\mathbf{x}_{1},...,\mathbf{x}_{n_{k}}\) be independent data points from cluster \(\mathcal{C}_{k}\); the log-likelihood of the sample can be written as: \[l(\mathbf{x}_{1},...,\mathbf{x}_{n_{k}})=\sum_{i=1}^{n_{k}}\log\left(A_{i}\left|\mathbf{\Sigma}_{k}\right|^{-\frac{1}{2}}t_{i,k}^{-\frac{m}{2}}s_{i,k}^{\frac{m}{2}}g_{i}(s_{i,k})\right) \tag{1}\] where \(t_{i,k}=(\mathbf{x}_{i}-\boldsymbol{\mu}_{k})^{T}\mathbf{\Sigma}_{k}^{-1}(\mathbf{x}_{i}-\boldsymbol{\mu}_{k})\) and \(s_{i,k}=t_{i,k}/\tau_{i,k}\). Maximizing each term of Equation (1) with respect to \(\tau_{i,k}\), with \(\boldsymbol{\mu}_{k}\) and \(\mathbf{\Sigma}_{k}\) fixed, leads to \[\hat{\tau}_{i,k}=\frac{(\mathbf{x}_{i}-\boldsymbol{\mu}_{k})^{T}\mathbf{\Sigma}_{k}^{-1}(\mathbf{x}_{i}-\boldsymbol{\mu}_{k})}{\arg\max_{t\in\mathbb{R}^{+}}\{t^{\frac{m}{2}}g_{i}(t)\}}.\] The assumptions on \(g_{i}\) ensure the strict positivity of the denominator. After replacing \(\tau_{i,k}\) by \(\hat{\tau}_{i,k}\) in Equation (1), we obtain: \[l(\mathbf{x}_{i})=\tilde{A}_{i}-\frac{1}{2}\log\left(\left|\mathbf{\Sigma}_{k}\right|\left((\mathbf{x}_{i}-\boldsymbol{\mu}_{k})^{T}\mathbf{\Sigma}_{k}^{-1}(\mathbf{x}_{i}-\boldsymbol{\mu}_{k})\right)^{m}\right)\] where \(\tilde{A}_{i}=\log(A_{i})+\log(\max_{t\in\mathbb{R}^{+}}\{t^{\frac{m}{2}}g_{i}(t)\})\). At this stage, one can see that the flexibility in the choice of the scale of the covariance matrix allows us to reduce the impact of the generating function \(g_{i}\) on the likelihood to a multiplicative constant independent of \(k\). Finally, maximum-likelihood estimation yields the following robust estimators of the mean and the scatter matrix: \[\left\{\begin{array}{lll}\hat{\boldsymbol{\mu}}_{k}&=&\frac{\sum_{i=1}^{n_{k}}w_{i,k}\mathbf{x}_{i}}{\sum_{i=1}^{n_{k}}w_{i,k}},\\ \hat{\mathbf{\Sigma}}_{k}&=&\frac{m}{n_{k}}\sum_{i=1}^{n_{k}}w_{i,k}(\mathbf{x}_{i}-\hat{\boldsymbol{\mu}}_{k})(\mathbf{x}_{i}-\hat{\boldsymbol{\mu}}_{k})^{T}\end{array}\right. \tag{2}\] where \(w_{i,k}=1/t_{i,k}\). It is interesting to note that \(\hat{\boldsymbol{\mu}}_{k}\) is insensitive to the scale of \(\hat{\mathbf{\Sigma}}_{k}\). Consequently, if \(\hat{\mathbf{\Sigma}}_{k}\) is a solution of the fixed-point equation, so is \(\lambda\hat{\mathbf{\Sigma}}_{k}\). The estimators obtained are similar to robust M-estimators, except that the weights \(w_{i,k}\) are inversely proportional to the squared Mahalanobis distance. The convergence of these two coupled fixed-point equations has been analysed in [9]. **Classification rule:** Using these two estimators, the training data are used to estimate the unknown parameters (a short numerical sketch is given below). The number of clusters is assumed to be known. It is now possible to derive the classification rule.
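As a rough illustration of this estimation step, the following NumPy sketch (ours, not the authors' code) iterates the coupled fixed-point equations (2) for a single cluster; the initialisation, update schedule, stopping criterion and synthetic data are illustrative choices rather than those analysed in [8, 9].

```python
import numpy as np

def femda_fit_cluster(X, n_iter=100, tol=1e-8):
    """Coupled fixed-point iteration of Eq. (2) for one cluster C_k.
    Weights: w_i = 1 / t_i with t_i = (x_i - mu)^T Sigma^{-1} (x_i - mu)."""
    n, m = X.shape
    mu = X.mean(axis=0)                                   # illustrative initialisation
    Sigma = np.cov(X, rowvar=False) + 1e-6 * np.eye(m)
    for _ in range(n_iter):
        diff = X - mu
        t = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(Sigma), diff)
        w = 1.0 / np.maximum(t, 1e-12)
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        diff_new = X - mu_new
        Sigma_new = (m / n) * (diff_new.T * w) @ diff_new
        converged = np.linalg.norm(mu_new - mu) < tol
        mu, Sigma = mu_new, Sigma_new
        if converged:
            break
    # Sigma is only identified up to a positive scale factor, which is
    # harmless since the FEMDA decision rule is scale-invariant.
    return mu, Sigma

# Synthetic heterogeneous-scale data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10)) * rng.uniform(1, 10, size=(500, 1)) + 2.0
mu_hat, Sigma_hat = femda_fit_cluster(X)
print(mu_hat.round(2))
```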
We have the following proposition: **Proposition 2.1**.: _The decision rule for Flexible EM-Inspired Discriminant Analysis (FEMDA) is:_ \[\mathbf{x}_{i}\in\mathcal{C}_{k}\iff\left(\forall j\neq k,\Delta_{jk}^{2}(\mathbf{x}_{i})\geq\frac{1}{m}\lambda_{jk}\right) \tag{3}\] _with \(\Delta_{jk}^{2}(\mathbf{x}_{i})=\log\left(\frac{(\mathbf{x}_{i}-\boldsymbol{\mu}_{j})^{T}\mathbf{\Sigma}_{j}^{-1}(\mathbf{x}_{i}-\boldsymbol{\mu}_{j})}{(\mathbf{x}_{i}-\boldsymbol{\mu}_{k})^{T}\mathbf{\Sigma}_{k}^{-1}(\mathbf{x}_{i}-\boldsymbol{\mu}_{k})}\right)\) and \(\lambda_{jk}=\log\left(\frac{\left|\mathbf{\Sigma}_{k}\right|}{\left|\mathbf{\Sigma}_{j}\right|}\right)\)._ **Proof:** The proof relies on the fact that the log-likelihood depends on \(k\) only through the term \[\frac{1}{m}\log\left(\left|\mathbf{\Sigma}_{k}\right|\right)+\log\left((\mathbf{x}_{i}-\boldsymbol{\mu}_{k})^{T}\mathbf{\Sigma}_{k}^{-1}(\mathbf{x}_{i}-\boldsymbol{\mu}_{k})\right)\] **Remark 2.2**.: _This decision rule is similar to the robust version of QDA. The difference is that we compare the logarithms of the squared Mahalanobis distances rather than the squared Mahalanobis distances themselves. This also makes our decision rule insensitive to the scale of \(\mathbf{\Sigma}\)._ ## 3 Experiments on simulated data FEMDA, the proposed method, is compared with the following methods: classical QDA, which models the data with Gaussian distributions; QDA modelling the data with Student distributions (\(t\)-QDA, [5]); and GQDA and RGQDA [6], [7]. **Simulation parameters:** The cluster means are drawn randomly on the unit \(m\)-sphere. The covariance matrices are generated with random eigenvalues and a random orthogonal matrix. We choose \(m=10\), \(K=5\), \(N_{train}=5000\), \(N_{test}=20000\) and \(\tau\sim\mathcal{U}(1,m)\). **Scenarios considered:** The points are generated using two different families of ES distributions. \begin{tabular}{|l|l|} \hline Family of distributions & Stochastic representation \\ \hline generalized Gaussian & \(\boldsymbol{\mu}+\Gamma(\frac{m}{2\beta},2)^{\frac{1}{2\beta}}\mathbf{\Sigma}^{\frac{1}{2}}\mathcal{U}\left(\mathcal{S}(0,1)\right)\) \\ \(t\)-distribution & \(\boldsymbol{\mu}+\mathcal{N}(0,\mathbf{\Sigma})\sqrt{\frac{1}{\Gamma(\frac{\nu}{2},\frac{\nu}{2})}}\) \\ \hline \end{tabular} \(\mathcal{U}\left(\mathcal{S}(0,1)\right)\) denotes a uniform distribution on the unit \(m\)-sphere. The shape parameter \(\beta\) (resp. \(\nu\)) is drawn uniformly in \([0.25,10]\) (resp. \([1,10]\)) for the generalized Gaussians (resp. for the \(t\)-distributions). The data-generation scenarios are defined as follows: \(0.6GG-0.4T\) means that \(60\%\) of the points of each cluster are generated with a generalized Gaussian and \(40\%\) with a \(t\)-distribution. The following colour code is used for the generation of the parameters: \(0.6GG-0.4T\) means that the same \(\beta\) and \(\nu\) are used for all the points of a given cluster, and \(0.6GG-0.4T\) means that a different parameter is used for each point of each cluster. **Results** For each scenario in the first column, Table 1 reports the differences in correct-classification rate between the best method and the others. In Table 1, we observe that GQDA and QDA obtain lower performance than FEMDA and \(t\)-QDA.
\(t\)-QDA is the best method in most scenarios and slightly outperforms FEMDA, at the cost of estimating more parameters and hence of a slower method. [8] studied in more detail the convergence speed of each estimator and decision rule. In Table 2, the data are contaminated with a scale change. A fraction of the data undergoes the following transformation: \(x\longleftarrow\mu+\lambda(x-\mu)\). We then observe that FEMDA is the method most robust to noise: \(t\)-QDA is outperformed in almost all scenarios when the contamination reaches 25% with \(\lambda=4\), and in all of them with \(\lambda=8\). The standard deviation across simulations is small, of the order of 0.05%. ## 4 Experiments on real data The datasets come from the UCI machine learning repository [10]. Three datasets are used: **Ionosphere** with 351 data points of dimension 34, **Ecoli** with 336 data points of dimension 8 and **Breast cancer** with 699 data points of dimension 10. ### Classification results To obtain these results, 100 simulations were carried out, and after every 10 successive simulations the _train_ and _test_ sets are redrawn (70% train set and 30% test set). Figures 1(a) and 1(b) show that GQDA outperforms the other methods by at least 1%, followed by FEMDA and then by \(t\)-QDA on the Ionosphere dataset. The gaps are tighter on the Ecoli dataset. In Figure 1(c), FEMDA becomes the best method, with an accuracy close to 95%, closely followed by \(t\)-QDA and then GQDA. The variance of the results is rather small. To conclude on these three datasets, the performance of FEMDA is slightly better than that of \(t\)-QDA, and often below that of GQDA, which is however more variable. ### Results after scale changes We now contaminate the data in a way similar to what was done for the simulated data. We choose \(\lambda=5\). We then plot the evolution of the accuracy of the three robust methods as a function of the contamination rate. Figure 2 shows that even with very high noise rates, \(t\)-QDA and FEMDA keep very good results on Spambase and Ionosphere. In contrast, the performance of RGQDA drops much faster as the contamination rate increases. For the Ecoli dataset, the behaviour is much more uniform: the three methods see their performance decrease, especially beyond a contamination rate of 40%. FEMDA nevertheless shows a slightly better resilience at high contamination rates, but its performance remains very close to that of \(t\)-QDA.
The robustness of FEMDA to scale changes in the training data can be explained by the expression of its estimators, which are intrinsically insensitive to scale changes. Finally, the difference in sensitivity to increasing contamination on Ecoli can be explained by the low dimension of the data compared with the other datasets. Indeed, in high dimension, the orientation of the covariance matrix is much more discriminative for separating the data.

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Scenario & QDA & \(t\)-QDA & GQDA & FEMDA \\ \hline GG - T & & & & \\ \hline \(1-0\) & \(-0.51\) & \(\mathbf{76.27}\) & \(-0.47\) & \(-0.02\) \\ \hline \(0-1\) & \(\mathbf{-0.64}\) & \(\mathbf{76.74}\) & \(-0.69\) & \(-0.16\) \\ \hline \(1-0\) & \(-0.59\) & \(\mathbf{76.39}\) & \(-0.58\) & \(-0.10\) \\ \hline \(0-1\) & \(\mathbf{-1.24}\) & \(\mathbf{77.08}\) & \(-1.27\) & \(-0.21\) \\ \hline \(\frac{1}{2}-\frac{1}{2}\) & \(-1.17\) & \(\mathbf{80.85}\) & \(-1.13\) & \(-0.39\) \\ \hline \(\frac{1}{2}-\frac{1}{2}\) & \(-1.31\) & \(-0.02\) & \(-0.87\) & \(\mathbf{80.59}\) \\ \hline \end{tabular} \end{table} Table 1: Classification accuracy

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Scenario & \(t\)-QDA & FEMDA & \(t\)-QDA & FEMDA \\ \hline Noise & \(10\%\) & & \(25\%\) & \\ \hline GG - T & & & & \\ \hline \(1-0\) - \(\lambda=4\) & \(\mathbf{73.23}\) & \(-0.19\) & \(-0.37\) & \(\mathbf{66.44}\) \\ \hline \(0-1\) - \(\lambda=4\) & \(\mathbf{74.65}\) & \(-0.36\) & \(-0.15\) & \(\mathbf{67.19}\) \\ \hline \(1-0\) - \(\lambda=4\) & \(-0.08\) & \(\mathbf{72.98}\) & \(-0.22\) & \(\mathbf{66.48}\) \\ \hline \(0-1\) - \(\lambda=4\) & \(\mathbf{73.93}\) & \(-0.24\) & \(-0.04\) & \(\mathbf{66.70}\) \\ \hline \(\frac{1}{2}-\frac{1}{2}-\lambda=4\) & \(\mathbf{77.14}\) & \(-0.61\) & \(\mathbf{70.61}\) & \(-0.21\) \\ \hline \(\frac{1}{2}-\frac{1}{2}-\lambda=4\) & \(\mathbf{76.28}\) & \(-0.41\) & \(-0.13\) & \(\mathbf{69.74}\) \\ \hline \(1-0\) - \(\lambda=8\) & \(-0.11\) & \(\mathbf{72.87}\) & \(-0.67\) & \(\mathbf{64.79}\) \\ \hline \(0-1\) - \(\lambda=8\) & \(\mathbf{74.11}\) & \(-0.29\) & \(-0.45\) & \(\mathbf{65.98}\) \\ \hline \(1-0\) - \(\lambda=8\) & \(-0.31\) & \(\mathbf{71.93}\) & \(-0.33\) & \(\mathbf{65.49}\) \\ \hline \(0-1\) - \(\lambda=8\) & \(-0.08\) & \(\mathbf{73.22}\) & \(-0.24\) & \(\mathbf{64.29}\) \\ \hline \(\frac{1}{2}-\frac{1}{2}-\lambda=8\) & \(\mathbf{76.36}\) & \(-0.44\) & \(-0.14\) & \(\mathbf{68.69}\) \\ \hline \(\frac{1}{2}-\frac{1}{2}-\lambda=8\) & \(\mathbf{75.56}\) & \(-0.37\) & \(-0.32\) & \(\mathbf{67.61}\) \\ \hline \end{tabular} \end{table} Table 2: Accuracy in the presence of noise

Figure 1: Median accuracy

## 5 Conclusion

In this paper, we have presented a new discriminant analysis method that is robust to scale changes in the training data. It outperforms all state-of-the-art methods in the presence of contaminated data, and behaves similarly to \(t\)-QDA without noise, while being faster. FEMDA is therefore a fast method that is very resilient to contaminated data. In this new paradigm, the clusters no longer share the same covariance matrix, but only the same scatter matrix. Allowing each point to have its own scale factor brings a gain in flexibility that makes it possible to handle contaminated datasets that are not necessarily identically distributed.
FEMDA can therefore be considered an improvement over \(t\)-QDA: similar performance without contamination, but more robust and faster.
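As a concrete illustration of the decision rule in Proposition 2.1, here is a minimal NumPy sketch (an illustration added here, not the authors' code). It assumes the class means \(\boldsymbol{\mu}_k\) and scatter matrices \(\mathbf{\Sigma}_k\) have already been estimated, and uses the equivalent formulation from the proof: assign \(\mathbf{x}\) to the class minimising \(\frac{1}{m}\log|\mathbf{\Sigma}_k|+\log\big((\mathbf{x}-\boldsymbol{\mu}_k)^{T}\mathbf{\Sigma}_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k)\big)\).

```python
import numpy as np

def femda_classify(x, mus, Sigmas):
    """Assign x to the class minimising (1/m) * log|Sigma_k| + log of the squared
    Mahalanobis distance to class k, which is equivalent to Proposition 2.1."""
    m = x.shape[0]
    scores = []
    for mu, Sigma in zip(mus, Sigmas):
        diff = x - mu
        maha2 = diff @ np.linalg.solve(Sigma, diff)   # (x - mu)^T Sigma^{-1} (x - mu)
        _, logdet = np.linalg.slogdet(Sigma)          # log |Sigma_k|
        scores.append(logdet / m + np.log(maha2))
    return int(np.argmin(scores))
```

Note that multiplying any \(\mathbf{\Sigma}_k\) by a positive constant leaves these scores unchanged, which is the scale-insensitivity pointed out in Remark 2.2.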
2302.05334
The Role of Codeword-to-Class Assignments in Error-Correcting Codes: An Empirical Study
Error-correcting codes (ECC) are used to reduce multiclass classification tasks to multiple binary classification subproblems. In ECC, classes are represented by the rows of a binary matrix, corresponding to codewords in a codebook. Codebooks are commonly either predefined or problem dependent. Given predefined codebooks, codeword-to-class assignments are traditionally overlooked, and codewords are implicitly assigned to classes arbitrarily. Our paper shows that these assignments play a major role in the performance of ECC. Specifically, we examine similarity-preserving assignments, where similar codewords are assigned to similar classes. Addressing a controversy in existing literature, our extensive experiments confirm that similarity-preserving assignments induce easier subproblems and are superior to other assignment policies in terms of their generalization performance. We find that similarity-preserving assignments make predefined codebooks become problem-dependent, without altering other favorable codebook properties. Finally, we show that our findings can improve predefined codebooks dedicated to extreme classification.
Itay Evron, Ophir Onn, Tamar Weiss Orzech, Hai Azeroual, Daniel Soudry
2023-02-10T15:48:51Z
http://arxiv.org/abs/2302.05334v1
# The Role of Codeword-to-Class Assignments in Error-Correcting Codes: ###### Abstract Error-correcting codes (ECC) are used to reduce multiclass classification tasks to multiple binary classification subproblems. In ECC, classes are represented by the rows of a binary matrix, corresponding to codewords in a codebook. Codebooks are commonly either predefined _or_ problem-dependent. Given predefined codebooks, codeword-to-class assignments are traditionally overlooked, and codewords are implicitly assigned to classes arbitrarily. Our paper shows that these assignments play a major role in the performance of ECC. Specifically, we examine similarity-preserving assignments, where similar codewords are assigned to similar classes. Addressing a controversy in existing literature, our extensive experiments confirm that similarity-preserving assignments induce easier subproblems and are superior to other assignment policies in terms of their generalization performance. We find that similarity-preserving assignments make _predefined_ codebooks become problem-dependent, without altering other favorable codebook properties. Finally, we show that our findings can improve predefined codebooks dedicated to extreme classification. ## 1 Introduction Error-correcting codes (ECC) have been long used in machine learning as a reduction from multiclass classification tasks to binary classification tasks (Dietterich and Bakiri, 1994). This scheme encodes classes using rows of a binary matrix called a codebook. The codebook columns induce binary partitions of classes, or subproblems, to be learned using any binary classification algorithm. Recently, error-correcting codes have been used as output embeddings of deep networks (Yang et al., 2015; Rodriguez et al., 2018; Kusupati et al., 2021), on top of features extracted by deep CNNs (Dori et al., 2018), and as a means to combine ensembles of several networks (Zheng et al., 2018). Moreover, they were recently used for their robustness in adversarial learning (Verma and Swami, 2019; Gupta and Amin, 2021; Song et al., 2021) and for their redundancy in regression tasks (Shah et al., 2022) and heterogeneous domain adaptation (Zhou et al., 2019). In extreme multiclass classification, where the number of classes is extremely large, ECC can be particularly beneficial. Several works (Jasinska and Karampatziakis, 2016; Evron et al., 2018) employed ECC to shrink the output space, decreasing the number of learned predictors, as well as the prediction time, to _logarithmic_ in the number of classes. In comparison, both one-hot encoding and hierarchical models train a linear number of predictors (even though the latter enjoy a logarithmic prediction time). The first step in employing ECC consists of selecting a _good_ codebook. Some codebook properties are universally important for error correction, e.g., the minimum hamming distance between rows. Other properties are only important in some regimes, e.g., the decoding complexity which is essential mainly in extreme classification. Roughly, codebooks can be divided into two categories: _predefined codebooks_ and _problem-dependent codebooks_. Predefined codebooks are independent of the problem at hand, but offer simplicity (e.g., random codebooks), favorable error-correction properties (e.g., Hadamard codebooks in Zhang et al., 2003 or optimized codebooks in Gupta and Amin, 2022), or regime-specific advantages like fast decoding algorithms (Evron et al., 2018). 
On the other hand, problem-dependent approaches attempt to induce binary subproblems that are tailored for a given dataset, often by balancing against other codebook properties. Problem-dependent codebooks are commonly designed by optimizing over codebooks while taking _class-similarity_ into account. However, there are two _opposite_ intuitions in the literature as to _how_ to incorporate class-similarity in the design process. Some works follow an intuition that to induce easy subproblems, similar classes should be encoded by _similar_ codewords (Zhang et al., 2009; Cisse et al., 2012; Zhao and Xing, 2013; Zhou et al., 2016; Rodriguez et al., 2018). In contrast, other works encode similar classes by _distant_ codewords to improve the error correction between hardly-separable classes (Pujol et al., 2008; Martin et al., 2017; Youn et al., 2021; Gupta and Amin, 2021). We examine this _controversy_ in depth and provide evidence from multiple regimes that generalization is superior when encoding similar classes by _similar_ codewords. In _predefined_ codebooks, the mapping between codewords and classes, i.e., the _codeword-to-class assignment_, is usually set arbitrarily (e.g., using a random assignment). Dietterich and Bakiri (1994) showed that randomly-sampled assignments perform similarly, and since, these assignments have been commonly overlooked. Our paper shows that _codeword-to-class assignments do matter_ and cause a large variation in the performance of many predefined codebooks (Section 4.1.1). We explain this by showing that, given a codebook, some assignments induce substantially easier binary subproblems than other assignments do (Section 4.1.2). Moreover, we show that the easiest subproblems are induced by assigning similar codewords to similar classes (Section 4.1.3). Finally, we employ our observations on extreme multi-class classification datasets (having 1K to 104K classes). By assigning similar codewords to similar classes, we significantly improve predefined extreme classification codebooks that enjoy fast decoding algorithms (Section 4.2). To the best of our knowledge, this is the first work to point out the large performance variation explained solely by codeword-to-class assignments, and to explicitly examine these assignments as a means to control the difficulty of the induced learning-subproblems in problem-_independent_ predefined codebooks. We conclude that choosing an informed assignment improves predefined codebooks by turning them problem-_dependent_ and better suited for the solved task. Importantly, other useful properties of these codebooks are _not_ harmed in this process. ## 2 Error-Correcting Codes (ECC) Error-correcting codes are widely used for transmitting messages over noisy channels in communication systems, storage systems, and more. By adding redundant bits to transmitted messages, the receiver can recover messages despite errors caused by a disruptive channel (Roth, 2006). Training.The seminal work of Dietterich and Bakiri (1994) employed error-correcting codes to encode the \(K\) classes of a classification dataset. They set a binary codebook \(\mathbf{M}\in\left\{-1,+1\right\}^{K\times\ell}\) with \(K\in\mathbb{N}\) codewords (each belonging to one class) and \(\ell\) columns (where \(\ell\geq\log_{2}K\)). Each column induces a _binary subproblem_, i.e., a binary partition of classes. 
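As a small illustration of how the columns of a codebook induce binary subproblems, here is a sketch with a toy codebook (made up for illustration only; it is not one of the codebooks used in this paper):

```python
import numpy as np

# A toy codebook M in {-1,+1}^{K x l}: K = 4 classes, l = 3 columns.
# Row k is the codeword of class k; column j defines the j-th binary partition of classes.
M = np.array([[+1, +1, -1],
              [+1, -1, +1],
              [-1, +1, +1],
              [-1, -1, -1]])

y = np.array([0, 2, 1, 3, 0])   # multiclass labels of five training examples

# Column j relabels the training set for the j-th binary subproblem:
binary_labels = M[y, :]         # shape (5, 3); entry (i, j) is M[y_i, j]
print(binary_labels)
```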
Each such subproblem is learned using a base learner \(\mathcal{A}\) (i.e., a binary classification learning algorithm), yielding \(\ell\) predictors \(f_{1},...,f_{\ell}:\mathcal{X}\rightarrow\mathbb{R}\). More formally, given a training set \(\left\{\left(\mathbf{x}_{i},y_{i}\right)\right\}_{i=1}^{m}\), where \(x_{i}\in\mathcal{X}\) and \(y_{i}\in\left[K\right]\triangleq\left\{1,...,K\right\}\), the \(j\)th predictor is the output of \(\mathcal{A}\) when trained using the induced binary labels \(M_{y_{i},j}\): \[f_{j}=\mathcal{A}\left(\left\{\left(\mathbf{x}_{i},M_{y_{i},j}\right)\right\}_ {i=1}^{m}\right)\;. \tag{1}\] Prediction.At prediction time, an example \(\mathbf{x}\in\mathcal{X}\) is treated as a transmitted message encoding the unknown class \(y\in\left[K\right]\). The \(\ell\) predictors' scores for \(\mathbf{x}\) constitute the vector \(\mathbf{f}(\mathbf{x})\triangleq\left[f_{1}(\mathbf{x}),...,f_{\ell}(\mathbf{ x})\right]^{\top}\). These scores can be prediction margins from a linear model, confidences from a probabilistic model, outputs of a neural network, etc. Finally, the prediction vector \(\mathbf{f}(\mathbf{x})\) is _decoded_ into a codeword belonging to a class. The simplest approach is _hard decoding_ that consists of finding the nearest neighbor, that is, the codeword closest (in Hamming distance) to the thresholded prediction vector, \(\operatorname{sign}(\mathbf{f}(\mathbf{x}))\in\left\{-1,+1\right\}^{\ell}\). Hard decoding ignores the score magnitudes which entail valuable information for prediction. As a remedy, _soft decoding_, or loss-based decoding (Allwein et al., 2000), minimizes a decoding loss \(\mathcal{L}:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}\): \[\hat{y}\left(\mathbf{x}\right)=\operatorname*{arg\,min}_{y\in\left[K\right]} \sum\nolimits_{j=1}^{\ell}\mathcal{L}\left(M_{y,j}f_{j}\left(\mathbf{x} \right)\right)\;. \tag{2}\] Two popular decoding losses are the hinge loss \(\mathcal{L}\left(z\right)=\max\left\{0,1-z\right\}\) and the exponential loss \(\mathcal{L}\left(z\right)=e^{-z}\). Notice that soft decoding generalizes hard decoding with \(\mathcal{L}\left(z\right)=\frac{1-\operatorname{sign}(z)}{2}\). We illustrate the entire ECC scheme in App. A. Multiclass error upper bound.Allwein et al. (2000) proved an insightful upper bound1 that will facilitate our discussion throughout this paper. Let Footnote 1: Zhou et al. (2019) derived a bound for more general N-ary codes (where subproblems are also multiclass instead of binary), but this remains out of our scope in this work. \[\varepsilon\triangleq\varepsilon(\mathbf{M},\mathcal{L})=\frac{1}{m\ell}{\sum \nolimits_{i=1}^{m}}{\sum\nolimits_{j=1}^{\ell}}\mathcal{L}\left(M_{y_{i},j}f_ {j}\left(\mathbf{x}_{i}\right)\right) \tag{3}\] be the _average binary loss_ of the binary predictors on a given training set \(\left\{\left(\mathbf{x}_{i},y_{i}\right)\right\}_{i=1}^{m}\) with respect to a codebook \(\mathbf{M}\) and a decoding loss \(\mathcal{L}\). Assume \(\mathcal{L}\) satisfies mild conditions (e.g., convexity is sufficient). 
Then, the _multiclass training error_ when decoding with \(\mathcal{L}\) is upper bounded as: \[\frac{1}{m}{\sum\nolimits_{i=1}^{m}}\mathbb{I}[y_{i}\neq\hat{y}(\mathbf{x}_{i} )]\leq\frac{\ell\varepsilon}{\rho\mathcal{L}\left(0\right)}\;, \tag{4}\] where \(\rho\triangleq\rho(\mathbf{M})=\min_{a\neq b}\frac{1}{2}\left\|\mathbf{M}_{a, :}-\mathbf{M}_{b,:}\right\|_{1}\) is the codebook's _minimum inter-row Hamming distance_ (\(\mathbf{M}_{a,:}\) being the \(a\)th row of \(\mathbf{M}\)) and \(\mathcal{L}\left(0\right)\) is a scaling factor of \(\mathcal{L}\). ### Properties of a Good Codebook We now review favorable properties of error-correcting codebooks. The first two properties are discussed more often in the literature (e.g., Dietterich and Bakiri, 1994; Zhang et al., 2003), while the latter two are seldom addressed despite their importance. In many cases improving one property comes at the expense of another. 1. **High minimum row distance \(\rho\) (between codewords).** With hard decoding (i.e., nearest neighbor), the maximal number of prediction errors the scheme can recover from \(\left[\left(\rho-1\right)/2\right]\). Using soft decoding, a high minimum distance is still vital for error correction, as seen from the error bound (4). 2. **Low column correlation (between subproblems).** Intuitively, if two binary predictors often make errors on the same inputs, their mistakes become twice as hard to correct. Thus, uncorrelated columns (that yield uncorrelated binary subproblems) are generally considered advantageous. 3. **Efficient decoding algorithm.** Traditionally ignored in many ECC works, the complexity of decoding prediction scores into codewords becomes essential in extreme classification tasks with thousands of codewords or more. Recently, Jasinska and Karampatziakis (2016) and Evron et al. (2018) utilized codebooks with a special structure to allow soft decoding using any decoding loss in a time complexity that depends only on the codebook width \(\ell\) (which can be logarithmic in the number of codewords \(K\)). In contrast, exact soft decoding of arbitrary codebooks (e.g., random or optimized ones) requires a time complexity at least linear in \(K\). 4. **Easy binary subproblems (low average loss \(\varepsilon\)).** The binary subproblems yield binary predictors with an average binary loss \(\varepsilon\). The lower this loss is, the better the multiclass accuracy of the scheme becomes (see (4)). One way to lower \(\varepsilon\) is to use high-capacity base learners (e.g., kernel SVMs), but such rich models are often prone to overfitting or require more computation. A proper codebook design can lower \(\varepsilon\), by making the subproblems _easier_, even for low-capacity learners. Following are design choices that can achieve this. 1. **Sparse or imbalanced codebooks.** Allwein et al. (2000) extended the ECC scheme to ternary codes where \(\mathbf{M}\in\left\{-1,0,+1\right\}^{K\times\ell}\). They showed that sparse columns generalize the one-vs-one scheme and that imbalanced columns generalize the one-vs-all scheme. Both options can be seen as ways to create easier subproblems at the expense of the row distance or column correlation. See Zhou et al. (2016) and Section 6 in Allwein et al. (2000) for further discussion. 2. **Problem-dependent aspects.** Many papers design codebooks that are specifically suitable for the problem at hand while implicitly tuning the difficulty of the binary subproblems. Most of these works are guided by notions of class similarity. 
Some try (implicitly or explicitly) to create codebooks where similar classes have similar codewords (e.g., Cisse et al., 2012) in order to create easier subproblems. Others try the opposite (e.g., Martin et al., 2017) in order to enhance error correction between classes that are hard to separate, at the expense of harder subproblems. Notably, most methods balance preserving the similarity against other codebook properties (e.g., the codeword distance between two very similar classes is encouraged to be 1, whereas \(\rho\) is encouraged to be maximal). They create codebooks from scratch or alter existing ones. On the other hand, our observations next allow making _pre-defined_ codebooks more problem-dependent, by simply assigning codewords to classes in an informed manner, and without harming other codebook properties which may be important. ## 3 Codeword-to-class assignments The error-correcting scheme implicitly assigns codewords to classes. Both during training and during decoding, we arbitrarily assumed that the \(k\)th row in the codebook belongs to the \(k\)th class (see (1) and (2)). In an attempt to show robustness to codeword-to-class assignments, Dietterich and Bakiri (1994) (Section 3.3.2 therein) experimented on several random assignments and reported no significant accuracy variation. However, they did not rule out the possibility that some assignments _are_ better than others. We hypothesize that some assignments are _significantly_ better than others. We first notice that given a codebook, different assignments induce different binary subproblems, potentially changing their difficulty and consequently the average binary loss \(\varepsilon\). Next, we define a scoring function that measures the extent to which close codewords are assigned to close classes. This score later helps us conclude that _similarity-preserving_ assignments (i.e., similar codewords to similar classes) are preferable. Class-codeword score.Consider a class metric in the form of a distance matrix \(\mathbf{D}_{\text{cls}}\in\mathbb{R}_{\geq 0}^{K\times K}\). For instance, \(\mathbf{D}_{\text{cls}}\) can be (inversely proportional to) a symmetrized confusion matrix, a matrix of distances between class embeddings, or a matrix of distances between classes on a hierarchy tree. Define the codeword distance matrix \(\mathbf{D}_{\mathbf{M}}\in\mathbb{R}_{\geq 0}^{K\times K}\) where \(\left(\mathbf{D}_{\mathbf{M}}\right)_{a,b}\triangleq\frac{1}{2}\left\| \mathbf{M}_{a,:}-\mathbf{M}_{b,:}\right\|_{1}\). To account for the different scales of these matrices, we normalize them such that \(\left\|\mathbf{D}_{\text{cls}}\right\|_{\text{F}}=\left\|\mathbf{D}_{\mathbf{ M}}\right\|_{\text{F}}=1\). Notice that an assignment corresponds to reordering, or permuting, the rows of the codebook \(\mathbf{M}\) using a \(K\times K\) permutation matrix \(\mathbf{P}\). Consequently, such an assignment corresponds to permuting the rows _and_ columns of the distance matrix \(\mathbf{D}_{\mathbf{M}}\). Given a codebook \(\mathbf{M}\) and a class metric \(\mathbf{D}_{\text{cls}}\). We assess an assignment, or a permutation \(\mathbf{P}\) of the rows in \(\mathbf{M}\), by defining the _class-codeword score_ as the Frobenius distance between \(\mathbf{D}_{\text{cls}}\) and the permuted \(\mathbf{D}_{\mathbf{M}}\): \[s_{\text{cc}}\left(\mathbf{P}\right)\triangleq\left\|\mathbf{D}_{\text{cls}}- \mathbf{D}_{\mathbf{PM}}\right\|_{\text{F}}=\left\|\mathbf{D}_{\text{cls}}- \mathbf{P}\mathbf{D}_{\mathbf{M}}\mathbf{P}^{\top}\right\|_{\text{F}}. 
\tag{5}\] Intuitively, an extreme case where \(s_{\text{cc}}\left(\mathbf{P}\right)=0\) means that \(\mathbf{D}_{\text{cls}}\) and the permuted \(\mathbf{D}_{\mathbf{M}}\) completely "agree", i.e., similar codewords are assigned to similar classes, and dissimilar codewords are assigned to dissimilar classes (realistically, given \(\mathbf{D}_{\text{cls}}\) and \(\mathbf{D}_{\mathbf{M}}\), the minimum is often larger than zero). Synthetic dataset.App. B illustrates some of the above ideas using a synthetic dataset. For a specific codebook, we show that only _one_ assignment can perfectly fit the data, while _all_ other \((K!-1)\) assignments fail. Moreover, the only successful assignment assigns similar codewords to similar classes. ## 4 Experiments We test our hypothesis and demonstrate the validity of our claims in two regimes. First, in Section 4.1 we run extensive experiments on small datasets and illustrate how codeword-to-class assignments vary greatly in their accuracy. We show that this variation is mostly explained by the average binary loss \(\varepsilon\) from (3) and the class-codeword score from (5). We conclude that similarity-preserving assignments are vital for inducing easy binary subproblems. Then, in Section 4.2 we employ similarity-preserving assignments on codebooks for extreme classification. We show how the structure of specific predefined codebooks facilitates finding good assignments and improve performance on datasets with up to 104K classes. ### Exhaustive Experiments Datasets.We start by testing our hypothesis on \(3\) small datasets with \(K=10\) classes: MNIST(LeCun et al., 1998), CIFAR-10(Krizhevsky et al., 2009), and yeast(Dua and Graff, 2017). Codebooks.We experiment on 3 predefined codebooks: Two random dense codebooks (generated like in Allwein et al., 2000) of widths \(\ell=8,15\) having row distances of \(\rho=3,5\)(respectively) and a truncated Hadamard matrix (see Hoffer et al., 2018) with \(\ell=15\) and \(\rho=8\). Experimental setup.Working with only \(K=10\) classes allows us to extensively validate our claims on **all** possible \(K!\approx 3.6M\) assignments of each combination of a dataset and a predefined codebook. Notice that given such a combination, we need not _train_\(K!\) assignments from scratch. Instead, we train only \(2^{K-1}=512\) binary predictors and construct every possible assignment from them. This technique saves time and decreases the variance of the evaluated test accuracy (details in App. C.1). To demonstrate the flexibility of our observations, we use two different base learners. For MNIST and CIFAR-10, we train \(\ell\) linear predictors using the (soft-margin) SVM algorithm. For yeast, each binary predictor is a decision tree (built by the Gini splitting criterion and a minimum of 3 samples to split a node). Hyperparameters were tuned using cross-validation (details in App. C.2). In the decoding step (2), we use the hinge loss, corresponding also to the loss minimized by the SVM used for training the linear base learners. #### 4.1.1 Variation in performance of assignments Figure 1 illustrates the large variation in performance for different assignments of given codebooks. For instance, in MNIST we observe that using the random dense codebook of width \(\ell=8\), the worst assignment achieves \(\approx 77\%\) test accuracy, while the best assignment achieves \(\approx 88.5\%\). In all 3 datasets, the narrow (\(\ell<K\)) codebook exhibits higher variation in performance. 
This can be explained by the low minimum distance (\(\rho=3\)) which does not allow for meaningful error correction, making the average binary loss \(\varepsilon\) a more dominant factor in performance. \begin{table} \begin{tabular}{l|l|l l l|l} \hline \hline Dataset & Area & Feat. & Train & Test & Model \\ \hline MNIST & Vision & 784 & 60K & 10K & Linear \\ CIFAR-10 & Vision & 3,072 & 50K & 10K & Linear \\ yeast & Life & 8 & 1,284 & 200 & DT \\ \hline \hline \end{tabular} \end{table} Table 1: Exhaustive Experiments’ Datasets Figure 1: Variation of test accuracy across all \(10!\approx 3.6M\) assignments of 3 codebooks on 3 datasets. Dashed lines indicate quartiles (except where the plot is too narrow). There is a large variation in performance across different assignments of the same codebooks. Equidistant codebooks.The low variation in the Hadamard codebook (especially in MNIST) probably stems from it being an _equidistant codebook_ (every two codewords are in the same distance from each other). In such codebooks, the class-codeword score (5) remains constant across all assignments (since \(\forall\mathbf{P}\colon\mathbf{D}_{\mathbf{M}}=\mathbf{PD}_{\mathbf{M}}\mathbf{P} ^{\top}\)). This also supports the following findings (Section 4.1.3) that the class-codeword score is a lead factor in the observed performance variation. #### 4.1.2 Some assignments induce easier subproblems Figure 1(a) shows the correlation between the average binary train loss \(\varepsilon\) and the test accuracy. We plot the empirical distribution (using kernel density estimation) of all 3.6M assignments ran on the 3 datasets using the \(10\times 8\) random dense codebook. For MNIST (top left), the correlation between the test accuracy and \(\varepsilon\) is the highest (\(r^{2}=0.78\)). The other two datasets exhibit lower correlations, but large performance gaps are still explained by \(\varepsilon\) which roughly quantifies the difficulty of subproblems induced by each assignment. We observe a similar behavior in another \(10\times 8\) codebook and a wider \(10\times 15\) codebook as well (App. D). The observed correlation between performance and the average binary loss \(\varepsilon\) is itself not surprising and can be expected from the error bound in (4). However, our results stress that different _assignments_ of the _same_ codebook induce binary subproblems of different difficulty. #### 4.1.3 Similarity-preserving assignments are better We now test the effect of class similarity on an assignment's performance. We use the class-codeword score (5) to assess how close are codewords of similar classes. Sources of class similarity.Our class-codeword score requires a matrix \(\mathbf{D}_{\text{cls}}\) corresponding to a class metric. Here, we use two _different_ class metrics to strengthen our findings. First, we use the (training) confusion matrices of one-vs-all predictors, assuming that confusable classes are semantically similar (a common assumption; see Zhou et al., 2016). Then, in App. D, we use Euclidean distances between the means of raw features of each class. App. C.3 explains how we turn a confusion matrix (a similarity matrix) into a distance matrix. Results.Figure 1(b) shows the correlation between our class-codeword score and test accuracy. We use the same random dense \(10\times 8\) codebook as before, and compute the class-codeword score from confusion matrices (see above). For example, the plot on the bottom-middle shows the distribution of all 3.6M assignments ran on CIFAR-10. 
On average, assigning similar codewords to similar classes (thus minimizing the class-codeword score) improves the test accuracy from \(\approx\!29\%\) to \(\approx\!32.5\%\). Moreover, assigning similar codewords to _dissimilar_ classes evidently worsens the performance significantly (to \(\approx\!25.5\%\)) We observe a similar behavior in another \(10\times 8\) codebook and a wider \(10\times 15\) codebook as well (App. D). Figure 2: The empirical distributions of _all_ the 3.6M assignments of the random \(10\times 8\) codebook on the 3 datasets. **Top:** Test accuracy vs. average binary (train) loss from (3). **Bottom:** Test accuracy vs. the class-codeword score from (5). Each level set contains \(\approx 10\%\) of the assignments. The \(10^{-3}\) least probable assignments are scattered as individual points. Regressors computed on all assignments are plotted in orange. Also written are the coefficients of determination (\(r^{2}\)). #### 4.1.4 Summary Some assignments of _the same_ codebook induce much easier binary subproblems than others do. Our class-codeword score largely explains the performance of an assignment. Computing the class-codeword score of one assignment is cheap and mainly requires calculating the distance between two \(K\times K\) matrices. Thus, when \(K=10\), exhaustively iterating _all_ 3.6M assignments to find the one minimizing that score, takes only a few minutes on a single CPU. Overall, a similarity-preserving assignment found exhaustively _before_ training should yield a much better test accuracy than a random assignment. In App. E we show that the class-codeword score also controls performance in a larger dataset (CIFAR-100), where any exhaustive experiment becomes intractable. We demonstrate that similarity-preserving assignments, originating from the distances between fastText embeddings of _class names_, significantly improve performance. ### Extreme Multiclass Classification (XMC) We now utilize our understanding that similar codewords should be assigned to similar classes on four XMC benchmarks trained using XMC-dedicated codebooks. We show that in the extreme regime as well -- similarity-preserving assignments are significantly better than random ones. Datasets.We experiment on four XMC preprocessed benchmarks - LSHTC1, LSHTC2 (Partalas et al., 2015), aloi.bin (Rocha and Goldenstein, 2013; Yen et al., 2016), and ODP (Bennett and Nguyen, 2009). The datasets are described briefly below and in detail in App. F. Sources of class similarity.For all datasets, our algorithm below uses class taxonomies given in a form of a tree. These taxonomies are either known in advance (in LSHTC1 and LSHTC2) _or computed_ by a simple hierarchical clustering algorithm on class means (in aloi.bin and ODP). Again, using multiple sources of class similarities corroborates the soundness of our findings below. Experimental setup.We use the code from the publicly available repository of Evron et al. (2018) to learn using their WLTLS codebooks. To use our similarity-preserving codeword-to-class assignments, we edit their scripts to allow for fixed assignments (rather than random ones).2 We also use the same learning setup -- as a base learner, we use AROW (Crammer et al., 2009), which is an online algorithm for learning linear classifiers, and we also use the exponential loss for the soft decoding step in (2). We run all experiments sequentially on a single i7 CPU. In practice, each binary predictor can be trained on a separate CPU. 
Footnote 2: The updated GitHub repository is available on [https://github.com/ievron/wltls](https://github.com/ievron/wltls) For each dataset, we train several WLTLS codebooks of various widths \(\ell\). Each codebook is learned 5 times using random assignments and 5 times using similarity-preserving assignments, found as described below (here, randomness stems from shuffling the training set). For comparison, we also train one-vs-all (OVA) models using the same base learner - AROW. Our OVA results are better than the ones reported in Evron et al. (2018), since we apply oversampling (Ling and Li, 1998) to overcome the high imbalance in each OVA subproblem. Finding similarity-preserving assignments.We exploit the graph structure of WLTLS codebooks which embed \(K\) codewords on source-to-target paths of a directed acyclic graph (DAG) with exactly \(K\) such paths. Since the class taxonomies are also DAGs, a quick-and-simple algorithm arises for assigning similar codewords to similar classes. Input: 1. The dataset's class taxonomy (given or learned) 2. A WLTLS coding DAG suitable for \(K\) classes Algorithm: 1. Traverse the class tree with DFS to obtain an ordering \((a_{1},...,a_{K})\) of leaves (i.e., classes); 2. Recursively iterate _all_\(K\) paths in the coding DAG, to obtain an ordering \((b_{1},...,b_{K})\) of paths (i.e., codewords); 3. Assign class \(a_{i}\) to codeword \(b_{i}\). ``` **Algorithm Sketch:** Naive assignment for WLTLS The proposed algorithm preserves similarities by assigning similar classes to similar codewords. Intuitively, in most cases classes \(a_{i}\) and \(a_{i+1}\) are close on the taxonomy and paths \(b_{i}\) and \(b_{i+1}\) are similar on the codebook's DAG. We illustrate this algorithm in App. F.2. Despite its simplicity, the algorithm finds assignments with exceptionally low class-codeword scores (5) compared to the scores of random assignments. For example, for the smallest codebook of LSHTC1 (\(\ell=56\)), random assignments exhibit an average score of \(\widehat{s_{\text{cc}}}\approx 0.061\) with an empirical standard deviation of \(1.33\cdot 10^{-5}\); while the assignment our algorithm finds has a score of \(s_{\text{cc}}\approx 0.049\). That is, compared to random assignments, our algorithm decreases the score by more than \(900\) standard deviations (!). \begin{table} \begin{tabular}{l|c|c c|l} \hline \hline Dataset & Area & Classes & Features & Similarity \\ \hline aloi & Vision & 1K & 637K & Clustering \\ LSHTC1 & Text & 12K & 1.2M & Given \\ LSHTC2 & Text & 27K & 575K & Given \\ ODP & Text & 104K & 423K & Clustering \\ \hline \hline \end{tabular} \end{table} Table 2: Extreme Benchmarks. Further details in App. F. Results.Figure 3 demonstrates the advantage of similarity-preserving codeword-to-class assignments. For each dataset, we compare the test accuracy of random assignments to that of similarity-preserving assignments across various codebook widths \(\ell\). We plot the test accuracy averages of the 5 runs of each combination of a codebook width and an assignment method, accompanied by 2 empirical standard deviations (full result tables are given in App. F.3). In almost all cases, similarity-preserving assignments lead to a statistically-significant improvement over random assignments. Moreover, in 16 out of 18 cases, similarity-preserving assignments exhibit a lower variance. In LSHTC1 and LSHTC2, similarity-preserving assignments make the codebooks competitive with OVA while training up to 32 times fewer predictors. 
In the two larger codebooks of aloi.bin, our assignments do not improve much over random ones. This probably happens because when \(\ell\) approaches \(K\), the underlying WLTLS codebooks become almost equidistant. Summary.Similarity-preserving assignments significantly improve codebooks dedicated to extreme classification. By exploiting class semantics, such assignments turn _predefined_ codebooks with regime-specific advantages (e.g., fast decoding algorithms) into problem-dependent codebooks, without losing those advantages. ## 5 Related Work Our work is of a retrospective nature and calls for an elaborate discussion of its connections with decades of existing research on error-correcting codes. Codebooks with easy subproblems are obviously preferable. Bai et al. (2016) design a codebook by selecting a subset of the easiest columns out of all possible columns. They exhaustively train on _all_ these columns and select a column subset based on the trained predictors' accuracy. This works well but does not scale gracefully (e.g., for merely \(K=10\) classes, it requires _training_\(2^{K-1}=512\) predictors). Instead, many works (including ours) exploit extra knowledge on classes to create easy subproblems. Codebook design methods.While we point out that similarity-preserving assignments improve a _predefined_ codebook by making it _problem dependent_, most works try to design the _entire_ codebook. Given a dataset, designing optimal codebooks is a hard problem due to their discrete nature (Crammer and Singer, 2002). As a remedy, some papers take greedy approaches, e.g., sequentially adding optimized columns (Pujol et al., 2008) or solving integer programming formulations (Gupta and Amin, 2021, 2022); while others take approximate approaches, like solving relaxed continuous optimization problems (e.g., Zhang et al., 2009; Rodriguez et al., 2018). The class-similarity controversy.Many papers incorporate different notions of class similarity into their design process. Interestingly, some encode similar classes with _similar_ codewords (Zhang et al., 2009; Cisse et al., 2012; Zhao and Xing, 2013; Zhou et al., 2016; Rodriguez et al., 2018; McVay, 2020), whereas others encode similar classes with _dissimilar_ codewords (Pujol et al., 2008; Martin et al., 2017; Jaiswal et al., 2020; Gupta and Amin, 2021; Wan et al., 2022). For instance, Martin et al. (2017) look for a codebook \(\mathbf{M}\!\in\!\{-1,+1\}^{K\times\ell}\) that _minimizes_\(\big{\|}\mathbf{D}_{\text{cls}}-\mathbf{M}\mathbf{M}^{\top}\big{\|}_{\text{F}}^ {2}\), while balancing against other codebook properties. In fact, they _maximize_ our score (5) instead of minimizing it, since \(\mathbf{M}\mathbf{M}^{\top}=\ell\mathbf{1}_{K\times K}-\mathbf{D}_{\mathbf{M}}\). Existing literature on adversarial robustness has thus far considered assigning _dissimilar_ codewords to similar classes (e.g., Gupta and Amin (2021); Wan et al. (2022)). in order to improve the error-correcting capabilities between easily-confusable classes, especially in the presence of an adversary. On the other hand, our study shows that similarity-preserving assignments improve the separability and classification performance in traditional settings. An interesting future direction should be to perform adequate ablation studies in the adversarial learning regime and examine the tradeoff between separability (maximized by similarity-preserving assignments) and robustness (maximized by similarity-breaking ones). 
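To make the class-codeword score in (5) concrete, here is a small NumPy sketch (an illustrative implementation, not the authors' code). It normalises both distance matrices to unit Frobenius norm, as described in Section 3, and evaluates an assignment given as a permutation of the codewords.

```python
import numpy as np

def codeword_distances(M):
    """D_M[a, b] = 0.5 * ||M_a - M_b||_1, i.e. the Hamming distance between
    codewords a and b for a codebook M with entries in {-1, +1}."""
    return 0.5 * np.abs(M[:, None, :] - M[None, :, :]).sum(axis=-1)

def class_codeword_score(D_cls, M, perm):
    """Class-codeword score (5) of the assignment giving class k the codeword perm[k].

    D_cls : (K, K) matrix of class distances; perm : permutation of range(K).
    Lower scores mean similar codewords are assigned to similar classes."""
    D_M = codeword_distances(M)[np.ix_(perm, perm)]   # P D_M P^T: permuted rows and columns
    D_cls = D_cls / np.linalg.norm(D_cls)             # normalise to unit Frobenius norm
    D_M = D_M / np.linalg.norm(D_M)
    return np.linalg.norm(D_cls - D_M)                # Frobenius distance
```

Searching for a good assignment then amounts to minimising this score over permutations, which, as discussed below, is a weighted graph-matching problem in general.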
Class similarity in extreme classification (XMC).In Section 4.2 we use a class taxonomy to improve a codebook that requires training very few predictors compared to one-vs-all or hierarchical models. A closely related work (Cisse et al., 2012) designs XMC-codebooks using a learned class-similarity. However, their codebooks do not allow fast decoding like the ones we use. Other related approaches learn hierarchical models using a given (or learned) class taxonomy, to either benefit from a \(\mathcal{O}\left(\log K\right)\) prediction time (Bengio et al., 2010), or to alleviate the computation of the softmax while training a deep network (Morin and Bengio, 2005). Another approach directly builds a codebook from a class taxonomy (Pujol et al., 2006). However, these approaches train \(\mathcal{O}\left(K\right)\) predictors, implying longer training and _linear_ space requirements. Recently, Mittal et al. (2021) incorporated label metadata in the training of _deep_ extreme classification models (much larger than the linear WLTLS models we use). Finally, Rahman et al. (2018) use class semantics to improve zero-shot performance, which may be relevant to XMC tasks which often suffer from a long tail of classes (Babbar et al., 2014), some having few to no training examples.

Figure 3: Results on extreme datasets. We run WLTLS codebooks using different predictor numbers \(\ell\). Errorbars indicate \(\pm 2\) empirical standard deviations of \(5\) runs. Results are available in a tabular form in App. F.3. Due to computational infeasibility, we do not report the performance of one-vs-all (OVA) on the largest dataset (LSHTC2), but just mark its number of binary predictors instead (where \(\ell\!=\!K\)). In all datasets, assigning similar codewords to similar classes significantly improves performance.

Ordinal classification and regression tasks can also be tackled with ECC. Interestingly, successful assignments used implicitly in these areas often follow a rule-of-thumb similar to ours - they encode target labels that are similar (i.e., close on the real line) using similar codewords. For instance, see the Unary and HEXJ codebooks in Shah et al. (2022) (the first codebook is equivalent to the underlying codebook in Li and Lin, 2006) or the random ordered-splits in Huhn and Hullermeier (2008). However, similarities in these areas (i.e., distances on the real line) are much simpler than the inter-class relations examined in our paper.

Nested dichotomies (ND) offer another reduction from multiclass tasks to binary ones. Basically, ND models split classes recursively in a binary hierarchical structure, where each tree node corresponds to a binary classification subproblem. One could either use a single tree (Fox, 1997) or an ensemble of trees (Frank and Kramer, 2004). The resulting models can be seen as a special case of ECC. Melnikov and Hullermeier (2018) conduct an experiment that is closely related to our variation demonstration in Section 4.1.1. They show that the assignment of classes to leaves of a _single_ ND tree greatly affects the model's performance, and report a high variation in the performance of _randomly-sampled_ NDs (the tree structure was also shown to be important in Mnih and Hinton, 2008). However, their tree corresponds to a codebook with a minimum Hamming distance of \(\rho=1\) (i.e., a prediction mistake in _one_ inner node necessarily results in a multiclass error).
Thus, it is not immediate that their findings generalize to codebooks with higher error-correcting capabilities (like the ones we use). Importantly, we do not only point out the performance variation of codeword-to-class assignments, but also clearly show it is explained by class-similarity (Section 4.1.3). Model capacity.Codeword-to-class assignments control the difficulty of the binary subproblems, which is naturally more crucial when the base learners are weaker (see the \(\varepsilon/\rho\) factor in (4)). Related phenomena have been exhibited in ordinal classification (Huhn and Hullermeier, 2008) and nested dichotomies (Melnikov and Hullermeier, 2018) as well. In this paper, we demonstrated our findings using relatively weak linear models and decision trees over raw features (Section 4.1) and preprocessed ones (Section 4.2; App. E). Even high-capacity models like neural networks are likely to favor similarity-preserving assignments. Zhai and Wu (2019) show that a deep classification network implicitly performs _metric learning_ -- training embeds the classes' weight vectors in the last linear layer (preceding the softmax) in a way that reflects underlying class semantics (see also Kusupati et al. (2021)). Similarity-preserving codebooks can be seen as fixing the last layer using a matrix that already reflects such semantics at initialization (see also Sec. 3.3 in Hoffer et al., 2018). Notably, complex models can attain a very low average training binary loss \(\varepsilon\) such that the training error bound (4) becomes \(<1/m\), implying no _training_ mistakes. However, this does not make assignments unimportant. If, for example, we train and decode using an exponential loss, then complex learners can obtain an extremely low loss \(\varepsilon\), but never \(0\). In such cases, similarity-preserving assignments should _still_ yield a lower \(\varepsilon\). In turn, a lower training loss, even when the error is already \(0\), is linked to better generalization, both theoretically and practically (e.g., Soudry et al. (2018)). Limitations of design methods.Similarity-preserving assignments can enhance almost _any_ predefined codebook, while design methods are often restricted to codebooks with certain properties. For instance, the spectral method (Zhang et al., 2009) creates only narrow codebooks (where \(\ell\leq K\)) and does not explicitly take the minimum row distance \(\rho\) into account, which may not be best suited for small datasets (e.g., on CIFAR-10 with \(\ell=8\), their method yielded two identical rows). Other methods scale poorly with the number of classes (Bai et al., 2016; Escalera et al., 2008). Some are more suitable for creating balanced dense columns (Zhang et al., 2009; Rodriguez et al., 2018) while others focus on sparse columns (Pujol et al., 2006). Limitations of finding informed assignments.Designing problem-dependent codebooks from scratch is naturally more flexible than only assigning classes to predefined ones. Objective scores can be optimized more freely when the codebook itself is not fixed like in predefined codebooks. However, predefined codebooks can have favorable properties like fast decoding algorithms, hence it is important to be able to find informed assignments for them. We use our class-codeword score \(\left\lVert\mathbf{D}_{\mathrm{cls}}-\mathbf{PD}_{\mathbf{M}}\mathbf{P}^{\top} \right\rVert_{\mathrm{F}}\) mainly to demonstrate the superiority of similarity-preserving assignments (Section 4.1.3). 
One could also employ this score as a surrogate to control the difficulty of subproblems, and directly minimize it on a given codebook to find an optimal similarity-preserving assignment. However, finding this optimum corresponds to solving a weighted graph-matching problem, which does not have a known efficient algorithm (Umeyama, 1988). Instead, one could settle for assignments with a low (but possibly suboptimal) score. We exemplify this using a local search on a CIFAR-100 codebook (App. E). An exception where our score is constant and assignments are less impactful is in equidistant codebooks (e.g., Hadamard, OVA, OVO; see Section 4.1.1). This suggests that equidistant codebooks are perhaps more suitable when no class semantics are available. They can also be expected to yield smaller variation (see Figure 1). See also James and Hastie (1998) who linked such codebooks to Bayes optimality. As a downside, these codebooks must be wide (\(\ell\geq K\)), which is unacceptable in many cases such as extreme classification. Greedy assignment policies.After submitting our paper, we became aware of two recent works closely related to ours that also improve the performance of a given codebook using codeword-to-class assignments. McVay (2020) exploits a sparse class-similarity matrix to greedily assign similar codewords to similar classes. Wan et al. (2022) employ ECC for adversarial learning, by altering a Hadamard codebook (to break its equidistance property) and using a confusion matrix to greedily assign _dissimilar_ codewords to similar classes (in contrast to our policy; see the discussion on this controversy above). Both these works focus on specific greedy assignment policies for specific codebooks. We, on the other hand, extensively test our hypotheses on many codebooks and demonstrate the superiority of similarity-preserving assignments over similarity-breaking ones in traditional classification settings. We exhaustively evaluate _all_ possible assignments in several codebooks on three small datasets (see Figure 2 and App. D); and also evaluate different greedy assignment policies on larger datasets (see Section 4.2 and App. E).
DS also acknowledges the support of Schmidt Career Advancement Chair in AI. Finally, we thank the Control Robotics & Machine Learning (CRML) Lab at the Technion for their support.
2302.12648
New iterative algorithms for estimation of item functioning
When the item functioning of multi-item measurement is modeled with three or four-parameter models, parameter estimation may become challenging. Effective algorithms are crucial in such scenarios. This paper explores innovations to parameter estimation in generalized logistic regression models, which may be used in item response modeling to account for guessing/pretending or slipping/dissimulation and for the effect of covariates. We introduce a new implementation of the EM algorithm and propose a new algorithm based on the parametrized link function. The two novel iterative algorithms are compared to existing methods in a simulation study. Additionally, the study examines software implementation, including the specification of initial values for numerical algorithms and asymptotic properties with an estimation of standard errors. Overall, the newly proposed algorithm based on the parametrized link function outperforms other procedures, especially for small sample sizes. Moreover, the newly implemented EM algorithm provides additional information regarding respondents' inclination to guess or pretend and slip or dissimulate when answering the item. The study also discusses applications of the methods in the context of the detection of differential item functioning. Methods are demonstrated using real data from psychological and educational assessments.
Adéla Hladká, Patrícia Martinková, Marek Brabec
2023-02-24T14:14:44Z
http://arxiv.org/abs/2302.12648v2
# Parameter estimation in generalised logistic model with application to DIF detection ###### Abstract This paper proposes innovations to parameter estimation in a generalised logistic regression model in the context of detecting differential item functioning in multi-item measurements. The two newly proposed iterative algorithms are compared with existing methods in a simulation study, and their use is demonstrated in a real data example. Additionally the study examines software implementation including specification of initial values for iterative algorithms, and asymptotic properties with estimation of standard errors. Overall, the proposed methods gave comparable results to existing ones and were superior in some scenarios. ## 1 Introduction Logistic regression (e.g., Agresti, 2010) is one of the most popular tools for describing item functioning in multi-item measurements. This method might be used in various contexts, including educational measurements, admission tests, cognitive assessments, and other health-related inventories. However, its broad applications are not limited to behavioural disciplines. In the context of psychometrics, logistic regression can be seen as a score-based counterpart to the 2 Parameter Logistic (PL) Item Response Theory (IRT) model (Birnbaum, 1968), since in contrast to this class of generalised linear mixed effect models, this method uses an observed estimate of the underlying latent trait. Furthermore, logistic regression has become a widely used method for identifying between-group differences on item level when responding to multi-item measurements (Swaminathan & Rogers, 1990). The phenomenon which is known as Differential Item Functioning (DIF) indicates whether responses to an item vary for respondents with the same level of an underlying latent trait but from different social groups (e.g., defined by gender, age, or socio-economic status). In this vein, DIF detection is essential for a deeper understanding of group differences, for assessing effectiveness of various treatments, or for uncovering potential unfairness in educational tests. Beyond the logistic regression, various psychometrics and statistical methods have been proposed for an important task of DIF identification which are still being studied extensively (Schneider, Strobl, Zeileis, & Debelak, 2021; Schauberger & Tutz, 2016; Paek & Fukuhara, 2015; Suh & Bolt, 2011). A natural extension of the logistic regression model to describe item functioning is a generalised logistic regression model, which may account for the possibility that an item can be correctly answered or endorsed without the necessary knowledge or trait. In this case, the model is extended by including a parameter defining a lower asymptote of the probability curve, which may be larger than zero. Similarly, the model can consider the possibility that an item is incorrectly answered or opposed by a respondent with a high level of a certain trait due to issues such as inattention or lack of time. That is, the model includes an upper asymptote of the probability curve which may be lower than one. Analogous to the logistic regression model's being a counterpart to the 2PL IRT model, generalised logistic regression models can be seen as score-based counterparts to 3-4PL IRT models (Birnbaum, 1968; Barton & Lord, 1981). The estimation in the logistic regression model is a straightforward procedure, but including additional parameters in this model makes it more statistically and computationally challenging and demanding. 
Therefore, this article examines innovations in the item parameter estimation for the generalised logistic regression model in the context of DIF detection. This work proposes novel iterative algorithms and compares the newly proposed methods to existing ones in a simulation study. To begin, Section 2 introduces generalised logistic regression models, examining the estimation techniques. This section provides a detailed description of two existing methods for parameter estimation (Nonlinear Least Squares (NLS) and the Maximum Likelihood (ML) method) and their application to fitting a generalised logistic regression model. Furthermore, this study proposes a novel implementation of the Expectation-Maximisation (EM) algorithm and a new approach based on a Parametric Link Function (PLF). Additionally, this section provides asymptotic properties of the estimates, an estimation of standard errors, and a software implementation including a specification of starting values in iterative algorithms. Subsequently, Section 3 describes the design and results of the simulation study. To illustrate differences and challenges between the existing and newly proposed methods in practice, this work provides a real data analysis of an anxiety measure in Section 4. Finally, Section 5 contains the discussion and concluding remarks.

## 2 Methodology

### Generalised logistic model for item functioning

Generalised logistic regression models are extensions of the logistic regression model which may account for the possibility of guessing or inattention when answering an item. The _simple 4PL model_ describes the functioning of item \(i\), meaning the probability of endorsing item \(i\) by respondent \(p\), by introducing four parameters: \[\pi_{pi}=\mathrm{P}(Y_{pi}=1|X_{p})=c_{i}+(d_{i}-c_{i})\ \frac{\exp(b_{i0}+b_{i1}X_{p})}{1+\exp(b_{i0}+b_{i1}X_{p})}, \tag{1}\] with \(X_{p}\) being an observed trait of respondent \(p\).

**Parameter interpretation.** All four parameters have an intuitive interpretation: The parameters \(c_{i}\) and \(d_{i}\) are the lower and upper asymptotes of the probability sigmoid function \(\pi_{pi}(x)\) since \[\lim_{x\rightarrow-\infty}\pi_{pi}(x)=c_{i}\quad\text{and}\quad\lim_{x\rightarrow\infty}\pi_{pi}(x)=d_{i},\] where \(c_{i}\in[0,1],d_{i}\in[0,1]\) and \(c_{i}<d_{i}\) if \(b_{i1}>0\) and \(c_{i}>d_{i}\) otherwise. Evidently, with \(c_{i}=0\) and \(d_{i}=1\), this model recovers a standard logistic regression for item \(i\). In the context of psychological and health-related assessments, the asymptotes \(c_{i}\) and \(d_{i}\) may represent reluctance to admit difficulties due to social norms. In educational testing, parameter \(c_{i}\) can be interpreted as the probability that the respondents guessed the correct answer without possessing the necessary knowledge \(X_{p}\), also known as a pseudo-guessing parameter. On the other hand, \(1-d_{i}\) can be viewed as the probability that respondents were inattentive while their knowledge \(X_{p}\) was sufficient (Hladka & Martinkova, 2020) or as a lapse-rate (Kingdom & Prins, 2016). Next, parameter \(b_{i0}\) is a shift parameter, a midpoint between the two asymptotes \(c_{i}\) and \(d_{i}\), related to the difficulty or easiness of item \(i\). Finally, parameter \(b_{i1}\) is linked to the slope of the sigmoid curve \(\pi_{pi}(x)\), which is also called the discrimination of the respective item.

**Adding covariates, group-specific 4PL model.** The simple model (1) can be further extended by incorporating additional respondents' characteristics.
**Adding covariates, group-specific 4PL model.** The simple model (1) can be further extended by incorporating additional respondents' characteristics. That is, instead of using a single variable \(X_{p}\) to describe item functioning, a vector of covariates \(\mathbf{X}_{p}=(1,X_{p1},\ldots,X_{pk})\), \(p=1,\ldots,n\), is involved, including the original observed trait and an intercept term. This process produces extra parameters \(\mathbf{b}_{i}=(b_{i0},\ldots,b_{ik})\). Beyond this, even the asymptotes may depend on respondents' characteristics \(\mathbf{Z}_{p}=(1,Z_{p1},\ldots,Z_{pj})\), \(p=1,\ldots,n\), which are not necessarily the same as \(\mathbf{X}_{p}\). Thus, the general _covariate-specific 4PL model_ takes the form \[\pi_{pi}=\mathrm{P}(Y_{pi}=1|\mathbf{X}_{p},\mathbf{Z}_{p})=\mathbf{Z}_{p}\mathbf{c}_{i}+(\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i})\ \frac{\exp(\mathbf{X}_{p}\mathbf{b}_{i})}{1+\exp(\mathbf{X}_{p}\mathbf{b}_{i})}, \tag{2}\] where \(\mathbf{c}_{i}=(c_{i0},\ldots,c_{ij})\) and \(\mathbf{d}_{i}=(d_{i0},\ldots,d_{ij})\) are the related asymptote parameters for item \(i\). As a special case of the covariate-specific 4PL model (2), an additional single binary covariate \(G_{p}\) might be considered. This grouping variable describes a respondent's membership in a social group (\(G_{p}=0\) for the reference group and \(G_{p}=1\) for the focal group). In other words, this special case assumes \(\mathbf{X}_{p}=(1,X_{p},G_{p})\) and \(\mathbf{Z}_{p}=(1,G_{p})\), which reduces the covariate-specific 4PL model (2) to a group-specific form: \[\begin{aligned}\pi_{pi}=\mathrm{P}(Y_{pi}=1|X_{p},G_{p})&=c_{i}+c_{i\text{DIF}}G_{p}\\ &\quad+(d_{i}-d_{i\text{DIF}}G_{p}-c_{i}-c_{i\text{DIF}}G_{p})\ \frac{\exp(b_{i0}+b_{i1}X_{p}+b_{i2}G_{p}+b_{i3}G_{p}X_{p})}{1+\exp(b_{i0}+b_{i1}X_{p}+b_{i2}G_{p}+b_{i3}G_{p}X_{p})}.\end{aligned} \tag{3}\] The _group-specific 4PL model_ (3) can be used for testing between-group differences on the item level with a DIF analysis (Hladka & Martinkova, 2020). In this model, \(X_{p}\) is an observed variable describing the measured trait of the respondent, such as anxiety, fatigue, quality of life, or math ability, here called the _matching criterion_. In the context of the logistic regression method for DIF detection, the total test score is typically used as the matching criterion (Swaminathan & Rogers, 1990). Other options for the matching criterion include a pre-test score, a score on another test measuring the same construct, or an estimate of the latent trait provided by an IRT model. ### Estimation of item parameters Numerous algorithms are available to estimate item parameters in the covariate-specific 4PL model (2). First, this section describes two methods which may be directly implemented in existing software: the NLS method and the ML method. This study discusses the asymptotic properties of the estimates. Next, the study introduces two newly proposed iterative algorithms, which might improve implementation of the computationally demanding ML method: the EM algorithm inspired by the work of Dinse (2011) and an iterative algorithm based on PLF.
#### 2.2.1 Nonlinear least squares The parameter estimates of the covariate-specific 4PL model (2) can be determined using the NLS method (Dennis, Gay, & Welsch, 1981), which is based on minimisation of the Residual Sum of Squares (RSS) of item \(i\) with respect to the item parameters \((\mathbf{b}_{i},\mathbf{c}_{i},\mathbf{d}_{i})\): \[\text{RSS}_{i}(\mathbf{b}_{i},\mathbf{c}_{i},\mathbf{d}_{i})=\sum_{p=1}^{n}\left[Y_{pi}-\pi_{pi}\right]^{2}=\sum_{p=1}^{n}\left[Y_{pi}-\mathbf{Z}_{p}\mathbf{c}_{i}-(\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i})\frac{\exp(\mathbf{X}_{p}\mathbf{b}_{i})}{1+\exp(\mathbf{X}_{p}\mathbf{b}_{i})}\right]^{2}, \tag{4}\] where \(n\) is the number of respondents. Since the criterion function \(\text{RSS}_{i}(\mathbf{b}_{i},\mathbf{c}_{i},\mathbf{d}_{i})\) is continuously differentiable with respect to the item parameters \((\mathbf{b}_{i},\mathbf{c}_{i},\mathbf{d}_{i})\), the minimiser can be obtained by setting the gradient to zero. Thus, the minimisation process involves a calculation of the first partial derivatives with respect to the item parameters \((\mathbf{b}_{i},\mathbf{c}_{i},\mathbf{d}_{i})\) and finding a solution of the relevant nonlinear estimating equations (e.g., van der Vaart, 1998, Chapter 5). Since the asymptotes \(\mathbf{Z}_{p}\mathbf{c}_{i}\) and \(\mathbf{Z}_{p}\mathbf{d}_{i}\) represent probabilities, it is necessary to ensure that these expressions are kept in the interval \([0,1]\), which is accomplished using numerical approaches. The asymptotic properties of the NLS estimator, such as consistency and asymptotic distribution, can be derived under the classical set of regularity conditions (e.g., van der Vaart, 1998, Theorems 5.41 and 5.42; see also Appendix A.1). This study proposes a sandwich estimator (A1) which can be used as a natural estimate of the asymptotic variance of the NLS estimate. #### 2.2.2 Maximum likelihood The second option for estimating item parameters in the covariate-specific 4PL model (2) is the ML method (e.g., Ren & Xia, 2019). Using the notation \(\phi_{pi}=\frac{\exp(\mathbf{X}_{p}\mathbf{b}_{i})}{1+\exp(\mathbf{X}_{p}\mathbf{b}_{i})}\), the corresponding likelihood function for item \(i\) has the following form: \[L_{i}(\mathbf{b}_{i},\mathbf{c}_{i},\mathbf{d}_{i})=\prod_{p=1}^{n}\left[\mathbf{Z}_{p}\mathbf{c}_{i}+(\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i})\phi_{pi}\right]^{Y_{pi}}\left[1-\mathbf{Z}_{p}\mathbf{c}_{i}-(\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i})\phi_{pi}\right]^{1-Y_{pi}},\] and the log-likelihood function is then given by \[\begin{aligned}l_{i}(\mathbf{b}_{i},\mathbf{c}_{i},\mathbf{d}_{i})=\sum_{p=1}^{n}\Big\{&Y_{pi}\log\left(\mathbf{Z}_{p}\mathbf{c}_{i}+(\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i})\phi_{pi}\right)\\ &+(1-Y_{pi})\log\left(1-\mathbf{Z}_{p}\mathbf{c}_{i}-(\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i})\phi_{pi}\right)\Big\}.\end{aligned}\] The parameter estimates are obtained by maximisation of the log-likelihood function. Thus, this study proceeds similarly to the logistic regression model, except for a larger dimension of the parametric space. To find the maximiser of the log-likelihood function \(l_{i}(\mathbf{b}_{i},\mathbf{c}_{i},\mathbf{d}_{i})\), the first partial derivatives are set to zero and these so-called likelihood equations must be solved. However, the solution of this system of nonlinear equations cannot be derived algebraically and needs to be found numerically using a suitable iterative process.
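As an illustration of such numerical estimation, the following R sketch fits the simple 4PL model (1) for a single item both by minimising the RSS criterion (4) and by maximising the log-likelihood, using optim() with box constraints that keep the asymptotes in \([0,1]\). The function name, starting values, and choice of optimiser are ours; the authors' implementation (described later in the Implementation and software section) uses nls() with the "port" algorithm for NLS and optim() with "L-BFGS-B" for ML.

```r
# Fit the simple 4PL model (1) for one item by NLS (criterion (4)) and by ML.
# Parameter vector layout: p[1] = b0, p[2] = b1, p[3] = c, p[4] = d.
fit_item_4pl <- function(y, x, start = c(0, 1, 0.05, 0.95)) {
  prob <- function(p) p[3] + (p[4] - p[3]) * plogis(p[1] + p[2] * x)

  rss <- function(p) sum((y - prob(p))^2)          # residual sum of squares (4)
  negloglik <- function(p) {                       # minus the log-likelihood
    pr <- pmin(pmax(prob(p), 1e-10), 1 - 1e-10)    # guard against log(0)
    -sum(y * log(pr) + (1 - y) * log(1 - pr))
  }

  lower <- c(-Inf, -Inf, 0, 0)                     # asymptotes constrained to [0, 1]
  upper <- c( Inf,  Inf, 1, 1)
  list(nls = optim(start, rss,       method = "L-BFGS-B", lower = lower, upper = upper)$par,
       ml  = optim(start, negloglik, method = "L-BFGS-B", lower = lower, upper = upper)$par)
}
```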
Using van der Vaart's (1998) Theorems 5.41 and 5.42, consistency and asymptotic normality can be shown for the ML estimator, see Appendix A.2. Additionally, the estimate of the asymptotic variance of the item parameters is an inverse of the observed information matrix (A2). #### 2.2.3 EM algorithm The ML method may be computationally demanding, and iterative algorithms might help in such situations. Inspired by the work of Dinse (2011), this study adopts a version of the EM algorithm (Dempster, Laird, & Rubin, 1977) for parameter estimation in the covariate-specific 4PL model (2). The original problem can be reformulated using latent variables which describe the hypothetical response status of test-takers (Dinse, 2011). In this study's setting, the work considers four mutually exclusive latent variables (\(W_{pi1}\), \(W_{pi2}\), \(W_{pi3}\), \(W_{pi4}\)), where \(W_{pij}=1\) indicates that respondent \(p\) belongs to category \(j=1,\ldots,4\) for item \(i\), whereas \(W_{pij}=0\) indicates that the respondent does not belong to this category. In the context of educational, psychological, health-related, or other types of multi-item measurement, the four categories can be interpreted as follows: Categories 1 and 2 indicate whether a respondent who responded correctly to item \(i\) or endorsed it (i.e., \(Y_{pi}=1\)) was determined to do so (\(W_{pi1}=1\), e.g., the respondent guessed the correct answer while their knowledge or ability was insufficient) or not (\(W_{pi2}=1\), e.g., had sufficient knowledge or ability to answer correctly and did not guess). On the other hand, Categories 3 and 4 indicate whether the respondent who did not respond correctly or did not endorse the item (i.e., \(Y_{pi}=0\)) was prone to do so (\(W_{pi3}=1\), e.g., did not have sufficient knowledge or ability) or not (\(W_{pi4}=1\), e.g., answered incorrectly due to another reason such as inattention or lack of time). Thus, the observed indicator \(Y_{pi}\) and its complement \(1-Y_{pi}\) can be rewritten as \(Y_{pi}=W_{pi1}+W_{pi2}\) and \(1-Y_{pi}=W_{pi3}+W_{pi4}\) (Figure 1). Let \(\mathbf{Z}_{p}\mathbf{c}_{i}\) be the regressor-based probability that the respondent was determined to respond to item \(i\) correctly or endorse it (Category 1), and let \(\mathbf{Z}_{p}\mathbf{d}_{i}\) be the regressor-based probability that the respondent was determined or prone to respond correctly or endorse item \(i\) (Categories 1-3). Then \(\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i}\) gives the regressor-based probability that the respondent was not determined but was prone to respond correctly (Categories 2 and 3). Further, we denote by \(\phi_{pi}\) and \(1-\phi_{pi}\) the probabilities of answering the given item correctly (Category 2) and incorrectly (Category 3), respectively, depending on the regressors \(\mathbf{X}_{p}\). Finally, the probability that the respondent did not respond correctly and was not prone to do so is given by \(1-(\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i})-\mathbf{Z}_{p}\mathbf{c}_{i}=1-\mathbf{Z}_{p}\mathbf{d}_{i}\) (Category 4).
In summary, the expected values of the latent variables are then given by the following terms \[\mathbf{Z}_{p}\mathbf{c}_{i},\ \ (\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i})\phi_{pi},\ \ (\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i})(1-\phi_{pi}),\ \ 1-\mathbf{Z}_{p}\mathbf{d}_{i},\] and the probability of correct response or endorsement is given by \[\begin{aligned}\mathrm{P}(Y_{pi}=1|\mathbf{X}_{p})&=\mathrm{P}(W_{pi1}+W_{pi2}=1|\mathbf{X}_{p})=\mathrm{P}(W_{pi1}=1|\mathbf{X}_{p})+\mathrm{P}(W_{pi2}=1|\mathbf{X}_{p})\\ &=\mathbf{Z}_{p}\mathbf{c}_{i}+(\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i})\phi_{pi},\end{aligned}\] which under the logistic model \(\phi_{pi}=\frac{\exp(\mathbf{X}_{p}\mathbf{b}_{i})}{1+\exp(\mathbf{X}_{p}\mathbf{b}_{i})}\) produces the covariate-specific 4PL model (2). Using the setting of the latent variables, the corresponding log-likelihood function for item \(i\) takes the following form: \[\begin{aligned}l_{i}^{\text{EM}}=&\sum_{p=1}^{n}\left[W_{pi2}\log\left(\phi_{pi}\right)+W_{pi3}\log\left(1-\phi_{pi}\right)\right]\\ &+\sum_{p=1}^{n}\left[W_{pi1}\log\left(\mathbf{Z}_{p}\mathbf{c}_{i}\right)+W_{pi4}\log\left(1-\mathbf{Z}_{p}\mathbf{d}_{i}\right)+\left(W_{pi2}+W_{pi3}\right)\log\left(\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i}\right)\right]\\ =&\ l_{i1}^{\text{EM}}+l_{i2}^{\text{EM}}.\end{aligned}\] Figure 1: Latent variables for EM algorithm The log-likelihood function \(l_{i1}^{\text{EM}}\) includes only parameters \(\mathbf{b}_{i}\) and regressors \(\mathbf{X}_{p}\), whereas the log-likelihood function \(l_{i2}^{\text{EM}}\) incorporates only parameters related to the asymptotes of the sigmoid function and includes only regressors \(\mathbf{Z}_{p}\). Notably, the log-likelihood function \(l_{i1}^{\text{EM}}\) has the form of the log-likelihood function for logistic regression. However, in contrast to the logistic regression model, in this setting it does not necessarily hold that \(W_{pi2}+W_{pi3}=1\), since the correct answer could be guessed or the respondent could be inattentive, producing \(W_{pi2}+W_{pi3}=0\). The log-likelihood function \(l_{i2}^{\text{EM}}\) takes the form of the log-likelihood for multinomial data with one trial and with the regressor-based probabilities \(\mathbf{Z}_{p}\mathbf{c}_{i}\), \(\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i}\), and \(1-\mathbf{Z}_{p}\mathbf{d}_{i}\). The EM algorithm estimates item parameters in two steps: expectation and maximisation. These two steps are repeated until the convergence criterion is met, for example, until the change in the log-likelihood is lower than a predefined value. Expectation. At the E-step, conditionally on the item responses \(Y_{pi}\) and the current parameter estimate \((\widehat{\mathbf{b}}_{i},\widehat{\mathbf{c}}_{i},\widehat{\mathbf{d}}_{i})\), the estimates of the latent variables are calculated as their expected values: \[\begin{aligned}\widehat{W}_{pi1}&=\frac{\mathbf{Z}_{p}\widehat{\mathbf{c}}_{i}Y_{pi}}{\mathbf{Z}_{p}\widehat{\mathbf{c}}_{i}+(\mathbf{Z}_{p}\widehat{\mathbf{d}}_{i}-\mathbf{Z}_{p}\widehat{\mathbf{c}}_{i})\widehat{\phi}_{pi}}, &\widehat{W}_{pi2}&=Y_{pi}-\widehat{W}_{pi1},\\ \widehat{W}_{pi4}&=\frac{\left(1-\mathbf{Z}_{p}\widehat{\mathbf{d}}_{i}\right)(1-Y_{pi})}{1-\mathbf{Z}_{p}\widehat{\mathbf{c}}_{i}-(\mathbf{Z}_{p}\widehat{\mathbf{d}}_{i}-\mathbf{Z}_{p}\widehat{\mathbf{c}}_{i})\widehat{\phi}_{pi}}, &\widehat{W}_{pi3}&=1-Y_{pi}-\widehat{W}_{pi4}.\end{aligned} \tag{5}\]
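The E-step expectations in (5) translate directly into vectorised code. Below is a minimal R sketch for the simple 4PL model (1), where the asymptotes are scalars; the function name `e_step` and the variable names are ours, and this is not the authors' implementation.

```r
# One E-step of the EM algorithm (Equation (5)) for the simple 4PL model:
# given responses y, matching criterion x, and current estimates b0, b1, c, d,
# return the expected latent category memberships W1, ..., W4.
e_step <- function(y, x, b0, b1, c, d) {
  phi <- plogis(b0 + b1 * x)          # current logistic part
  p   <- c + (d - c) * phi            # current probability of a correct response
  W1  <- c * y / p                    # correct response, determined (e.g., guessed)
  W2  <- y - W1                       # correct response, explained by the trait
  W4  <- (1 - d) * (1 - y) / (1 - p)  # incorrect response, determined (e.g., inattention)
  W3  <- (1 - y) - W4                 # incorrect response, explained by the trait
  data.frame(W1, W2, W3, W4)
}
```

In the subsequent M-step, these expected memberships play the role of fractional response counts when maximising \(l_{i1}^{\text{EM}}\) and \(l_{i2}^{\text{EM}}\).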
Maximisation. At the M-step, conditionally on the current estimates of the latent variables \(\widehat{W}_{pi2}\) and \(\widehat{W}_{pi3}\), the estimates of parameters \(\mathbf{b}_{i}\) maximise the log-likelihood function \(l_{i1}^{\text{EM}}\). The estimates \(\widehat{\mathbf{c}}_{i}\) and \(\widehat{\mathbf{d}}_{i}\) are given by a maximisation of the log-likelihood function \(l_{i2}^{\text{EM}}\) conditionally on the current estimates of the latent variables \(\widehat{W}_{pi1}\), \(\widehat{W}_{pi2}\), \(\widehat{W}_{pi3}\), and \(\widehat{W}_{pi4}\). The EM algorithm is designed to obtain the ML estimates of the item parameters, so the estimates have the same asymptotic properties as described above. #### 2.2.4 Parametric link function In this study's setting, the covariate-specific 4PL model (2) can be viewed as a generalised linear model with a PLF of a known form, \[g(\mu_{pi};\mathbf{c}_{i},\mathbf{d}_{i})=\log\left(\frac{\mu_{pi}-\mathbf{Z}_{p}\mathbf{c}_{i}}{\mathbf{Z}_{p}\mathbf{d}_{i}-\mu_{pi}}\right), \tag{6}\] where the parameters \(\mathbf{c}_{i}\) and \(\mathbf{d}_{i}\) are unknown and may depend on regressors \(\mathbf{Z}_{p}\). Subsequently, the mean function is determined by \(\mu_{pi}=\pi_{pi}\) as given by (2) with a linear predictor \(\mathbf{X}_{p}\mathbf{b}_{i}\). Keeping this setting in mind, this study proposes a new two-stage algorithm to estimate item parameters using the PLF (6), which involves repeating two steps until the convergence criterion is fulfilled. Step one. First, conditionally on the current estimates \(\widehat{\mathbf{c}}_{i}\) and \(\widehat{\mathbf{d}}_{i}\) of the PLF, the estimates of parameters \(\mathbf{b}_{i}\) maximise the following log-likelihood function: \[\begin{aligned}l_{i1}^{\text{PL}}(\mathbf{b}_{i}|\widehat{\mathbf{c}}_{i},\widehat{\mathbf{d}}_{i})=\sum_{p=1}^{n}\Big\{&Y_{pi}\log(\mathbf{Z}_{p}\widehat{\mathbf{c}}_{i}+(\mathbf{Z}_{p}\widehat{\mathbf{d}}_{i}-\mathbf{Z}_{p}\widehat{\mathbf{c}}_{i})\phi_{pi})\\ &+(1-Y_{pi})\log(1-\mathbf{Z}_{p}\widehat{\mathbf{c}}_{i}-(\mathbf{Z}_{p}\widehat{\mathbf{d}}_{i}-\mathbf{Z}_{p}\widehat{\mathbf{c}}_{i})\phi_{pi})\Big\}.\end{aligned}\] The log-likelihood function \(l_{i1}^{\text{PL}}(\mathbf{b}_{i}|\widehat{\mathbf{c}}_{i},\widehat{\mathbf{d}}_{i})\) has a similar form to the log-likelihood function \(l_{i}(\mathbf{b}_{i},\mathbf{c}_{i},\mathbf{d}_{i})\) used in the ML method. However, the parameters \(\mathbf{c}_{i}\) and \(\mathbf{d}_{i}\) are here replaced by their current estimates, \(\widehat{\mathbf{c}}_{i}\) and \(\widehat{\mathbf{d}}_{i}\). Step two. Next, estimates \(\widehat{\mathbf{c}}_{i}\) and \(\widehat{\mathbf{d}}_{i}\) of the PLF (6) are calculated conditionally on the current estimates \(\widehat{\mathbf{b}}_{i}\) as the arguments of the maxima of the following log-likelihood function: \[\begin{aligned}l_{i2}^{\text{PL}}(\mathbf{c}_{i},\mathbf{d}_{i}|\widehat{\mathbf{b}}_{i})=\sum_{p=1}^{n}\Big\{&Y_{pi}\log(\mathbf{Z}_{p}\mathbf{c}_{i}+(\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i})\widehat{\phi}_{pi})\\ &+(1-Y_{pi})\log(1-\mathbf{Z}_{p}\mathbf{c}_{i}-(\mathbf{Z}_{p}\mathbf{d}_{i}-\mathbf{Z}_{p}\mathbf{c}_{i})\widehat{\phi}_{pi})\Big\}.\end{aligned}\] Again, the parameters \(\mathbf{b}_{i}\) are replaced by their estimates \(\widehat{\mathbf{b}}_{i}\), and \(\phi_{pi}\) is thus replaced by \(\widehat{\phi}_{pi}\). In summary, the division into the two sets of parameters makes the algorithm based on PLF easy to implement in the R software and able to take advantage of its existing functions.
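The following is a minimal R sketch of this two-stage iteration for the simple 4PL model (1), alternating the two conditional maximisations with optim(); the function name, starting values, and the use of optim() in step one are ours, whereas the authors' implementation uses glm() with a modified logit link for step one (see the Implementation and software section below).

```r
# Two-stage estimation for the simple 4PL model (1): step one updates (b0, b1) given
# the asymptotes, step two updates (c, d) given (b0, b1). Both steps maximise the same
# log-likelihood, so the iteration targets the ML estimates. Illustrative sketch only.
fit_two_stage <- function(y, x, start = c(b0 = 0, b1 = 1, c = 0.05, d = 0.95),
                          tol = 1e-6, maxit = 2000) {
  negll <- function(b0, b1, c, d) {
    p <- c + (d - c) * plogis(b0 + b1 * x)
    p <- pmin(pmax(p, 1e-10), 1 - 1e-10)
    -sum(y * log(p) + (1 - y) * log(1 - p))
  }
  par <- start
  for (it in seq_len(maxit)) {
    old <- par
    # Step one: update b0, b1 with the asymptotes fixed at their current estimates
    par[1:2] <- optim(par[1:2], function(b) negll(b[1], b[2], par[3], par[4]))$par
    # Step two: update c, d with the regression part fixed, keeping them inside [0, 1]
    par[3:4] <- optim(par[3:4], function(cd) negll(par[1], par[2], cd[1], cd[2]),
                      method = "L-BFGS-B", lower = 0, upper = 1)$par
    if (max(abs(par - old)) < tol) break   # convergence criterion
  }
  par
}
```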
Because the algorithm is designed to produce the ML estimates, their asymptotic properties are the same as described above. ### Implementation and software For all analyses, the R software, version 4.1 (R Core Team, 2022), was used. The NLS method was implemented using the base nls() function and the "port" algorithm (Gay, n.d.). The sandwich estimator (A1) of the asymptotic covariance matrix was computed using the calculus package (Guidotti, 2022). The ML estimation was performed with the base optim() function and the "L-BFGS-B" algorithm (Byrd, Lu, Nocedal, & Zhu, 1995). The EM algorithm implements (5) directly in the expectation step, while the maximisation step uses the base glm() function and the multinom() function from the nnet package (Venables & Ripley, 2002). Next, step one of the newly proposed algorithm based on PLF is implemented with the base glm() function using a modified logit link which includes the asymptote parameters. The estimation of the asymptote parameters in step two is conducted using the base optim() function. The maximum number of iterations was set to 2,000 for all four methods, and the convergence criterion was set to \(10^{-6}\) when possible. Initial values. Starting values for item parameters were calculated as follows: The respondents were divided into three groups based upon tertiles of the matching criterion \(X_{p}\). Next, the asymptote parameters were estimated: \(c\) was computed as the empirical probability of endorsement among respondents whose matching criterion was smaller than its average value in the first group defined by tertiles. The asymptote \(d\) was calculated as the empirical probability of endorsement among respondents whose matching criterion was greater than its average value in the last group defined by tertiles. The slope parameter \(b_{1}\) was estimated as the difference between the mean empirical probabilities of the last and the first group, multiplied by 4. This difference is sometimes called the upper-lower index. Finally, the intercept \(b_{0}\) was calculated as follows: First, a centre point between the asymptotes was computed, and then we looked for the level of the matching criterion which would have corresponded to this empirical probability. Additionally, smoothing and corrections for the variability of the matching criterion were applied. ## 3 Simulation study A simulation study was performed to compare various procedures to estimate parameters in the generalised logistic regression model, including the NLS, the ML method, the EM algorithm, and the newly proposed algorithm based on PLF. Two models were considered: the simple 4PL model (1) and the group-specific 4PL model (3). ### Simulation design Data generation. To generate data with the simple 4PL model (1), the following parameters were used: \(b_{0}=0\), \(b_{1}=1.5\), \(c=0.25\), and \(d=0.9\). In the case of the group-specific 4PL model (3), additionally \(b_{2}=-1\), \(b_{3}=0.5\), \(c_{\text{DIF}}=-0.15\), and \(d_{\text{DIF}}=0.1\) were considered. Next, the matching criterion \(X_{p}\) was generated from the standard normal distribution for all respondents. Binary responses were generated from the Bernoulli distribution with the calculated probabilities based upon the chosen 4PL model, true parameters, and the matching criterion variable. The sample size was set to \(n=500\); \(1,000\); \(2,500\); and \(5,000\), i.e., \(250\); \(500\); \(1,250\); and \(2,500\) per group in the case of the group-specific 4PL model (3). Each scenario was replicated \(1,000\) times.
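The data-generating step described above can be sketched in a few lines of R; the function name `simulate_4pl` is ours, while the parameter values are the true values listed for the simple 4PL model (1).

```r
# Generate one simulated dataset under the simple 4PL model (1):
# X ~ N(0, 1) and Y ~ Bernoulli(pi(X)) with b0 = 0, b1 = 1.5, c = 0.25, d = 0.9.
simulate_4pl <- function(n, b0 = 0, b1 = 1.5, c = 0.25, d = 0.9) {
  x    <- rnorm(n)                               # matching criterion
  prob <- c + (d - c) * plogis(b0 + b1 * x)      # true response probabilities
  data.frame(x = x, y = rbinom(n, size = 1, prob = prob))
}

set.seed(1)
dat <- simulate_4pl(n = 500)   # the smallest sample size considered in the study
```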
Simulation evaluation. To compare the estimation methods, we first computed the mean and median numbers of iterations and the convergence status of the methods, meaning the percentage of converged simulation runs; the percentage of runs which crashed (caused an error when fitting, e.g., due to singularities); and the percentage of those which reached the maximum number of iterations without convergence. Next, we selected only those simulation runs for which all four estimation methods converged successfully and computed the mean parameter estimates, together with parametric confidence intervals. When confidence intervals for asymptote parameters exceeded their boundaries of 0 or 1, the confidence intervals were truncated at the boundary value. ### Simulation results Convergence status. All four methods had low percentages of simulation runs that crashed for all sample sizes in the simple 4PL model (1), but the rate was mildly increased in the group-specific 4PL model (3) for the NLS method (4.3%) and for the algorithm based on PLF (3.6%) when \(n=500\). With the increasing sample size, convergence issues disappeared. The EM algorithm struggled to converge within the predefined number of iterations, especially for small sample sizes in both models. Additionally, the method based on PLF reached the maximum limit of 2,000 iterations only in a small percentage of simulation runs when smaller sample sizes were considered in the group-specific 4PL model (Table 1). Number of iterations. Furthermore, the methods differed in the number of iterations needed until the estimation process successfully ended. The EM algorithm yielded the largest mean and median numbers of iterations, which were somewhat inflated by simulation runs that did not converge (i.e., the maximum limit of 2,000 iterations was reached). The fewest iterations were needed for the NLS method. As expected, all the methods required fewer iterations when the simple 4PL model (1) was considered than in the group-specific 4PL model (3). Beyond this, the number of iterations was decreasing with the increasing sample size in both models for all the methods except the EM algorithm, where the number of iterations was not monotone (Table 1). In the group-specific 4PL model (3) with a sample size of \(n=500\), some of the estimation procedures produced non-meaningful estimates of parameters \(b_{0}\)-\(b_{3}\) (absolute value over 100) despite successful convergence. Those 11 simulations affected the mean values significantly, so they were removed from the computation of the mean estimates and their confidence intervals for all four estimation methods. In those 11 simulations, non-meaningful estimates were obtained twice for the NLS, three times for the ML, eight times for the EM algorithm, and once for the PLF-based method. Parameter estimates. In the simple 4PL model (1), the smallest biases in the estimates of parameters \(b_{0}\) and \(b_{1}\) were obtained by the PLF-based algorithm, together with the narrowest confidence intervals, when smaller sample sizes were considered (\(n=500\) or \(n=1,000\)). Additionally, in these scenarios, the NLS method yielded slightly more biased estimates with wider confidence intervals. The precision of the estimation improved for both parameters when the sample size increased in all four methods, and the differences between estimation procedures narrowed. The precision of the estimates of the asymptote parameters \(c\) and \(d\) was similar for all four methods, and the differences between estimation approaches were small.
The NLS and the EM algorithm provided slightly wider confidence intervals for a small sample size of \(n=500\) (Figure 2, Table A1). \begin{table} \begin{tabular}{l r r r r r r r r r r} \hline \hline & \multicolumn{4}{c}{Simple 4PL model (1)} & \multicolumn{4}{c}{Group-specific 4PL model (3)} \\ \cline{2-10} Method & \multicolumn{3}{c}{Convergence status [\%]} & \multicolumn{3}{c}{Number of iterations} & \multicolumn{3}{c}{Convergence status [\%]} & \multicolumn{3}{c}{Number of iterations} \\ \cline{2-10} & Converged & Crashed & DNF & Mean & Median & Converged & Crashed & DNF & Mean & Median \\ \hline \multicolumn{10}{c}{\(n=500\)} \\ \hline NLS & 99.60 & 0.40 & 0.00 & 10.11 & 9.00 & 95.70 & 4.30 & 0.00 & 14.36 & 12.00 \\ MLE & 99.60 & 0.40 & 0.00 & 22.99 & 22.00 & 99.90 & 0.10 & 0.00 & 90.63 & 80.00 \\ EM & 84.70 & 0.00 & 15.30 & 684.83 & 361.00 & 89.30 & 0.10 & 10.60 & 627.13 & 354.00 \\ PLF & 99.80 & 0.20 & 0.00 & 43.73 & 20.00 & 95.90 & 3.60 & 0.50 & 131.48 & 35.50 \\ \hline \multicolumn{10}{c}{\(n=1,000\)} \\ \hline NLS & 100.00 & 0.00 & 0.00 & 7.98 & 7.00 & 99.60 & 0.40 & 0.00 & 11.07 & 10.00 \\ MLE & 100.00 & 0.00 & 0.00 & 21.40 & 21.00 & 100.00 & 0.00 & 0.00 & 80.25 & 75.00 \\ EM & 83.70 & 0.00 & 16.30 & 744.01 & 437.00 & 88.20 & 0.00 & 11.80 & 769.27 & 542.00 \\ PLF & 100.00 & 0.00 & 0.00 & 31.13 & 18.00 & 99.30 & 0.60 & 0.10 & 76.85 & 27.00 \\ \hline \multicolumn{10}{c}{\(n=2,500\)} \\ \hline NLS & 99.90 & 0.10 & 0.00 & 5.90 & 6.00 & 99.90 & 0.10 & 0.00 & 7.58 & 7.00 \\ MLE & 99.90 & 0.10 & 0.00 & 19.98 & 19.00 & 100.00 & 0.00 & 0.00 & 73.01 & 72.00 \\ EM & 92.60 & 0.20 & 7.20 & 634.03 & 475.50 & 90.90 & 0.00 & 9.10 & 695.99 & 499.00 \\ PLF & 100.00 & 0.00 & 0.00 & 17.77 & 16.00 & 100.00 & 0.00 & 0.00 & 38.08 & 19.50 \\ \hline \multicolumn{10}{c}{\(n=5,000\)} \\ \hline NLS & 99.90 & 0.10 & 0.00 & 5.07 & 5.00 & 99.80 & 0.20 & 0.00 & 6.02 & 6.00 \\ MLE & 100.00 & 0.00 & 0.00 & 19.21 & 19.00 & 100.00 & 0.00 & 0.00 & 69.81 & 69.00 \\ EM & 95.60 & 0.00 & 4.40 & 588.82 & 477.50 & 92.60 & 0.00 & 7.40 & 808.35 & 647.50 \\ PLF & 100.00 & 0.00 & 0.00 & 15.40 & 15.00 & 100.00 & 0.00 & 0.00 & 26.07 & 16.00 \\ \hline \hline \end{tabular} _Note._ DNF = did not finish, NLS = nonlinear least squares, MLE = maximum likelihood estimation, EM = expectation-maximisation algorithm, PLF = algorithm based on parametric link function. \end{table} Table 1: Convergence status and a number of iterations for the four estimation methods In the group-specific 4PL model (3), the PLF-based algorithm yielded the least biased estimates of parameters \(b_{0}\)-\(b_{3}\), especially for the smaller sample sizes. On the other hand, the NLS method produced the most biased estimates with somewhat wider confidence intervals in such scenarios. The ML method provided less biased estimates than the NLS, but accompanied with wider confidence intervals, even for a sample size of \(n=1,000\). Similar to the simple 4PL model (1), the differences in the precision of the parameter estimates were narrowing with the increasing sample size, and all four estimation approaches gave estimates close to the true values of the item parameters (Figure 3, Table A2). The estimates of the asymptote parameters \(c\), \(c_{\text{DF}}\), \(d\), and \(d_{\text{DF}}\) were similar for all four methods. The EM algorithm provided slightly less biased mean estimates of the asymptote parameters, but with slightly wider confidence intervals, especially for \(n=1,000\). 
Figure 3: Mean estimated parameters in the group-specific 4PL model (3) with confidence intervals with respect to sample size; horizontal lines represent true values of parameters Figure 2: Mean estimated parameters in the simple 4PL model (1) with confidence intervals with respect to sample size; horizontal lines represent true values of parameters ## 4 Real data example ### Data description This study demonstrated the estimation on a real-data example of the PROMIS Anxiety scale1 dataset. The dataset consisted of responses to 29 Likert-type questions (1 = Never, 2 = Rarely, 3 = Sometimes, 4 = Often, and 5 = Always) from 766 respondents. Additionally, the dataset included information on the respondents' age (0 = Younger than 65 and 1 = 65 and older), gender (0 = Male and 1 = Female), and education (0 = Some college or higher and 1 = High school or lower). Footnote 1: [http://www.nihpromis.org](http://www.nihpromis.org) For this work, item responses were dichotomised as follows: 0 = Never (i.e., response \(=1\) on original scale) or 1 = At least rarely (i.e., response \(\geq\) 2 on original scale). The overall level of anxiety was calculated as a standardised sum of non-dichotomized item responses. This work considered the simple 4PL model (1) and the group-specific 4PL model (3) using all four estimation methods: NLS, ML, the EM algorithm, and the algorithm based on PLF. In both models, the computed overall level of anxiety was used as the matching criterion \(X_{p}\). In the group-specific 4PL model (3), respondents' genders were included as the grouping variable \(G_{p}\). Overall, there were 369 male participants and 397 female participants. ### Analysis design The same approach used in the simulation study for computing starting values was used for the analysis of the Anxiety dataset. In the case of convergence issues, the initial values were re-calculated based on successfully converged estimates using other methods. In this study, item parameter estimates were computed and reported with their confidence intervals. Confidence intervals of the asymptote parameters were truncated at boundary values when necessary. Next, this work compared the estimation methods by calculating the differences in fitted item characteristic curves (i.e., estimated probabilities of endorsing the item) on the matching criterion for both models. Finally, likelihood ratio tests were performed to compare the two nested models (simple and group-specific) to identify the DIF for all items and all four estimation methods. Significance level of 0.05 was used for all the tests. ### Results **Simple 4PL model.** The smallest differences between the four estimation methods and fitted item characteristic curves in the simple 4PL model (1) were observed for item R8 (_"I had a racing or bounding heart"_; Figure 3(a)). The greatest differences were observed for item R29 (_"I had difficulty calming down"_; Figure 3(b)). The smallest overall differences were found between the EM algorithm and the algorithm based on PLF, whereas the greatest overall differences were noted between the NLS and the algorithm based on PLF. Beyond this, similar patterns appeared in the estimated item parameters (Figure 5, Table A3). Although the lower asymptotes were mostly estimated at 0, the upper asymptotes were often estimated below 1, suggesting a reluctance of the respondents to admit certain difficulties, such as those due to social norms. 
Figure 4: Estimated item characteristic curves for the simple 4PL model (1) Figure 5: Item parameter estimates of the Anxiety items for the simple 4PL model (1). **Group-specific 4PL model.** The smallest differences among the four estimation algorithms in the fitted item characteristic curves for the group-specific 4PL model (3) were once more observed for item R8 (_"I had a racing or bounding heart"_; Figure 5(a)). The greatest differences were noticed for item R24 (_"Many situations made me worry"_; Figure 5(b)). The smallest overall differences were found between the EM algorithm and the algorithm based on PLF. The greatest overall differences were observed between the NLS and the algorithm based on PLF, analogous to the simple 4PL model. Additionally, similar patterns were seen in the estimated item parameters (Figure 7, Table A4). **DIF detection.** Using the likelihood ratio test, the simple 4PL model (1) was rejected for item R6 (_"I was concerned about my mental health"_), item R10 (_"I had sudden feelings of panic"_), and item R12 (_"I had trouble paying attention"_) when considering at least one estimation method (i.e., these items functioned differently). While item R6 was identified as a DIF item by all four of the estimation methods (all \(p\)-values \(<0.004\)), items R10 and R12 were only identified as functioning differently with the NLS (\(p\)-value = 0.042) and with the algorithm based on PLF (\(p\)-value = 0.047), respectively. In item R6, there were no significant differences between the estimated asymptotes, and DIF was caused by different intercepts and slopes for the two genders (Figure 7). Male participants seemed to have a higher probability of being concerned about their mental health at least rarely (original response \(\geq\) 2) than female participants of the same overall anxiety level. This difference was especially apparent for those with lower levels of anxiety, whereas the differences between these two genders narrowed as the overall anxiety level increased (Figure 8). ## 5 Discussion This work explored novel approaches for estimating item parameters in the generalised logistic regression model. We described in detail two existing procedures (NLS and ML) and their applications for fitting the covariate-specific 4PL model (2). Additionally, the study proposed two iterative procedures (a procedure using the EM algorithm and a method based on PLF). With a simulation study, we demonstrated satisfactory precision of the newly proposed PLF-based procedure even for small sample sizes and when additional covariates were considered. However, these favourable properties were not observed for the NLS and ML methods, which produced either biased estimates or wide confidence intervals. On the other hand, the EM algorithm performed satisfactorily, but it sometimes failed to converge within the predefined number of iterations, so its fitting was inefficient. As the sample size increased, differences between the estimation methods vanished, and all estimates were near the true values of the item parameters. Using a real data example for the anxiety measure, we illustrated practical challenges in estimation procedures, including the specification of initial values. The smallest differences between the estimation procedures were observed for the EM algorithm and the procedure based on PLF.
Additionally, the largest dissimilarities were found for the NLS and the PLF-based method, supporting the findings of the simulation study, especially when considering a smaller sample size in the Anxiety dataset. Figure 6: Estimated item characteristic curves for the group-specific 4PL model (3) In recent decades, the topic of the parametric link function has been extensively discussed in the literature by many authors, including Basu and Rathouz (2005), Flach (2014), and Scallan, Gilchrist, and Green (1984). For example, Pregibon (1980) proposed the ML estimation of the link parameters using a weighted least squares algorithm. In the same vein, McCullagh and Nelder (1989) adapted this approach and presented an algorithm in which several models with the fixed link functions were fitted. Furthermore, Kaiser (1997) proposed a modified scoring algorithm to perform simultaneous ML estimation of all parameters. Scallan et al. (1984) proposed an iterative two-stage algorithm, building on the work of Richards (1961). In this study's approach, we examined generalised logistic regression, accounting for the possibility of guessing and inattention or lapse rate, where these features may depend upon the respondents' characteristics. The crucial part of the estimation process is specifying starting values for item parameters because these values may significantly impact the speed of the estimation process and its precision. For instance, initial values which are far from the true item parameters may lead to situations in which the estimation algorithm returns only a local extreme or even does not converge. In this work, we used an approach based on the upper-lower index, which resulted in a low rate of convergence issues with satisfactory estimation precision. However, other possible naive estimates of discrimination (and other parameters), such as a correlation between an item score and the total test score without the given item, could be considered. There were several limitations to this study, and several possible further directions for study exist. First, the simulation study was limited to two models: the simple 4PL model (1) and the group-specific 4PL model (3), both of which included only one or two covariates. The simulation study suggested that a larger sample size is required as the number of covariates increases. Second, the work considered only one set of item parameters, noting that various values of the asymptotes were especially prone to producing computational challenges. Third, this article described the NLS method as a simple approach, not accounting for the heteroscedasticity of binary data. For such data, Pearson's residuals might be more appropriate to use. This weighted form (e.g., Ritz, Baty, Streibig, & Gerhard, 2015) takes the original squares of residuals and divides them by the variance \(\pi_{pi}(1-\pi_{pi})\). Next, the RSS of item \(i\) (4) would take the following form: \[\text{RSS}_{i}(\boldsymbol{\gamma}_{i})=\sum_{p=1}^{n}\frac{\left(Y_{pi}-\pi_{pi}\right)^{2}}{\pi_{pi}\left(1-\pi_{pi}\right)}.\] However, the number of observations in the tails of the matching criterion is typically small and provides only little variability. These heavy weights would require a nearly exact fit for cases with few observations. Nevertheless, the computation of the NLS estimates demonstrated in this work was straightforward and efficient, providing sufficient precision.
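For completeness, the weighted criterion above can be sketched in R as a small modification of the unweighted RSS; the function name `wrss_4pl` is ours, and the weighting by \(\pi_{pi}(1-\pi_{pi})\) follows the formula just given.

```r
# Pearson-residual (weighted) RSS for the simple 4PL model (1):
# each squared residual is divided by the variance pi * (1 - pi).
wrss_4pl <- function(p, y, x) {
  prob <- p[3] + (p[4] - p[3]) * plogis(p[1] + p[2] * x)   # p = (b0, b1, c, d)
  prob <- pmin(pmax(prob, 1e-10), 1 - 1e-10)               # avoid division by zero
  sum((y - prob)^2 / (prob * (1 - prob)))
}

# It can be minimised in the same way as the unweighted criterion, e.g.:
# optim(c(0, 1, 0.05, 0.95), wrss_4pl, y = y, x = x,
#       method = "L-BFGS-B", lower = c(-Inf, -Inf, 0, 0), upper = c(Inf, Inf, 1, 1))
```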
Thus, this method could be useful in certain cases, such as producing an initial idea about parameter values and using these estimates as starting values for other approaches. Additionally, this study's real data example explored item functioning in a multi-item measurement related to anxiety. However, the task of parameter estimation in the presented models would also be relevant for several other situations. The latter could include data from educational measurement, where the lower asymptote may represent item guessing, and the upper asymptote may represent the lapse rate (item slipping or inattention). Moreover, the generalised logistic regression model is not limited to multi-item measurements since the class determined by Equation (2) represents a wide family of covariate-specific 4PL models. This model might be used and further extended in various study fields, including but not limited to quantitative pharmacology (Dinse, 2011); applied microbiology (Brands, Schulze Struchtrup, Stamminger, & Bockmuhl, 2020); modelling patterns of urban electricity usage (To, Lai, Lo, Lam, & Chung, 2012); and plant growth modelling (Zub, Rambaud, Bethencourt, & Brancourt-Hulmel, 2012). Therefore, estimating parameters and understanding the limitations of the methods used are crucial for a wide range of researchers and practitioners. To conclude, this study illustrated differences and challenges in fitting generalised logistic regression models using various estimation techniques. This work demonstrated the superiority of the novel implementation of the EM algorithm and the newly proposed method based on PLF over the existing NLS and ML methods. Thus, improving the estimation algorithms is critical since it could increase precision while maintaining a user-friendly implementation. ## Acknowledgement The study was funded by the Czech Science Foundation grant number 21-03658S. ## Supplementary Material Additional tables and figures, and accompanying R scripts are available at [https://osf.io/bk8a7/](https://osf.io/bk8a7/).
2308.07578
Understanding User Behavior in Volumetric Video Watching: Dataset, Analysis and Prediction
Volumetric video has emerged as a new and attractive video paradigm in recent years since it provides an immersive and interactive 3D viewing experience with six degree-of-freedom (DoF). Unlike traditional 2D or panoramic videos, volumetric videos require dense point clouds, voxels, meshes, or huge neural models to depict volumetric scenes, which results in a prohibitively high bandwidth burden for video delivery. Users' behavior analysis, especially the viewport and gaze analysis, then plays a significant role in prioritizing the content streaming within users' viewport and degrading the remaining content to maximize user QoE with limited bandwidth. Although understanding user behavior is crucial, to the best of our knowledge, there are no available 3D volumetric video viewing datasets containing fine-grained user interactivity features, not to mention further analysis and behavior prediction. In this paper, we for the first time release a volumetric video viewing behavior dataset, with a large scale, multiple dimensions, and diverse conditions. We conduct an in-depth analysis to understand user behaviors when viewing volumetric videos. Interesting findings on user viewport, gaze, and motion preference related to different videos and users are revealed. We finally design a transformer-based viewport prediction model that fuses the features of both gaze and motion, which is able to achieve high accuracy under various conditions. Our prediction model is expected to further benefit volumetric video streaming optimization. Our dataset, along with the corresponding visualization tools, is accessible at https://cuhksz-inml.github.io/user-behavior-in-vv-watching/
Kaiyuan Hu, Haowen Yang, Yili Jin, Junhua Liu, Yongting Chen, Miao Zhang, Fangxin Wang
2023-08-15T05:33:48Z
http://arxiv.org/abs/2308.07578v2
# Understanding User Behavior in Volumetric Video Watching: Dataset, Analysis and Prediction ###### Abstract. Volumetric video has emerged as a new and attractive video paradigm in recent years since it provides an immersive and interactive 3D viewing experience with six degree-of-freedom (DoF). Unlike traditional 2D or panoramic videos, volumetric videos require dense point clouds, voxels, meshes, or huge neural models to depict volumetric scenes, which results in a prohibitively high bandwidth burden for video delivery. Users' behavior analysis, especially the viewport and gaze analysis, then plays a significant role in prioritizing the content streaming within users' viewport and degrading the remaining content to maximize user QoE with limited bandwidth. Although understanding user behavior is crucial, to the best of our knowledge, there are no available 3D volumetric video viewing datasets containing fine-grained user interactivity features, not to mention further analysis and behavior prediction. In this paper, we for the first time release a volumetric video viewing behavior dataset, with a large scale, multiple dimensions, and diverse conditions. We conduct an in-depth analysis to understand user behaviors when viewing volumetric videos. Interesting findings on user viewport, gaze, and motion preference related to different videos and users are revealed. We finally design a transformer-based viewport prediction model that fuses the features of both gaze and motion, which is able to achieve high accuracy under various conditions. Our prediction model is expected to further benefit volumetric video streaming optimization. Our dataset, along with the corresponding visualization tools, is accessible at [https://cuhksz-inml.github.io/user-behavior-in-vv-watching/](https://cuhksz-inml.github.io/user-behavior-in-vv-watching/) Volumetric videos, Dataset, User Behavior Analysis
## 1. Introduction The confluence of video and the recently booming 3D representation technology embraces a new video paradigm, i.e., the _volumetric video_ (VV). Different from traditional 2D video that has mature codecs based on frames and pixels, volumetric video is still in its infant stage with various representation formats, such as point cloud (Safan et al., 2017; Wang et al., 2018), voxel (Wang et al., 2019), mesh (Wang et al., 2019), and even neural representations (Wang et al., 2019). Volumetric video is envisioned as a fundamental service that is able to facilitate various new applications such as extended reality (XR) and the Metaverse, empowering entertainment (Wang et al., 2019), healthcare (Wang et al., 2019), and education (Wang et al., 2019), etc. The global VV market is expected to reach 22.5 billion USD by 2024 (Wang et al., 2019). Unlike traditional or 360-degree videos that only provide a flat or curved 2D experience, volumetric video captures the scene and objects in 3D format, providing a six degree-of-freedom (DoF) viewing experience, including three dimensions of position (X, Y, Z) and three dimensions of orientation (yaw, pitch, roll). This new viewing paradigm revolutionizes the way we consume video content, offering an unprecedented, fully immersive and interactive experience. Such interactivity between the user and the 3D video already demonstrates great value in various fields, e.g., revealing mental activity, inferring user preference, and even identifying different users. Due to the extreme complexity in volumetric video representation, e.g., extensive points or meshes using point cloud or 3D mesh formats, or huge neural models using implicit neural representation, the size of a volumetric video is usually much larger (up to 100x) than the 2D representation under the same conditions. Thus, streaming volumetric video through the current network infrastructure becomes a key challenge. Users' behavior analysis, especially the field of view (FoV) and gaze analysis, then plays a significant role because we can prioritize the content streaming within the FoV and reduce or even ignore the content out of the FoV to maximize users' QoE with limited network transmission capacity (Wang et al., 2019). Although understanding user behavior is crucial, to the best of our knowledge, there are no available 3D volumetric video viewing datasets containing fine-grained user interactivity features. Pioneering researchers in the multimedia community have contributed some 3D datasets on objects or scenes (Dosov et al., 2017; Zhang et al., 2018), but they never focus on the analysis and understanding of user behavior in volumetric video.
Thus, an open dataset in this context is urgently needed to reveal the viewing characteristics, optimize the video streaming, and further facilitate the research in the related community. In this paper, we propose the first large-scale user behavior dataset on volumetric video viewing with rich dimensions across various scenes, including the six DoF viewport, gaze, and motion features. We next conduct a comprehensive data analysis to deeply understand the user behavior, fully capture the potential correlations among viewport, gaze, and motion trajectory, and further reveal the future viewing activity. We find that VV users exhibit distinct regions of interest and display varying movement patterns based on different scenarios and personalities. Based on our observations and findings, we conduct a pilot study on viewport adaptive 3D volumetric video streaming. We design a transformer-based model to well capture the inherent relationship between the motion and gaze, and further achieve an accurate and robust viewport prediction for video streaming optimization. The contributions of our work are summarized as follows: * We for the first time release a volumetric video viewing behavior dataset, with large scale (50 users), multiple dimensions (8 attributes), and diverse conditions (including both static and dynamic scenes, both single and multi-user activities). * We conduct an in-depth analysis to understand user behaviors when viewing volumetric videos. Interesting findings on user viewport, gaze, and motion preference related to different videos and users are revealed. * We design a transformer-based viewport prediction model that fuses the features of both gaze and motion, which is able to achieve high accuracy and strong robustness. The rest of this paper is organized as follows. Section 2 gives an overall description of the dataset, including how the data is collected as well as the video and dataset attribute description. Section 3 gives an initial visualization of the dataset, plotting headset movement and gaze direction. Section 4 introduces our analysis of user behavior in detail, and also reveals some interesting findings based on our observation. Motivated by these, Section 5 proposes a transformer-based viewport prediction for six DoF volumetric video viewing. We further give some potential applications in Section 6 and conclude this work in Section 7. ## 2. Dataset In this section, we introduce the details of our dataset regarding the collection procedure, dataset description, and user information. ### Data Collection Procedure For convenience, we select volumetric videos related to our context from FSVVD (Dosov et al., 2016), currently the most appropriate public volumetric dataset, which contains 26 volumetric videos represented by point cloud covering multiple common scenarios such as education, exercise, daily life, and entertainment. We recruit 50 volunteers to participate in this dataset collection. These volunteers are given enough time and guidance to get familiar with the 3D volumetric environment. Videos are preloaded and played through Unity1 when a volunteer is wearing a Meta Quest Pro2 headset. Participants are able to freely navigate the 3D scenes and watch the activities from any viewing angle and any position within a 5 x 5 square meter space, as required by the FSVVD video dataset.
Footnote 1: [https://unity.com/](https://unity.com/) Footnote 2: [https://www.meta.com/quest](https://www.meta.com/quest)

The VR headset has a built-in accelerometer, and we are able to easily calculate the current headset position (X, Y, Z) and the rotation of the headset (yaw, pitch, and roll). Besides, gaze information is also important as it provides more fine-grained features (Krizhevsky et al., 2015). For the gaze data collection, we rely on the built-in eye tracker in the headset with a sample rate of 144 Hz. The collected gaze data consist of 8 dimensions: 3 rotational angles for each eye, plus a confidence level for each eye. Since there are subtle differences (usually less than 3\({}^{\circ}\)) in the gaze data between the two eyes, we use the weighted average of the two eyes as the gaze in our later analysis.

### Viewer Selection

Different viewers can have quite personalized preferences on the same video content and conduct diverse behaviors. Therefore, we try our best to choose volunteers with different backgrounds, majors, hobbies, ages, genders, and familiarity levels with VR. Detailed information is listed in Table 1. Once the recording ends, the volunteers are asked to fill out a questionnaire about this information and their overall experience of watching volumetric videos.

### Video Selection

We argue that the video content has a significant impact on the viewer's behavior. A viewer's attention can change substantially when provided with different video content. To analyze the impact of video content on users, we selected 6 different scenes aiming to cover representative scenarios. Specifically, we mainly evaluate the impact of the number of actors and the movement level of the actors. We divide the movement of target actors into spatial movement (e.g., moving from one position to another) and self-movement (e.g., body movement without obvious position change). Table 2 gives the detailed taxonomy of the selected videos. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{**Gender**} & \multicolumn{2}{c|}{**Female**} & \multicolumn{2}{c|}{**Male**} \\ \cline{2-5} & \multicolumn{2}{c|}{27} & \multicolumn{2}{c|}{23} \\ \hline \multirow{2}{*}{**Age**} & 16-20 & 20-24 & 24-30 & 30+ \\ \cline{2-5} & 25 & 17 & 5 & 3 \\ \hline \multirow{2}{*}{**VR Exp (Times)**} & Never & 1-5 & 6-10 & 10+ \\ \cline{2-5} & 32 & 9 & 6 & 3 \\ \hline \multirow{2}{*}{**VV Exp (Times)**} & Never & 1-5 & 6-10 & 10+ \\ \cline{2-5} & 41 & 3 & 3 & 3 \\ \hline \end{tabular} \end{table} Table 1. User Information

### Dataset Description

Our collected dataset consists of 28 dimensions, including the frame number and time stamp of each sample, the spatial movement (the spatial coordinates along the X, Y, and Z axes) and rotational orientation (rotation angles of yaw, pitch, and roll) of the headset and the two wireless controllers, and the gaze information of both eyes with two confidence indexes.

## 3. Visualization

To help better understand our dataset and promote further study, we first give a visualization of the dataset and provide a preliminary analysis of headset movement and gaze information. We select four representative scenes for observation and subsequent analysis, i.e., pulling trolley, sweeping, cleaning whiteboard, and chatting.

### Headset Movement Trajectory

We first observe the user movement, represented by the headset movement trajectory in our dataset.
Among all the participating volunteers, we randomly select one and examine their movement trajectories across scenes. According to our observation, the values along the Z axis remain almost stable. This is because people rarely crouch down and stand up, which matches our intuition about people's behavior. Thus, we use an aerial view to better depict the trajectory. Fig. 2 shows the heatmaps of the movement trajectories across different scenes for a randomly selected user. Some interesting findings can be obtained. For 'Pulling Trolley' in Fig. 2(a) and 'Sweeping' in Fig. 2(b), the movement trajectories are relatively uniform and concentrated, indicating slow movement within a small region. This matches our finding that **for volumetric videos with large movement, viewers tend to follow the moving object and are prone to pay more attention to it**. In contrast, for 'Cleaning Whiteboard' in Fig. 2(c) and 'Chatting' in Fig. 2(d), the trajectories are more dispersed. This indicates that **for small-movement or even static scenes, viewers tend to walk around and observe the object from different angles.**

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Name** & **\#Actors** & **Spatial Movements** & **Body Movements** & **Environment Interaction** & **\#Frame** \\ \hline **Chatting** & 2 & Small & Small & - & 300 \\ \hline **Cleaning Whiteboard** & 1 & Static & Large & ✓ & 300 \\ \hline **News Interviewing** & 2 & Small & Small & - & 300 \\ \hline **Pulling Trolley** & 1 & Large & Small & ✓ & 300 \\ \hline **Presenting** & 2 & Static & Small & - & 300 \\ \hline **Sweeping** & 1 & Middle & Middle & ✓ & 300 \\ \hline \end{tabular} \end{table} Table 2. Description of the selected volumetric videos.

Figure 1. Example of used VV: 'Pulling Trolley', 'Sweeping', 'Cleaning Whiteboard', and 'Chatting'.

Figure 2. The aerial view for movement trajectory heatmap of different volumetric scenes. The lighter yellow color indicates a longer dwelling time and vice versa for the darker blue color.

### Gaze Direction

Users' gaze information is also a significant indicator of user VV interactivity. We then visualize the gaze direction in our dataset. However, different from traditional 2D video, where the gaze can simply be projected onto the video surface, in a 3D volumetric scene the starting point of the gaze changes along with the user's movement. Therefore, we need to combine the two together. Since the rotational angles returned from the headset are represented as Euler angles in degrees, for the convenience of subsequent calculation and visualization we transform the data into a rotation matrix. We convert the angles into radians to compute the viewport area, where \(\alpha\), \(\beta\), and \(\gamma\) stand for yaw, pitch, and roll, respectively. As denoted in Eq. 1, matrix \(R\) comprises the product of the rotation matrices about the yaw, pitch, and roll axes to represent the rotation of the user's headset movement and gaze movement. \[R=\begin{bmatrix}1&0&0\\ 0&\cos\gamma&-\sin\gamma\\ 0&\sin\gamma&\cos\gamma\end{bmatrix}\begin{bmatrix}\cos\beta&0&\sin\beta\\ 0&1&0\\ -\sin\beta&0&\cos\beta\end{bmatrix}\begin{bmatrix}\cos\alpha&-\sin\alpha&0\\ \sin\alpha&\cos\alpha&0\\ 0&0&1\end{bmatrix} \tag{1}\] Using the above transformation formula, we get the rotation matrices of both the gaze and the headset.
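For reference, the following is a minimal NumPy sketch of Eq. 1. The function name and the degree-to-radian handling are ours, and we assume the headset reports yaw, pitch, and roll in degrees as stated above.

```python
import numpy as np

def euler_to_rotation_matrix(yaw_deg, pitch_deg, roll_deg):
    """Build the rotation matrix of Eq. 1: R = Rx(roll) @ Ry(pitch) @ Rz(yaw)."""
    a, b, g = np.radians([yaw_deg, pitch_deg, roll_deg])  # alpha, beta, gamma
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    Ry = np.array([[ np.cos(b), 0.0, np.sin(b)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(b), 0.0, np.cos(b)]])
    Rx = np.array([[1.0, 0.0,        0.0       ],
                   [0.0, np.cos(g), -np.sin(g)],
                   [0.0, np.sin(g),  np.cos(g)]])
    return Rx @ Ry @ Rz  # same left-to-right product as in Eq. 1
```

The same routine can be applied to both the headset angles and the per-eye gaze angles, whose matrices are then composed as in Eq. 2 below.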
To transform the gaze direction from the local coordinate system of the headset to the global coordinate system, we apply the rotation matrix built from the orientation data provided by the headset, which represents the transformation from the headset's local coordinate system to the global coordinate system. This transformation can be represented as: \[R_{g}=R_{h}*R_{e} \tag{2}\] where \(R_{g}\) represents the global gaze orientation matrix, and \(R_{h}\) and \(R_{e}\) represent the headset and local gaze (eye) orientation matrices, respectively. The combination of gaze direction and the headset's movement trajectory is visualized in Fig. 3. The blue line indicates the user's motion trajectory and the red arrows attached to the blue line indicate the gaze direction at the corresponding position. Not surprisingly, we find that **users' gaze often follows the activity of the object inside the video**. Specifically, the behaviors can be divided into two categories. On the one hand, **for volumetric scenes with relatively large movement, users' gaze tends to precede users' movement by a short period of time**. This phenomenon can be observed in Fig. 3 (a) and Fig. 3 (b), where the trolley and dustpan follow a regular movement. Users' gaze then focuses on these objects and exhibits a similar movement pattern. On the other hand, **for volumetric scenes with small movements, the gaze may move back and forth irregularly, but it generally still focuses on the target object**. Fig. 3 (c) and Fig. 3 (d) verify this observation: the endpoints of the gaze arrows are mostly located on the target objects. Fig. 4 reaffirms this observation. The left figure shows the original aerial-view heatmap of the movement trajectory, and the light part of the right figure indicates the points where the movement trajectory coincides with the gaze ray. It demonstrates a strong correlation: a large portion of the movement trajectory indeed intersects with the gaze ray.

Figure 3. Gaze Direction with Movement Trajectory. The blue line represents the movement trajectory and the red arrows indicate the gaze direction.

Figure 4. Illustration of the intersection between movement trajectory and gaze ray. The left figure shows the original user movement trajectory, and the right figure indicates the point where the movement trajectory coincides with the gaze ray.

## 4. Analysis on User Behavior

In this section, we conduct a comprehensive analysis of user behaviors based on the dataset, aiming to reveal the implicit correlations between various observed features and further provide insight for future user behavior prediction. We mainly focus on user attention and movement features.

### Volumetric ROI Calculation

Users' region of interest (ROI) is the most important feature when viewing volumetric videos. However, different from 2D or 360-degree videos where the ROI can be directly obtained, ROI calculation in volumetric video is not so intuitive given its 3D nature. On the one hand, there can be multiple objects along a user's line of sight, and it is hard to uniquely determine the object of interest. On the other hand, users are moving most of the time and the viewing angles are constantly changing. Thus, we define the _volumetric ROI level_ as a quantitative indicator to represent how much attention a user pays to a region.
Calculating the volumetric ROI level includes the following steps:

* **Scene segmentation.** We first divide the whole volumetric scene into small blocks, where each block is a cube obtained by slicing the space along the x, y, and z dimensions. Since most of the cubes do not contain any points or only contain very few points, we set a threshold to filter out those near-empty cubes and only preserve those representing actual objects. Note that users' sensitivity to point cloud density decreases as the observing distance increases (Datta et al., 2017), so we also vary this threshold accordingly.
* **Gaze frustum calculation.** By exploiting the pre-processed headset trajectory and gaze data, we are able to calculate the viewing directions of the user at every position. Normally, people's effective viewing angle is about 30\({}^{\circ}\) (Srivastava et al., 2017; Wang et al., 2018; Wang et al., 2018), so we define a virtual viewing frustum with an angle of 30\({}^{\circ}\). Objects within this frustum are considered viewed by the user.
* **Intersection calculation.** The ROI level of one cube can be calculated as how frequently this cube is covered by the gaze frustum of the user. In practice, we calculate the direction vector formed by the coordinates of the headset and the center of each cube, and then compare the angle between this direction vector and the gaze direction vector obtained from the previous processing. The cube is counted once every time the angle is less than or equal to 30\({}^{\circ}\). By going through all of the effective cubes, we obtain the total counts for the whole volumetric video.
* **Volumetric ROI level calculation.** Inspired by the ROI mechanism used in 360 videos (Datta et al., 2017), we propose to calculate the volumetric ROI level \(F_{a}\) of a cube according to the density weight, the appearance frequency, and the distance between the user's eyes and the cube. The calculation formula is given as: \[F_{a}=\frac{\rho_{c}\cdot f_{g}}{D_{c}} \tag{3}\] \[f_{g}=\frac{\sum_{i=1}^{N}N_{g}(i)}{N_{sample}} \tag{4}\] where \(\rho_{c}\) is the point cloud density of the cube, \(f_{g}\) is the frequency with which the cube falls into the viewing frustum, \(D_{c}\) is the distance between the headset and the cube center, \(N_{g}(i)\) is the total count of the current cube, and \(N_{sample}\) is the total number of user behavior samples.
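To make the above procedure concrete, the following is a minimal NumPy sketch of the per-cube ROI level of Eqs. 3 and 4. The function and variable names are ours, the frustum test is the simplified angular check of the intersection step, gaze directions are assumed to be unit vectors, and averaging \(D_{c}\) over samples is our assumption, since the paper does not state how the distance is aggregated.

```python
import numpy as np

def volumetric_roi_levels(cube_centers, cube_densities, headset_positions,
                          gaze_directions, half_angle_deg=30.0):
    """Per-cube ROI level F_a = (rho_c * f_g) / D_c  (Eqs. 3-4).

    cube_centers:      (C, 3) centers of the non-empty cubes
    cube_densities:    (C,)   point-cloud density rho_c of each cube
    headset_positions: (N, 3) headset position per behavior sample
    gaze_directions:   (N, 3) unit gaze direction per behavior sample
    """
    cos_thr = np.cos(np.radians(half_angle_deg))
    n_samples = len(headset_positions)
    counts = np.zeros(len(cube_centers))    # N_g per cube
    dist_sum = np.zeros(len(cube_centers))  # accumulated headset-to-cube distance

    for pos, gaze in zip(headset_positions, gaze_directions):
        to_cube = cube_centers - pos                      # headset-to-cube vectors
        dist = np.linalg.norm(to_cube, axis=1)
        cos_angle = (to_cube @ gaze) / np.maximum(dist, 1e-9)
        counts += (cos_angle >= cos_thr)                  # cube inside the gaze frustum
        dist_sum += dist

    f_g = counts / n_samples                              # Eq. 4: appearance frequency
    mean_dist = dist_sum / n_samples                      # D_c, averaged here (assumption)
    return cube_densities * f_g / np.maximum(mean_dist, 1e-9)  # Eq. 3
```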
### Analysis on User Attention

We next analyze users' attention (which is directly reflected by the ROI level) when they are viewing different volumetric videos. Fig. 5 visually shows the different ROI levels for different volumetric video scenes. Here we randomly select 5 viewers and illustrate their average ROI level. We find that users' attention is highly correlated with the volumetric content, and is **particularly focused on the actors and the objects they are manipulating**. For example, in the 'Chatting' scene, most attention is focused on the right person. In the 'Sweeping' scene, the dustpan instead attracts even more attention than the person. Another interesting finding lies in personalized preference, i.e., **users may pay higher attention to their preferred object or person**. For example, in Fig. 5(d), the right person has a clearly higher ROI level than the left person, which is largely due to the user's personalized preference.

Figure 5. The volumetric ROI level together with 4 representative scenes. Here the light green color indicates a higher ROI level and the dark red color indicates a lower ROI level.

Figure 6. The distribution of the volumetric ROI level of four scenes. The X-axis indicates the ROI levels, and the Y-axis indicates the number of cubes with the corresponding ROI.

We next consider the distribution of user attention in different volumetric scenes. Fig. 6 plots the distribution of cubes with different volumetric ROI levels together with the mean value (Mean) and the standard deviation (Std. dev.). Note that we have already removed the rarely-watched cubes. Comparing the different volumetric scenes, we can obtain several interesting findings: 1) **The ROI dispersion level of different volumetric videos is quite diverse, depending on the scene content**. For example, the ROIs of the 'Sweeping' scene are concentrated in the range from 0 to 60, while the ROIs of the 'Chatting' scene mainly spread between 0 and 15. This means that users are more focused when watching the former, more 'dynamic' video, while they are more distracted when watching the latter, more 'static' one. It further reaffirms that people's attention is more easily captured by moving objects. 2) **Only a small portion of cubes have relatively high ROI levels.** This is because a volumetric scene can have a lot of effective cubes, while only a small portion of them, especially those representing the target actors or objects, will gain enough attention.

### Analysis on User Movement

We conduct a more in-depth analysis of user movement to examine the correlations between movement behavior and video content. We first define the movement mode. Taking the user's headset as the origin, movement along the lateral direction of the body is indicated as the x-axis, movement along the vertical direction of the body is indicated as the y-axis, and the z-axis represents up-down movement. Fig. 7 shows the average moving distance along the video playback progress in the three directions, as well as the total distance, for the 4 volumetric scenes. Naturally, moving laterally means that the user prefers to observe from different angles, while moving vertically means that the user would like to follow the moving objects. From this figure, we can find that the vertical distance is clearly larger than the lateral distance in the 'Pulling Trolley' and 'Sweeping' scenes, and vice versa for the other two scenes. This observation matches exactly with our previous finding that **people tend to follow the moving object while observing the static object from various angles.** We also investigate the movement features from the perspective of different users. Fig. 8 shows the cumulative moving distances of five randomly selected users. We find that the first two dynamic-scene videos have an average moving distance of 7.0m and 9.57m, respectively, while the other two static-scene videos reach an average moving distance of 12.53m and 13.82m. Thus we can verify that **users tend to perform more spatial movements in static scenes compared to dynamic scenes, so as to explore more areas in volumetric scenes.**

Figure 7. Movement Distance of X, Y, Z axes.

Figure 8. The total movement distance of different scenes. Here the colored lines and the black line indicate each user's total movement distance and the average, respectively, along the video playback progress; the dotted line with a number represents the average value.

Figure 9. Rotational acceleration in different scenes, including 'Pulling Trolley' and 'Cleaning Whiteboard'.

In Fig. 9, we show the average data to observe the difference in rotational acceleration for different scenes.
From the figure, we find that in more static scenes, users change their orientation more frequently at a faster rate. According to the CDF plot, for even more than 30% sampling points the head moving speeds exceed 100 degree/\(s^{2}\) in the 'cleaning whiteboard' scene. In contrast, for the relatively dynamic scene 'Pulling Trolley', there are about 80% sampling points with moving speed less than 20 degree/\(s^{2}\). Diversity across different users is shown in Fig. 7 and Fig. 7, which depicts the specific rotational acceleration speed of 5 randomly selected users and their average for the scenes of 'Pulling Trolley' and 'Cleaning Whiteboard'. Observation from these figures reaffirms the finding that user movement in static scenes is usually faster than in dynamic scenes. ## 5. Gaze-Assisted Viewport Prediction for Volumetric Video Streaming In this section, we give a case of the dataset application in volumetric video streaming. By fusing the correlated features between video content and gaze information, we are able to improve the accuracy of viewport prediction, further benefiting VV streaming. ### Background and Motivation Viewport adaptive video streaming together with tile-based partition strategy (Garfani et al., 2017; Wang et al., 2018) has been widely explored in traditional 2D and recently 360-degree videos. By reducing the bitrate of the video content outside users' viewport, the whole transmitted video size can be saved and thus relieving the network bandwidth pressure. This idea is intuitive to move to the 3D scenario if the scene is partitioned into small cubes for cube-based streaming. However, though it applies well in 2D videos, a critical challenge arises when it comes to 3D volumetric videos. The major difficulty lies in the flexible six DoF spatial feature, where the significant uncertainty in spatial position and viewing angle makes the viewport prediction error easy to accumulate. Several pioneer works have made attempts for six DoF viewport prediction (Garfani et al., 2017; Wang et al., 2018; Wang et al., 2018). For example. ViVo(Garfani et al., 2017) and Vues (Vues, 2018) employ linear regression (LR) and multilayer perceptron (MLP) to predict the viewport, and have also explored the use of advanced deep learning models such as LSTM for prediction. Extending from Parima(Garfani et al., 2017), VolParima(Garfani et al., 2017) utilizes 3D object detection and tracking techniques to achieve improved accuracy in viewport prediction. However, these works either consider each DoF separately or mainly focus on the video content, which cannot fully capture the implicit features in volumetric videos to yield accurate viewport prediction towards various volumetric scenes. Motivated by our previous observations and findings, **we realize that the features in user movement, gaze direction, and video content are tightly correlated so that the multi-modal information, as well as their mutual impacts, should be combined together for consideration.** ### Design We extract the multimodal features and present an architecture with a bidirectional fusion model that facilitates the communication of different features in Fig. 11. This is a paradigm for accurate viewport predictions based on video content, interaction, and intention. Followed by a variety of cross-modal transformers to transcend information from multi-modality. 
**Cross-modal transformer.** The cross-modal transformer (Garfani et al., 2017) is used to capture the interplay of several elements and to establish communication among the multi-modal information. Instead of extracting the multi-modal features independently (Garfani et al., 2017), we propose a pipeline that integrates the historical viewport feature, the 3D gaze feature, and the video features, which enhances the in-between feature communication to mutually decrease their future uncertainties on interaction and intention.

**Video feature extraction.** To learn the constraints (e.g., surface and topology of furniture) from the 3D video and to train the network to attend to locally interacted structures, we apply PointNet++ (Wang et al., 2018) to extract both global (the video content) and local video features (interacted region). We derive the per-point feature and global descriptor of the video as \(F_{P},F_{O}\).

**Gaze feature extraction.** The gaze point feature \(f_{g}\) is retrieved from the per-point video feature map \(F_{P}\) into \(F_{P|g}\). Consequently, the interacted gaze feature with corresponding video information provides indications to infer the intention.

**Viewport feature extraction.** We use a linear layer to extract the viewport feature embedding \(f_{m}\) from the multidimensional viewport trajectory input. The viewport is well-aligned with the video content. To endow the feature with awareness of the 3D video content, we further query the video features with the viewport features. These interacted video features are then supplied to PointNet++ to get the contextual video feature \(f_{m_{o}}\) of the current viewport. In lieu of directly concatenating the features, which would introduce redundancy across modality features and impair the prediction accuracy (Wang et al., 2018), we propose a model deploying a cross-modal transformer (Wang et al., 2018) to fuse the gaze, viewport, and video features.

Figure 11. Transformer-based viewport prediction model.

Figure 12. Average MAEA for viewport prediction.

Figure 13. Average accuracy for each prediction part.

**Feature fusion.** As an intermediary element, the viewport features strive to be cognizant of the 3D video features and of the subject's intention inferred from the gaze features. First, we utilize the video feature \(f_{m\_o}\) acquired from the 3D environment as the query to update the viewport feature \(f_{m}\) in the viewport-video transformer. Then, the output viewport embedding \(f_{m\_s}\) is expected to be aware of the 3D video, which results in the final viewport embedding \(f_{m\_g}\). Inspired by (Nguyen et al., 2017), we handle the gaze embedding in a bidirectional manner, i.e., the viewport embedding \(f_{m}\) is also utilized as the query to update the gaze features into \(f_{g\_m}\). The bidirectionally fused multi-modal features are then assembled into holistic temporal input representations to perform human viewport prediction. As shown in Fig. 11, the updated gaze feature \(f_{g\_m}\), the viewport feature \(f_{m\_g}\), and the global video feature \(F_{O}\) are used to predict the future viewport trajectories from \(T\) to \(T+t\) by: \[V_{T:T+t}=\mathcal{R}\left(h_{\text{pos}},\text{concat}\left(f_{g\_m},f_{m\_g},F_{O}\right)_{T-n:T-1}\right) \tag{5}\] where concat denotes the concatenation operator, and \(h_{\text{pos}}\) is the latent vector containing temporal positional encodings for the output (Zhu et al., 2017).
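To make the fusion pipeline above concrete, below is a minimal PyTorch-style sketch of the bidirectional cross-modal fusion and the prediction head of Eq. 5. All module and variable names are ours, the feature extractors (PointNet++, the gaze and viewport encoders) are abstracted away as given embeddings, dimensions are illustrative, and standard multi-head attention stands in for the cross-modal transformer; this is a sketch of the described design, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    """One cross-modal attention block: `query` attends to `context`."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query, context):
        fused, _ = self.attn(query, context, context)
        return self.norm(query + fused)

class GazeAssistedViewportPredictor(nn.Module):
    """Bidirectional fusion of viewport, gaze and video features, followed by a
    head that regresses the future 6-DoF viewport trajectory (cf. Eq. 5)."""
    def __init__(self, dim=128, horizon=30, dof=6):
        super().__init__()
        self.viewport_from_video = CrossModalBlock(dim)  # f_m updated with video context
        self.viewport_from_gaze = CrossModalBlock(dim)   # -> f_{m_g}
        self.gaze_from_viewport = CrossModalBlock(dim)   # -> f_{g_m}
        self.head = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, horizon * dof))
        self.horizon, self.dof = horizon, dof

    def forward(self, f_m, f_g, f_video, F_O):
        # f_m: (B, T, D) viewport embeddings, f_g: (B, T, D) gaze embeddings,
        # f_video: (B, P, D) contextual video features, F_O: (B, D) global descriptor
        f_m_o = self.viewport_from_video(f_m, f_video)   # viewport aware of the 3D video
        f_m_g = self.viewport_from_gaze(f_m_o, f_g)      # ... and of the gaze intention
        f_g_m = self.gaze_from_viewport(f_g, f_m)        # gaze updated with the viewport query
        fused = torch.cat([f_g_m.mean(1), f_m_g.mean(1), F_O], dim=-1)
        return self.head(fused).view(-1, self.horizon, self.dof)
```

The temporal positional encoding \(h_{\text{pos}}\) and the explicit history window \(T-n{:}T-1\) of Eq. 5 are omitted here for brevity.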
We evaluate our gaze-assisted viewport prediction against representative VV systems and methods: ViVo, VolParima, and the transformer-based Vanilla-TF (VTF) (Zhu et al., 2017), using the Average Mean Absolute Error Angle (MAEA) as a metric. We also conduct an ablation study to compare the effect of each part. As depicted in Figure 12, our proposed model is capable of reducing MAEA by 13.3%, 19.8%, and 34.5% in comparison with VolParima, ViVo, and VTF, respectively. Furthermore, we conducted experiments to evaluate the accuracy of our gaze-assisted model and performed an ablation study comprising three variations: without gaze (w/o g), without PointNet++ (w/o p), and without a cross-modal transformer (w/o cm). The results indicate that each component makes a positive contribution to the overall performance. Our model, which effectively integrates and utilizes video content and gaze information, is demonstrated to produce more accurate predictions than the previous methods.

## 6. Other Applications

In addition to our proposed viewport prediction system, we provide several potential application cases that could be derived from our dataset.

### User Identification for VV

User identification is a crucial task in 360-degree video, yet it poses a new challenge for volumetric video. Such a technique has the potential to improve user experience or enhance privacy. For headset-movement-based identification, Li et al. (Li et al., 2017) achieved an identification accuracy of 95.57% while participants nodded when listening to music. Gaze data could also be used for identification: Sluganovic et al. (Sli et al., 2017) proposed gaze-based authentication using a gaze-tracking device; their system achieves an error rate of 6.3% at an authentication time of 5 seconds. Given that our dataset on VV user behavior encompasses a wider range of attributes, an identification method utilizing both headset and gaze data could be developed to enhance accuracy.

### Personalized Content Delivery

Many works have been conducted on content recommendation in traditional 2D video (Chen et al., 2017; Chen et al., 2017) and 360-degree video (Wang et al., 2018), but for volumetric video this field is still largely unexplored. By analyzing the behavior of the users, developers can gain insights into users' preferences and adapt personalized content to better suit their needs. **Content Recommendation.** Based on the historical movement patterns of the users and their viewing history, developers can build a user portrait for each user in order to deliver new VV content that is more likely to be of interest to the user. **Adaptive Content.** Using the insights gained from analyzing the users' behavior, developers are able to dynamically adjust the VV experience in real time. For example, the lighting could be adjusted when users tend to change their viewport frequently, to minimize motion sickness.

### Healthcare

VV user behavior analysis has the potential to play a role in psychoanalysis, particularly in the area of virtual reality therapy. By analyzing changes in users' behavior and movement patterns before and after virtual reality therapy, therapists can evaluate the effectiveness of the treatment. For example, if a patient with a fear of heights spends more time looking down from a virtual high-rise building after therapy than before, this suggests that the treatment has been effective.

## 7. Conclusion

In this paper, we focused on understanding user behavior patterns when watching volumetric videos.
We released the first large-scale volumetric video user behavior dataset, including movement information, headset direction, user gesture, and user gaze information. This dataset involved data from 50 users with strong diversity and covered multiple representative volumetric scenes. We then conducted a comprehensive analysis aiming to reveal the behavior features. We defined the volumetric ROI level calculation mechanism in this context and focused on the feature analysis on user attention and user movement. Some interesting findings were therefore derived. Further, based on our analysis and observation, we designed a transformer-based volumetric video viewport prediction model, which fused all the correlated features and outperformed the state-of-the-art baseline solutions. ###### Acknowledgements. The work was supported in part by the Basic Research Project No. HZQB-KCZYZ-2021067 of Hetao Shenzhen-HK S&T Cooperation Zone, by NSFC (Grant No. 62293482 and No. 62102342), the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023A15512668), the Shenzhen Science and Technology Program (Grant No. RCNS20221008093120047), the Shenzhen Outstanding Talents Training Fund 202002, the Guangdong Research Projects No. 2017ZT07X152 and No. 2019CX01X104, the Guangdong Provincial Key Laboratory of Future Networks of Intelligence (Grant No. 2022B1212010001), the Shenzhen Key Laboratory of Big Data and Artificial Intelligence (Grant No. ZDSYS201707251409055).
2305.17797
T2FNorm: Extremely Simple Scaled Train-time Feature Normalization for OOD Detection
Neural networks are notorious for being overconfident predictors, posing a significant challenge to their safe deployment in real-world applications. While feature normalization has garnered considerable attention within the deep learning literature, current train-time regularization methods for Out-of-Distribution(OOD) detection are yet to fully exploit this potential. Indeed, the naive incorporation of feature normalization within neural networks does not guarantee substantial improvement in OOD detection performance. In this work, we introduce T2FNorm, a novel approach to transforming features to hyperspherical space during training, while employing non-transformed space for OOD-scoring purposes. This method yields a surprising enhancement in OOD detection capabilities without compromising model accuracy in in-distribution(ID). Our investigation demonstrates that the proposed technique substantially diminishes the norm of the features of all samples, more so in the case of out-of-distribution samples, thereby addressing the prevalent concern of overconfidence in neural networks. The proposed method also significantly improves various post-hoc OOD detection methods.
Sudarshan Regmi, Bibek Panthi, Sakar Dotel, Prashnna K. Gyawali, Danail Stoyanov, Binod Bhattarai
2023-05-28T18:56:54Z
http://arxiv.org/abs/2305.17797v2
# T2FNorm: Extremely Simple Scaled Train-time Feature Normalization for OOD Detection ###### Abstract Neural networks are notorious for being overconfident predictors, posing a significant challenge to their safe deployment in real-world applications. While feature normalization has garnered considerable attention within the deep learning literature, current train-time regularization methods for Out-of-Distribution(OOD) detection are yet to fully exploit this potential. Indeed, the naive incorporation of feature normalization within neural networks does not guarantee substantial improvement in OOD detection performance. In this work, we introduce **T2FNorm**, a novel approach to transforming features to hyperspherical space during training, while employing non-transformed space for OOD-scoring purposes. This method yields a surprising enhancement in OOD detection capabilities without compromising model accuracy in in-distribution(ID). Our investigation demonstrates that the proposed technique substantially diminishes the norm of the features of all samples, more so in the case of out-of-distribution samples, thereby addressing the prevalent concern of overconfidence in neural networks. The proposed method also significantly improves various post-hoc OOD detection methods. ## 1 Introduction The efficacy of deep learning models is contingent upon the consistency between training and testing data distributions; however, the practical application of this requirement presents challenges when deploying models in real-world scenarios, as they are inevitably exposed to OOD samples. Consequently, a model's ability to articulate its limitations and uncertainties becomes a critical aspect of its performance. While certain robust methodologies exist that endeavor to achieve generalizability despite domain shifts, these approaches do not always guarantee satisfactory performance. OOD detection approaches can be broadly grouped into three approaches: post-hoc methods, outlier exposure, and training time regularization. Post-hoc methods, deriving OOD likelihood from pre-trained models, have significantly improved while outlier exposure, despite the challenges in predefining OOD samples ideally, is prevalently adopted in industrial contexts. Another approach involves training time regularization. This line of work due to its capacity to directly impose favorable constraints during training, potentially offers the most promising path to superior performance. The training-time regularization method, LogitNorm[1], employs L2 normalization at the logit level to mitigate overconfidence, leading to an increased ratio of ID norm to OOD norm compared to the results from simple cross-entropy baseline or Logit Penalty[1]. Nonetheless, the importance of feature norm in achieving ID/OOD separability has been underscored in recent OOD detection works [1; 2; 3; 4]. Normalization at the logit level does not assure an optimal resolution to the overconfidence issue at the feature level. Given the significance of the feature norm, this naturally gives rise to the subsequent query: _Can hyperspherical normalization in the higher dimensional feature space provide enhanced benefits without introducing any potential drawbacks?_ Towards this, in this work, we propose feature normalization for OOD detection. Our proposed method requires only a trivial modification to the standard architecture. The feature representation needs to be normalized and scaled during the usual training and inference time. 
However, the normalization process is intentionally omitted during OOD detection. We demonstrate that with the proposed feature normalization, we achieve a clear distinction between the norm of ID and OOD data samples, eventually contributing toward a substantial performance improvement without compromising the model's accuracy. We show a boost in OOD detection in a number of OOD benchmark datasets (Table 1). For instance, our method reduces the FPR@95 score by **34%** with respect to baseline and by **7%** with respect to LogitNorm on average across a variety of 9 OOD datasets with DICE scoring on ResNet-18 architecture. In addition, our methods work well in conjunction with many post hoc methods. Our key results and contribution are: * a surprisingly trivial yet powerful plug to regularize the model for OOD detection. We quantitatively show that train time normalization approximately projects the features of ID samples to the surface of a hypersphere differentiating it from OOD samples thereby achieving significantly higher _separability ratio_. * We show T2FNorm is equally effective across multiple deep learning architectures and multiple datasets. It also works well in conjunction with multiple post-hoc methods. * We perform both qualitative and quantitative analysis showing our method's ability to reduce overconfidence and also perform a sensitivity study to show the robustness of our model to the temperature parameter \(\tau\). * We show that **skipping normalization during OOD scoring time** is a key contributor to our method thus paving the way for exploring the effectiveness of other forms of normalization discrepancies during OOD scoring. ## 2 Method ### Preliminaries: Out of Distribution Detection **Setup** Let \(\mathcal{X}\) be input space, \(\mathcal{Y}\) be output space and \(\mathcal{P}_{\mathcal{X}\mathcal{Y}}\) be a distribution over \(\mathcal{X}\times\mathcal{Y}\). Let the \(\mathcal{P}_{in}\) be the marginal distribution of \(\mathcal{X}\) which represents the distribution of input we want our classifier to be able to handle. This is the in-distribution (ID) of the input labels \(x_{i}\). **Supervised Classification** In supervised classification, the goal is to minimize the empirical loss \(\mathcal{L}\) function formulated as: \(\min_{\theta}\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(f_{\theta}(x_{i}),y_{i})\) over the input dataset which is sampled _i.i.d._ from the in-distribution \(\mathcal{P}_{in}\). Here, \(\theta\) is the model parameters, \(f_{\theta}(x_{i})\) is the classification predicted for input \(x_{i}\) by the model with parameters \(\theta\). **OOD Detection** During test time the environment can present samples from a different distribution \(\mathcal{P}_{out}\) instead of from \(\mathcal{P}_{in}\). The goal of Out of Distribution Detection is to differentiate between samples from in-distribution \(\mathcal{P}_{in}\) and out-of-distribution \(\mathcal{P}_{out}\). In this work we treat OOD detection as a binary classification where a scoring function \(S(\mathbf{x})\) and a corresponding threshold \(\lambda\) provide a decision function the performs OOD detection: \[g(\mathbf{x})=\begin{cases}\text{In-distribution},&\text{if }S(\mathbf{x})\geq \lambda\\ \text{Out-of-distribution},&\text{if }S(\mathbf{x})<\lambda\end{cases} \tag{1}\] The simplest of the scoring function \(S(\mathbf{x})\) is the Maximum Softmax Probability (MSP) obtained by passing the logits from the final layer of the network to the softmax function and taking the maximum value. 
Then samples with MSP exceeding a certain threshold \(\lambda\) are classified ID and the rest are OOD. The threshold \(\lambda\) is usually chosen so as to have a true positive rate of 95% over the input dataset. ### Motivation A recent work LogitNorm[1] directly aims to address the overconfidence issue by decoupling the effect of the norm of logits by L2 normalization in logits. Since the fully connected (FC) layer is directly responsible for logit computation and normalization is performed at logits, empirical observation (Figure 1) suggests that the optimization process induces smoother uniform weight values closer to zero in the FC layer. However, a recent work [5] also has shown that non-trivial dependence on unimportant weights (and units) can directly attribute to the brittleness of OOD detection. The presence of smoother weights implies irrelevant features contributing non-trivially to the classification for some predictions resulting in higher output variance for OOD samples. Furthermore, suppressing the logit norm by forcefully learning predominantly near-zero FC might only suboptimally address overconfidence at the feature level. Hence, we address the normalization in feature space to avoid unwanted implications on FC weights. ### Feature normalization Our work proposes a method **T2FNorm** to improve the robustness of the network itself for OOD detection which can be used in conjunction with any downstream scoring function. We perform feature normalization to alleviate the issue of over-confidence predictions at the feature level. In particular, we normalize the feature vectors in the penultimate layer and scale with a factor \(1/\tau\). The normalized vector is then passed on, as usual, to the classification FC layer and to cross-entropy loss function. Importantly, this normalization is performed (Algorithm 1) only during training and inference time, however, we skip the normalization part for OOD detection (Algorithm 2). Figure 2 shows the schematic diagram for our method. The proposed approach is simple and easy to implement, and as we will show later, it produces improved performance for OOD detection while maintaining predictive abilities. ``` Input: Dataset \(\mathcal{D}\), Feature Extractor \(\phi\), classification layer \(FC\) function train(\(\mathcal{D}\)) for(\(\mathbf{x}_{i},\mathbf{y}_{i})\leftarrow\mathcal{D}\)do \(h^{*}\leftarrow\phi(\mathbf{x}_{i})\) \(h\gets h^{*}/\tau\|h^{*}\|_{2}\) \(\mathcal{L}\leftarrow\text{cross\_entropy\_loss}(FC(h),\mathbf{y}_{i})\) \(\mathcal{L}\).backward() endfor endfunction ``` **Algorithm 1**T2Norm: Training ``` function classify(\(\mathbf{x}\)) \(h^{*}\leftarrow\phi(\mathbf{x}_{i})\) if\(S(\mathbf{x};h^{*}/\tau)<\gamma\)then return OOD else \(h\gets h^{*}/\tau\|h^{*}\|_{2}\) logits \(\gets FC(h)\) return \(argmax_{i}\) logits\({}_{i}\) endif endfunction ``` **Algorithm 2**T2FNorm: Inference **Significance of Feature Norm** As observed by recent works [3; 4; 6], generally ID samples have a more significant penultimate feature norm in comparison to OOD data. In CNN models, high-level spatial features are generated by convolution operations. The penultimate feature is derived from globally pooling post-ReLU spatial features. ReLU activation signifies the presence of specific in-distribution features, while their absence corresponds to smaller norms, often seen in out-of-distribution samples. 
Therefore, a neural network having better ID/OOD separability should demonstrate a higher relative norm for in-distribution versus out-of-distribution samples, enhancing discriminability. **Working Principle and Details** Our operational hypothesis is that the network learns to produce high-level semantic ID features lying on the hypersphere due to the normalization performed during training. However, this happens only for ID samples only as the network was trained with them, while for OOD samples, high-level semantic ID features are not activated because of their absence, causing Figure 1: Smooth FC weights of Airplane class in ResNet18 induced by LogitNorm[1] optimization. OOD feature representation to lie significantly beneath the hypersphere's surface. The superior the degree to which this occurs, the greater the distinction between ID and OOD data samples occurs. Quantitatively, we can formalize such distinction as the ratio of ID norm to OOD norm, which we term as _separability ratio_ (\(\mathcal{S}\)) in this work. We observe that depending upon the _separability ratio_, LogitNorm[1] and LogitPenalty[1] can perform OOD detection supporting the evidence of the preferability of a higher separability ratio. As most out-of-distribution (OOD) detection metrics concentrate on logits, and prior research has primarily focused on these logits, we investigate the impact of normalization at the feature representation level on both the separability ratio and the norm in feature and logit spaces for OOD detection. Given that feature-level normalization implies normalization within a higher-dimensional space than logit-level normalization, we postulate that high-dimensional normalization of training ID samples would enable the network to significantly reduce OOD norm relative to ID norm while preserving ID-specific features. As a result, we anticipate a substantial decrease in overconfidence, which is intrinsically linked to logit and feature norms, primarily since overconfidence is addressed at the penultimate feature level, inherently tackling the norm at the logit level. Confirming this, recent work, ReAct[2], has observed that the penultimate layer is most effective for OOD detection due to the distinct activation patterns between ID and OOD data. **On the significance of avoiding normalization at OOD scoring** Should feature normalization be adopted during OOD scoring, it erroneously activates the feature for OOD samples, causing them to mimic the behavior of ID samples within the feature space. But, the removal of normalization during OOD scoring helps to preserve the difference in response of the network towards OOD and ID samples in the feature space. ## 3 Experiments In this section, we discuss the experiments performed in various settings to verify the effectiveness of our method. **Datasets:** We use CIFAR-10[7] and CIFAR-100[8] as in-distribution datasets. Texture[9], TinyImageNet (TIN)[10], MNIST[11], SVHN[12], Places365[13], iSUN[14], LSUN-r[15], LSUN-c[15] are used as out-of-distribution datasets. Following [16], we use CIFAR-10 as OOD if CIFAR-100 is used as in-distribution and vice versa. **Metrics and OOD scoring:** We report the experimental results in three metrics: FPR@95, AUROC and AUPR. FPR@95 gives the false positive rate when the true positive rate is 95%. AUROC denotes the area under the receiver operator characteristics curve and AUPR denotes the area under the precision-call curve. 
We use multiple OOD scoring methods, including parameter-free scoring functions such as maximum softmax probability[17], parameter-free energy score [18] and GradNorm[3] as well as hyperparameter-based scoring functions such as ODIN[19] and DICE[5]. We use the recommended value of 0.9 for the DICE sparsity parameter \(p\) and recommended \(\tau=1000\) and \(\epsilon=0.0014\) for ODIN. Figure 2: Schematic diagram of our method: T2FNorm. Features are L2 normalized and scaled during training and inference time, while normalization is avoided for OOD Scoring. **Training pipeline:** We perform experiments with three training methods: a) Baseline (cross-entropy), LogitNorm [1], and T2FNorm (ours) by following the training procedure of open-source framework OpenOOD[16]. Experiments were performed across ResNet-18, WideResnet(WRN-40-2), and DenseNet architectures with an initial learning rate of 0.1 with weight decay of 0.0005 for 100 epochs based on the cross-entropy loss function. We set the temperature parameter \(\tau=0.04\) for LogitNorm as recommended in the original setting[1] and \(\tau=0.1\) for T2FNorm. Please refer to Figure 4 for the sensitivity study of \(\tau\). Five independent trials are conducted for each of 18 training settings (across 2 ID datasets, 3 network architectures, and 3 training methods). We trained all models on NVIDIA A100 GPUs. ## 4 Results **Superior OOD Detection Performance** Quantitative results are presented in Table 1. It shows that our method is consistently superior in FPR@95, AUROC as well as AUPR metrics. Our method reduce FPR@95 metric by 34% compared to Baseline and 7% compared to LogitNorm using DICE Scoring for ResNet-18. Figures 4 and 4 show the FPR@95 values across different OOD datasets using MSP scoring in ResNet-18 where our method reduces FPR@95 by 33.7% compared to baseline and by 4.4% compared to LogitNorm. Interestingly, for both ID datasets, we can also observe the incompatibility of LogitNorm with DICE scoring in DenseNet architecture where it underperforms even when compared to the baseline. On the other hand, our method is more robust regardless of architecture or OOD scoring method. 
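For concreteness, the following is a minimal PyTorch-style sketch of Algorithms 1 and 2 as we read them: scaled feature normalization during training and ID inference, and normalization-free scoring (shown here with MSP). The backbone, feature dimension, and class count are placeholders, and this is our reading of the method rather than the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class T2FNormNet(nn.Module):
    """Backbone phi(x) plus a linear head, with scaled feature normalization."""
    def __init__(self, backbone, feat_dim, num_classes, tau=0.1):
        super().__init__()
        self.backbone = backbone            # placeholder feature extractor: x -> (B, feat_dim)
        self.fc = nn.Linear(feat_dim, num_classes)
        self.tau = tau

    def forward(self, x, normalize=True):
        h = self.backbone(x)                # penultimate features h*
        if normalize:
            # Algorithm 1: h = h* / (tau * ||h*||_2), used for training and ID inference
            h = h / (self.tau * h.norm(dim=1, keepdim=True).clamp_min(1e-12))
        else:
            # Algorithm 2: OOD scoring skips the L2 normalization but keeps the 1/tau scaling
            h = h / self.tau
        return self.fc(h)

# Training step (cross-entropy on the normalized, scaled features):
#   loss = F.cross_entropy(model(x, normalize=True), y)

def msp_ood_score(model, x):
    """Normalization-free OOD score: maximum softmax probability of the un-normalized pass."""
    with torch.no_grad():
        logits = model(x, normalize=False)
        return F.softmax(logits, dim=1).max(dim=1).values
```

Samples whose score falls below a threshold \(\lambda\) (chosen for a 95% true positive rate on ID data) are flagged as OOD; energy, ODIN, GradNorm, or DICE scores can be plugged in at the same point in place of MSP.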
\begin{table} \begin{tabular}{l|l|c c c|c c c|c c c} \hline & & & \multicolumn{4}{c|}{CIFAR-10} & \multicolumn{4}{c}{CIFAR-100} \\ \cline{3-11} & Network & \multicolumn{2}{c|}{FPR@95 \(\downarrow\)} & \multicolumn{2}{c|}{AUROC \(\uparrow\)} & \multicolumn{2}{c|}{FPR@95\(\downarrow\)} & \multicolumn{2}{c}{AUROC\(\uparrow\)} & \multicolumn{2}{c}{AUPR\(\uparrow\)} \\ \hline \multirow{6}{*}{**OOD**} & ResNet-18 & 53.4 / 22.1 / **19.9** & 90.7 / 96.0 / **95.5** & 90.8 / 95.7 / **96.4** & 78.9 / 72.6 / **68.2** & 79.0 / 80.1 / **83.2** & 79.8 / 79.4 / **82.4** \\ & WRN-40-25 & 53.4 / 22.6 / **22.4** & 91.9 / 95.9 / **95.9** & 90.2 / 95.8 / **95.9** & 81.8 / 63.5 / **63.2** & 74.7 / 83.8 / **83.9** & 76.6 / 83.7 / **84.2** \\ & DenseNet & 48.8 / 24.0 / **21.0** & 91.7 / 95.4 / **96.1** & 91.6 / 95.3 / **96.2** & 74.7 / 66.8 / **64.1** & 74.7 / 82.1 / **84.1** & 79.8 / 82.6 / **84.6** \\ \hline \multirow{6}{*}{**OOD**} & Mean & 51.9 / 22.9 / **21.0** & 90.9 / 95.8 / **96.2** & 90.9 / 95.6 / **96.2** & 79.4 / 67.6 / **65.1** & 77.1 / 82.0 / **83.7** & 78.7 / 81.9 / **83.7** \\ & ResNet-18 & 54.5 / 27.6 / **20.5** & 86.0 / 94.4 / **96.3** & 87.4 / 94.1 / **96.1** & 76.7 / 68.6 / **64.9** & 81.0 / 73.5 / **83.1** & 81.2 / 74.6 / **81.9** \\ & WRN-40-2 & 36.5 / 32.5 / **26.0** & 89.0 / 92.6 / 92.6 / 98.1 / 90.9 / 92.8 / **95.1** & 76.3 / 91.7 / **55.7** & 74.8 / 81.6 / **84.6** & 76.0 / 81.6 / **84.6** \\ & DenseNet & 30.8 / 38.0 / **23.0** & 92.3 / 90.3 / 95.4 & 93.3 / 90.7 / **95.4** & 63.4 / 68.4 / 62.2 / 82.8 / 75.7 / **82.9** & **83.5 / 75.8 / 82.6 / 82.6 \\ \hline \multirow{6}{*}{**OOD**} & Mean & 40.6 / 32.7 / **32.3** & 89.1 / 92.5 / **95.6** & 90.5 / 92.6 / **95.5** & 72.2 / 65.3 / **60.6** & 79.6 / 76.9 / **83.5** & 80.2 / 77.7 / **83.0** \\ & ResNet-18 & 37.7 / 37.0 / **17.9** & 91.5 / 89.9 / **96.7** & 92.7 / 89.4 / **96.6** & 77.6 / 72.6 / **66.6** & 81.0 / 75.1 / **83.3** & 81.2 / 75.3 / **82.2** \\ & WRN-40-2 & 35.3 / 54.9 / **22.5** & 91.1 / 85.0 / **95.8** & 92.1 / 84.1 / **95.7** & 78.0 / 62.6 / **60.0** & 77.0 / 81.7 / **84.2** & 78.3 / 81.9 / **84.4** \\ & DenseNet & 30.3 / 73.9 / **20.0** & 93.3 / 86.3 / **96.1** & 93.8 / 83.2 / **96.1** & 69.2 / 70.3 / **62.2** & 82.4 / 75.7 / **83.4** & 83.6 / 77.0 / **84.0** \\ \hline \multirow{6}{*}{**OOD**} & Mean & 34.5 / 55.3 / **20.1** & 92.0 / 86.7 / **96.2** & 92.9 / 85.6 / **96.2** & 75.0 / 68.5 / **63.0** & 80.1 / 77.5 / **83.6** & 81.0 / 78.1 / **83.5** \\ \cline{1-1} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2-11} \cline{2- **Architecture Agnostic without Compromising Accuracy** Our experiments across three architectures as reported in Table 1 show the compatibility of our method with various architectures evidencing the agnostic nature of our method to architectural designs. 
An essential attribute of OOD methods employing regularization during training is the preservation of classification accuracy in ID datasets, independent of their OOD detection performance. The evidence supporting these assertions can be found in Table 2. **Significant Reduction in Overconfidence** In Figure 5 we show the comparison between Baseline, LogitNorm, and T2FNorm in terms of distribution of maximum softmax probability. It can be observed that overconfidence has been addressed by T2FNorm to a greater extent in comparison with the baseline. Though the issue of overconfidence is also reduced in LogitNorm, the separability ratio is significantly higher in T2Norm, as we show in Figures 6 and 15. appreciably. This depicts a clear difference in the response of the network towards OOD and ID. Similar observations can be found on logits as the feature representation has a direct implication on it. More importantly, from the comparison of various methods, we observe that the separability factor \(\mathcal{S}\) induced by our method is highly significant. For instance, we achieve (\(\mathcal{S}=6.01\)) at the end of training in the penultimate feature. The progression of S over the epochs in both the feature and logit space can be observed from Figure 8. Compatibility with existing OOD scoring methodsT2FNorm is compatible with various existing OOD scoring functions. Figure 8 shows that existing scoring functions when applied to the model trained with T2FNorm can boost the OOD detection performance. For instance, our model improves the baseline's OOD performance using ODIN from FPR@95 of 38.67 to 17.15 in ResNet18 architecture for CIFAR-10 experiments. Hyperparameter-free energy-based scoring function can also get a boost of 19.86 in comparison to the baseline model. Similarly, DICE [5] works very well (Figure 8) with our method too. ## 5 Discussion Ablation Study of NormalizationImposing normalization during OOD scoring enforces a constant magnitude constraint on all inputs, irrespective of their originating distribution. This effectively eradicates the very characteristic (the magnitude property) that could potentially differentiate whether an input originates from the training distribution or not. It results in the trained network incorrectly assuming OOD samples as ID samples. As demonstrated in Figure 10, the separability of the nature of input distribution is compromised by normalization during OOD scoring. Quantitatively, for trained ResNet-18 architecture with CIFAR-10 as ID, this degrades the mean FPR@95 performance from 19.7% (T2FNorm) to 48.66%. Sensivity Study of Temperature \(\tau\)Figure 9 shows that the classification accuracy and OOD Detection performance (FPR@95) are not much sensitive over a reasonable range of \(\tau\). We found the optimal value of \(\tau\) to be 0.1. And while the performance is good for \(\tau\in[0.05,1]\), both accuracy and FPR@95 score degrades substantially for \(\tau>1\) and \(\tau\leq 0.01\). Implication on FC Layer WeightsFigure 11 shows the weights of the final classification layer corresponding to the Airplane class for T2FNorm and LogitNorm. In comparison to the smoother weight of LogitNorm, weights of T2Norm have higher variance and are sharply defined. Quantitatively, we find the average variance to be about 10 times higher in T2FNorm as compared to LogitNorm. Roughly speaking, it can be inferred that T2FNorm encourages the clear assignment of important features for a given category classification. 
It necessitates the activation of the specific important features for ID sample predictions. Conversely, OOD samples, which lack these important features, fail to activate them, leading to lower softmax probabilities. Table 4 also further shows that the mean of both the negative weights and positive weights are greater in magnitude for T2FNorm. ## 6 Related Works OOD DetectionNumerous studies have emerged in recent years focusing on OOD detection. A straightforward method for OOD detection is a simple maximum softmax probability [17]. However, it remains an unreliable scoring metric for OOD detection because of inherent overconfidence imposed by training with one-hot labels[20]. OOD detection has been primarily tackled with three lines of approach in the literature (a) post-hoc methods, (b) outlier exposure and (c) train-time regularization.Post-hoc methods [5; 17; 18; 21; 22; 23; 24; 25; 26] aim to improve the ID/OOD separability with pretrained models trained only with the aim for accuracy. Outlier exposure is another less studied line in academic research, as the assumption of the nature of OOD limits the ideal applications. However, it is found to be commonly used for industrial purposes. Training time \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{Mean weights of Airplane Class} & \multicolumn{3}{c}{Mean weights of All Class} \\ \hline Method & All Weights & Negative Weights & Positive Weights & All Weights & Negative Weights & Positive Weights \\ \hline Baseline & 0.000 & -0.062 & 0.102 & 0.000 & -0.056 & 0.107 \\ LogitNorm & 0.007 & -0.042 & 0.027 & -0.003 & -0.027 & 0.032 \\ T2FNorm & 0.005 & **-0.075** & **0.261** & 0.000 & **-0.072** & **0.283** \\ \hline \hline \end{tabular} \end{table} Table 4: Mean of the FC Layer weights for a single class and for all classes shows that T2FNorm has more distinctly assigned weights Figure 11: FC layer’s weight comparison of Airplane class. Figure 9: Sensitivity study of temperature \(\tau\) Figure 10: Normalization at OOD scoring regularization[27; 28; 29; 30; 31; 32] employs some form of regularizer in the training scheme, and this line of work due to its capacity to directly impose favorable constraints during training potentially offers the most promising path to superior performance for OOD detection. For instance, LogitNorm [1] employs logit normalization as training time regularization to address the overconfidence issue and, thereby, improve OOD detection. Furthermore, LogitNorm [1] shows overconfidence can somewhat be addressed sub-optimally with logit penalty too. Different from LogitNorm [1], our work pertains to addressing overconfidence in the feature space thereby automatically addressing overconfidence in the logit space. Needless to say, our work deals with high-dimensional normalization. NormalizationThe utility of normalization in ensuring consistent input distribution and reducing covariate shift has proven beneficial in various subareas of deep learning[33; 34; 35; 36]. Normalization consisting of learnable parameters such as Batch Normalization[37], Layer Normalization[38], and Group Normalization[39], have been effective in mitigating training issues of neural networks. On the other hand, the strategic placement of L2 normalization has also been a popular recipe for training more effective deep learning models. 
Similar to our work, [36] constrains the features to lie on the hypersphere of fixed radius for face verification purposes but does so in both the training and testing phase without scaling. Further works in deep metric learning such as ArcFace [40], CosFace [41], SphereFace [42], etc realize the effectiveness of normalization. Specifically, [43] shows the hyperparameter-free OOD detection method introducing cosine loss by taking inspiration from norm face [44] where both the penultimate feature and fully connected layer are normalized. Our approach differs from cosine loss in three different ways. a) The temperature parameter is learned in the cosine loss method whereas we set a fixed temperature across all 6 settings. While it may seem extra hyperparameter is being added, we find a value of \(\tau\) being architecture agnostic as well as dataset agnostic. b) Unlike cosine loss, we avoid normalizing the classification layer freeing it to learn non-smooth weight values which, in turn, boost compatibility with various downstream OOD scoring methods as they rely on ID-OOD separability based on magnitudes. c) Importantly, we remove the constraint of hyperspherical embeddings in the OOD scoring phase while [43] uses cosine similarity and is not compatible with other OOD scoring functions. [45] provided a study showing modern neural networks' poor calibration and proposed to use temperature scaling as posthoc method to improve calibration. Platt scaling [46] is another simple postprocessing calibrating technique. Label smoothing [47] helps to avoid overconfident calibration by adding uncertainty to the one-hot encoding of labels. ## 7 Conclusion In summary, our work introduces a novel training-time regularization technique, termed as **T2FNorm**, which seeks to mitigate the challenge of overconfidence via enhancing ID/OOD separability. We empirically show that T2FNorm achieves a higher separability ratio than prior works. This study delves into the utility of feature normalization to accomplish this objective. Notably, we apply feature normalization exclusively during the training and inference phases, deliberately omitting its application during the OOD scoring process. This strategy improves OOD performance across a broad range of downstream OOD scoring metrics without impacting the model's overall accuracy. We provide empirical evidence demonstrating the versatility of our method, establishing its effectiveness across multiple architectures and datasets. We also empirically show our method is less sensitive to the hyperparameters. ## 8 Broader Impact and Limitations OOD detection is a crucial task regarding AI safety. The accuracy of OOD detection directly impacts the reliability of many AI applications. Safe deployment of AI applications is crucial in areas such as healthcare and medical diagnostics, autonomous driving, malicious use or intruder detection, fraud detection, and others. In such cases, OOD detection can play a crucial role in identifying and increasing robustness against unknown samples. Further, OOD detection also helps in increasing the trustworthiness of AI models to increase public acceptance of them. We demonstrate versatility across multiple datasets and architecture; however, due to the limited availability of compute, our experiments are limited to smaller resolution images from CIFAR-10 and CIFAR-100, and the results as such can't be guaranteed to generalize for higher resolution images or in real-life scenarios. 
Acknowledgement This work was supported by the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) [203145Z/16/Z]; Engineering and Physical Sciences Research Council (EPSRC) [EP/P027938/1, EP/R004080/1, EP/P012841/1]; The Royal Academy of Engineering Chair in Emerging Technologies scheme; and the EndoMapper project by Horizon 2020 FET (GA 863146).
2307.10859
Fiber crosslinking drives the emergence of order in a 3D dynamical network model
The Extra-Cellular-Matrix (ECM) is a complex interconnected 3D network that provides structural support for the cells and tissues and defines organ architecture key for their healthy functioning. However, the intimate mechanisms by which ECM acquire their 3D architecture are still largely unknown. In this paper, we address this question by means of a 3D individual based model of interacting fibers able to spontaneously crosslink or unlink to each other and align at the crosslinks. We show that such systems are able to spontaneously generate different types of architectures. We provide a thorough analysis of the emerging structures by an exhaustive parametric analysis and the use of appropriate visualization tools and quantifiers in 3D. The most striking result is that the emergence of ordered structures can be fully explained by a single emerging variable : the proportion of crosslinks in the network. This simple variable becomes an important putative target to control and predict the structuring of biological tissues, to suggest possible new therapeutic strategies to restore tissue functions after disruption, and to help in the development of collagen-based scaffolds for tissue engineering. Moreover, the model reveals that the emergence of architecture is a spatially homogeneous process following a unique evolutionary path, and highlights the essential role of dynamical crosslinking in tissue structuring.
Pauline Chassonnery, Jenny Paupert, Anne Lorsignol, Childérick Sévérac, Marielle Ousset, Pierre Degond, Louis Casteilla, Diane Peurichard
2023-07-20T13:29:36Z
http://arxiv.org/abs/2307.10859v1
# Fiber crosslinking drives the emergence of order in a 3D dynamical network model ###### Abstract The Extra-Cellular-Matrix (ECM) is a complex interconnected \(3\)D network that provides structural support for the cells and tissues and defines organ architecture key for their healthy functioning. However, the intimate mechanisms by which ECM acquire their \(3\)D architecture are still largely unknown. In this paper, we address this question by means of a \(3\)D individual based model of interacting fibers able to spontaneously crosslink or unlink to each other and align at the crosslinks. We show that such systems are able to spontaneously generate different types of architectures. We provide a thorough analysis of the emerging structures by an exhaustive parametric analysis and the use of appropriate visualization tools and quantifiers in 3D. The most striking result is that the emergence of ordered structures can be fully explained by a single emerging variable : the proportion of crosslinks in the network. This simple variable becomes an important putative target to control and predict the structuring of biological tissues, to suggest possible new therapeutic strategies to restore tissue functions after disruption, and to help in the development of collagen-based scaffolds for tissue engineering. Moreover, the model reveals that the emergence of architecture is a spatially homogeneous process following a unique evolutionary path, and highlights the essential role of dynamical crosslinking in tissue structuring. _AMS Subject Classification_ 92-10, 92C10, 82C22, 93A16. _Keywords_ Interaction networks, Three dimensional mathematical modelling, Self-organization, Extra-cellular matrix, Dynamical crosslinking, Architecture emergence. _Subjects_ Biophysics, Biomechanics, Computational Biology. 1 Footnote 1: RESTORE, Université de Toulouse, Inserm U1031, EFS, INP-ENVT, UPS, CNRS ERL5311, Toulouse, France. 2 Footnote 2: Inria Paris, team MAMBA, Sorbonne Université, CNRS, Université de Paris, Laboratoire Jacques-Louis Lions UMR7598, F-75005 Paris. 3 Footnote 3: Institut de Mathématiques de Toulouse ; UMR5219, Université de Toulouse ; CNRS, UPS, F-31062 Toulouse Cedex 9, France. \({}^{*}\)co-last authors \({}^{\dagger}\)corresponding author : [email protected] ## 1 Introduction The adequate architecture of any organ is mandatory for their efficient physiological function. It depends mainly on the structure of the extracellular matrix (ECM) which provides spatial information for cells and largely participate to mechanical constraints [37]. For example, in epithelial tissues, a basket-weave structure in the skin plays an important role in preserving the barrier function of human skin [15], while alignment as well as accumulation of ECM observed in fibrosis lead to a loss of function [29, 33]. The ECM is a dynamical three-dimensional network consisting of interacting extracellular macromolecules such as collagen, enzymes and glycoproteins that provide structural and biochemical support to surrounding cells [29]. ECM fibers are interconnected by molecular bonds, i.e. crosslinks, that confer connectivity and elasticity throughout the ECM network. This network structure is in constant remodeling, which is crucial to maintain tissue integrity and function. Crosslinks, however, can unbind spontaneously or under tension, which leads to viscoplastic material responses, such as softening and tension relaxation [28]. 
Fibrosis and aging are also characterized by an increase of enzymatic and non-enzymatic crosslinks [22; 11] and this increase in crosslinking prevents ECM degradation by matrix metalloproteinases, both events leading to a decrease of ECM remodeling [24]. Altogether, these events induce greater stiffness and the arrangement of the collagen fibers becomes less organized and more loose and fragmented, hence weakening tissue integrity and strength [40; 3]. An understanding of the fundamental organizing principles of ECM structure in three dimensions also helps in apprehending the complex dynamics of pathological tissues from degenerative diseases or tumor [24]. Because the global architecture of fiber networks seems to be fundamental for controlling tissue functions, modeling the process of ECM structure emergence will greatly improve our understanding of tissue biology and plasticity in physiological or pathological conditions. Numerous models of fiber networks can be found in the literature. Due to their simplicity and flexibility, the most widely used models are Individual Based Models (IBM), which describe the behavior of each agent (e.g. a fiber element) and its interactions with the surrounding agents over time [12; 20]. However, IBM have a high computational cost which can become intractable when studying large scales, either spatial or temporal or for systems composed of too many agents. In such cases, continuous or mean-field kinetic models may be preferred [9; 10; 34; 38; 4] since they are less costly, but at the expense of a loss of information at the individual level. Since it is well acknowledged that microstructure configurations modulate the macroscopic properties of crosslinked fiber networks [25], preserving the microscopic level description is of great importance to model tissue emergence. Most of the computational models developed thus far for mimicking ECM networks are two-dimensional [17; 39; 10; 18; 2; 35; 13; 5; 1; 38; 30]. Few studies have been conducted on \(3\)D models [36; 19; 42; 7; 16; 23; 26; 32], although these are expected to yield different, more realistic results than \(2\)D ones since they better mimic biological structures themselves immersed in \(3\)D environments. One of the reasons for fewer \(3\)D models is the great increase in the number of agents needed to achieve a given spatial density and thus in the associated computational cost. Another reason is the lack of high quality data on ECM organization in \(3\)D. However, the latter is becoming less and less of an issue with recent improvements in high resolution \(3\)D imaging and its availability. Among existing \(3\)D models, few of them feature dynamical crosslinking of ECM components. In [42; 16; 27], various models of \(3\)D fibrous networks composed of permanent or transient crosslinks (remodeling) are proposed. However, most of these models feature ECM remodeling in reaction to external factors (applied load [42], migrating cells [16], contractile cells [27]), and the literature so far provides little cues on the mechanisms underlying fiber self-organization. In the present paper, we test the hypothesis that fiber macrostructures could spontaneously emerge without appealing to contact guidance or external mechanical challenges, as a result of simple mechanical interactions between the fiber elements composing the ECM network. 
We assess this hypothesis by means of a three dimensional model that is a \(3\)D extension of the two-dimensional model of ECM presented in [30] in the frame of adipose tissue morphogenesis, which was validated against biological data. ECM fibers are discretized into unit fiber elements, consisting of non-stretching and nonflexible sphero-cylinders with the ability to spontaneously link to and unlink from their close neighbors. This dynamical crosslinking mechanism allows us to model both the overall temporal plasticity of the network and the complex physical properties of biological fibers such as elongation, bending, branching and growth, thus compensating our minimalistic description of the fiber units. Through computational simulations and exhaustive parametric analysis, we demonstrate that organized macrostructures can spontaneously emerge without external guidance. This study provides a comprehensive view on the role of ECM connectivity on tissue architecture emergence. The model first reveals that dynamic remodelling is essential for the generation of ordered ECM structures. Moreover, we surprisingly find that for dynamical networks, tissue architecture at equilibrium is simply controlled by the proportion of crosslinks in the network, independently of the amount of fibers or the remodelling speed of the network. These major results show that the emergence of ordered structures in biological fiber networks could be principally driven by the proportion of crosslinks they contain. This simple emerging variable therefore becomes an important putative target to control and predict the development of the architecture of biological tissues. Because of its simplicity, this variable is amenable for experimental measurements and could represent a major target for the development of therapeutic drugs enabling to induce tissue recovery after injury, prevent tissue degradation during ageing, or help in the design of engineering collagen scaffolds for tissue regeneration. With this in mind, we perform a deep exploration of the model parameters and use quantitative tools to characterize as precisely as possible how the different spatial structures emerge as function of the intrinsic parameters of our networks. ## 2 Models and methods ### Description of the model We model the complex fiber structures by \(\mathrm{N}_{\mathrm{fib}}\) unit fiber elements consisting of line segments of fixed and uniform length \(L_{\mathrm{fib}}\). They are represented by the position of their centers \(\mathbf{X}_{k}(t)\in\mathbb{R}^{3}\) and their directional unit vectors \(\omega_{k}(t)\). These elements are non-oriented, meaning that we can restrict the phase-space of the directional vectors to the half unit sphere \(\mathbb{S}_{2}^{+}\). We include the following biological features : _(i) Fiber resistance to pressure_ is modelled by a short-range repulsive force between pairs of fibers. Indeed, from a physical point of view, each fiber occupies a given volume from which other fibers are excluded. However, since implementing a strict sterical exclusion constraint would have a high computational cost, we settled for a repulsive interaction allowing some interpenetration between fibers. We assume that the intensity of the force field generated by a fiber decreases linearly with the distance to this fiber, thus displaying sphero-cylindrical isolines. 
We denote by \(\alpha_{\text{rep}}\) the maximal intensity of this force field, which is reached on the fiber, and by \(R_{\text{fib}}\) the threshold beyond which the force field vanishes (this can be regarded as the "width" of the fiber). Given two fibers \(k\) and \(m\), we denote by \(\mathbf{X}_{k,m}\) the point of fiber \(k\) closest to fiber \(m\) (see annex A.2 for the actual computation of this point). If \(||\mathbf{X}_{k,m}-\mathbf{X}_{m,k}||\leq 2R_{\text{fib}}\), fiber \(k\) sustains from fiber \(m\) a repulsive force : \[\mathbf{F}_{\mathbf{k},\mathbf{m}}^{\text{rep}}=\alpha_{\text{rep}}\left(2R_{ \text{fib}}-||\mathbf{X}_{k,m}-\mathbf{X}_{m,k}||\right)^{3/2}\sqrt{2R_{\text{ fib}}}\times\frac{\mathbf{X}_{k,m}-\mathbf{X}_{m,k}}{||\mathbf{X}_{k,m}- \mathbf{X}_{m,k}||}, \tag{1}\] which is applied at point \(\mathbf{X}_{k,m}\), thus inducing a rotational torque : \[\mathbf{T}_{\mathbf{k},\mathbf{m}}^{\text{rep}}=(\mathbf{X}_{k,m}-\mathbf{X}_{ k})\wedge\mathbf{F}_{\mathbf{k},\mathbf{m}}^{\text{rep}}, \tag{2}\] on fiber \(k\). _(ii) Fiber growth, elongation and ability to bend_ are modelled by allowing two fibers closer than a certain threshold \(d_{\text{link}}^{\text{max}}\) to create a crosslink. A crosslink is defined as a spring of stiffness \(\alpha_{\text{rest}}\) and unloaded length \(d_{\text{link}}^{\text{eq}}\), fixed to the two points of the crosslinked fibers that were closest at the time of its creation. For two linked fibers \(k\) and \(m\), let us denote by \(\mathbf{X}_{k,m}^{l}\) the point of fiber \(k\) that was closest to fiber \(m\) at the time of the link creation : the elastic restoring force sustained by fiber \(k\) due to its link with fiber \(m\) is equal to \[\mathbf{F}_{\mathbf{k},\mathbf{m}}^{\text{rest}}=\alpha_{\text{rest}}\left(d_{ \text{link}}^{\text{eq}}-||\mathbf{X}_{k,m}^{l}-\mathbf{X}_{m,k}^{l}||\right) \frac{\mathbf{X}_{k,m}^{l}-\mathbf{X}_{m,k}^{l}}{||\mathbf{X}_{k,m}^{l}- \mathbf{X}_{m,k}^{l}||}, \tag{3}\] and induces a rotational torque on fiber \(k\) : \[\mathbf{T}_{\mathbf{k},\mathbf{m}}^{\text{rest}}=\left(\mathbf{X}_{k,m}^{l}- \mathbf{X}_{k}\right)\wedge\mathbf{F}_{\mathbf{k},\mathbf{m}}^{\text{rest}}. \tag{4}\] To ensure coherence between the different features of the model, we require that \(2R_{\text{fib}}\leq d_{\text{link}}^{\text{eq}}\leq d_{\text{link}}^{\text{ max}}\). Linking and unlinking processes follow Poisson processes with frequencies \(\nu_{\text{link}}\) and \(\nu_{\text{unlink}}\) respectively. As a result, the linked fiber ratio \(\chi_{\text{link}}=\frac{\nu_{\text{link}}}{\nu_{\text{link}}+\nu_{\text{ unlink}}}\) represents the equilibrium fraction of linked fibers among the pairs of neighbouring fibers. Several consecutively crosslinked fiber units would model a long fiber having the ability to bend or even take possible tortuous geometries. As the number of crosslinks attached to a given fiber is not limited, we can also account for fiber branching. Therefore, the crosslinking process will model fiber elongation [8] and symmetrically, spontaneous unlinking of pairs of crosslinked fibers will allow for fiber breakage describing ECM remodeling processes. _(iii) Fiber stiffness_, that is the ability of biological fibers to offer a certain resistance to bending, is accounted for by subjecting pairs of linked fibers to a torque at their junction. This torque vanishes when the fibers are parallel, and consequently acts as a linked-fiber alignment mechanism. 
It is characterized by a stiffness parameter \(\alpha_{\text{align}}>0\) playing the role of a flexural modulus : the larger \(\alpha_{\text{align}}\), the more rigid the fiber network. Given two linked fibers \(k\) and \(m\), the torque sustained by the fiber \(k\) is such that, \(\forall\mathbf{u}\in\mathbb{R}^{3}\), \[\mathbf{T}_{\mathbf{k},\mathbf{m}}^{\text{align}}\wedge\mathbf{u}=\alpha_{ \text{align}}\left((\omega_{k}\wedge\widetilde{\omega}_{m})\wedge\mathbf{u}+ \frac{1-|\omega_{k}\cdot\omega_{m}|}{||\omega_{k}\wedge\omega_{m}||^{2}}( \omega_{k}\wedge\widetilde{\omega}_{m})\wedge((\omega_{k}\wedge\widetilde{ \omega}_{m})\wedge\mathbf{u})\right), \tag{5}\] where \(\widetilde{\omega}_{m}=\text{sign}(\omega_{k}\cdot\omega_{m})\cdot\omega_{m}\) so that there is no preferential orientation. _(iv) Over-damped regime_ (i.e. negligible acceleration during one time-step) is a physically reasonable hypothesis in the case of biological fibers, since they are surrounded by a rather thick medium which induces high friction. We assume that the friction sustained by an infinitesimal element of a fiber follows a Stokes law with friction coefficient \(\mu_{\text{fib}}\). The total friction force sustained by a fiber \(k\), computed by integrating this law on the whole length of the fiber, is equal to : \[\mathbf{F}_{\mathbf{k}}^{\text{fric}}=-\mu_{\text{fib}}L_{\text{fib}}\frac{d \mathbf{X}_{k}}{dt} \tag{6}\] and the associated rotational torque is equal to : \[\mathbf{T}_{\mathbf{k}}^{\text{fric}}=-\mu_{\text{fib}}L_{\text{fib}}^{3} \omega_{k}\wedge\frac{d\omega_{k}}{dt}. \tag{7}\] We denote by \(p_{k,m}(t)\) the linking state of fibers \(k\) and \(m\), that is \(p_{k,m}(t)\) is equal to \(1\) if fibers \(k\) and \(m\) are linked at time \(t\) and to \(0\) otherwise. The fundamental principle of dynamics in the over-damped regime gives rise to the following set of differential equations : \[\left\{\begin{array}{ll}\mu_{\text{fib}}L_{\text{fib}}\frac{d \mathbf{X}_{k}}{dt}(t)=\sum_{m=1}^{\text{N}_{\text{fib}}}\left(\mathbf{F}_{ \mathbf{k},\mathbf{m}}^{\text{rep}}(t)+p_{k,m}(t)\mathbf{F}_{\mathbf{k}, \mathbf{m}}^{\text{rest}}(t)\right)&\\ \mu_{\text{fib}}L_{\text{fib}}^{3}\frac{d\omega_{k}}{dt}(t)=\sum_{m=1}^{\text {N}_{\text{fib}}}\left(\mathbf{T}_{\mathbf{k},\mathbf{m}}^{\text{rep}}(t)+p_{ k,m}(t)\left(\mathbf{T}_{\mathbf{k},\mathbf{m}}^{\text{rest}}(t)+\mathbf{T}_{ \mathbf{k},\mathbf{m}}^{\text{align}}(t)\right)\right)\wedge\omega_{k}(t)&\end{array} \right.\forall k\in\llbracket 1,\text{N}_{\text{fib}}\rrbracket. \tag{8}\] ### Description of the experiments We consider a spatial domain \(\Omega\) which is a cuboid of side length \(L_{x}\), \(L_{y}\) and \(L_{z}\) respectively in the \(x\), \(y\) and \(z\)-dimension, and is centered on the origin : \[\Omega=\left[-\frac{L_{x}}{2},\frac{L_{x}}{2}\right]\times\left[-\frac{L_{y}}{ 2},\frac{L_{y}}{2}\right]\times\left[-\frac{L_{z}}{2},\frac{L_{z}}{2}\right].\] For the sake of simplicity, we assume periodic boundary conditions : an agent exiting the domain by one side re-enters immediately from the opposite side, and interactions between agents are computed using the periodicized euclidean distance. In other words, \(\Omega\) is topologically equivalent to the \(3\)D-torus. We perform numerical simulations of our model on the domain \(\left(\Omega\times\mathbb{S}_{2}^{+}\right)^{\text{N}_{\text{fib}}}\) for various sets of parameters. Fibers are randomly inseminated inside the domain according to a uniform law for both position and orientation. 
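As a minimal sketch of this set-up (uniform insemination in the cuboid \(\Omega\), non-oriented directions on the half sphere \(\mathbb{S}_{2}^{+}\), and the periodicized distance used to evaluate interactions), one possible NumPy implementation is given below. The numerical values are those later listed in Table 1, and drawing directions on the full sphere before flipping them onto the half sphere is one standard way to realize the stated uniform law; the helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Domain and agent parameters (model units, cf. Table 1).
Lx = Ly = Lz = 30.0
N_fib = 3000

# Uniform insemination of fiber centers in the cuboid Omega.
X = rng.uniform(low=[-Lx / 2, -Ly / 2, -Lz / 2],
                high=[Lx / 2, Ly / 2, Lz / 2],
                size=(N_fib, 3))

# Uniform directional unit vectors on the half sphere S2+:
# sample isotropically on the full sphere, then flip so the z-component is non-negative.
omega = rng.normal(size=(N_fib, 3))
omega /= np.linalg.norm(omega, axis=1, keepdims=True)
omega[omega[:, 2] < 0] *= -1.0


def periodic_displacement(a: np.ndarray, b: np.ndarray, box=(Lx, Ly, Lz)) -> np.ndarray:
    """Minimum-image displacement b - a under the periodic boundary conditions."""
    d = b - a
    box = np.asarray(box)
    return d - box * np.round(d / box)
```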
The differential system (8) is then numerically solved using a discrete upwind Euler scheme with adaptive time-step, which has a very low computational cost. Further reduction of the computational cost is achieved by dividing \(\Omega\) into cubes whose side length is larger than the maximal range of the interactions : thus, interactions need only be computed for pairs of agents located in neighbouring cubes. Details of the numerical implementation are given in annex A.1. The physical scaling of all the parameters of the model, as well as the values used in the simulations, are described in Table 1. A few points may be noted : the perception distance for link creation \(d_{\text{link}}^{\text{max}}\) and the link unloaded length \(d_{\text{link}}^{\text{eq}}\) are both equal to their minimal acceptable value \(2R_{\text{fib}}\); the size of the domain is approximately \(4\) times the size of a fiber along its main axis; the fiber aspect-ratio \(\frac{L_{\text{fib}}}{2R_{\text{fib}}}=6\) is quite small compared to the values used in other models of the ECM, which usually vary between \(250\) and \(10^{4}\) [34, 23, 26]. This compensates for the fact that these models directly account for fiber bending and/or fiber elongation. We denote by \(\phi_{\text{fib}}\) the fiber density of the network, that is the ratio between the total volume of fibers (without overlapping) and the volume of the spatial domain :

\[\phi_{\text{fib}}=\frac{\text{N}_{\text{fib}}V_{\text{fib}}}{|\Omega|}=\text{N}_{\text{fib}}\,\frac{\pi R_{\text{fib}}^{2}L_{\text{fib}}+(4/3)\pi R_{\text{fib}}^{3}}{L_{x}L_{y}L_{z}}. \tag{9}\]

The quantity \(\phi_{\text{fib}}\) can be compared to the packing density, that is the maximal fraction of the domain that can be occupied by densely packed fibers. When considering an ordered packing, the optimal configuration of sphero-cylinders is the same as that of cylinders, and the resulting packing density is a weighted average of the packing densities of spheres and cylinders, which in our case gives \(\phi_{\text{order}}=0.89\). However, since in our model fibers are randomly inseminated, the situation is closer to what is called a random or amorphous packing : particles are generated randomly with a volume exclusion constraint until it is no longer possible to inseminate another one. The density reached at that point is called the maximal random packing density \(\phi_{\text{random}}\). For sphero-cylinders with an aspect-ratio of \(6\), the literature gives us \(\phi_{\text{random}}\approx 0.4\) [39]. Thus, we may say that a system is "sparse" if its fiber density is below \(\phi_{\text{random}}\), "dense" if it is between \(\phi_{\text{random}}\) and \(\phi_{\text{order}}\), and "hyperdense" if it is above \(\phi_{\text{order}}\). We study two types of systems : dense systems containing \(\text{N}_{\text{fib}}=3000\) fibers (\(\phi_{\text{fib}}=0.58\)) and sparse systems with \(\text{N}_{\text{fib}}=1500\) fibers (\(\phi_{\text{fib}}=0.29\)). For each of the three types of mechanical forces in the system, we define the "characteristic interaction time" as the time needed for two isolated fibers, interacting only via this force and initially positioned in the most unfavourable configuration, to reach \(99\%\) of the (asymptotic) equilibrium state.
\begin{table} \begin{tabular}{|c|c|c|l|} \hline Name & Value & Units & Description \\ \hline \multicolumn{4}{|c|}{**Agents**} \\ \hline \(\text{N}_{\text{fib}}\) & \([1500,3000]\) & N/A & Number of fibers \\ \hline \(L_{\text{fib}}\) & \(6\) & \(L\) & Fiber length \\ \hline \(R_{\text{fib}}\) & \(0.5\) & \(L\) & Fiber radius \\ \hline \multicolumn{4}{|c|}{**Mechanical interactions**} \\ \hline \(\alpha_{\text{rep}}\) & \(12.5\) & \(M.L^{-1}.T^{-2}\) & Magnitude of the repulsion force \\ \hline \(\alpha_{\text{rest}}\) & \(5.0\) & \(M.T^{-2}\) & Magnitude of the elastic restoring force \\ \hline \(\alpha_{\text{align}}\) & \(2.0\) & \(M.L^{2}.T^{-2}\) & Magnitude of the alignment torque \\ \hline \(d_{\text{link}}^{\text{max}}\) & \(1.0\) & \(L\) & Perception distance for link creation \\ \hline \(d_{\text{link}}^{\text{eq}}\) & \(1.0\) & \(L\) & Link equilibrium length \\ \hline \multicolumn{4}{|c|}{**Biological phenomena**} \\ \hline \(\nu_{\text{link}}\) & \([0,10]\) & \(T^{-1}\) & Network remodeling speed \\ \hline \(\chi_{\text{link}}\) & \([0.1,0.9]\) & N/A & Equilibrium linked fiber fraction \\ \hline \multicolumn{4}{|c|}{**Numerical parameters**} \\ \hline \(L_{x}=L_{y}=L_{z}\) & \(30\) & \(L\) & Side length of the cubic domain \\ \hline \(T_{\text{final}}\) & \(5\times 10^{4}\) & \(T\) & Total time of simulation \\ \hline \end{tabular} \end{table} Table 1: Model parameters.

For repulsion, \(T_{\text{rep}}\) is the time needed for two fully overlapped fibers (\(\mathbf{X}_{1}=\mathbf{X}_{2}\) and \(\omega_{1}=\omega_{2}\)) to move apart by \(99\%\) of their equilibrium distance \(2R_{\text{fib}}\) (i.e. \(||\mathbf{X}_{1}-\mathbf{X}_{2}||=0.99\times 2R_{\text{fib}}\)). Similarly, for the elastic spring \(T_{\text{rest}}\) is the time needed for two fibers that are initially fully overlapping and crosslinked at their center to move apart by \(99\%\) of their equilibrium distance \(d_{\text{link}}^{\text{eq}}\). On the other hand, for nematic alignment \(T_{\text{align}}\) is the time needed for two perpendicularly intersecting fibers (\(\mathbf{X}_{1}=\mathbf{X}_{2}\) and \(\omega_{1}\perp\omega_{2}\)) crosslinked at their center to reach a relative angle \(\arccos(\omega_{1}\cdot\omega_{2})=0.9^{\circ}\). Explicit computation leads to the following formulas (numerical values are given for the parameters presented in Table 1) :

\[\left\{\begin{array}{l}T_{\text{rep}}=\frac{9\mu_{\text{fib}}L_{\text{fib}}}{R_{\text{fib}}\;\alpha_{\text{rep}}}=8.64\,U_{t},\\ T_{\text{rest}}=\ln(100)\,\frac{\mu_{\text{fib}}L_{\text{fib}}}{\alpha_{\text{rest}}}=5.53\,U_{t},\\ T_{\text{align}}=4.8\,\frac{\mu_{\text{fib}}L_{\text{fib}}^{3}}{\alpha_{\text{align}}}=523\,U_{t}.\end{array}\right. \tag{10}\]

It may be noted that the alignment interaction is much slower than the repulsive and elastic restoring forces.

## 3 Results

### Matrix crosslinking drives the emergence of ordered structures in 3D dynamical networks.

In Figure 1.(A-C), we show various structures that can be obtained with our model by playing on the parameters in the ranges indicated in Table 1. The fibers are represented by double arrows, colored as a function of their local alignment with their neighbors.
We refer the readers to annex B.1 for more details on the computation of this quantifier, and just mention that the local alignment of fiber \(k\), denoted \(\text{Al}_{k}\), is equal to \(1\) (fiber colored in red) if all the neighbouring fibers display the exact same direction as fiber \(k\), and to \(0\) (fiber colored in blue) if the neighbouring fibers display uniformly distributed directional vectors. As one can observe, the fiber structures obtained at equilibrium range from highly aligned systems (mainly composed of red fibers, see Figure 1.A) to disordered systems with a low local alignment (mainly composed of fibers colored in blue, see Figure 1.C). The model can also produce intermediate states composed of fibers with a median local alignment (see Figure 1.B). In order to assess the alignment states of our different fiber networks, we computed the mean and the standard deviation of the local alignment indicator \(\text{Al}_{k}\) over all the fibers of the system, denoted by \(\text{Al}_{\text{mean}}\) and \(\text{Al}_{\text{STD}}\). By plotting this alignment quantifier \(\text{Al}_{\text{mean}}\) (computed on the systems at equilibrium) as a function of the proportion of links \(\text{N}_{\text{linkperfib}}=\text{N}_{\text{links}}/\text{N}_{\text{fib}}\), we discovered a striking and major correlation between these two quantities. This correlation is shown in Figure 1.D, where each point corresponds to the average over \(10\) simulations conducted with the same set of parameters, with vertical and horizontal error-bars indicating \(\text{Al}_{\text{STD}}\) and the standard deviation of \(\text{N}_{\text{linkperfib}}\) respectively. The different markers indicate different fiber densities (dots for dense systems and triangles for sparse ones), the different colors refer to different network dynamics \(\nu_{\text{link}}\), and inside each color series \(\chi_{\text{link}}\) increases with \(\text{N}_{\text{linkperfib}}\).

Figure 1: **Panels A-C:** Illustration of the various structures that can be observed at equilibrium. Fibers are represented by double-headed arrows and colored according to their local alignment with their neighbours (from blue : \(\text{Al}_{k}=0\) to red : \(\text{Al}_{k}=1\)). The structures range from systems with a uniformly high local alignment indicator (panel **A**) through systems with a heterogeneous, intermediate local alignment indicator (panel **B**) to disordered systems with a uniformly low local alignment indicator (panel **C**). **Panel D** : Value of \(\text{Al}_{\text{mean}}\) according to \(\text{N}_{\text{linkperfib}}\) at equilibrium, with color depending on the remodelling speed \(\nu_{\text{link}}\). Each data-point represents the average value computed over \(10\) simulations conducted with the same set of parameters, with horizontal and vertical error-bars for the standard deviation over \(\text{N}_{\text{linkperfib}}\) and \(\text{Al}_{\text{mean}}\) respectively. The gray dashed-line indicates the critical value of \(\text{N}_{\text{linkperfib}}\) and the black dashed-lines the two logarithmic fits obtained for \(\text{N}_{\text{linkperfib}}<\text{N}_{\text{critic}}\).

Figure 1.D reveals that the values of \(\text{Al}_{\text{mean}}\) and \(\text{N}_{\text{linkperfib}}\) at equilibrium are highly correlated. When \(\text{N}_{\text{linkperfib}}\) is below a critical threshold \(\text{N}_{\text{critic}}\approx 0.7\) (indicated with a grey dashed line on Figure 1.D), there is a logarithmic correlation between the proportion of links in the network and its mean alignment indicator :

\[\text{Al}_{\text{mean}}\approx\alpha\log(\text{N}_{\text{linkperfib}})+\beta, \tag{11}\]

with

* \(\alpha=0.037\), \(\beta=1.006\) and coefficient of determination \(r^{2}=0.87\) for dynamical systems (non-blue markers);
* \(\alpha=0.129\), \(\beta=0.651\) and coefficient of determination \(r^{2}=0.96\) for sparse non-dynamical networks (blue triangles);
* \(\alpha=0.042\), \(\beta=0.433\) and coefficient of determination \(r^{2}=0.985\) for dense non-dynamical networks (blue dots).

All these correlations are shown on Figure 1.D with black dashed lines. Then, when \(\text{N}_{\text{linkperfib}}>\text{N}_{\text{critic}}\) we observe an abrupt drop of the equilibrium value of \(\text{Al}_{\text{mean}}\). Surprisingly and very interestingly, for dynamical systems (\(\nu_{\text{link}}>0\)) there is no difference in alignment induced by the fiber density or the link characteristics \(\nu_{\text{link}}\) and \(\chi_{\text{link}}\) : the correlation observed is the same for all sets of points. The second major observation from Figure 1.D is the difference between non-dynamical and dynamical networks at equilibrium. Indeed, non-dynamical networks, composed of a fixed number of links, are systematically less aligned than dynamical ones (compare the values of \(\text{Al}_{\text{mean}}\) between the blue markers and the other colors). Moreover, although we do recover the same type of correlation between the fiber local alignment and the proportion of links in the network, for non-dynamical networks this correlation significantly depends on the fiber density. However, the critical number of links \(\text{N}_{\text{critic}}\) allowing for larger alignment is the same for non-dynamical networks, either dense or sparse, and for dynamical networks. Altogether, these results show that the emergence of organized networks (i) requires some remodelling abilities of the ECM matrix and (ii) is mainly controlled by the proportion of its crosslinks. Therefore, we performed a thorough exploration of the role of the model parameters on the tissue architectures at equilibrium and in time, characterizing both the local arrangement of the fibers and the global architecture. In the following, we will take a particular focus on the role of the matrix remodelling speed (viewed as a measure of its "plasticity").

### Characterization and quantitative assessment of various 3D architectures.

In this section, we take a step further in the qualitative and quantitative analysis of the various tissue architectures that emerge from our \(3\)D mathematical model. The goal is to describe as precisely as possible both the local organization of the fibers in the networks and the large-scale structures. First, by performing a theoretical analysis of the alignment quantifier \(\text{Al}_{k}\) using different preset distributions of the fibers directional vectors (see Figure 7 in annex B.1), we showed that it was able to discriminate between fibers located in randomly oriented environments (corresponding to \(\text{Al}_{k}<0.5\)), fibers located in nearly planar environments (leading to \(\text{Al}_{k}\) around \(0.7\)), and fibers located in nearly uni-directional environments (leading to \(\text{Al}_{k}\) above \(0.8\)).
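The exact construction of \(\text{Al}_{k}\) is given in annex B.1, which is not reproduced in this excerpt. As a rough stand-in that captures the intended behaviour (values close to \(1\) in co-aligned neighbourhoods, lower values in disordered ones), one can average the absolute cosine between a fiber's direction and those of its neighbours; the neighbourhood radius below is an arbitrary choice, not the paper's.

```python
import numpy as np

def local_alignment(X: np.ndarray, omega: np.ndarray, r_neigh: float = 6.0) -> np.ndarray:
    """Stand-in local alignment indicator (NOT the annex B.1 definition).

    For every fiber k, average |omega_k . omega_j| over the fibers j whose
    centers lie within r_neigh of X_k: a perfectly co-aligned neighbourhood
    gives 1, more disordered neighbourhoods give lower values.
    """
    N = X.shape[0]
    al = np.zeros(N)
    for k in range(N):
        d = np.linalg.norm(X - X[k], axis=1)
        mask = (d < r_neigh) & (d > 0.0)
        if mask.any():
            al[k] = np.abs(omega[mask] @ omega[k]).mean()
    return al

# Al_mean and Al_STD as used in the text:
# al = local_alignment(X, omega); Al_mean, Al_STD = al.mean(), al.std()
```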
Then, to observe the global distribution of the fibers, we used a stereographic projection of their directional vectors (see Figure 8 in annex B.2 for a detailed explanation). Disregarding the spatial position of a fiber (that is, the position of its center), we represented its directional vector as a point on the surface of the unit half-sphere in \(3\)D and then projected it onto the unit disk in \(2\)D. The pole of the projection is chosen as the "main direction" of the system, that is the average of all directional vectors. As shown in Figure. 2, this representation enabled us to characterize the different global organizations of our fiber networks. Indeed, we observed three different types of stereographic projections in our simulations : fibers directional vectors very concentrated around the center of the disk, corresponding to a global alignment of the system (Figure 1.A and Figure 2.A), fibers directional vectors homogeneously distributed on the disk corresponding to a global disorder (Figure 1.C and Figure 2.E), and fibers directional vectors distributed along a preferential axis, with complete depletion in the direction perpendicular to this axis, corresponding to global curved/plane structures (Figure 2.(B-D)). Therefore, the different states of our networks could be characterized both by a quantifier for local structuring such as \(\text{Al}_{\text{mean}}\) and by quantifiers for global organization such as the size of the stereographic projection covariance ellipse \(\text{A}_{\text{max}}\). We considered a system to be locally aligned if the local distribution of its fibers directional vectors was mainly unidirectional, that is \(\text{Al}_{\text{mean}}\) above \(0.7\). At the same time, we considered that a system was globally aligned, in the sense that it displayed a global main direction, if its stereographic projection covariance ellipse had a semi-major axis smaller than \(0.45\) (implying that the point cloud covers less than \(20\%\) of the whole projection disk). We therefore classified the simulations outcomes into three different states (unorganized, curved and aligned) using Table 2. We ran a total of \(1080\) numerical simulations, exploring various values of the parameters \(\nu_{\text{link}}\), \(\chi_{\text{link}}\) and N\({}_{\text{fib}}\) in the broad ranges indicated in Table 1, and counted among their outcomes : * \(180\) unorganized states (all occurring in non dynamical systems, i.e. \(\nu_{\text{link}}=0\)), * \(661\) curved states, * \(239\) aligned states (among which only \(12\) occurred in sparse systems). Figure 2.A compare the values of quantifiers Al\({}_{\text{mean}}\) and A\({}_{\text{max}}\) when the simulation has reached equilibrium, for dense systems (see annex C.2 for the equivalent figure on sparse systems). The points are colored according to the states defined previously (blue dots correspond to unorganized states, orange diamonds to curved states and red crosses to aligned states). The simulations already displayed in Figure 1 are indicated with a black star and their stereographic projection shown as inset. 
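A sketch of such a projection and of the covariance-ellipse quantifier is given below. It only assumes the description above (pole at the mean direction, hemisphere mapped onto the unit disk); the precise conventions, including the scale factor defining the semi-axes of the ellipse, belong to annex B.2 and are therefore only approximated here.

```python
import numpy as np

def stereographic_disk(omega: np.ndarray) -> np.ndarray:
    """Project directional vectors onto the unit disk, with the pole at the mean direction.

    Assumed construction (annex B.2 is not reproduced here): vectors are flipped onto
    the hemisphere around the mean direction d, expressed in an orthonormal frame
    (e1, e2, d), then stereographically projected from -d, which maps that hemisphere
    onto the unit disk.
    """
    # Non-oriented vectors: simple heuristic flip toward the current mean direction.
    d = omega.mean(axis=0)
    d /= np.linalg.norm(d)
    signs = np.sign(omega @ d)
    signs[signs == 0] = 1.0
    w = omega * signs[:, None]

    # Orthonormal frame (e1, e2, d).
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(d, helper); e1 /= np.linalg.norm(e1)
    e2 = np.cross(d, e1)

    x, y, z = w @ e1, w @ e2, w @ d
    return np.stack([x / (1.0 + z), y / (1.0 + z)], axis=1)

def semi_major_axis(points: np.ndarray, coverage: float = 2.0) -> float:
    """Semi-major axis of the covariance ellipse of the projected point cloud.

    The coverage factor (2 standard deviations here) is a convention choice; the
    exact definition of A_max is the one of annex B.2.
    """
    cov = np.cov(points.T)
    return coverage * np.sqrt(np.max(np.linalg.eigvalsh(cov)))
```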
\begin{table} \begin{tabular}{|c|c|l|l|} \hline & & \multicolumn{2}{c|}{\(\text{A}_{\text{max}}\)} \\ \cline{3-4} & & \(\leqslant 0.45\) & \(>0.45\) \\ \hline \multirow{2}{*}{\(\text{Al}_{\text{mean}}\)} & \(\geqslant 0.7\) & **Aligned state :** alignment both local and global. & **Curved state :** alignment local but not global. \\ \cline{2-4} & \(<0.7\) & (alignment global but not local) & **Unorganized state :** no alignment, either local or global. \\ \hline \end{tabular} \end{table} Table 2: Classification of the simulation outcomes into different states based on the local quantifier \(\text{Al}_{\text{mean}}\) and the global quantifier \(\text{A}_{\text{max}}\). The case \(\{\text{Al}_{\text{mean}}<0.7\ \&\ \text{A}_{\text{max}}\leqslant 0.45\}\) never occurs in our simulations and is thus unnamed.

Figure 2: **Panel A** : Mean alignment indicator versus semi-major axis length of the covariance ellipse of the stereographic projection, for each simulation of dense systems (\(\text{N}_{\text{fib}}=3000\)). Red crosses correspond to systems in an aligned state, orange diamonds to curved states and blue dots to unorganized states. The simulations previously displayed in Figure 1 are indicated with a black star and their stereographic projection given as inset. **Panels B to E** display the equilibrium state of a few other simulations, whose positions on the diagram are also indicated with a black star.

Four other simulation outcomes are singled out with black stars and illustrated with a \(3\)D view and stereographic projection in the panels B to E on the right. Together, all these simulations give an overview of the various typical and borderline cases that can be generated by our model. We first observe that the unorganized states (blue dots) form a small, compact group of points with a large semi-major axis length, while the aligned states (red crosses) make a long thin group with a very high alignment indicator. On the other hand, the curved states (orange diamonds) form a scattered cloud of points with a broad range of values for both the semi-major axis length and the alignment indicator. We can thus observe that the transition between unorganized and curved states is very sharp : notice the gap between the blue dots and orange diamonds in panel A. Indeed, no simulation displays an average alignment indicator at equilibrium between \(0.65\) and \(0.77\) (including sparse systems, see annex C.2), and there is a marked difference between the least organized of the curved states (illustrated in Figure 1.A) and the most organized of the unorganized states (illustrated in Figure 2.E). This confirms our choice of \(0.7\) for the threshold value between unorganized and curved states. On the contrary, the transition from curved to aligned states is not a clear switch but a continuum of structures that can be illustrated by the two borderline cases in panels B and C of Figure 2. Thus, one must be aware that the partition between curved and aligned states is partly arbitrary and depends on the choice of the threshold. Our quantifiers therefore allowed us to quantitatively characterize three different network architectures.

### ECM architecture emergence is driven by a complex interplay between remodelling speed and linked fiber fraction

In this section, we use the quantifiers defined in the previous section to study the impact of the model parameters on the different tissue architectures (aligned/curved/unorganized) obtained by the model.
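The classification of Table 2 above reduces to the two thresholds on \(\text{Al}_{\text{mean}}\) and \(\text{A}_{\text{max}}\) and can be written directly as, for instance:

```python
def classify_state(al_mean: float, a_max: float) -> str:
    """Classification of a simulation outcome following Table 2."""
    if al_mean >= 0.7:
        return "aligned" if a_max <= 0.45 else "curved"
    # Al_mean < 0.7 with a small A_max never occurs in the simulations,
    # so everything below the alignment threshold is labelled unorganized.
    return "unorganized"
```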
In Figure 3, we show the distribution of the simulations outcomes at equilibrium, depending on the values of the network remodeling speed \(\nu_{\text{link}}\) (panels A and B) and the equilibrium linked fiber fraction \(\chi_{\text{link}}\) (panels C and D), for dense networks with \(\text{N}_{\text{fib}}=3000\) (panels A and C) and sparse networks with \(\text{N}_{\text{fib}}=1500\) (panels B and D). To account for the stochastic components of our model, we run \(10\) simulations for each set of parameters. Thus, each bar in panels A and B represents which percentage of the \(90\) simulations conducted with the indicated value of \(\nu_{\text{link}}\) and \(\text{N}_{\text{fib}}\) (varying over \(9\) values of \(\chi_{\text{link}}\)) ended up in which state. And each bar in panels C and D represents which percentage of the \(60\) simulations conducted with the indicated value of \(\chi_{\text{link}}\) and \(\text{N}_{\text{fib}}\) (varying over \(6\) values of \(\nu_{\text{link}}\)) ended up in which state. As already mentioned in section 3.1, we recover that non-dynamical networks (\(\nu_{\text{link}}=0\), left columns of Figures 3.A and B) are systematically unorganized, independently of the equilibrium linked fiber fraction or the fiber density. On the contrary, dynamical networks (\(\nu_{\text{link}}>0\)) never equilibrate in an unorganized state : their plasticity (i.e. their ability to rearrange their connections) favours the formation of more organized states than non-dynamical networks. This shows that the discontinuous phase transition between unorganized and curved equilibrium states, revealed in Figure 2, is controlled by \(\nu_{\text{link}}\). In contrast, the transition between the curved and aligned states is not controlled by a unique model parameter but is the interplay between several parameters. Indeed, we first observe in Figure 3 that dense dynamical networks seem to have a greater ability to create aligned states than sparse networks, which tend to favour curved states (compare the red zones in panels A and C with the ones of panels B and D). Moreover, we also observe that, for both fiber densities, networks with a moderate remodeling speed \(\nu_{\text{link}}\approx 0.01\) (middle column of panels A and B) seem to have a greater ability to reorganize into aligned states than low dynamical networks (\(\nu_{\text{link}}\approx 0.001\)) or fast remodeling networks (\(\nu_{\text{link}}\geq 0.1\)) (compare the red zones of each bar inside panels A and B). These results suggest that there exists a remodeling speed maximising the network alignment. Looking at the impact of the equilibrium linked fiber fraction \(\chi_{\text{link}}\), we observe different behaviours depending on the fiber density of the system. For sparse networks (Figure 3.D), increasing the equilibrium linked fiber fraction tends to favour a higher level of organization by increasing slightly the number of aligned states (red zones). On the contrary, dense networks (Figure 3.C) exhibit a more complex behaviour where intermediate fiber fraction \(\chi_{\text{link}}\in[0.4,0.6]\) generate more aligned states (red zones), implying that there exists an equilibrium linked fiber fraction maximising the alignment of the system. These results show that the different types of tissue architectures (aligned, curved or unorganized) depend on an interplay between parameters \(\nu_{\text{link}}\) and \(\chi_{\text{link}}\). 
Figure 3: Distribution of the outcomes of all \(1080\) simulations between the different categories. Red zones correspond to systems in an aligned state, orange zones to curved states and blue zones to unorganized states. **Panels A and B** : Each bar gives the percentage of each category among the outcomes of the \(90\) simulations conducted with a given value of \(\nu_{\text{link}}\) (on the x-axis) and \(\text{N}_{\text{fib}}\) (dense for panel **A** and sparse for panel **B**). **Panels C and D** : Each bar gives the percentage of each category among the outcomes of the \(60\) simulations conducted with a given value of \(\chi_{\text{link}}\) (on the x-axis) and \(\text{N}_{\text{fib}}\) (dense for panel **C** and sparse for panel **D**).

While ECM local alignment can be explained by the simple emerging variable that is the proportion of links in the network (as shown in section 3.1), its direct relation with the model parameters \(\text{N}_{\text{fib}}\), \(\nu_{\text{link}}\) and \(\chi_{\text{link}}\) is more complex. In the next section, we study the evolution in time of the structures, which gives more insight into the role of the parameters in tissue structuring.

### ECM architecture emergence follows a unique evolutionary path on timescales controlled by the network remodelling characteristics

In this section, we study the temporal evolution of the spatial structures. Our very first observation is that, for all parameters, the quantifier \(\text{Al}_{\text{mean}}\) follows an inverted exponential growth. We refer to annex C.4 for a detailed analysis of \(\text{Al}_{\text{mean}}\) as a function of time and do not show the curves here. We just mention here that we denote by \(\tau_{\text{Al}}\) the time-constant of this growth, whose classical definition is the time needed for the quantifier to reach \(63\%\) of its asymptotic value and which, in our case, corresponds approximately to the time at which \(\text{Al}_{\text{mean}}\) crosses the \(0.7\) threshold between unorganized and curved states. This time-constant is a good time-scale to study the temporal evolution of the system and will be used as such in the following discussion. Movies displaying the full temporal evolution of a few simulations are available in supplementary data (see annex C.1). In Figure 4.A-A''' and B-B''', we show the \(3\)D view and stereographic projection of a few well chosen time frames (namely \(0.5\tau_{\text{Al}}\), \(\tau_{\text{Al}}\), \(3\tau_{\text{Al}}\) and \(T_{\text{final}}\)) for two of these simulations (respectively from _Movie3.mp4_ and _Movie4.mp4_). They correspond to dense systems with \(\chi_{\text{link}}=0.8\) and two different crosslink dynamics : a fast remodeling network \(\nu_{\text{link}}=0.1\) (A-A''', _Movie3.mp4_) and a slow remodeling network \(\nu_{\text{link}}=0.001\) (B-B''', _Movie4.mp4_). These screenshots enable us to answer the important question of how the network global structure emerges. It is not by accretion around a few structured areas that gradually merge together, but by an overall homogeneous structuring. Indeed, one can observe that the directional vectors gradually concentrate around a main direction without creating clustered points that merge together. This behavior can be observed both for very aligned networks (A-A''') and curved states (B-B'''), and in fact in all our simulations, independently of the network density. Therefore, our model suggests that the emergence of tissue architecture occurs on a global scale.
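The time-constant \(\tau_{\text{Al}}\) introduced above can be extracted from a recorded \(\text{Al}_{\text{mean}}(t)\) curve by fitting a saturating exponential. The functional form below is an assumption consistent with the "inverted exponential growth" description (the detailed analysis is in annex C.4, not shown here); the function names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverted_exponential(t, al_inf, al_0, tau):
    """Assumed model curve: saturating growth from al_0 toward al_inf with time-constant tau."""
    return al_inf + (al_0 - al_inf) * np.exp(-t / tau)

def fit_tau_al(t: np.ndarray, al_mean: np.ndarray) -> float:
    """Estimate tau_Al from an Al_mean time series.

    By construction of the model curve, tau is the time at which the quantifier
    has covered ~63% of the gap between its initial and asymptotic values.
    """
    p0 = (al_mean[-1], al_mean[0], t[-1] / 5.0)  # rough initial guess
    (al_inf, al_0, tau), _ = curve_fit(inverted_exponential, t, al_mean, p0=p0)
    return tau
```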
We now turn towards the analysis of the time trajectories of the spatial structures observed within our different networks. Figure 4.C-C" displays the trajectory in the phase plane A\({}_{\text{max}}\) vs Al\({}_{\text{mean}}\) of individual simulations conducted with various set of parameters. We chose this one-run representation instead of the usual \(10\)-runs average because the two quantifiers exhibit a non-negligible inter-simulations variability, so that plotting the standard deviation would blur the graphic but plotting only the average value would give a limited and partial view of the situation. It can be seen that all the trajectories follow a common pattern. It begins with a sharp increase of the alignment indicator (from \(0.15\) to between \(0.4\) and \(0.5\)) while maintaining a quasi-constant semi-major axis length : this corresponds to the partial depletion of one direction (denoted \(d_{1}\)) in the family of the fibers directional vector, thus shifting from the initial uniform distribution to a mainly two-directional distribution (see annex B.2 for more details on this interpretation). Non-dynamical networks (not represented on these graphics) do not go past that first stage. The trajectories then diversify : the alignment indicator keeps increasing while the semi-major axis length either decreases, stays constant or slightly increases. The first case is the most common and indicates that, while direction \(d_{1}\) keeps depleting until near extinction, one of the two remaining directions starts to deplete as well. This diversification happens on the scale of the time-constant \(\tau_{\text{Al}}\) of the alignment indicator (marked on the trajectories of Figure 4.C-C" with a black circle). Lastly, simulations ending in an aligned state and part of those ending in a curved state display a stage of condensation of the fibers directional vectors around a main direction. This is marked by a shrinking of the covariance ellipse and a slow increase of the alignment indicator, which has already nearly reached its steady state (compare with the stabilisation of Al\({}_{\text{mean}}\) in Figure 11). This last point comes from the local quality of the quantifier Al\({}_{\text{mean}}\) : a system can be very aligned locally, but not globally, if the main direction of the local structures varies smoothly across space. Thus, the transition between a curved and an aligned state is mostly characterized by a gradual shifting of multiple local structures towards the same direction, a phenomenon better registered by the quantifier A\({}_{\text{max}}\) than Al\({}_{\text{mean}}\). Finally, we observe that the number of links per fiber (displayed in Figure 4.D-D") undergoes a transient increase followed by a two-stage exponential decay (appearing as a piece-wise linear decrease on the semi-logarithmic scale). The initial accumulation of crosslinks is more pronounced, in the sense that the peak is higher and the subsequent decrease slower, when \(\chi_{\text{link}}\) is high and \(\nu_{\text{link}}\) is low. For the extreme case of slow remodeling networks \(\nu_{\text{link}}=0.001\) with a large linked fiber fraction \(\chi_{\text{link}}=0.9\) (Figure 4.D), the phenomenon is so strong that only the first stage of exponential decay is observed during the time of the simulation. 
Figure 4: Temporal evolution of dense systems (\(\text{N}_{\text{fib}}=3000\)) with various linking dynamics. **Panels A-A'''** : 3D view and stereographic projection of the system at times \(0.5\tau_{\text{Al}}\) (**A**), \(\tau_{\text{Al}}\) (**A'**), \(3\tau_{\text{Al}}\) (**A''**) and \(T_{\text{final}}\) (**A'''**), for one simulation with \(\nu_{\text{link}}=0.1\) and \(\chi_{\text{link}}=0.8\). **Panels B-B'''** : 3D view and stereographic projection of the system at times \(0.5\tau_{\text{Al}}\) (**B**), \(\tau_{\text{Al}}\) (**B'**), \(3\tau_{\text{Al}}\) (**B''**) and \(T_{\text{final}}\) (**B'''**), for one simulation with \(\nu_{\text{link}}=0.001\) and \(\chi_{\text{link}}=0.8\). **Panels C-C''** : Trajectory in the phase plane \(\text{A}_{\text{max}}\) vs \(\text{Al}_{\text{mean}}\) of a few individual simulations for different remodelling speeds \(\nu_{\text{link}}=0.001\) (**C**), \(\nu_{\text{link}}=0.01\) (**C'**) and \(\nu_{\text{link}}=0.1\) (**C''**). The initial position is indicated with a black square, the final position with a black star and the time-constant \(\tau_{\text{Al}}\) with a black circle. The limits between each class of structures are drawn in dashed lines. **Panels D-D''** : Evolution of \(\text{N}_{\text{linkperfib}}\) for a few sets of parameters. Each curve represents the average value computed over 10 simulations conducted with the same set of parameters, with shading indicating the standard deviation. The critical value \(\text{N}_{\text{critic}}\) is indicated with a dashed line and the time-constant \(\tau_{\text{Al}}\) with a black circle.

On the other hand, for fast remodeling networks (\(\nu_{\text{link}}=0.1\), Figure 4.D'') and/or a small equilibrium linked fiber fraction (\(\chi_{\text{link}}=0.1\), blue curves), we do not observe any crosslink accumulation. This behaviour can be explained by comparing the linking dynamics to the characteristic time of the repulsive interaction \(T_{\text{rep}}\approx 10\). The parameter \(\chi_{\text{link}}\) describes the proportion of linked fibers among all linkable fibers at equilibrium, but this equilibrium takes time to establish (a time inversely proportional to \(\nu_{\text{link}}\)). If the repulsion interaction operates faster than the link remodeling (i.e. \(T_{\text{rep}}\ll 1/\nu_{\text{link}}\)), then the linkable configurations will change before the linking/unlinking processes can equilibrate on the current configuration : new links will appear between newly overlapping fibers while formerly overlapping fibers will still be linked even if they no longer overlap, leading to an accumulation of links in the system. This happens all the more if the disparity between the frequencies \(\nu_{\text{link}}\) and \(\nu_{\text{unlink}}\) is more favourable to linking than unlinking (\(\nu_{\text{link}}>\nu_{\text{unlink}}\), i.e. if \(\chi_{\text{link}}>0.5\)). The system thus exhibits a global, macroscopic relaxation phenomenon which emerges from its various local, microscopic properties. It can be seen that the characteristic time-scale of this relaxation is comparable to the time-constant of the alignment indicator \(\tau_{\text{Al}}\) (see the position of the black circles on the curves in Figure 4.D-D'', which indicates the value of \(\tau_{\text{Al}}\) for the corresponding set of parameters).
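The accumulation argument above only involves the ratio between the repulsion time \(T_{\text{rep}}\) and the remodeling time \(1/\nu_{\text{link}}\), together with the balance \(\chi_{\text{link}}=\nu_{\text{link}}/(\nu_{\text{link}}+\nu_{\text{unlink}})\). A small helper, sketched below, makes the comparison explicit; the numerical factor used to read "\(\ll\)" is an arbitrary choice of ours.

```python
def link_timescales(nu_link: float, chi_link: float, t_rep: float = 8.64) -> dict:
    """Compare the link-remodeling and repulsion timescales discussed above.

    chi_link = nu_link / (nu_link + nu_unlink)  =>  nu_unlink = nu_link * (1 - chi) / chi.
    Crosslink accumulation is expected when repulsion rearranges neighbourhoods much
    faster than the linking/unlinking Poisson processes equilibrate (t_rep << 1/nu_link),
    all the more so when chi_link > 0.5.
    """
    nu_unlink = nu_link * (1.0 - chi_link) / chi_link
    remodeling_time = 1.0 / nu_link
    return {
        "nu_unlink": nu_unlink,
        "remodeling_time": remodeling_time,
        "repulsion_much_faster": t_rep < 0.1 * remodeling_time,  # crude reading of "<<"
        "linking_dominates": chi_link > 0.5,
    }

# link_timescales(0.001, 0.9)  # slow remodeling, many links -> accumulation expected
# link_timescales(0.1, 0.9)    # fast remodeling              -> no accumulation
```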
We conclude that slow remodeling networks with a high equilibrium linked fiber fraction \(\chi_{\text{link}}\) first build up increasing stress and stiffen before slowly relaxing, while networks with a low \(\chi_{\text{link}}\) or fast remodeling networks exhibit stress relaxation and do not undergo high stiffening. As a result, the latter types of networks reach a higher local alignment at equilibrium. These results demonstrate a nonlinear dependence of the network properties on the type and proportion of its crosslinks. A high number of long-lasting crosslinks promotes crosslink accumulation, resulting in medium/low alignment, while fast remodeling reduces the mechanical action of the individual links on the overall network, resulting in lowly connected networks being unable to align. The network alignment ability therefore requires a number of links adapted to its remodeling speed : fast remodeling networks need a high equilibrium linked fiber fraction to quickly reach a high alignment indicator, while slow remodeling networks need a low equilibrium linked fiber fraction to prevent crosslink accumulation and the increase of matrix stiffness.

## 4 Discussion

In this work, we have implemented a \(3\)D model for fiber networks composed of fiber elements capable of dynamically crosslinking to or unlinking from each other, aligning with each other at the crosslinks, and repelling their nearest neighbors to prevent fibers from cluttering. We showed that this model can spontaneously generate various types of macrostructures whose emergence can be finely described. The model reveals that the different macrostructures (i) can be easily explained by a single emerging intermediate variable, namely the proportion of crosslinks in the ECM network, (ii) emerge homogeneously in space and not in a fragmented way, and (iii) follow the same unique evolutionary path for all structures and not multiple paths. To our knowledge, this work is the first exhaustive study investigating the mechanisms of tissue architecture emergence via a simple mechanical model of dynamical fiber networks in \(3\)D. This framework reveals that the different tissue architectures at equilibrium are directly controlled by a simple intermediary variable, the proportion of links (see section 3.1). Our interpretation is that, when the number of links per fiber is below the critical threshold \(\text{N}_{\text{critic}}\), the network is weakly constrained. In this configuration, an increase in the number of links per fiber improves the transmission of information in the network and thus enhances the alignment process. The logarithmic scaling indicates that the higher the number of links per fiber, the less prominent this feature becomes, until the gain (in terms of the equilibrium alignment indicator) vanishes. The system then shifts into a constricted regime where each new link adds to the constriction of the network and impedes its reorganization, leading to a decrease of the local alignment. The fact that we observe the same correlation for all dynamical networks means that, as long as a network is even slightly dynamical, its final alignment is mostly controlled by its proportion of links rather than by its remodelling dynamics or its density. On the other hand, non-dynamical networks are locked in mechanically constrained configurations, preventing the system from reorganizing efficiently compared to dynamical ones and leading to a much lower level of alignment.
However, we showed that non-dynamical networks still contain some degrees of freedom allowing for spatial matrix reorganization, and that this organization is again controlled by the proportion of links in the network but also by the matrix density, which becomes an important factor. Indeed, denser networks are even less organized than sparse networks : this is due to the fact that denser networks are overcrowded, preventing any reorganization of their fibers. The existence of a simple emerging variable such as the proportion of crosslinks to control tissue structuring can have major therapeutic implications in systems where the architecture of the ECM is impacted (scarring, fibrosis, ageing), but can also prove very useful in the field of tissue engineering. It is noteworthy that this variable is not prescribed by the model parameters but emerges from the initial simple rules as a combination of ECM remodelling dynamics, linked fiber fraction and fiber spatial organization, independently of supplementary complex interactions involving external factors such as migrating cells, contractile forces, etc. However, the correlation between crosslink proportion and fiber alignment only gives local information on the long-time structures (mean local alignment of the fibers at equilibrium). The second major contribution lies in the analysis of the fine time evolution of the spatial structures. This documents the different temporal evolutions of the structures as a function of the ECM remodeling speed and reveals a unique trajectory, common to all architectures, together with internal and transient temporal windows during which they self-organize. The equilibrium structures obtained with our model can be classified into three types : (a) aligned states with a strong organization around one main direction, (b) curved states with a median, locally heterogeneous alignment indicator and a wide range of directional vectors living in a plane (hence the name "curved patterns"), and (c) unorganized states with a very low alignment indicator and no preferential direction. Unorganized states were exclusively obtained for non-dynamical networks composed of permanent crosslinks (\(\nu_{\text{link}}=0\)), whose plasticity was very low due to their inability to rearrange their crosslinks. In contrast, dynamical networks exhibited a mixture of aligned and curved states. These results point to the essential role of matrix remodeling in ECM structuring, consistent with several results in the literature (see [6] and references therein). In emerging systems, the characteristics of the final outcome cannot be predicted from the initial rules of the system, and the paths from the initial interactions to the final equilibrium can be numerous and complex, corresponding to a stochastic evolution. This is not entirely the case in our model : although the emerging macrostructures cannot be predicted from the initial rules and the emergence must be understood as a whole, the path is simple and unique and can be strongly predicted by an intermediate emerging variable (the proportion of crosslinks in the ECM). Our study suggests that the very aligned structures observed in fibrotic tissues could be mainly due to an excess accumulation of crosslinks, consistent with the alterations of ECM structure observed as a consequence of increased crosslinking in lung fibrosis [31] or cancer [24], or again with previous studies on tissue-induced alignment of fibrous ECM [33, 14].
Such deciphering of the emergence would open numerous perspectives for future investigations. Indeed, because of its simplicity, this emerging variable (the proportion of crosslinks in the ECM) is amenable to experimental measurement and represents a new putative target for therapeutic drugs aimed at restoring the architecture of various biological tissues after external or internal alterations. In vivo experiments must be conducted to definitively validate this hypothesis and are out of the scope of this manuscript. Finally, the temporal evolution of the structures revealed that dynamical networks composed of long-lasting links exhibited a phase of crosslink accumulation followed by a "relaxation" phase (reduction of the proportion of links in the network) associated with a spatial reorganization of their fibers, while fast remodeling networks exhibited only the "relaxation" phase. These results suggest possible mechanisms for the crosslink accumulation observed for instance in ageing tissues [40]. Moreover, the new insights into the temporal evolution of the structures as a function of the ECM remodelling speed could prove useful in the field of tissue engineering, where there is a need to design efficient biological crosslinkers [21, 41].

In this study, we demonstrated the ability of fiber networks to spontaneously self-organize as a function of the kinetics of their crosslinks. It is noteworthy that our model features networks composed of only one type of crosslink (permanent, or transient with a given link-life). A natural perspective of our work would be to study the self-organization abilities of networks composed of heterogeneous crosslinks, following the works of [27]. Moreover, our network features active crosslinks, i.e., crosslinks that generate an alignment of the fibers they are attached to. As a result, our fiber networks are not subject to any external mechanical stimuli. Future work will be devoted to the study of the mechanical properties of these dynamical networks under tensile/compressive stress, shear, etc. Another interesting perspective would be to add cells able to locally generate biophysical cues such as tension, stiffness and fiber production/degradation, and to study these effects on the structure and mechanical properties of the ECM networks.

## Appendix A Model

### Numerical implementation

The differential system (8) is numerically solved using a discrete upwind Euler scheme with adaptive time-step. The linking and unlinking Poisson processes are updated between each time-step. We assume that a pair of fibers cannot change its linking state more than once in a single time-step: this is reasonable if the length of the time-step \(dt\) is small enough compared to the mean occurrence time \(1/\nu\) of the Poisson process, so we prescribe \(dt\leq\text{dt}_{\text{link}}\) with \[\text{dt}_{\text{link}}=\min\left(\frac{0.5}{\nu_{\text{link}}},\frac{0.5}{\nu_{\text{unlink}}}\right). \tag{12}\]
The probability for two fibers \(k\) and \(m\) to develop a crosslink between time \(t_{n}\) and time \(t_{n+1}=t_{n}+\text{dt}_{n}\) is then given by:
\[\mathbb{P}\left(p_{k,m}(t_{n+1})=1\ \big{|}\ p_{k,m}(t_{n})=0\text{ and }||\mathbf{X}_{k,m}(t_{n})-\mathbf{X}_{m,k}(t_{n})||\leq d_{\text{link}}^{\max}\right)=1-e^{-\nu_{\text{link}}\text{dt}_{n}} \tag{13}\]
while the probability for a crosslink to break is given by:
\[\mathbb{P}\left(p_{k,m}(t_{n+1})=0\ \big{|}\ p_{k,m}(t_{n})=1\right)=1-e^{-\nu_{\text{unlink}}\text{dt}_{n}} \tag{14}\]
To ensure that agents do not swap position without even seeing each other, we also restrict the instantaneous translation of each fiber to half its radius \(R_{\text{fib}}\) and its rotation to \(\arctan(0.1)\approx 6^{\circ}\). This implies the following upper limits for the time-step:
\[\left\{\begin{array}{l}\text{dt}_{\text{trans}}(t_{n})=\min_{1\leq k\leq\text{N}_{\text{fib}}}\left(0.5\frac{R_{\text{fib}}}{\left|\left|\frac{d\mathbf{X}_{k}}{dt}(t_{n})\right|\right|}\right),\\ \text{dt}_{\text{rot}}(t_{n})=\min_{1\leq k\leq\text{N}_{\text{fib}}}\left(\frac{0.1}{\left|\left|\frac{d\omega_{k}}{dt}(t_{n})\right|\right|}\right).\end{array}\right. \tag{15}\]
Reduction of the computational cost is achieved by dividing the simulation domain into cubes whose side length is greater than the maximal range of the interactions: thus, interactions need only be computed for pairs of agents located in neighbouring cubes. The loops computing the interactions are parallelized to further speed up the simulations. One iteration of the Euler scheme proceeds as follows:

* Parallel computation of all forces and torques sustained by the agents at time \(t_{n}\) (right-hand part of equation (8)).
* Computation of the adaptive time-step (equations (12) and (15)): \[\text{dt}_{n}=\min(\text{dt}_{\text{trans}}(t_{n}),\text{dt}_{\text{rot}}(t_{n}),\text{dt}_{\text{link}}).\]
* Motion of the agents to their new position: \[\mathbf{X}_{k}(t_{n+1})=\mathbf{X}_{k}(t_{n})+\text{dt}_{n}\frac{d\mathbf{X}_{k}}{dt}(t_{n})\] \[\omega_{k}(t_{n+1})=\omega_{k}(t_{n})+\text{dt}_{n}\frac{d\omega_{k}}{dt}(t_{n})\]
* Account for periodic boundary conditions.
* Attribution of each agent to a simulation box.
* Parallel update of the linking configuration (equations (13) and (14)).
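To make this iteration concrete, the following Python sketch outlines one adaptive Euler step together with the Poisson link update. It is only an illustrative sketch: the placeholder functions (`compute_forces_and_torques`, `pairwise_min_distance`), the data layout and the cubic periodic domain `[0, L_domain]^3` are assumptions of ours, not the names or conventions of the released code.

```python
import numpy as np

def compute_forces_and_torques(positions, omegas, links, params):
    # Placeholder for the model's force/torque evaluation (alignment at crosslinks,
    # repulsion between neighbours); returns (dX/dt, domega/dt) arrays of shape (N, 3).
    raise NotImplementedError

def pairwise_min_distance(positions, omegas, params):
    # Placeholder for the closest-point distance between every pair of fiber segments
    # (see the next subsection); returns an (N, N) array.
    raise NotImplementedError

def euler_step(positions, omegas, links, t, params, rng):
    """One adaptive Euler iteration with Poisson link update (illustrative sketch)."""
    # Forces and torques at time t_n (right-hand side of system (8)).
    dX, dOmega = compute_forces_and_torques(positions, omegas, links, params)

    # Adaptive time-step: translation limited to R_fib/2, rotation to ~6 degrees,
    # and dt kept small compared with the Poisson waiting times (equations (12) and (15)).
    dt_trans = 0.5 * params["R_fib"] / np.max(np.linalg.norm(dX, axis=1))
    dt_rot = 0.1 / np.max(np.linalg.norm(dOmega, axis=1))
    dt_link = min(0.5 / params["nu_link"], 0.5 / params["nu_unlink"])
    dt = min(dt_trans, dt_rot, dt_link)

    # Move the agents; periodic boundaries on an assumed cubic domain [0, L_domain]^3.
    positions = (positions + dt * dX) % params["L_domain"]
    omegas = omegas + dt * dOmega
    omegas /= np.linalg.norm(omegas, axis=1, keepdims=True)  # keep directional vectors unitary

    # Poisson linking / unlinking update (equations (13) and (14)); the symmetry of the
    # link matrix is glossed over here for brevity.
    close = pairwise_min_distance(positions, omegas, params) <= params["d_link_max"]
    p_link = 1.0 - np.exp(-params["nu_link"] * dt)
    p_unlink = 1.0 - np.exp(-params["nu_unlink"] * dt)
    draw = rng.random(links.shape)
    links = np.where(links, draw >= p_unlink, close & (draw < p_link))

    return positions, omegas, links, t + dt
```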
### Closest points of two finite segments

Given two fibers \(k\) and \(m\), we denote by \(\mathbf{X}_{k,m}=\mathbf{X}_{k}+l_{k,m}\omega_{k}\) the point of fiber \(k\) closest to fiber \(m\) (see Figure 5). The couple \((l_{k,m},l_{m,k})\) is the minimizer of the distance \(||\mathbf{X}_{k}+u\omega_{k}-(\mathbf{X}_{m}+v\omega_{m})||\) for \((u,v)\in\left[-\frac{L_{\text{fib}}}{2},\frac{L_{\text{fib}}}{2}\right]^{2}\). If \(\omega_{k}=\omega_{m}\), there is an infinity of solutions of the form \(v=u+(\mathbf{X}_{k}-\mathbf{X}_{m})\cdot\omega_{k}\); in this case we arbitrarily choose the solution with the smallest \(|u|\) value. Otherwise, there exists a unique solution whose analytical expression is:
\[\left\{\begin{array}{l}l_{k,m}=C_{\frac{L_{\text{fib}}}{2}}\Big{(}\left((\omega_{k}\cdot\omega_{m})\omega_{m}\cdot(\mathbf{X}_{k}-\mathbf{X}_{m})-\omega_{k}\cdot(\mathbf{X}_{k}-\mathbf{X}_{m})\right)\big{/}\left(1-(\omega_{k}\cdot\omega_{m})^{2}\right)\Big{)},\\ l_{m,k}=C_{\frac{L_{\text{fib}}}{2}}\Big{(}\left((\omega_{k}\cdot\omega_{m})\omega_{k}\cdot(\mathbf{X}_{m}-\mathbf{X}_{k})-\omega_{m}\cdot(\mathbf{X}_{m}-\mathbf{X}_{k})\right)\big{/}\left(1-(\omega_{k}\cdot\omega_{m})^{2}\right)\Big{)},\end{array}\right. \tag{16}\]
where \(C_{a}\) denotes the cut-off function between \(-a\) and \(a\).

Figure 5: Scheme of two sphero-cylindrical fibers \(k\) and \(m\) indicating the position of the closest points \(\mathbf{X}_{k,m}\) and \(\mathbf{X}_{m,k}\) of their central segments (in a \(3\)D perspective) relative to their respective centers.

## Appendix B Quantifiers and visualization tools for the fiber structures

The goal of this section is to define quantifiers that quantitatively describe the local and global organization of the fiber structures obtained with our computational model. Figure 6.A shows a typical simulation (almost) at equilibrium, in which fibers are represented as gray double arrows. As one can observe, this simulation shows two levels of organization: a high local alignment and globally twisting, curving patterns located near the center of the domain. In order to quantitatively describe these states, we now define appropriate numerical quantifiers.

### Local alignment indicator

Let \(R_{\text{align}}\) denote the sensing distance up to which a fiber may interact with its neighbours: in our model, it is equal to \(L_{\text{fib}}+2R_{\text{fib}}\). For any fiber \(k\), we define its neighbourhood \(\mathcal{B}_{k}\) as the set of all fibers located at a distance less than \(R_{\text{align}}\), and its local alignment indicator \(\text{Al}_{k}\) as the fractional anisotropy of the fibers' directional vectors within \(\mathcal{B}_{k}\). It is computed as follows. We denote by \(p_{m}=\omega_{m}\otimes\omega_{m}\) the projection matrix on the directional vector of fiber \(m\). The mean of the projection matrices of the fibers inside \(\mathcal{B}_{k}\) is given by
\[P_{k}=\frac{1}{|\mathcal{B}_{k}|}\sum_{m\text{ s.t. }\mathbf{X}_{m}\in\mathcal{B}_{k}}p_{m}, \tag{17}\]
where \(|\mathcal{B}_{k}|\) denotes the number of fibers in \(\mathcal{B}_{k}\). The matrix \(P_{k}\) is symmetric positive semi-definite, so its three eigenvalues \(\lambda_{1}(P_{k})\), \(\lambda_{2}(P_{k})\) and \(\lambda_{3}(P_{k})\) are real and non-negative. The alignment indicator or fractional anisotropy in the neighbourhood \(\mathcal{B}_{k}\) is then equal to:
\[\text{Al}_{k}=\sqrt{\frac{3}{2}\frac{(\lambda_{1}(P_{k})-\bar{\lambda})^{2}+(\lambda_{2}(P_{k})-\bar{\lambda})^{2}+(\lambda_{3}(P_{k})-\bar{\lambda})^{2}}{\lambda_{1}(P_{k})^{2}+\lambda_{2}(P_{k})^{2}+\lambda_{3}(P_{k})^{2}}} \tag{18}\]
with \(\bar{\lambda}=(\lambda_{1}(P_{k})+\lambda_{2}(P_{k})+\lambda_{3}(P_{k}))/3\) the mean of the eigenvalues. Figure 6.B shows the same simulation as Figure 6.A, but here the fibers have been colored as a function of their local alignment indicator, from blue (\(\text{Al}_{k}=0\)) to red (\(\text{Al}_{k}=1\)). As one can see, the curved patterns are much easier to distinguish. Thus, the local alignment quantifier also serves as a visualization tool by supporting the qualitative, visual observation of locally organized states.
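As a concrete illustration of equations (17)–(18), the short sketch below computes the local alignment indicator of one fiber from the directional vectors of its neighbourhood. It assumes the neighbouring vectors have already been gathered into an array; the function name and data layout are ours, not those of the released code.

```python
import numpy as np

def alignment_indicator(neighbour_omegas):
    """Fractional anisotropy Al_k of a set of unit directional vectors (equations 17-18)."""
    omegas = np.asarray(neighbour_omegas)          # shape (n, 3), unit vectors
    # Mean of the projection matrices p_m = omega_m (x) omega_m (equation 17).
    P = np.einsum("ni,nj->ij", omegas, omegas) / len(omegas)
    lam = np.linalg.eigvalsh(P)                    # real, non-negative eigenvalues
    lam_bar = lam.mean()
    # Fractional anisotropy of the eigenvalues (equation 18).
    return np.sqrt(1.5 * np.sum((lam - lam_bar) ** 2) / np.sum(lam ** 2))

# Two quick sanity checks: perfectly aligned fibers give Al = 1,
# while a perfectly two-directional (in-plane) set gives Al = 1/sqrt(2) ~ 0.707.
print(alignment_indicator([[0, 0, 1]] * 10))              # ~1.0
print(alignment_indicator([[1, 0, 0], [0, 1, 0]] * 5))    # ~0.707
```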
Note that \(\text{Al}_{k}=1\) if all the fibers in \(\mathcal{B}_{k}\) have the same directional vector. If the directional vectors are uniformly distributed then theoretically \(\text{Al}_{k}=0\), but this is not always the case. Indeed, the actual sampling of a random distribution may not be fully isotropic, especially if the number of elements in the sample is small.

Figure 6: Illustration of the various ways to visualize the state of a system, using as example the final state of a simulation. **Panel A**: 3D representation of each fiber as a gray double-headed arrow, with the edges of the spatial domain \(\Omega\) drawn in black. **Panel B**: Same representation, with fibers colored according to their local alignment indicator (blue: \(\text{Al}_{k}=0\), red: \(\text{Al}_{k}=1\)). See Appendix B for the actual computation. **Panel C**: Stereographic projection of the fibers' directional vectors. See Appendix B.2 for the actual computation. **Panel D**: Stereographic projection of the fibers' directional vectors, with the covariance ellipse drawn as a red dashed line and its semi-major axis drawn as a blue solid line.

Figure 7 displays the value of the alignment indicator obtained for various distributions of fibers and various sample sizes: it can be seen that a uniform distribution produces alignment indicators ranging from \(0.1\) (when the sample size is large) to as much as \(0.55\) (when the sample size is small), and that there is a large discrepancy between different samples. In our simulations, the number of neighbours of a fiber is very stable: between \(20\) and \(25\) for dense systems and between \(10\) and \(15\) for sparse systems. Non-dynamical networks display mean alignment indicators between \(0.3\) and \(0.45\) for dense systems and between \(0.4\) and \(0.65\) for sparse systems: these values are comparable to those observed in our calibration tests for a uniform distribution with a similar sample size. It can be seen from Figure 7 that these biases are much smaller for non-isotropic distributions: for mainly two- or one-directional distributions, the values computed are nearly the same regardless of the sample size and the discrepancy between different samples is small. For a two-directional distribution (i.e. when the fiber directional vectors describe a disk), the eigenvalues of the mean projection matrix are theoretically \(\lambda_{1}(P_{k})=\lambda_{2}(P_{k})=1/2\) and \(\lambda_{3}(P_{k})=0\), leading to a theoretical alignment indicator of \(1/\sqrt{2}\approx 0.707\). This is very close to the value observed in our calibration tests (see yellow curve on Figure 7). Nearly two-directional distributions, where the fiber directional vectors describe a "band" or thick disk, give lower and lower alignment indicators as the prominence of the third direction (i.e. the band width) increases (see green curves on Figure 7). Likewise, conical distributions, which are mainly one-directional, give an alignment indicator close to \(1\) which becomes lower and lower as the aperture angle of the cone increases (see red curves on Figure 7).

### Stereographic projection

The directional vectors of the fibers belong to the half unit sphere \(\mathbb{S}_{2}^{+}\). This subset of \(\mathbb{R}^{3}\) can be projected onto the unit disk in \(2\)D using a stereographic projection, as explained below.
We define the main direction of a system as the eigenvector associated with the largest eigenvalue of its total projection matrix
\[P_{\text{tot}}=\frac{1}{\text{N}_{\text{fib}}}\sum_{1\leq k\leq\text{N}_{\text{fib}}}\omega_{k}\otimes\omega_{k}. \tag{19}\]
If the system contains two or three equally represented directions (associated with equal eigenvalues), one of them is randomly selected.

Figure 7: Calibration of the alignment indicator quantifier Al on random sets of orientation vectors, for various distribution laws and sample sizes. The displayed values correspond to the average and standard deviation over \(10\) random draws with the same characteristics.

We rotate the set of directional vectors so that this main direction lies on the \(z\)-axis or "north-south axis". Since the fibers' orientation is not relevant in our model, the set of directional vectors can be restricted to the "north hemisphere" of the sphere. A point \(\omega=(x,y,z)\) on this hemisphere can then be projected onto the equatorial plane via the following transformation:
\[p(\omega)=\left(\frac{x}{1+z},\frac{y}{1+z}\right). \tag{20}\]
The whole process is illustrated in Figure 8.

Figure 8: Illustration of the stereographic projection. The orientation axes are shown for reference. **Panel A**: Natural distribution of the fibers' directional vectors on the unit sphere \(\mathbb{S}_{2}\), with the main direction indicated by a red line. **Panel B**: Rotation of the vector set so that its main direction (in red) now lies along the \(z\)-axis. The definition space of the vectors has been reduced to the "north hemisphere", that is, to the subset \(\mathbb{S}_{2}^{+}\) in the new rotated coordinate system. The equatorial plane is shown in dark grey. **Panel C**: Projection of the vectors onto the equatorial plane, shown in \(3\)D perspective.

Figure 6.C shows the stereographic projection of the simulation displayed in Figure 6.A and B. As one can observe, the dots are not uniformly distributed but densely packed at the center of the figure, indicating the existence of a main preferential direction in the system. However, not all fibers have a directional vector close to this main direction: a non-negligible number of dots are distributed all around the circle, meaning that all possible directions are represented in the system. Furthermore, the presence of a "circular branch" in the top-right part of the point cloud allows us to identify the locally twisting structure that can be observed in Figure 6.B: in this part of the system, nearby fibers have similar but gradually shifting directional vectors such that, on the scale of the whole structure, the fibers' directional vectors describe a circle (in the domain \(\mathbb{S}_{2}^{+}\)). Thus, this representation enables us to quickly grasp the distribution of the fibers' directional vectors around one or more poles. It must be noted that proximity on the stereographic projection indicates similar directional vectors, but not necessarily spatial proximity. Nonetheless, we can gain insights into the overall architecture of the network by drawing the covariance ellipse of the point cloud (red dashed line on Figure 6.C) and computing its semi-major axis length \(\text{A}_{\text{max}}\). As shown in section 3.2, this enables us to identify many types of "states" or structures that can also hint at the spatial organization of the network.
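The sketch below strings together the steps of equations (19)–(20): main direction, rotation onto the \(z\)-axis, folding onto the north hemisphere, and projection onto the equatorial plane. It is again only an illustrative NumPy sketch with function and variable names of our own choosing; in particular, the rotation is built here from an arbitrary orthonormal basis completing the main direction.

```python
import numpy as np

def stereographic_projection(omegas):
    """Project unit directional vectors onto the unit disk (equations 19-20)."""
    omegas = np.asarray(omegas)                       # shape (N, 3)
    # Main direction: leading eigenvector of the total projection matrix (equation 19).
    P_tot = np.einsum("ni,nj->ij", omegas, omegas) / len(omegas)
    eigval, eigvec = np.linalg.eigh(P_tot)
    main_dir = eigvec[:, np.argmax(eigval)]
    # Rotate so that the main direction lies along the z-axis,
    # using any orthonormal basis (e1, e2, main_dir).
    tmp = np.eye(3)[np.argmin(np.abs(main_dir))]      # a vector not parallel to main_dir
    e1 = np.cross(main_dir, tmp); e1 /= np.linalg.norm(e1)
    e2 = np.cross(main_dir, e1)
    rotated = omegas @ np.column_stack([e1, e2, main_dir])
    # Fiber orientation is irrelevant: fold everything onto the north hemisphere.
    rotated[rotated[:, 2] < 0] *= -1
    # Stereographic projection onto the equatorial plane (equation 20).
    x, y, z = rotated.T
    return np.column_stack([x / (1 + z), y / (1 + z)])
```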
## Appendix C Supplementary data

### Temporal evolution of the spatial structures: Movies

We describe here the movies showing the results of some of our simulations, available online at this address. Each video is divided into three panels. On the left is a \(3\)D representation of the system, with fibers colored according to their alignment indicator (see colorbar on the right) and the edges of the spatial domain drawn in black; in the middle, the stereographic projection of the fibers' directional vectors; and on the right, the trajectory of the simulation in the plane A\({}_{\rm max}\)-Al\({}_{\rm mean}\). The current time (in \(U_{t}\)) is displayed at the top.

_(Movie1)_ Simulation of a dense system (\(L_{\rm fib}=3000\)) with fast remodeling dynamics (\(\nu_{\rm link}=0.1\)) and a low equilibrium linked fiber fraction (\(\chi_{\rm link}=0.2\)). This video shows a system quickly organizing: at \(t=1000U_{t}\), the system has already transitioned from its initial unorganized state to a curved state. At \(t=3000U_{t}\), the main direction can be seen emerging in the form of a large cluster of points in the stereographic projection. At \(t=10\ 000U_{t}\), the stereographic projection displays a planar distribution of the directional vectors, with extra accretion of points in the main direction and total depletion in the perpendicular direction. The system has already nearly reached its maximal alignment indicator and, from that point onward, it will mainly undergo small local adjustments of the fibers' position and orientation (see the \(3\)D representation on the left panel). The alignment indicators of individual fibers harmonise, the mean alignment indicator increases slightly and the point cloud of the stereographic projection condenses into a clear straight band. During the entire simulation, the semi-major axis length A\({}_{\rm max}\) of the stereographic projection covariance ellipse stays nearly constant.

_(Movie2)_ Simulation of a dense system (\(L_{\rm fib}=3000\)) with slow remodeling dynamics (\(\nu_{\rm link}=0.001\)) and a low equilibrium linked fiber fraction (\(\chi_{\rm link}=0.2\)). This video shows a system organizing more slowly than the previous one (approximately twice as slowly) but achieving a more aligned final state. The system reaches a curved state at \(t=1900U_{t}\). The main direction can be seen emerging on the stereographic projection around time \(t\approx 5000U_{t}\). The point cloud of the stereographic projection then begins to condense around this main direction in a nearly symmetric manner while the various local structures rotate to align together (see left panel), reaching an aligned state at \(t=23\ 000U_{t}\) and continuing to align.

_(Movie3)_ Simulation of a dense system (\(L_{\rm fib}=3000\)) with fast remodeling dynamics (\(\nu_{\rm link}=0.1\)) and a high equilibrium linked fiber fraction (\(\chi_{\rm link}=0.8\)). This video shows a system organizing very quickly, with a stereographic projection adopting as early as \(t=4000U_{t}\) a band-like pattern which quickly gets thinner. At \(t=6000U_{t}\), the \(3\)D representation shows a clear wavy pattern with very uniform local alignment indicators. At that time the mean alignment indicator is already high (\(>0.9\)). The stereographic projection then begins to contract while the wavy pattern flattens, and the simulation ends in an aligned state.

_(Movie4)_ Simulation of a dense system (\(L_{\rm fib}=3000\)) with slow remodeling dynamics (\(\nu_{\rm link}=0.001\)) and a high equilibrium linked fiber fraction (\(\chi_{\rm link}=0.8\)). This video shows a system evolving very slowly.
The mean alignment indicator reaches the \(0.5\) threshold around \(t=11\ 000U_{t}\). At that time, the local alignment indicators of individual fibers display wide discrepancies and the stereographic projection point cloud has not visibly changed. A main direction can be seen emerging at approximately \(t=20\ 000U_{t}\), but the central cluster of points is very large and does not contract over time, as can be seen from the fact that the quantifier A\({}_{\rm max}\) hardly decreases. The system ends in a curved state with heterogeneous local structures.

_(Movie5)_ Comparison of two simulations with different fiber densities, both ending in an aligned state. The top row shows a dense system (\(L_{\rm fib}=3000\)) with intermediate remodeling dynamics (\(\nu_{\rm link}=0.01\)) and a moderate equilibrium linked fiber fraction (\(\chi_{\rm link}=0.3\)). The bottom row displays a sparse system (\(L_{\rm fib}=1500\)) with intermediate remodeling dynamics (\(\nu_{\rm link}=0.01\)) and a very high equilibrium linked fiber fraction (\(\chi_{\rm link}=0.9\)). It is noteworthy that the two systems display a very similar temporal evolution. This comes from the fact that they have the same remodeling speed \(\nu_{\rm link}\) and a comparable number of links per fiber N\({}_{\rm linkperfib}\). The latter is achieved by giving the sparse system a higher equilibrium linked fiber fraction \(\chi_{\rm link}\), which compensates for its smaller number of linkable configurations (i.e. overlapping fiber pairs).

### Snapshots of sparse systems

In this section, we take a closer look at the spatial organization of sparse systems. Figure 9.A compares the values of the quantifiers \(\text{Al}_{\text{mean}}\) and \(\text{A}_{\text{max}}\) when the simulation has reached equilibrium, with color depending on the type of state reached (blue dots correspond to unorganized states, orange diamonds to curved states and red crosses to aligned states). A few simulations corresponding to either typical or borderline cases are singled out with black stars and illustrated with a \(3\)D view and stereographic projection in panels B to I. We first observe that the group of unorganized states (blue dots) is less compact than it was for dense systems and reaches greater values of \(\text{Al}_{\text{mean}}\). The groups of curved states (orange diamonds) and aligned states (red crosses) have the same characteristics in terms of \(\text{Al}_{\text{mean}}\) and \(\text{A}_{\text{max}}\) as before, but the first one is much more populated and the second much less (it only contains \(10\) simulations). The most aligned state observed in sparse systems (panel B) is less straight than the typical aligned state for dense systems. Typical curved states (panels E and F) and unorganized states (panel I), however, are very comparable to what was observed in dense systems. The transition between aligned and curved states is still continuous (compare panels C and D) and the transition between curved and unorganized states sharp (compare panels G and H), though the gap (in terms of \(\text{Al}_{\text{mean}}\)) and the visual difference are smaller.

Figure 9: **Panel A:** Mean alignment indicator versus semi-major axis length of the covariance ellipse of the stereographic projection, for each simulation of a sparse system. Red crosses correspond to systems in an aligned state, orange diamonds to curved states and blue dots to unorganized states.
**Panels B to I** display the equilibrium state of a few simulations (with 3D view and stereographic projection) to illustrate typical or borderline cases. Their positions on the diagram are indicated with a black star.

### Correlation between the links' life expectancy and the ECM architecture

Here, we explore whether the network organization abilities could be controlled by the life expectancy of a link, which depends on both \(\nu_{\text{link}}\) and \(\chi_{\text{link}}\) via the following relation:
\[T_{\text{link-life}}=\frac{1}{\nu_{\text{unlink}}}=\frac{\chi_{\text{link}}}{(1-\chi_{\text{link}})\nu_{\text{link}}}. \tag{21}\]
Figure 10 displays the value of \(\text{Al}_{\text{mean}}\) at equilibrium as a function of \(T_{\text{link-life}}\). Each point corresponds to the average over \(10\) simulations conducted with the same set of parameters, with a vertical error bar indicating the standard deviation. The value of \(\nu_{\text{link}}\) is indicated in color and, inside each color series, \(\chi_{\text{link}}\) increases with \(T_{\text{link-life}}\). The characteristic time of the alignment interaction \(T_{\text{align}}\) (see section 2.2) is indicated for the sake of comparison.

Figure 10: Value of \(\text{Al}_{\text{mean}}\) at final time according to the value of \(T_{\text{link-life}}\), with color depending on the remodeling speed \(\nu_{\text{link}}\). The displayed values correspond to the average and standard deviation over \(10\) simulations conducted with the same set of parameters. The characteristic time of the alignment interaction \(T_{\text{align}}\) is indicated with gray dashed lines for the sake of comparison.

We observe that, in the case of dense systems (left panel), the alignment indicator displays a flat maximum for \(T_{\text{link-life}}\in[10,500]\ U_{t}\), while for sparse systems (right panel) it reaches its highest value at \(T_{\text{link-life}}\approx 500\ U_{t}\). This can be explained by the fact that, when the average life expectancy of a link \(T_{\text{link-life}}\) is very small compared to the characteristic time of the alignment force \(T_{\text{align}}=523\ U_{t}\), the links do not persist long enough to fully exert their aligning influence and the equilibrium alignment indicator is lower. This is especially true for sparse systems, which display a clear drop for \(T_{\text{link-life}}<500\ U_{t}\). For dense systems the drop is slower and less pronounced. On the other hand, when \(T_{\text{link-life}}\) is large compared to \(T_{\text{align}}\), on average the links last longer than necessary to wield their full effect and lock the system in non-optimal configurations by obstructing the action of other links. Though these locally locked structures will disappear over time, others will appear, or, to put it another way, the transmission of information (i.e. the fiber direction) in the network is too slow for all the agents to synchronize, and the system will not be able to reach an extremely aligned equilibrium state.
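To fix orders of magnitude, the small sketch below evaluates equation (21) for a few of the parameter pairs explored above and compares the resulting link life expectancy with \(T_{\text{align}}=523\ U_{t}\); the specific pairs are chosen here only for illustration.

```python
T_ALIGN = 523.0  # characteristic time of the alignment interaction, in U_t

def link_life_expectancy(nu_link, chi_link):
    """Mean life expectancy of a crosslink, T_link-life = chi / ((1 - chi) * nu) (equation 21)."""
    return chi_link / ((1.0 - chi_link) * nu_link)

for nu_link, chi_link in [(0.001, 0.8), (0.001, 0.2), (0.1, 0.8), (0.1, 0.2)]:
    t_life = link_life_expectancy(nu_link, chi_link)
    print(f"nu_link={nu_link}, chi_link={chi_link}: "
          f"T_link-life = {t_life:.0f} U_t ({t_life / T_ALIGN:.2f} x T_align)")
```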
### Temporal evolution of quantifier \(\text{Al}_{\text{mean}}\)

Figure 11 displays the temporal evolution of the quantifier \(\text{Al}_{\text{mean}}\) for dense systems with various values of \(\nu_{\text{link}}\) and \(\chi_{\text{link}}\). Our main observation is that, for all parameters, the alignment indicator follows an inverted exponential growth, that is, a quick initial growth followed by a slow convergence towards an asymptotic value. We computed the time constant \(\tau_{\text{Al}}\) of this growth, that is, the time needed to reach \(63\%\) of the asymptotic value, and plotted it on the corresponding curve with a black circle. It can be seen that, for a given value of \(\nu_{\text{link}}\), the shorter the time constant, the higher the equilibrium value of the alignment indicator (compare the curves inside each panel). By comparing the panels from left to right, we see that the faster the remodeling of the network, the faster the convergence of the system towards its equilibrium value. Moreover, by comparing the extreme cases \(\chi_{\text{link}}=0.1\) (blue curve) with \(\chi_{\text{link}}=0.9\) (pink curves) of panels A and C, we see that the dependence of the reorganization time \(\tau_{\text{Al}}\) on the equilibrium linked fiber fraction is not trivial. Indeed, fast remodeling networks (panel C) seem to reorganize faster when the equilibrium linked fiber fraction is large (pink curve) than when it is low (blue curve), while the reverse is observed for slow remodeling networks (panel A). Altogether, these results suggest that for each network dynamics, there exists a most efficient range of equilibrium linked fiber fraction allowing for quicker convergence to equilibrium.

To explore in more detail the dependence between the convergence speed and the parameters of the networks, in Figure 12 we plot \(\tau_{\text{Al}}\) as a function of \(\chi_{\text{link}}\), for different values of \(\nu_{\text{link}}\). The left panel contains all the simulations while the right panel only shows the results for the sets of parameters leading, on average over \(10\) simulations, to an aligned equilibrium state (i.e. \(\text{Al}_{\text{mean}}>0.95\)). We can first see on the left panel of Figure 12 that \(\tau_{\text{Al}}\) decreases when \(\nu_{\text{link}}\) increases according to a non-linear relationship which saturates for \(\nu_{\text{link}}\geq 0.1\) (compare the different color points). These results show that fast remodeling networks relax faster to their steady states than slow remodeling networks. Moreover, sparse systems organize more quickly than dense systems at low linking dynamics (\(\nu_{\text{link}}\leq 0.01\), compare the dot and triangle markers for the green and yellow populations), while there is no difference between dense and sparse systems for fast remodeling networks (\(\nu_{\text{link}}\geq 0.1\), where dot and triangle markers are superimposed). For each value of \(\nu_{\text{link}}\), there is a most efficient range of equilibrium linked fiber fraction \(\chi_{\text{link}}\) allowing for a lower value of \(\tau_{\text{Al}}\) and so a quicker convergence to equilibrium. For slow remodeling networks (\(\nu_{\text{link}}=0.001\), green markers) this range lies between \(\chi_{\text{link}}=0.2\) and \(\chi_{\text{link}}=0.3\), because systems with too many crosslinks will undergo stiffening and take longer to relax, while systems with too few crosslinks will have difficulty aligning. As one can observe, the range of \(\chi_{\text{link}}\) allowing the fastest convergence to equilibrium shifts towards \(1\) as the network remodeling speed \(\nu_{\text{link}}\) increases. As the network remodeling speed increases, a greater number of crosslinks will then promote a quicker alignment.
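As an illustration of how this time constant can be extracted in practice, the sketch below estimates \(\tau_{\text{Al}}\) from a sampled \(\text{Al}_{\text{mean}}(t)\) curve as the first time at which the curve reaches \(63\%\) of its asymptotic value, the asymptote being approximated here by the last recorded value; the function and array names are ours.

```python
import numpy as np

def alignment_time_constant(times, al_mean):
    """Time needed for Al_mean(t) to reach 63% of its asymptotic value."""
    times = np.asarray(times)
    al_mean = np.asarray(al_mean)
    target = 0.63 * al_mean[-1]            # asymptote approximated by the final value
    idx = np.argmax(al_mean >= target)     # first index where the threshold is crossed
    return times[idx]

# Example on a synthetic saturating-exponential curve with tau = 2000 U_t:
t = np.linspace(0.0, 30_000.0, 600)
al = 0.9 * (1.0 - np.exp(-t / 2000.0))
print(alignment_time_constant(t, al))      # ~2000 U_t (up to the 63% vs 1-1/e approximation)
```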
When looking only at parameter sets which, on average, lead to aligned equilibrium states (right panel of Figure 12), we can see that these parameter sets cover all remodeling dynamics and correspond to the range of equilibrium linked fiber fraction leading to the fastest convergence for each remodeling speed. We conclude that the most efficient systems (which organize the fastest) are also those that align the most.

Figure 11: Temporal evolution of the quantifier \(\text{Al}_{\text{mean}}\) for dense systems (\(\text{N}_{\text{fib}}=3000\)) with various linking dynamics. Each curve represents the average value computed over \(10\) simulations conducted with the same set of parameters, with shading indicating the standard deviation. The time constant of this growth is indicated with a black circle and the limit between unorganized and curved or aligned states is drawn with a dashed line.

Ethics: This article does not present research with ethical considerations.

Data Access: The code used to perform numerical simulations of our model can be found on GitHub. Supplementary data are also available online.

Competing interests: The authors declare no competing interests.

Funding: This study has been partially supported through the grant EUR CARe N\({}^{\circ}\)ANR-18-EURE-0003 in the framework of the Programme des Investissements d'Avenir, by Sorbonne Alliance University with an Emergence project MATHREGEN, grant number S29-05Z101, and by the Agence Nationale de la Recherche (ANR) under the project grant number ANR-22-CE45-0024-01.

Acknowledgment: P. Degond holds a visiting professor association with the Department of Mathematics, Imperial College London, UK.
2305.06842
Emotion Recognition for Challenged People Facial Appearance in Social using Neural Network
Human communication is the vocal and non verbal signal to communicate with others. Human expression is a significant biometric object in picture and record databases of surveillance systems. Face appreciation has a serious role in biometric methods and is good-looking for plentiful applications, including visual scrutiny and security. Facial expressions are a form of nonverbal communication; recognizing them helps improve the human machine interaction. This paper proposes an idea for face and enlightenment invariant credit of facial expressions by the images. In order on, the person's face can be computed. Face expression is used in CNN classifier to categorize the acquired picture into different emotion categories. It is a deep, feed-forward artificial neural network. Outcome surpasses human presentation and shows poses alternate performance. Varying lighting conditions can influence the fitting process and reduce recognition precision. Results illustrate that dependable facial appearance credited with changing lighting conditions for separating reasonable facial terminology display emotions is an efficient representation of clean and assorted moving expressions. This process can also manage the proportions of dissimilar basic affecting expressions of those mixed jointly to produce sensible emotional facial expressions. Our system contains a pre-defined data set, which was residential by a statistics scientist and includes all pure and varied expressions. On average, a data set has achieved 92.4% exact validation of the expressions synthesized by our technique. These facial expressions are compared through the pre-defined data-position inside our system. If it recognizes the person in an abnormal condition, an alert will be passed to the nearby hospital/doctor seeing that a message.
P. Deivendran, P. Suresh Babu, G. Malathi, K. Anbazhagan, R. Senthil Kumar
2023-05-11T14:38:27Z
http://arxiv.org/abs/2305.06842v1
# Emotion Recognition for Challenged People Facial Appearance in Social using Neural Network ###### Abstract Human communication is the vocal and non-verbal signal to communicate with others. Human expression is a significant biometric object in picture and record databases of surveillance systems. Face appreciation has a serious role in biometric methods and is good-looking for plentiful applications, including visual scrutiny and security. Facial expressions are a form of nonverbal communication; recognizing them helps improve the human-machine interaction. This paper proposes an idea for face and enlightenment invariant credit of facial expressions by the images. In order on, the person's face can be computed. Face expression is used in CNN (Convolutional Neural Network) classifier to categorize the acquired picture into different emotion categories. It's a deep, feed-forward artificial neural network. Outcome surpasses human presentation and shows poses alternate performance. Varying lighting conditions can influence the fitting process and reduce recognition precision. Results illustrate that dependable facial appearance credited with changing lighting conditions for separating reasonable facial terminology display emotions is an efficient representation of clean and assorted moving expressions. This process can also manage the proportions of dissimilar basic affecting expressions of those mixed jointly to produce sensible emotional facial expressions. Our system contains a pre-defined data set, which was residential by a statistics scientist and includes all pure and varied expressions. On average, a data set has achieved 92.4% exact validation of the expressions synthesized by our technique. These facial expressions are compared through the pre-defined data-position inside our system. If it recognizes the person in an abnormal condition, an alert will be passed to the nearby hospital/docor seeing that a message. 
is processed independently [22]. This will help us in reducing the noise in the synthesized face. This facial appearance prepares the XM and the subsequent learning method of the Self-Organizing Map (SOM). Imagine that every expression can modify in appearance commencing with the look of a nonaligned face. It might be changing and saved as a model in the node [11]. The reason for an objective function that chooses an exact model of a join the train XM in a given sensation and a face and impartial target facial emergence image [5]. To look on behalf of the merger of two essential emotions, our progression takes a partial grouping of two methods represented through each node. Accordingly, the communicative face from end to end adds together. The grouping of patterns toward the object of the face image looks sensible with a conserved focus to identify the image.

## 2 Literature Survey

Each face is an ordinary signal used by a human and can be expressed depending upon their mood. A set of attempts has been made to build a model for facial expression analysis [2]. That has been requested in numerous fields like robotics, gaming, and medical facilities to help the system [10]. For this reason, in the twentieth century [11], Ekman defined how different types of emotions are explained. Humans live with different types of expressions like irritation, fear, happiness, sadness, dislike, disgust, and surprise. Here, an existing reading is going on facial appreciation and performance in a dataset to be established [6]. In recent years, computers have gained more and more powerful computing power and huge data sets [7, 8]. Machine learning algorithms are compared to traditional methods [12]. The machine learning algorithm integrates two processes, feature extraction [5] and classification [13]. The operation process can mechanically extract the internal facial appearance of the sample data [15], has dominant feature extraction capabilities, and is related to computer vision (CV). Computers can simply identify face expressions [14] along with determining personnel and including the amusement like social media, content-based system, fairness, and the healthcare system. There are different approaches, such as wavelets and coefficients [1]. Zhang has explained in this paper that a lower resolution (64x64) is enough [18]. Every human sensation is capable of the image segregated into different levels such as joy, unhappiness, repulsion, irritation, fright, and shock [19]. Meanwhile, the working mechanism is enhanced by combining the performance's image, voice, and textual data in a series of competitions. Mollahosseini has explained [22, 23] that the purpose of deep learning in the CNN algorithm can be an accessible database. After extracting the face from the data set, each image was reduced to (48x48) pixels [20]. Every structural design contains different layers and adds two origin styles in this method [17]. The convolution layer dimensions consist of 1x1, 3x3, and 5x5.
It can be represented by the ability to use the network varies from system to system. While increasing the dimension of the image and the performance of the network layer is low. Those techniques are also possible to decrease the over-fitting problem [24]. In the crash of data processing, all types of networks are used to find the performance and face image classification technique [25]. The purpose of a new CNN algorithm is to detect AU faces in a network [9]. Here, two convolution layers are used to find the max-pooling in a connected layer. The numbers are used to activate and detect the parts of the image, which was explained by Yolcu [12]. After getting the image classification into the CNN algorithm, they can crop the period and find the key value. The iconic expression can be obtained from every image analyzed by employing the CNN algorithm to perceive face image appearance. ## 3 Facial Expression Sigmoid Function The block diagram Fig.1 represents the essential and varied expressions synthesized through the frame separation approach. Observing the synthesized CNN training is taken as an input for feature extraction of the target image. Different steps have been taken for the alert message to pass to the hospital. The CNN classifier will accept the training data and check the different types of facial expressions by using the XM classifier. The synthesized expressions can be appeared in wrinkles, furrows, and teeth and look ordinary on the face shape of the object. Here the output of F(x) is a function concerning Z2, and the beginning value is +1, and the end of the value is -1, which is an exponent of expression e\({}^{\text{x}}\) in Z2. It will appear in the calculation chart using frontward propagation and reverse propagation, and the result is simply a Sigmoid of Z2. Thus, \(\partial\)O/OZ\({}_{2}\) is efficiently derived from the function of Sigmoid(x). Figure 1: Architecture flow diagram \begin{tabular}{|c|} \hline F(x) = (1+e\({}^{\text{x}}\))\({}^{-1}\)[1-(1+e\({}^{\text{x}}\))\({}^{-1}\)] \\ F(x) = sigmoid(x) (1-sigmoid(x)) \\ \(\widehat{\sigma}\)O/\(\widehat{\sigma}\)Z\({}_{2}\) = (O) (1- O) \\ \hline \end{tabular} In caring information [13], a convolution of the network is classified in a group of bottomless neural networks. Most of them regularly work to evaluate and illustrate all images [11]. It can be identified and shift to variant or gap invariant in networks. So that the inactive of the shared-weights plan and transformation of all types of network characteristics. It contains applications within any image classifications to be analyzed by the well-defined network topology.[9, 14], and the economic circumstance chain is summarized in Section VI. The future algorithm requires only one face-neutral picture of the object as an individual. Related workings will be presented in the subsequent section [6, 10]. Section III deals with the quality of the image partitioning method have been discussed. This method can be grouped and explained in section IV, and section V gives output results [29]. Figure. 2, shown below, represents the face image using the facial image classification compared with the existing mechanism in a similar part of the image classification analysis. ## 4 Classification Analysis & Probability Individual steady space is an approach to face appreciation under uncontrolled conditions. Here usually exist many variations within face images taken under uncontrolled conditions, such as modifying their face, illumination, air, etc. 
Most of the previous plants are on face recognition, focus on exacting variations, and frequently assume the presence of others. This paper directly deals with face recognition below unrestrained conditions of the classifier[27]. The solution is the individual stable space (ISS), which only expresses private characteristics. A neural network name is planned for a rare face image keen on the ISS. Later on, three ISS-based algorithms are considered for FR below unrestrained conditions. There are no restrictions used for the images fed into these algorithms. In addition, to the different methods used, they do not need additional training to be tested [28]. These advantages construct them sensible to apply below-level unrestrained circumstances. The existing algorithms are experienced on three huge face databases with a massive difference and understand greater performance than the existing FR techniques. This paper has explained a facade appreciation process that will appear at the top of PCA (Principal Component Analysis) and LDA (Linear Discriminated Analysis). The technique consists of two processes: initial, we plan the appearance picture as of the original vector space to an appearance subspace via PCA; succeeding us use LDA to attain a most excellent linear classifier. The fundamental design of combining PCA and LDA is to improve the simplification capacity of LDA when only a small number of samples per set are presented. Using PCA, we can build a face subspace during, which we apply, LDA to execute classification. The use of the FERET dataset can express a significant enhancement when primary components quite than unusual similes are fed in the direction of the LDA classifier. \[\Pr(Y=k\mid X=x)=\frac{pi_{k}f_{k}(x)}{\Sigma_{l=1}^{K}pi_{k}f_{k}(x)}\] Using the above formula reduced the dimension of the data points and image classification. However, the predictable data can be used to construct a partition of the image using Bayes' theorem. Let us assume that the value range is denoted by X. Let X = (x\({}_{1,X}\).2...x\({}_{p}\)) be derived from a multivariate Gaussian distribution. Here K is the number of data modules, Let Y is the response variable in Pi\({}_{k}\) is given an observation, and it is associated in K\({}^{\text{th}}\) class. The value Pr(X=x|y=k) is the number of possible functions. Let f\({}_{k}\)(x) be the big value if there is an elevated probability of an observation sample in the K\({}^{\text{th}}\) position of the variable X=x. The cross classifier with PCA and LDA provides a useful framework for other image recognition tasks. ### Personalizing conservative composition Recommendation Though a fan of traditional music was established to be below represented on top of social media and song stream platforms, they represent a significant target for the music recommender system. So we focus on this cluster of viewers and examine a large array of suggestion approaches and variants for the job of song artistce commendation. Inside the grouping of traditional music viewers, promote the assessment categorize users according to demographics and sequential music utilization manners. We describe the outcome of the beginning suggestion experiment and insight gained on behalf of the listener set less than thought. 
Figure 2: Functional diagram of a neural network model ### _Music personalized Reference System Based on Hybrid Filtration_ Due to the tune's range and fuzziness and the music melody's correctness, the recommendation algorithm employing peak accuracy cannot completely match the user's analysis. For such difficulty, this paper proposes a cross-reference algorithm based on the joint filter algorithm and harmony geneses and designs an adapted music proposal system [27]. The scheme's first computer suggestion consequences according to the shared filter algorithm and realizes the potential benefit to the customer. Then every music is biased by liking on top of the genes of composing music. Later than load selection, the song with earlier preference is taken as a suggestion [25]. Lastly, two suggested outcomes were performed, weighted grouping and filter to make commendation. The investigational data point out the enhanced method can raise the correctness of recommendations and meet users' demands from different levels. ## 5 Implementation ### _Input Video_ The live video taken from the camera is taken as the input video. ### _Frame Separation_ Surround processing is the first step in the environment subtraction algorithm. This step schemes to classify the customized video frames by removing noise and unnecessary items in the frame to increase the quantity of information gained from the frame. Preprocessing an image is a method of collecting easy image processing tasks that modify the uncooked input video information into a system. It can be processed by following steps. Preprocessing the record is essential to improve the finding of touching objects, For example, by spatial and earthly smoothing, snow as disturbing plants on a tree. ### _Image pre-processing_ * Image Representation is mainly classified into the following terms. * Import the image using acquisition tools; * Analyzing and testing the image; * Output can be reported, which is based on analyzing that image. ### _Elimination_ Feature mining is a type of dimensionality decrease that proficiently represents as an image compact vector technique. This approach is useful when large image sizes are reduced based on the required tasks such as illustration, matching, and retrieval. ### _Database_ The database contains a pre-defined face pattern from feature pulling out with which the user's face is compared and emotion is detected. ### _CNN Algorithm_ **Step-1:** frame = camera. 
read() **Step-2** if (frame = imutils.resize(frame, width=500)) Assign new frame=gray **Step-3** Detection of face Faces=face_detection.detectMultiscale(gray, scalefactor=1,0, Minneapolis=12,minsize=(60,60),flags=cv2.cascadescale_image) **Step-4** If(canvas=np_zeros((500, 700, 3),type="uint12")) then assign frame=newframe frameClone = frame.copy() **Step-5** If(len(faces)>0)) Set the value 0, 1; faces = sorted(faces, reverse=True, **Step-6** compares the number of key value in array Key1=lambda_x,(x[3]-x[0])*(x[1]-x[0]) If(Fx,Fy,Fh)=vces **Step-7** To get the output of image color from grays_cale image, and resize to be fixed size in (28x28)pixels **Step-8** To assign the ROI values for each classification via the CNN Roi=gray[fY:fY+ff,fX:fX+fW] Roi=img_array(roi) **Step-9** To get the dimension of the image size Roi=roi.type("float") / 255.0 **Step-10** Find the ROI and probability Roi=np.expand_dims(roi,axis==0) Pre=emotion_classifier.predict(roi)[0,1] **Step-11** Label1=emotions["angry","disgust","scared","happy", "sad", "surprised", "neutral"] if label=='happy' VarHappy=VarHappy+1 **Step-12** check the type of emotion if label=='sad': VarSad=VarSad+1 If(VarSad)>'Theresh: if label=='angry': Varangry=Varangry+1 ifVarangry>'Thresh: **Step13** To check the classification if label=='surprised': Varsurprised=Varsurprised+1 ifVarsurprised>Thresh: if label=='disgust': Vardsigust=Vardsigust+1 If Vardsigust>Thresh: ### Classification Artificial neural networks are used in various classification work like image and audio. Different types of neural networks are used, from predicting the series of images to using regular neural networks. In particular, an LSTM, in the same way for image classification, uses of convolution neural network. This algorithm will intellect the face of emotions and send the mail to the consumer when irregular facial emotions are found. ## 6 Result and Analysis The above Fig.3 is a neutral face of the result, here angry=0.82%, diggust=0.15%, scared=7.89, happy=22.18%, sad=8.10%, surprised=1.33%, neutral = 53.85%, so the neutral value is higher than the other attributes. The graph Fig.4 shows the performance and comparison using the facial classifier technique, here tressed=2.6, sleepy=2.68, tired=3.08, walking=2.36, wake up 3.24, coordination =2.224 and fall as sleep =2.24, so the final output of the tired is the maximum percentage. Figure 3: Neutral face
2301.05664
Risk Sensitive Dead-end Identification in Safety-Critical Offline Reinforcement Learning
In safety-critical decision-making scenarios being able to identify worst-case outcomes, or dead-ends is crucial in order to develop safe and reliable policies in practice. These situations are typically rife with uncertainty due to unknown or stochastic characteristics of the environment as well as limited offline training data. As a result, the value of a decision at any time point should be based on the distribution of its anticipated effects. We propose a framework to identify worst-case decision points, by explicitly estimating distributions of the expected return of a decision. These estimates enable earlier indication of dead-ends in a manner that is tunable based on the risk tolerance of the designed task. We demonstrate the utility of Distributional Dead-end Discovery (DistDeD) in a toy domain as well as when assessing the risk of severely ill patients in the intensive care unit reaching a point where death is unavoidable. We find that DistDeD significantly improves over prior discovery approaches, providing indications of the risk 10 hours earlier on average as well as increasing detection by 20%.
Taylor W. Killian, Sonali Parbhoo, Marzyeh Ghassemi
2023-01-13T17:01:58Z
http://arxiv.org/abs/2301.05664v2
# Risk Sensitive Dead-end Identification in Safety-Critical Offline Reinforcement Learning ###### Abstract In safety-critical decision-making scenarios being able to identify worst-case outcomes, or dead-ends is crucial in order to develop safe and reliable policies in practice. These situations are typically rife with uncertainty due to unknown or stochastic characteristics of the environment as well as limited offline training data. As a result, the value of a decision at any time point should be based on the _distribution_ of its anticipated effects. We propose a framework to identify worst-case decision points, by explicitly estimating _distributions_ of the expected return of a decision. These estimates enable earlier indication of dead-ends in a manner that is tunable based on the risk tolerance of the designed task. We demonstrate the utility of Distributional Dead-end Discovery (DistDeD) in a toy domain as well as when assessing the risk of severely ill patients in the intensive care unit reaching a point where death is unavoidable. We find that DistDeD significantly improves over prior discovery approaches, providing indications of the risk 10 hours earlier on average as well as increasing detection by 20%. ## 1 Introduction In complex safety-critical decision-making scenarios, being able to identify signs of rapid deterioration is crucial in order to proactively adjust a course of action, or policy. Consider the challenge of replacing an aging component within high-value manufacturing machinery. The longer one waits to replace this component, the efficiency of the process degrades until catastrophic failure at some unknown future time. However, the cost of temporarily stopping manufacturing to replace the component is non-trivial and the observed state of the system may not transparently signal when failure is imminent. Specifically, being aware of potential "worst-case" outcomes when choosing whether to delay repair is paramount to develop both safe and successful policies. Yet quantifying the worst-case outcomes in these and related circumstances among other safety critical domains-such as healthcare-is usually challenging as a result of unknown stochasticity in the environment, potentially changing dynamics, limited data, and the wide range of possible outcomes that might follow a sequence of decisions. By reliably providing an early indication of system failure to human operators, they would be enabled to intervene and make the necessary repairs in order to avoid system failure. Reinforcement learning (RL) is a natural paradigm to address sequential decision-making tasks in safety-critical settings, focusing on maximizing the cumulative effects of decisions over time (Sutton and Barto, 2018). RL frameworks have been posed to design safe and responsible machine learning algorithms by regulating undesirable behavior with safety tests (Thomas et al., 2019) or through establishing performance guarantees when learning from limited data (Liu et al., 2020). Unfortunately, these approaches to develop safe RL policies depend on the ability to characterize _a priori_ what actions or regions of the state space to avoid. This is not feasible in many real-world tasks as the definition of unsafe or risky behaviors may not be tractable due to unknown interactions between the observed state and selected actions. 
A defining feature of RL in high-risk real-world settings is that the learning paradigm is fully _offline_ **and** _off-policy_, since exploratory data collection is often infeasible due to legal, safety, and ethical implications. However, RL methods are heavily influenced by the data collection policy: data is collected prior to learning, and frequently contains decisions that rely on confounding information, such as production schedules requiring deviations from normal use of manufacturing machinery, or lifestyle information and insurance status in clinical treatment (Dorfman et al., 2021; Gasse et al., 2021). These factors, if unaccounted for, may lead to the overestimation of the anticipated return, biased decisions, and/or overconfident yet erroneous predictions (Thrun and Schwartz, 1993). In addition, rare but dangerous situations can be overlooked if optimizing without accounting for possible "worst case" outcomes, thus failing to guarantee safety. While RL has been explored within healthcare applications, it has primarily been used as a means for learning _risk-neutral_ policies, optimized to provide the action with the highest expected return (Raghu et al., 2017; Parbhoo et al., 2017; Prasad et al., 2017; Yu et al., 2021). Without the ability to explore or otherwise test alternative treatment strategies, the learned policies are unreliable (Gottesman et al., 2019; Oberst and Sontag, 2019). An alternative offline RL paradigm was introduced by Fatemi et al. (2021) that prioritizes the avoidance of actions, proportional to their risk of leading to dead-ends (where an agent enters an irrecoverably negative trajectory). In their proposed dead-end discovery (DeD) framework, recorded negative outcomes are leveraged to identify behaviors that should be avoided. Specifically, actions that lead to dead-ends are identified based on thresholded point-estimates of the expected return of that action rather than considering the full distribution. In doing so, risk estimation in DeD is limited and, at worst, too optimistic to determine which actions are safe to execute. The implications of this are significant: by underestimating the risk associated with a particular action, we are unable to determine whether an action could be potentially dangerous - a necessity in safety-critical settings. In this paper, we propose a risk-sensitive decision-making framework positioned to serve as an early-warning system for dead-end discovery. Broadly, our framework may be thought of as a tool for thinking about risk-sensitivity in data-limited offline settings. Our contributions are as follows: (i) Unlike former approaches, we incorporate distributional estimates of the return (Bellemare et al., 2022) to determine when an observed state is at risk of becoming a dead-end from the expected worst-case outcomes over available decisions (Chow et al., 2015). (ii) We establish that our risk-estimation procedure serves as a lower-bound to the theoretical results underlying DeD (Fatemi et al., 2021), maintaining the important characteristics used for identifying dead-ends. As a result, we are able to detect and provide earlier indication of high-risk scenarios. (iii) By modeling the full distribution of the expected return, we construct a spectrum of risk-sensitivity when assessing dead-ends. We show that this flexibility allows for tunable risk estimation procedures and can be customised according to the task at hand. 
(iv) Finally, we provide empirical evidence that our proposed framework enables an earlier determination of high-risk areas of the state space on both a simulated environment and a real application within healthcare, treating patients with sepsis. ## 2 Related Work **Safe and Risk-sensitive RL** A shortcoming of most approaches to offline RL is that they are designed to maximise the expected value of the cumulative reward of a policy. This assumes that the training data is sufficient to promote convergence toward an optimal policy. As a result, they are unable to quantify the risk associated with a learnt policy to ensure that it acts in the intended way. The field of safe RL instead tries to learn policies that obtain good performance in terms of expected returns while satisfying some safety constraints during learning and/or deployment (García and Fernández, 2015), defined through a constrained MDP (CMDP). Several safe RL algorithms (Achiam et al., 2017; Berkenkamp et al., 2017; Alshiekh et al., 2018; Tessler et al., 2019; Xu et al., 2021; Yang et al., 2022; Polosky et al., 2022) have been developed that either i) transform the standard RL objective to include some form of risk or ii) leverage external knowledge to satisfy certain safety constraints and quantify performance with a risk metric. However, safe RL assumes _a priori_ knowledge of what the unsafe regions are (through the definition of constraints, whether implicit in the environment or explicit in the agent's behavior design), which is not always feasible in real-world safety-critical scenarios. Unlike these, we do not explicitly learn a policy, but learn a value function that conveys the risks inherent in making suboptimal decisions at inopportune times. Risk-sensitive RL instead focuses on learning to act in a dynamic environment, while accounting for risks that may arise during the learning process (Mihatsch & Neuneier, 2002), where high-risk regions do not have to be known a priori. Unlike risk-neutral RL, these methods optimise a _risk measure of the returns_ rather than the average or expected return. Among these, Fu et al. (2018) present a survey of policy optimization methods that consider stochastic formulations of the value function to ensure that certain risk constraints may be satisfied when maximising the expected return. Other approaches propose replacing the expected long-term reward used by most RL methods with a _risk-measure_ of the total reward such as the Conditional-Value-at-Risk (CVaR) (Chow et al., 2015; Stanko & Macek, 2019; Ying et al., 2022; Du et al., 2022), and develop optimization strategies to minimize this risk and ensure safety at all times. Ma et al. (2021) adapt distributional RL frameworks (Bellemare et al., 2022) to offline settings by penalizing the predicted quantiles of the return for out-of-distribution actions. While these methods may be used to learn a distribution of possible outcomes, they have not been used to identify dead-ends as we propose here. Unlike off-policy evaluation methods, we focus on estimating the _risk_ associated with a policy in terms of the expected worst-case outcomes. Specifically, we learn a distributional estimate of the future return of a policy using Implicit Quantile Networks (IQN) (Dabney et al., 2018), and integrate a conservative Q-learning (CQL) penalty (Kumar et al., 2020) into the loss to lower-bound the expected value of the policy. 
**Nonstationary and Uncertainty-Aware RL** Several works focus on explicitly modelling _non-stationary dynamics_ in MDPs for decision-making that accounts for uncertainty over model dynamics. Among these, methods such as Chandak et al. (2020) focus on safe policy optimization and improvement in non-stationary MDP settings. Here, the authors assume that the non-stationarity in an MDP is governed by an exogenous process, or that past actions do not impact the underlying non-stationarity. Sonabend et al. (2020) use hypothesis testing to assess whether, at each state, a policy from a human expert would improve value estimates over a target policy, and use this assessment during training to improve the target policy. More recently, Joshi et al. (2021) presented an approach for learning to defer to human expertise in nonstationary sequential settings based on the likelihood of improving the expected returns of a particular policy. Our work differs from these in that instead of focusing on optimizing a specific policy, we explicitly learn which types of behaviors to avoid using risk-sensitive distributional estimates of the _future return_, as opposed to a point estimate of the expectation of that distribution. **RL in safety-critical domains** There are several works that address uncertainty decomposition in applications such as healthcare. Specifically, Depeweg et al. (2018) decompose the uncertainty in Bayesian neural networks to obtain an estimate of the aleatoric uncertainty for safety. Similarly, Kahn et al. (2017) use uncertainty-aware RL to guide robots to avoid collisions, while Cao et al. (2021) develop a domain-specific framework called Confidence-Aware RL for self-driving cars to learn when to switch between an RL policy and a baseline policy based on the uncertainty of the RL policy. Unlike these works, we propose a general-purpose framework that can be applied to a number of safety-critical applications using risk-sensitive RL to provide an early warning of risk over possible future outcomes. ## 3 Preliminaries As outlined above, we frame risk identification for safety-critical decision making within a Reinforcement Learning (RL) context. We consider a standard episodic RL setting in an environment with non-stationary and stochastic dynamics where an agent determines actions \(a\in\mathcal{A}\) after receiving a state representation \(s\in\mathcal{S}\) of the environment, modeled as a Markov Decision Process (MDP) \(\mathcal{M}=\{\mathcal{S},\mathcal{A},T,R,\gamma\}\), where \(T(\cdot|s,a)\) relates to the stochastic transition from state \(s\) given action \(a\); \(R(s,a)\) is a finite, binary reward function that provides reward only at the terminal state of each episode and \(\gamma\in(0,1]\) is a scalar discount factor. In offline safety-critical settings, we assume that recorded actions are selected according to an unknown expert policy \(\pi(\cdot|s)\), given the observed state \(s\). The objective is to estimate the value of each action as the discounted sum of future rewards (e.g. the return) \(Z^{\pi}(s,a)=\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t})\) where \(s_{0}=s\), \(a_{0}=a\), \(s_{t}\sim T(\cdot|s_{t-1},a_{t-1})\), and \(a_{t}\sim\pi(\cdot|s_{t})\). Characterized in its full probabilistic form, \(Z^{\pi}(s,a)\) represents the distribution of the future return from state \(s\) when executing action \(a\). **Distributional RL** In challenging real-world scenarios, the consequences of a decision carry a measure of unpredictability. 
Standard approaches to RL seek to maximize the mean of this random return. In reality, complex phenomena in stochastic environments may fail to be accounted for, leading to rare but critical outcomes being ignored. To account for this, Distributional RL (Bellemare et al., 2022) has been introduced to model the full return distribution by treating the observed return from following a policy \(\pi\) and associated states as random variables when forming the Bellman equation: \[Z^{\pi}(s,a)\stackrel{D}{=}R(s,a)+\gamma Z^{\pi}(s^{\prime},a^{\prime})\] The return distribution \(Z^{\pi}(s,a)\) is most commonly represented in RL by the state-action value function \(Q^{\pi}(s,a)\), which represents the expected future return. That is, \(Q^{\pi}(s,a)=\mathbb{E}[Z^{\pi}(s,a)]\). As the distribution is an infinite-dimensional object, some approximations are needed for tractable estimation. Initially, the support of the distribution was discretized _a priori_ over pre-defined categorical quantiles (Bellemare et al., 2017). More recently, this approximation has been relaxed to a distribution of uniformly weighted particles, estimated with neural networks (Dabney et al., 2018), to implicitly represent these quantiles. Given the flexibility of these implicit quantile networks (IQN), they are well suited to define risk-aware decision criteria over value functions learned from real-world data where the anticipated return structure is unknown. As such, we build our proposed framework from IQN estimates of the state-action value function. **Conservatism in offline RL** An important consideration when learning from offline data with RL is avoiding value overestimation for actions not present in the data (Fujimoto et al., 2018, 2019; Bai et al., 2022). Prior work has attempted to choose a lower bound of approximated value functions (Fujimoto et al., 2018; Buckman et al., 2020), to regularize policy learning toward the observed behavior (Fujimoto et al., 2019; Wu et al., 2019; Kumar et al., 2019; Wang et al., 2020), or to directly regularize the value estimates of the observed actions (Kumar et al., 2020; Jin et al., 2021). We utilize this last approach (termed conservative Q-learning; CQL), which resorts to minimizing the estimated values over the observed state-action distribution to enforce a conservative lower-bound of the value function. This is accomplished by simply adding a \(\beta\)-weighted penalty term \(\mathcal{L}^{\mathrm{CQL}}\) to the RL objective \(\mathcal{L}^{\mathrm{RL}}\). The optimization objective thereby becomes \[\mathcal{L}^{\mathrm{RL}}+\beta\mathcal{L}^{\mathrm{CQL}}\] where \(\mathcal{L}^{\mathrm{CQL}}\) is chosen to be an exponentially weighted average of Q-values for the chosen action (CQL(\(\mathcal{H}\)) in Kumar et al. (2020)). This serves to additionally constrain the overestimation of actions not present in the dataset and has been shown to improve risk-averse performance with distributional RL (Ma et al., 2021). By increasing the value of \(\beta\), the overall conservatism and thus risk-aversion is increased, as the optimization of the estimated values is constrained further from the true value function. **Risk estimation.** We assume the return is bounded (e.g. \(\mathbb{E}[|Z|]<\infty\)) with cumulative distribution function \(\mathcal{F}(z)=\mathbb{P}(Z\leq z)\). When estimating the possible effects of a decision, we want to account for worst-case outcomes that occur with some level of confidence \(\alpha\in(0,1)\). 
The value-at-risk (VaR) with confidence \(\alpha\) represents the \(\alpha\)-quantile of the distribution \(Z\): \(\mathrm{VaR}_{\alpha}(Z)=\min\ \{z\mid\alpha\leq\mathcal{F}(z)\}\). This quantile can then be used to determine the "expected worst-case outcome", or conditional value at risk (CVaR): \[\mathrm{CVaR}_{\alpha}(Z)=\frac{1}{\alpha}\mathbb{E}[(Z-\mathrm{VaR}_{\alpha}(Z))^{-}]+\mathrm{VaR}_{\alpha}(Z)\] where \((x)^{-}=\min(x,0)\) is the negative part of \(x\) (Figure 1 illustrates the determination of CVaR\({}_{\alpha}\) with \(\alpha=0.1\)). We use the dual representation of CVaR (Artzner et al., 1999), which is formulated with a single expectation: \[\text{CVaR}_{\alpha}(Z)\ =\ \min_{\xi\in\mathcal{U}_{\text{CVaR}}(\alpha,\mathbb{P})}\mathbb{E}_{\xi}[Z]\] where \(\mathbb{E}_{\xi}[Z]\) is the \(\xi\)-weighted expectation of \(Z\) within the \(\alpha\)-quantile and \(\mathcal{U}_{\text{CVaR}}(\alpha,\mathbb{P})\) is the portion of \(Z\) that falls below \(\text{VaR}_{\alpha}(Z)\). This establishes that: \[\text{CVaR}_{\alpha}(Z)\leq\mathbb{E}[Z] \tag{1}\] As \(\alpha\to 1\), \(\mathcal{U}_{\text{CVaR}}(\alpha,\mathbb{P})\) encompasses all of \(Z\) and \(\text{CVaR}_{\alpha}(Z)\to\mathbb{E}[Z]\). Thus, the CVaR is a lower-bound for value estimates derived through the expectation of the return distribution (e.g. the value function \(Q^{\pi}\)). **Dead-end Discovery (DeD).** As introduced by Fatemi et al. (2021), the DeD framework assures a notion of security when estimating whether an action will lead to a dead-end (see Eq. 2). DeD constrains the scope of a given policy \(\pi\) if _any_ knowledge exists about undesired outcomes. Formally, if at state \(s\), action \(a\) transitions to a dead-end at the next state with probability \(P_{D}(s,a)\) or the negative terminal state with probability \(F_{D}(s,a)\) with a level of certainty \(\lambda\in[0,1]\), then \(\pi\) must avoid \(a\) at \(s\) with the same certainty: \[P_{D}(s,a)+F_{D}(s,a)\geq\lambda\ \Longrightarrow\ \pi(s,a)\leq 1-\lambda. \tag{2}\] Note that a dead-end may occur an indeterminate number of steps prior to the negative terminal condition. The defined notion of a dead-end is that once one is reached, all subsequent states are also dead-ends up to and including the negative terminal state. While \(P_{D}\), \(F_{D}\), and \(\lambda\) may not be explicitly calculable, the DeD framework learns an estimate of the likelihood of transitioning to a dead-end as well as the reduction in likelihood of a positive outcome. This is done by constructing two independent MDPs \(\mathcal{M}_{D}\) and \(\mathcal{M}_{R}\) from the base environment MDP \(\mathcal{M}\), focusing solely on negative and positive outcomes, respectively. DeD learns value approximations of each MDP, \(Q_{D}(s,a)\) for negative outcomes and \(Q_{R}(s,a)\) for positive outcomes (\(Q_{D}\in[-1,0]\) and \(Q_{R}\in[0,1]\) respectively). These value estimates enable the identification and confirmation of dead-ends and actions that lead to them through the relationship: \[-Q_{D}(s,a)\geq P_{D}(s,a)+F_{D}(s,a) \tag{3}\] Then, the security condition is assured by \(\pi(s,a)\leq 1+Q_{D}(s,a)\). In practice, the \(Q_{D}\) and \(Q_{R}\) functions are approximated with deep Q-networks (DQN) (called the D- and R- networks, respectively) in concert with empirically determined thresholds \(\delta_{D}\) and \(\delta_{R}\) to flag when actions or states have the risk of leading to dead-ends and should be avoided. 
The DeD framework determines that an action \(a\) should be avoided when both \(Q_{D}(s,a)\leq\delta_{D}\) **and** \(Q_{R}(s,a)\leq\delta_{R}\). A state \(s\) is said to be a dead-end if the _median_ value over all actions falls below these thresholds. That is, a dead-end is reached whenever both \(\text{median}(Q_{D}(s,\cdot))\leq\delta_{D}\) **and** \(\text{median}(Q_{R}(s,\cdot))\leq\delta_{R}\). Our proposed distributional formulation of dead-end discovery uses these definitions, with slight adaptation to the risk-sensitive approach we use, allowing for the identification of _both_ high-risk actions and states. However, in this paper we prioritize the identification of dead-end states, demonstrating that our proposed solution provides earlier identification. ## 4 Risk-sensitive Dead-end Discovery While the DeD framework is promising for learning in offline safety-critical domains, it has limited risk-sensitivity by neglecting to model the full distribution of possible outcomes. We develop a risk-sensitive framework for dead-end discovery that conservatively models the full distribution of possible returns, driven by irreducible environment stochasticity. Our approach, DistDeD, utilizes distributional dynamic programming (Bellemare et al., 2022) to estimate the full distribution of possible returns while also limiting overestimation due to out-of-distribution actions by incorporating a CQL penalty (Kumar et al., 2020). Mirroring the construction of DeD, we instantiate two Markov Decision Processes (MDPs) \(\mathcal{M}_{D}\) and \(\mathcal{M}_{R}\), derived from the original MDP \(\mathcal{M}\) with \(\gamma=1\), with reward functions chosen to focus on either the positive or negative outcomes. \(\mathcal{R}_{D}\) returns \(-1\) with any transition to a negative terminal state and is zero otherwise. \(\mathcal{R}_{R}\) returns \(+1\) with any transition to a positive terminal state and is zero otherwise. We then approximate the distributional returns \(Z_{D}\) and \(Z_{R}\) of these separate MDPs independently, where the support of \(Z_{D}\) is \([-1,0]\) and the support of \(Z_{R}\) is \([0,1]\). To quantify the risk of selecting an action \(a\) at state \(s\), we consider the expected worst-case outcome, or conditional value at risk (CVaR), of these return distributions. That is, we infer \(\text{CVaR}_{\alpha}(Z_{D}(s,a))\) and \(\text{CVaR}_{\alpha}(Z_{R}(s,a))\) for a chosen \(\alpha\in(0,1]\), which we consider to be a hyperparameter along with the choice of thresholds \(\delta_{D}\) and \(\delta_{R}\). By using CVaR to determine the risk of approaching a dead-end, we effectively construct a lower-bound on the DeD value estimates (by virtue of Eq. 1), which allows us to maintain the same theoretical framing. DeD is built around the expectation of the return, \(Q_{D}(s,a)=\mathbb{E}[Z_{D}(s,a)]\). Then, since \(\text{CVaR}_{\alpha}(Z_{D}(s,a))\leq\mathbb{E}[Z_{D}(s,a)]\), we are assured that: \[-\text{CVaR}_{\alpha}(Z_{D}(s,a))\geq-Q_{D}(s,a)\geq P_{D}(s,a)+F_{D}(s,a) \tag{4}\] Thus, by bounding the estimates of entering a dead-end, we see that using CVaR satisfies the security condition: \(\pi(s,a)\leq 1+\text{CVaR}_{\alpha}(Z_{D}(s,a))\). Parallel results for \(Z_{R}\) follow similarly. We choose to represent the distributions \(Z_{D}\) and \(Z_{R}\) for all states \(s\) and actions \(a\) using implicit quantile networks (IQN) (Dabney et al., 2018). 
To constrain the distributional estimates from overestimating the return for actions not present in the dataset, thus avoiding overconfidence, we train the IQN architectures with a conservative Q-learning (CQL) penalty (Kumar et al., 2020). CQL regularizes the distributional Bellman objective by minimizing the value of each action, which serves also to constrain overestimation of actions not present in the observed data. We weight this penalty by the hyperparameter \(\beta\). An illustration of the DistDeD framework is included in Figure 2: _a)_ If necessary1, observations are encoded into a state representation. _b)_ The encoded state representations are then passed to independent IQN models to estimate \(Z_{D}(s,\cdot)\) and \(Z_{R}(s,\cdot)\) for each possible action. _c)_ The CVaR is computed for each distribution and then evaluated against the thresholds \(\delta_{D}\) and \(\delta_{R}\). Following the definition of dead-end discovery given in the previous section, if both CVaR\((Z_{D})\) and CVaR\((Z_{R})\) fall below the respective thresholds for any action, that action is recommended to be avoided. _d)_ Furthermore, if the median over all actions falls below the thresholds for both distributions, then the state is said to be a dead-end. Footnote 1: When observations are irregular or partial. With the bounding provided by DistDeD, utilizing CVaR estimates of the inferred return distributions, we enable a more conservative and thereby risk-averse mechanism to determine whether a state \(s\) is at risk of being a dead-end. The level of risk-aversion, or conservatism, is jointly determined by the confidence level \(\alpha\), the weight of the CQL penalty \(\beta\), as well as the thresholds \(\delta_{D}\) and \(\delta_{R}\). Figure 2: **Distributional Dead-end Discovery (DistDeD)** a) Observations are encoded (as needed) into a state representation and then b) passed to independent IQN models to estimate the distribution of returns (\(Z_{D}\) and \(Z_{R}\)) for each possible action. c) The CVaR\({}_{\alpha}\) is computed for each distribution and is then evaluated against the thresholds \(\delta_{D}\) and \(\delta_{R}\). If both \(\text{CVaR}_{\alpha}(Z_{D})\) and \(\text{CVaR}_{\alpha}(Z_{R})\) fall below the respective thresholds for any action, then that action is recommended to be avoided. d) If the median over all actions falls below the thresholds for both distributions, then the state is said to be a dead-end. The level of conservatism within DistDeD depends on choices of all of these quantities. Since \(\beta\) directly affects the optimization process of the D- and R- Networks, we treat it as a hyperparameter. An investigation of the effect of increasing \(\beta\) can be found in Section A.4.3 in the Appendix. The choice of \(\alpha\), influencing the CVaR calculation, as well as the thresholds \(\delta_{D}\) and \(\delta_{R}\), can be tuned dependent on acceptable risk tolerances in the task when evaluating the trained D- and R- Networks. Choosing a smaller value for \(\alpha\) constrains the CVaR evaluation of the estimated distributions to consider lower likelihood (and more adverse, by construction) outcomes, a form of increased conservatism. Smaller values of the thresholds increase the sensitivity of the risk determination of the framework. 
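To make the conservatism term concrete, the following is a minimal sketch of a \(\beta\)-weighted, CQL(\(\mathcal{H}\))-style penalty added to a standard TD objective for discrete actions. It is written against an expectational Q-network rather than the full IQN quantile loss, and the network, batch layout, and variable names are our assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def conservative_td_loss(q_net, target_net, batch, gamma=1.0, beta=0.1):
    """Sketch of L_RL + beta * L_CQL for a discrete-action value network."""
    s, a, r, s_next, done = batch          # assumed shapes: s [B, D], a [B], r [B], done [B]

    q_all = q_net(s)                                    # Q(s, .) for every action, [B, num_actions]
    q_sa = q_all.gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a) for the logged action

    with torch.no_grad():
        # Expectational TD target; a distributional variant would form quantile targets instead.
        q_next = target_net(s_next).max(dim=1).values
        target = r + gamma * (1.0 - done) * q_next

    td_loss = F.smooth_l1_loss(q_sa, target)            # L_RL

    # CQL(H)-style regularizer: push down a soft maximum over all actions
    # while pushing up the value of the action observed in the data.
    cql_penalty = (torch.logsumexp(q_all, dim=1) - q_sa).mean()

    return td_loss + beta * cql_penalty                  # larger beta = more conservative
```

Increasing `beta` in this sketch tightens the conservative lower bound at the cost of additional pessimism, mirroring the role of \(\beta\) described above.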
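The assessment side (steps c and d of Figure 2) can likewise be sketched as follows, assuming arrays of return particles drawn from the two IQN heads for a single state; the default threshold values shown are placeholders, since \(\delta_{D}\) and \(\delta_{R}\) are tuned per task.

```python
import numpy as np

def cvar(samples, alpha=0.1):
    """Empirical CVaR_alpha: mean of the worst alpha-fraction of return samples."""
    z = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * z.size)))
    return z[:k].mean()

def distded_assessment(zd_particles, zr_particles, alpha=0.1, delta_d=-0.5, delta_r=0.5):
    """zd_particles, zr_particles: arrays of shape [num_actions, K] holding samples of
    Z_D(s, a) and Z_R(s, a) for one state s (e.g. K particles per action from IQN)."""
    cvar_d = np.array([cvar(z, alpha) for z in zd_particles])   # support [-1, 0]
    cvar_r = np.array([cvar(z, alpha) for z in zr_particles])   # support [0, 1]

    # Step c): an action is recommended to be avoided when BOTH estimates
    # fall below their respective thresholds.
    avoid_action = (cvar_d <= delta_d) & (cvar_r <= delta_r)

    # Step d): the state is flagged as a dead-end when the medians over all
    # actions fall below both thresholds.
    is_dead_end = bool(np.median(cvar_d) <= delta_d and np.median(cvar_r) <= delta_r)
    return avoid_action, is_dead_end
```

Choosing a smaller `alpha` restricts `cvar` to a more adverse tail of the return distribution, making both checks more conservative, which is exactly the tuning behavior discussed above.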
We demonstrate the effects of choosing different \(\alpha\) values on the performance benefits of DistDeD in comparison to previous dead-end discovery approaches (Fatemi et al., 2021) across multiple settings of \(\delta_{D}\) and \(\delta_{R}\) in our experiments using real-world medical data in Section 6. ## 5 Illustrative Demonstration of DistDeD We provide a preliminary empirical demonstration of the advantages seen by using our proposed DistDeD framework using the LifeGate toy domain (Fatemi et al., 2021). Here, the agent is to navigate around a barrier to a goal region while learning to avoid a dead-end zone which pushes the agent to the negative terminal edge after a random number of steps (See Figure 3). **Empirical Comparison** We aim to demonstrate the apparent advantages of our proposed DistDeD in comparison to the original DeD framework. For DeD, we model the \(Q_{D}\) and \(Q_{R}\) functions using the DDQN architecture (Hasselt et al., 2016) using two layers of 32-nodes with ReLU activations and a learning rate of \(1e^{-3}\). For DistDeD we utilize IQN architectures (Dabney et al., 2018) for both \(Z_{D}\) and \(Z_{R}\) using two layers of 32 nodes, ReLU activations and the same learning rate of \(1e^{-3}\). For each IQN model, we sample \(N,N^{\prime}=8\) particles from the local and target \(\tau\) distributions while training and also weight the CQL penalty \(\beta=0.1\). When evaluating \(Z_{D}\) and \(Z_{R}\), we select \(K=1000\) particles and set our confidence level to \(\alpha=0.1\). All approximate value functions (both expectational and distributional) were trained using 1 million randomly collected transitions from LifeGate. In Figure 3 we show the learned value estimates from the D-Networks for all actions available to the agent in select locations. We suppress the corresponding R-Network estimates for visual simplicity although they reflect qualitatively the same thing. For this demonstration we plot the full return distribution \(Z_{D}(s,a)\), the \(\alpha\)-quantile used to compute \(\text{CVaR}_{\alpha}(Z_{D}(s,a))\), the value estimate \(Q_{D}(s,a)\) from the DeD, as well as a notional threshold \(\delta_{D}=-0.75\). We see the inherent value of the distributional estimates used in DistDeD to determine which actions to avoid. Fig. 3(A) presents the returns at an initial state, from which encountering a dead-end is more common. Figure 3: Demonstration of inherent value of using \(Z_{D}(s,a)\) estimated with IQN and \(\text{CVaR}_{0.1}(Z_{D}(s,a))\) in comparison to \(Q_{D}(s,a)\) estimated with DDQN on the LifeGate toy domain (Fatemi et al., 2021). **A)** Evaluating returns from an initial state, **B)** evaluating returns from a more favorable location near the goal region. Notably, the CVaR estimate (the mean of the orange "worst-case distribution") is _risk-sensitive_ and _provides a lower bound of the expected value of the blue return distribution_, while the value estimate of DeD (black dashed line) is far more optimistic. Here, we set \(\delta_{D}=-0.75\) as a notional threshold (red dashed line). Fig. 3(B) presents the estimated returns from a more favorable location near the goal region. As expected, the CVaR estimate, the mean of the orange "worst-case distribution", is a lower bound on the expected value of the full return distribution (plotted in blue). Notably, the value estimated using DeD (black dashed vertical line) is far more optimistic, since DeD only considers thresholded point-estimates of expected value. 
This provides evidence of the limitations of DeD, ignoring the full return distribution when estimating the value of available decisions. In Figure 4(A, B), we evaluate three pre-determined policies in LifeGate using both DeD and DistDeD. Two of the three policies attempt to navigate through the dead-end region of the environment. This construction is purposeful in order to indicate how reliably risk is flagged by each approach. The design of this experiment is to demonstrate the early-warning capability of DistDeD for those sub-optimal trajectories. In Figure 4(C) we evaluate 10,000 trajectories with stochastic execution of the two suboptimal policies and assess how many steps prior to entering the dead-end region DistDeD and DeD raise an alarm and recommend a change in policy. We assess the overall risk of each state \(s\) in a trajectory by averaging the median values of \(Q_{D}(s,\cdot)\) and \(Q_{R}(s,\cdot)-1\) (for DistDeD, \(\text{CVaR}_{\alpha}(Z_{D}(s,\cdot))\) and \(\text{CVaR}_{\alpha}(Z_{R}(s,\cdot))-1\)). If the averaged median value falls below the threshold \(\delta_{D}\), an alarm is raised. We use the previously published value, \(\delta_{D}=-0.15\), for DeD and choose \(\delta_{D}=-0.5\) for DistDeD. These values were chosen empirically by attempting to minimize false-positives among a validation set of the data (see Section A.4.1 for more detail). DeD (Fig. 4(A)) fails to adequately signal the risk of the two sub-optimal policies before they reach the dead-end region of the environment. In contrast, DistDeD (Fig. 4(B)) appropriately flags the trajectories ahead of the dead-end region, allowing for correction if an overseeing agent is able to intervene. Fig. 4(C) quantifies this advantage, demonstrating that DistDeD provides an indication of risk, on average, 3 steps earlier. This result confirms the utility of modeling the full distribution of expected returns and using a more coherent estimation of risk, focused on the expected worst-case outcome. Figure 4: DistDeD’s advantage when alerting that a trajectory is at risk of encountering a dead-end in the LifeGate domain. Three hand-designed policies (with two purposefully suboptimal) (shown in white) are evaluated using both DeD (A) and DistDeD (B), showing that DistDeD raises alarm earlier than DeD and in a manner that could alert a necessary change in policy before encountering a dead-end. 10000 stochastic executions of these suboptimal policies are then evaluated (C) using both approaches to understand the scope of how much earlier DistDeD raises a flag in comparison to DeD. Dotted lines show how raising alarms earlier leads to actions that could direct a patient’s trajectory towards potential recovery (shown in blue). ## 6 Assessing Medical Dead-ends with DistDeD **Data** We aim to identify medical dead-ends among a cohort of septic patients derived from the MIMIC-IV (Medical Information Mart for Intensive Care, v2.0) database (Johnson et al., 2020). This cohort comprises the recorded observations of 6,188 patients (5,352 survivors and 836 nonsurvivors), with 42 features, and 25 treatment choices (5 discrete levels for each of IV fluid and vasopressor), over time periods ranging between 12 and 72 hours. We aggregate each feature into hourly bins and fill missing values with zeros, keeping track of which features were actually observed with an appended binary mask. Missing features are implicitly accounted for when constructing state representations of a patient's health through time. 
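A minimal sketch of this binning-and-masking step is shown below; the use of a pandas DataFrame with a DatetimeIndex, mean aggregation within each hour, and the column naming are illustrative assumptions rather than the released preprocessing code.

```python
import pandas as pd

def hourly_bin_with_mask(raw: pd.DataFrame) -> pd.DataFrame:
    """Aggregate irregular measurements into hourly bins, zero-fill missing values,
    and append a binary mask recording which features were actually observed.

    `raw` is assumed to have a DatetimeIndex and one column per physiological feature.
    """
    hourly = raw.resample("1H").mean()                            # hourly bins per feature
    mask = hourly.notna().astype(int).add_suffix("_observed")     # 1 if observed that hour
    filled = hourly.fillna(0.0)                                   # fill missing values with zeros
    return pd.concat([filled, mask], axis=1)
```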
Details about the exclusion and inclusion criteria used to define the construction of this patient cohort are contained in Section A.1 in the Appendix. **State Construction** As recommended by Killian et al. (2020) and implemented in DeD (Fatemi et al., 2021), we make use of a sequential autoencoder to construct fixed-dimension state representations, embedding a history of recorded observations of a patient's health prior to each time step. This allows us to process partial and irregularly occurring observations through time, a characteristic of medical data. To do this, we use an online Neural Controlled Differential Equation (NCDE) (Morrill et al., 2021) for state construction as it naturally handles irregular temporal data. Additional information about the NCDE state construction can be found in Section A.2.1 in the Appendix. We define terminal conditions for each trajectory as whether the patient survives or succumbs to (within 48 hours of the final observation) their infection. There are no intermediate rewards aside from these terminal states. When a patient survives, the trajectory is given a +1 reward, while negative outcomes receive \(-1\). **D- and R- Networks** The encoded state representations provided by the NCDE are provided as input to the D- and R-Networks to estimate the value (and risk of encountering a dead-end) of each state and all possible treatments. To form the DistDeD framework, we use CQL-constrained (Kumar et al., 2020) implementations of IQN (Dabney et al., 2018) to train each network, as discussed in Section 4 (details included in Appendix A.2.2). **Training** We train the NCDE for state construction as well as the IQN instantiations for the D- and R-Networks in an offline manner. All models are trained with 75% of the data (4,014 surviving patients, 627 patients who died), validated with 5% (268 survivors, 42 nonsurvivors), and we report all results on the remaining held-out 20% (1,070 survivors, 167 nonsurvivors). In order to account for the data imbalance between positive and negative outcomes, we follow a similar training procedure to DeD (Fatemi et al., 2021) where every sampled minibatch is ensured to contain a proportion of terminal transitions from non-surviving patient trajectories. This amplifies the training for conditions that lead to negative outcomes, ensuring that the D- and R- Networks are able to recognize scenarios that carry risk of encountering dead-ends. Specific details on the training of DistDeD can be found in Appendix A.2.2. Footnote 2: All code for data extraction and preprocessing as well as for defining and training DistDeD models can be found at [https://github.com/MLforHealth/DistDeD](https://github.com/MLforHealth/DistDeD). ### Experimental Setup By design, DistDeD is formulated to provide a more conservative and thereby earlier indication of risk. A secondary benefit of the design of DistDeD is that by adapting the risk tolerance level of the CVaR estimates (by selecting different values for \(\alpha\)), we are provided a spectrum of value functions that could be used to assess whether a dead-end has been reached or is imminent. We therefore aim to execute a set of experiments that assess the extent to which these two points of improvement over DeD provide benefit. By establishing more conservative estimators with the IQN D- and R- Networks, we increase the occurrence of what could be identified as false positive indications of risk for patients whose health has not deteriorated to be a legitimate dead-end (e.g. 
patients who survive). We therefore need to assess the tradeoffs of increased "false-positives" against improved recall for indications of risk for patients who died. To perform this assessment, we execute a set of experiments to quantitatively compare DistDeD to DeD when each approach is applied to the septic patient cohort outlined above. First, this entails measuring how much earlier DistDeD raises flags across a range of VaR \(\alpha\) values (for a fixed set of thresholds \(\delta_{D,R}\)). Second, we want to identify if DistDeD's variation, due to the choice of VaR \(\alpha\), introduces settings that perform worse than DeD when considering a full range of possible thresholds \(\delta_{D,R}\). Finally, we aim to develop insight into the contributions of both the distributional and CQL additions to the DeD framework by considering ablations to DistDeD where each component is removed. Additional details of all experiments are contained in Section A.3 in the Appendix, where further experimental analyses can also be found in Section A.4, such as the effects of learning with reduced data (see Section A.4.4). ### Results As outlined in Section 6.1, we highlight the importance of accounting for risk when thinking about dead-ends and validate the following aspects of DistDeD. First, we assess the performance of DistDeD by demonstrating how DistDeD can provide an earlier indication of risk in comparison to other baselines and, notably, outperforms DeD across all settings. Second, we demonstrate the utility of having a tunable assessment of risk that allows for domain experts to easily apply and adapt our method to different contexts, hospital settings, and illnesses. Finally, we show that including a CQL penalty in the DistDeD framework further improves performance in comparison to other baselines. #### 6.2.1 DistDeD Provides Earlier Warning of Patient Risk We assess the ability of DistDeD to provide an early warning of patient risk in comparison to the original medical dead-ends framework, DeD. Figure 6 shows, for non-survivors, the number of hours ahead of death that DistDeD raises a warning flag and how this changes with varying choices of VaR. In comparison to DeD, DistDeD is able to raise flags much earlier, providing warning of up to 25 hours in advance across all values of VaR, thereby enabling timely intervention in safety-critical settings. To assess DistDeD's ability to raise flags in different contexts, we also compare how its performance varies across both surviving and non-surviving patients. These results are shown in Figure 6. In general, we note that for both patient groups, DistDeD is able to detect patient deterioration and provide early warning of up to 20 hours in advance in comparison to DeD, depending on the choice of VaR thresholds. The performance across both surviving and non-surviving patients is very similar. #### 6.2.2 DistDeD Allows for a Tunable Assessment of Risk Note that because DistDeD explicitly uses the Value at Risk threshold parameter \(\alpha\) to provide an assessment of risk, it can easily be adapted and tuned to various scenarios depending on how risk-averse a user would like to be. In addition, the choice of the thresholds \(\delta_{D}\) and \(\delta_{R}\) can be further adjusted to improve the precision of estimates of the risk of encountering a dead-end. 
For instance, in an ICU setting where timely intervention is crucial, a clinician may choose to adopt lower \(\alpha\) and higher \(\delta_{D}\) & \(\delta_{R}\) threshold values to be more conservative, such that flags may be raised earlier if necessary. In our experiments, we evaluate DistDeD and DeD over all possible settings of \(\delta_{D}\) and \(\delta_{R}\) to assess the sensitivity of those settings when computing the True Positive Rate (TPR) and False Positive Rate (FPR) of determining patient risk. We also continue to evaluate DistDeD over a range of CVaR\({}_{\alpha}\) settings. Here, TPR corresponds to the percentage of non-survivor trajectories that are flagged, while FPR corresponds to the percentage of survivor trajectories that are flagged. Figure 8 shows a comparison of ROC curves derived from the DistDeD and DeD frameworks to exhibit how each balances the TPR and FPR tradeoff. For DistDeD, we evaluated the TPR and FPR for a range of \(\alpha\) values to identify whether there was a particular level of conservatism (or optimism) that would perform worse than DeD. However, we observe that DistDeD robustly outperforms DeD, finding a higher TPR while having a low FPR in comparison, across all settings of \(\alpha\), \(\delta_{D}\), and \(\delta_{R}\). Overall, having a _tunable_ assessment of risk also enables a domain expert, like a clinician, to balance the benefits of early warning with the risk of potential false positive indications of risk, where a patient at low risk is potentially flagged. Moreover, a higher TPR counteracts an increased FPR when we are more conservative in the DistDeD framework. #### 6.2.3 CQL Enhances DistDeD Performance In order to assess the individual contributions of implementing a distributional estimate of the risk of encountering a dead-end _and_ constraining the values with CQL, we evaluate separate ablations to DistDeD by computing the area under the ROC curve derived from each approach. Figure 8 shows the performance comparison of DistDeD versus DeD and these two ablations that i) exclude a CQL penalty from the DistDeD framework and ii) incorporate a CQL penalty into the standard DeD framework. Overall, we see that the DistDeD framework outperforms the baselines in terms of AUC across varying levels of the VaR threshold. We summarize the findings with the maximum AUC of each approach in Table 1. In total, DistDeD (which combines the IQN and a CQL penalty) provides an average AUC of 0.7912 while DeD results in an AUC of 0.6629, resulting in as much as a 20% improvement in the precision of identifying dead-end states. \begin{table} \begin{tabular}{c c c} & \multicolumn{2}{c}{Architecture} \\ & DDQN & IQN \\ No Penalty & 0.6629 & 0.7744 \\ CQL Penalty & 0.7687 & **0.7912** \\ \end{tabular} \end{table} Table 1: Comparison of AUC when considering each improvement to DeD, 1) incorporating the CQL penalty and 2) modeling the full distributions of the expected return. Values represented here for the distributional components represent the mean value over all settings of VaR\({}_{\alpha}\). ## 7 Discussion In this paper, we have presented our justification, foundational evidence, as well as our preliminary findings supporting the development of the DistDeD framework, which incorporates a more complete notion of risk when identifying dead-ends in safety-critical scenarios. 
We do so by leveraging distributional dynamic programming to form estimates of the full return distribution from which we can calculate the expected worst-case outcome for each available action. This form of risk-estimation enables a more tangible decision surface for determining which actions to avoid and can be tuned according to the requirements or preferences set forward by human experts that may interact with the trained DistDeD models. Our DistDeD approach is based around risk-sensitive estimates of the expected worst-case outcome and thereby contributes a conservative decision support framework. This framework is well suited for complex safety-critical situations where learning is completed in a fully offline manner. **Limitations** While DistDeD is a promising framework for decision support in safety-critical domains with limited offline data, there are certain core limitations. The techniques described in this paper have been explored in the context of discrete action spaces only. However, in scenarios where continuous actions are featured, analyses with the DistDeD framework may have to be adapted to identify potential dead-ends. In addition, the method considers only cases where a binary reward signal is observed at the terminal state. However, several applications may require us to account for intermediate and continuous outcomes as well. Moreover, the framework only explores a medical scenario where dead-ends are derived from a single condition, whereas in reality many concomitant conditions may exist, which contribute to and are associated with different dead-end regions. Finally, we do not make any causal claims about the impact of each action on the outcomes of interest. Future work may explore how to address some of these issues. In addition, we are currently in the process of applying DistDeD to real-world healthcare challenges in partnership with clinicians to further demonstrate its utility in that setting. We do, however, anticipate that DistDeD is widely useful for all safety-critical domains that may be beset with limited offline data. #### Broader Impact This work serves as a proof of concept for identifying regions of risk in safety-critical settings, learning from offline data. While promising, it has not been thoroughly validated for immediate use in real environments. Despite the demonstrated utility of the DistDeD framework in healthcare problems, it should never be used in isolation to exclude patients from being treated, e.g., by not admitting patients or by blindly ignoring treatments. The risk identification aspect of DistDeD demonstrated in this paper is intended to signal impending high-risk situations early enough so that the human decision maker has time to correct the course of action. This may help experts make better decisions and avoid circumstances that may lead to irrecoverably negative outcomes. The intention of our approach is to assist domain experts by highlighting possibly unanticipated risks when making decisions and is not to be used as a stand-alone tool nor as a replacement for a human operator. Misuse of this algorithmic solution could carry significant risk to the well-being and survival of critical systems and individuals placed in the care of the expert. The primary goal of this work is to improve upon the established DeD proof of concept, where high-risk situations can be avoided in the context of a system's state (Fatemi et al., 2021). 
We present a distributional estimate of this risk profile which enables earlier detection of possible dead-ends as well as facilitating a tunable framework for adaptation to each individual task. In acute care scenarios, all decisions come with inherent risk profiles and potential harms. In this spirit, we endeavor to provide a flexible tool for clinical experts to gain an earlier indication when specific decisions or their patient's health state may carry a measure of outstanding risk. #### Author Contributions TK and SP conceived and designed the research questions as well as wrote the paper. TK extracted and processed the data, designed and executed the experiments, and performed the analyses. MG provided input on possible uses of the proposed framework in clinical settings, provided funding, and reviewed the paper prior to it being made public. #### Acknowledgments We thank our many colleagues and friends who contributed to thoughtful discussions and provided timely advice to improve this work. Specifically, we appreciate the encouragement and enthusiasm provided by Vinith Suriyakumar, Haoran Zhang, Mehdi Fatemi, Will Dabney and Marc Bellemare. We are grateful for the feedback provided by Swami Sankaranarayanan, Qixuan Jin, Tom Hartvigsen, Intae Moon and the anonymous reviewers who helped improve the writing of the paper. This research was supported in part by Microsoft Research, a CIFAR AI Chair at the Vector Institute, a Canada Research Council Chair, and an NSERC Discovery Grant. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www.vectorinstitute.ai/#partners.
2310.02892
X-rays from a Central "Exhaust Vent" of the Galactic Center Chimney
Using deep archival observations from the Chandra X-ray Observatory, we present an analysis of linear X-ray-emitting features located within the southern portion of the Galactic center chimney, and oriented orthogonal to the Galactic plane, centered at coordinates l = 0.08 deg, b = -1.42 deg. The surface brightness and hardness ratio patterns are suggestive of a cylindrical morphology which may have been produced by a plasma outflow channel extending from the Galactic center. Our fits of the feature's spectra favor a complex two-component model consisting of thermal and recombining plasma components, possibly a sign of shock compression or heating of the interstellar medium by outflowing material. Assuming a recombining plasma scenario, we further estimate the cooling timescale of this plasma to be on the order of a few hundred to thousands of years, leading us to speculate that a sequence of accretion events onto the Galactic Black Hole may be a plausible quasi-continuous energy source to sustain the observed morphology.
Scott C. Mackey, Mark R. Morris, Gabriele Ponti, Konstantina Anastasopoulou, Samaresh Mondal
2023-10-04T15:28:33Z
http://arxiv.org/abs/2310.02892v1
# X-rays from a Central "Exhaust Vent" of the Galactic Center Chimney ###### Abstract Using deep archival observations from the Chandra X-ray Observatory, we present an analysis of linear X-ray-emitting features located within the southern portion of the Galactic center chimney, and oriented orthogonal to the Galactic plane, centered at coordinates \(l=0.08^{\circ},\ b=-1.42^{\circ}\). The surface brightness and hardness ratio patterns are suggestive of a cylindrical morphology which may have been produced by a plasma outflow channel extending from the Galactic center. Our fits of the feature's spectra favor a complex two-component model consisting of thermal and recombining plasma components, possibly a sign of shock compression or heating of the interstellar medium by outflowing material. Assuming a recombining plasma scenario, we further estimate the cooling timescale of this plasma to be on the order of a few hundred to thousands of years, leading us to speculate that a sequence of accretion events onto the Galactic Black Hole may be a plausible quasi-continuous energy source to sustain the observed morphology. ## 1 Introduction Recent large-scale surveys of the Galactic center have revealed that the X-ray and radio emission on scales of several hundred parsecs assumes a bipolar morphology centered on the Galactic center, with the well-defined emitting lobes oriented perpendicular to the Galactic plane (Ponti et al., 2019, 2021; Heywood et al., 2019). These findings had been presaged by a considerable amount of earlier work over several decades on the "Galactic Center Lobe" seen at northern galactic latitudes at radio wavelengths (Sofue and Handa, 1984; Sofue, 1985; Law, 2010) and by X-ray observations of features extending to latitudes well out of the Galactic plane (Nakashima et al., 2013; Ponti et al., 2015). The interpretation of the morphology by Ponti et al. (2021) is that the lobes are bubbles of hot plasma surrounded by a radio-emitting shell of denser ionized gas. Ponti et al. (2019) hypothesized that these bubbles are essentially "chimneys" along which hot plasma is moving vertically out of the Galactic center where the plasma was produced, and into the region of the galaxy-scale Fermi gamma-ray bubbles (Su et al., 2010; Yang et al., 2018, 2022) and the even larger X-ray bubbles detected by the eRosita X-ray observatory (Predehl et al., 2020). The chimneys could thereby be the channels by which sources in the Galactic center have provided the energy and particles to feed the Fermi and eRosita bubbles. However, evidence for actual motion of plasma along the chimneys has not yet been reported, and it remains unclear whether the galaxy-scale bubbles are fed primarily by relativistic particles, i.e., cosmic rays, diffusing up from the region of massive star formation near the Galactic plane, or by a hot, outflowing plasma driven either by extreme episodes of accretion onto the Galactic Black Hole or by the collective deposition of energy by many supernovae over time (Zubovas et al., 2011; Zubovas and Nayakshin, 2012; Crocker and Aharonian, 2011; Crocker et al., 2015; Zhang et al., 2021). 
Whatever the mechanisms for producing the large-scale bubbles might be, the energy generated in the central few hundred parsecs of the Galaxy by accretion onto the central supermassive black hole and by supernovae and stellar winds associated with recent star formation has clearly created a Galactic wind (Crocker et al., 2010; Carretti et al., 2013; McClure-Griffiths et al., 2013; Fox et al., 2015; Ashley et al., 2020; Lockman et al., 2020). The driving mechanism for the wind from the Galactic center is likely tied to the mechanisms producing the Fermi and eRosita Bubbles. The wind launches material into the active circumgalactic medium, the arena of the Galactic fountain (Tumlinson et al., 2017). This paper presents an investigation of diffuse, extended X-ray emission from a single archival Chandra field that contains structures that appear morphologically to constitute the central vent in the southern Galactic center chimney, as shown in Figure 1. Observations and characteristics of this particularly deep field are described in section 2, while the resulting morphology of the emitting structures and a spectroscopic investigation of those structures are presented in section 3. Finally, our interpretation of the observational results is given and discussed in section 4. ## 2 Observations and Data Preparation For this study, we focus on a \(\sim 17\times 17\) arcmin field located at \(l=0.08^{\circ},~{}b=-1.42^{\circ}\), which we refer to as the Inner Bulge Deep Field (IBDF) (Figure 2). A total of 13 archival Chandra observations cover the field with a combined integration time of nearly 1 Ms (Table 1). The majority of the observations were conducted by Revnivtsev et al. (2009) in 2008 to characterize the discrete point sources that are part of the Galactic ridge X-ray emission (GRXE), a continuous unresolved hard X-ray emission feature extending along the Galactic plane to longitudes of \(\pm\sim 40^{\circ}\)(Worrall et al., 1982; Warwick et al., 1985). In addition, three of the observations were made in 2005 as part of the ChaMPlane Galactic bulge survey (van den Berg et al., 2009). All data were taken with the imaging array of the Advanced CCD Imaging Spectrometer (ACIS-I) onboard the Chandra X-ray Observatory. Data were downloaded from the Chandra Data Archive and reprocessed using the standard procedure for the Chandra Interactive Analysis of Observations (CIAO)1 software package (version 4.13, CALDB 4.9.5), starting with the chandra_repro script. Footnote 1: [https://cxc.harvard.edu/ciao/](https://cxc.harvard.edu/ciao/) Interpreting diffuse X-ray emission requires a thorough treatment of the non-X-ray instrumental background. We used the ACIS-I stowed background data set and re-scaled it by assuming that nearly all of the 9-12 keV flux is due to particle background, using a method based on the one prescribed by Hickox & Markevitch (2006). Our method differs slightly from the background subtraction techniques previously used in similar analyses and is detailed in Appendix A. After subtracting this background from each observation, we combined and exposure-corrected all observations to produce a mosaic image. Since the roll angles vary among the various observations, the images do not always align well, which results in a few regions where there is little or no overlap with the bulk of the observations and therefore drastically lower exposure times. We addressed this by simply masking such regions out of the image. 
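As a schematic illustration of the background rescaling step described above (the actual procedure differs in detail and is given in Appendix A of the paper), the sketch below scales the stowed-background image by the ratio of observed to stowed counts in the 9-12 keV band, where the flux is assumed to be dominated by the particle background; the array names are illustrative.

```python
import numpy as np

def rescale_stowed_background(obs_image, stowed_image, obs_counts_9_12, stowed_counts_9_12):
    """Subtract a rescaled stowed-background image from a science-band count image.

    obs_image, stowed_image         : 2-D count images in the science band
    obs_counts_9_12, stowed_counts_9_12 : total 9-12 keV counts, assumed to be
                                          dominated by the particle background
    """
    scale = obs_counts_9_12 / stowed_counts_9_12       # per-observation scaling factor
    return obs_image - scale * stowed_image            # background-subtracted image
```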
To remove point sources, we identified them using CIAO's wavdetect routine with a sensitivity threshold (i.e. tolerance of falsely identifying a pixel as belonging to a source) of \(1\times 10^{-6}\) and then excised and interpolated across the source regions using dmfilth. Finally, the image was smoothed using a constant Gaussian kernel with a radius of 6 pixels (3 arcsec). Spectra were produced using specextract and then fit using the XSPEC 2 software package (Arnaud, 1996). Because the X-ray features studied here are dominated by emission below 2 keV and the diffuse cosmic X-ray background is best accounted for up to about 4 keV (Hickox & Markevitch, 2006, 2007), we limit all of our spectral analyses to energies below 3 keV. Instrumental background was subtracted from the spectra before fitting, and spectra were rescaled based on area. We find that the fits show little dependence on elemental abundances, so we fix this to solar abundance. The abundance table provided by Anders & Grevesse (1989) was assumed. We represent a thermal plasma with the apec (Astrophysical Plasma Emission Code; hereafter 'APEC') model and a non-equilibrium recombining plasma with the rnei (hereafter 'RP') model. Both models are default models within the XSPEC software package. Footnote 2: [https://heasarc.gsfc.nasa.gov/xanadu/xspec/](https://heasarc.gsfc.nasa.gov/xanadu/xspec/) ## 3 Results ### Morphology The features of interest within the IBDF are bright, linear ridges of X-ray emission oriented roughly perpendicular to the Galactic plane. They are characterized by relatively soft X-rays and do not appear outside of the 0.5-2.0 keV band. We visually define seven regions across the IBDF field denoted by Greek letters \(\alpha\) through \(\eta\) going from positive to negative Galactic longitudes (Figure 3). Feature \(\beta\) is a narrow, bright ridge that runs parallel to lines of constant Galactic longitude and points toward the Sagittarius A complex at the Galactic center. Features \(\gamma\), \(\delta\), and \(\epsilon\) are parallel to \(\beta\), and while they are noticeably dimmer than \(\beta\), they are regularly spaced in longitude and alternate in surface brightness, proceeding from \(\beta\) to \(\zeta\), so that feature \(\delta\), situated at the center of the seven regions, appears brighter than the regions on either side of it. Regions \(\zeta\) and \(\eta\) combine into a distinct feature of notable complexity, as can best be seen in Figure 3. Its orientation is tilted with respect to that of the other parallel features by a small angle (\(\sim 15^{\circ}\)) and it features a brighter component (\(\zeta\)) on its eastern side bordering \(\epsilon\) and a dimmer component (\(\eta\)) on its western side. The IBDF X-ray features are located within the southern Galactic chimney, which itself is nestled within the southern radio lobe (Ponti et al., 2019, 2021; Heywood et al., 2019), as shown in Figure 4. As was noted by Ponti et al. (2021), the radio emission arises from the periphery of the southern X-ray chimney, which evokes a structure in which the X-rays arise from a confined hot plasma, while the radio emission arises in a thick, higher-density surrounding shell where the hot plasma interacts with the ambient interstellar medium and the predominantly vertical magnetic field of the Galactic center. ### Hardness Ratios The hardness ratio comparing the 1.2-2.0 keV band to the 0.5-1.2 keV band across the field is shown in Figure 5. 
We define the hardness ratio as
\[\mathrm{Hardness~{}Ratio}=\frac{C_{\mathrm{high}}-C_{\mathrm{low}}}{C_{\mathrm{high}}+C_{\mathrm{low}}} \tag{1}\]
where \(C_{\mathrm{high}}\) and \(C_{\mathrm{low}}\) are the counts in the higher and lower energy bands, respectively, and a higher hardness ratio indicates harder emission. The error bars for latitude and longitude represent the uniform 0.01 degree-wide segments over which the counts were summed, while the hardness ratio error comes from the statistical error of the observed counts in these segments. Progressing across the field in Galactic longitude, the brighter linear features \(\beta\), \(\delta\), and \(\zeta\) each roughly correspond to lower hardness ratios, while the overall minimum hardness ratio occurs in region \(\alpha\). In the progression with Galactic latitude, the hardness ratio shows a broad minimum at the bright centers of the IBDF, with a small peak at \(b=-1.448\pm 0.005\) degrees and a broad minimum at \(b=-1.469\pm 0.005\) degrees. Panel C in Figure 5 demonstrates differences in X-ray hardness within the \(\zeta/\eta\) feature, with the softer eastern edge corresponding to the bright ridge seen in Figure 2, and relatively hard emission trailing off to the west.

Figure 1: 1.5-2.6 keV Chandra map of the Galactic center in Galactic coordinates, showing the chimneys. A 0.5-2.0 keV image of the Inner Bulge Deep Field is overlaid in the right panel and highlighted with blue dashed lines. The black dot in the left panel denotes the location of Sgr A*.

### Spectra

The spectra from the seven regions were fit simultaneously to determine the most likely physical model and set of parameters for each region (Table 2). When fitting the data, we find that there is little variation in the recombination timescale \(\tau\) and the initial recombining plasma temperature across the seven regions, and we therefore link the regions together for these parameters. We also fix the metal abundance to 1.0 in Solar units. We evaluate the goodness-of-fit using the reduced chi-squared value (i.e., \(\chi^{2}\)/degrees of freedom), where a lower value of \(\chi^{2}\)/d.o.f. indicates a better fit. For each region, the model with the lowest \(\chi^{2}\)/d.o.f. consists of a combination of RP and APEC components, each convolved with a Tuebingen-Boulder ISM grain absorption model ('TBabs'; Wilms et al., 2000), i.e. tbabs*(rnei+apec). This model yields a \(\chi^{2}\)/d.o.f. value of 1160/1062 = 1.092, which is significantly better than the other attempted models, such as a double APEC model (1921/1064 = 1.805). A comparison of the spectra for features \(\beta\), \(\delta\), and \(\zeta\) fit using the RP+APEC model is shown in Figure 6.

## 4 Interpretations and Discussion

### Hypothesis: a Cylindrical Tunnel

The IBDF feature contains two prominent bright ridges, regions \(\beta\) and \(\zeta\), which border a column of relatively low X-ray surface brightness (regions \(\gamma\), \(\delta\), and \(\epsilon\)). The linear morphology of these quasi-parallel ridges, coupled with the fact that ridge \(\beta\) has a sharp western edge but falls off less abruptly toward the east into region \(\alpha\), leads us to hypothesize that the IBDF feature is a cylindrical, edge-brightened tunnel lying along the central axis of the southern chimney. In keeping with the notion that the chimneys represent collimated plasma outflows from the Galactic center (Ponti et al., 2019), we regard this tunnel as the channel, or vent, along which the outflowing plasma is moving.
While no measurement yet exists of the outflowing plasma velocity within the putative tunnel, we argue that it must be at least as large as the velocity of the wind emanating from the overall Galactic center region, \(\gtrsim 1000\) km s\({}^{-1}\) (Carretti et al., 2013; Fox et al., 2015; Fujita, 2023). We assume that the outflow velocity of the plasma is highest along the central axis of the tunnel because the flow along the edges would be somewhat slowed by turbulent interactions with the adjacent ambient medium. The relatively low X-ray brightness of the interior of the tunnel can then be attributed to a lower plasma density there via the continuity equation, coupled with the density-squared dependence of the X-ray emissivity of the plasma. The relatively high emissivity of the tunnel walls, manifested as the ridges \(\beta\) and \(\zeta\), can be attributed to shocks that occur where the outflowing plasma impacts and compresses the surrounding ambient gas. The particularly high brightness of ridge \(\zeta\) is readily attributable to its orientation; it is slightly askew relative to the axis of the tunnel, so the outflowing plasma strikes this portion of the tunnel wall at a more direct angle, and therefore with more force, than it does elsewhere, where the velocity vector of the outflowing plasma is presumably almost parallel to the walls of the tunnel. We note that the detailed morphology of the tunnel appears to be somewhat more complex than a perfectly uniform cylinder, given the presence of a fainter ridge, \(\delta\), projected near the center of the tunnel.

Table 1: Observations

| ObsID | Date (yyyy-mm-dd) | Exposure (ks) |
| --- | --- | --- |
| 5934 | 2005-08-22 | 40.49 |
| 6362 | 2005-08-19 | 37.70 |
| 6365 | 2005-10-25 | 20.69 |
| 9500 | 2008-07-20 | 162.56 |
| 9501 | 2008-07-23 | 131.01 |
| 9502 | 2008-07-17 | 164.12 |
| 9503 | 2008-07-28 | 102.31 |
| 9504 | 2008-08-02 | 125.42 |
| 9505 | 2008-05-07 | 10.73 |
| 9854 | 2008-07-27 | 22.77 |
| 9855 | 2008-05-08 | 55.94 |
| 9892 | 2008-07-31 | 65.79 |
| 9893 | 2008-08-01 | 42.16 |

Figure 2: 0.5-2 keV Chandra map of the IBDF. Galactic coordinates are shown in degrees.

### Hardness Ratio and Plasma Temperature Distribution

The brightest regions in the IBDF coincide with the smallest hardness ratios when comparing 1.2-2.0 keV and 0.5-1.2 keV emission. If the IBDF feature is indeed attributable to shocks propagating into the ambient medium, then this shock compression of the "tunnel walls" (i.e. \(\beta\) and \(\zeta\)) would naturally lead to higher densities. This high density would in turn give rise to greater emissivity and therefore more rapid cooling, ultimately producing the bright but soft X-ray emission that we observe. A plausible candidate for energy injection into this region is the series of "hundred-year events" that have been inferred from moving X-ray echoes observed primarily in the fluorescent 6.4 keV iron line (Ponti et al., 2010; Clavel et al., 2013; Ponti et al., 2013; Churazov et al., 2017; Chuard et al., 2018; Marin et al., 2023). If the cooling time of the IBDF plasma is greater than a few hundred years, then this sequence of hundred-year events, presuming that they occur with some regularity on few-hundred-year timescales, may effectively act as a continuous flow that continually replenishes the region.
Using the fit parameters for density and temperature from Table 2, we can estimate the cooling timescale of the shocked plasma for comparison to this few-hundred-year timescale. The characteristic cooling timescale of the plasma can be calculated using the thermal energy and emitted power of the plasma via the relation
\[\tau_{\rm cool}=\frac{\frac{3}{2}\left(1+\frac{n_{\rm i}}{n_{\rm e}}\right)kT}{\Lambda(t)n_{\rm H}}. \tag{2}\]
Here \(\Lambda(t)\) is the cooling function, which in this case essentially amounts to bolometric power normalized by emission measure. Using cooling curves provided by Sutherland & Dopita (1993), we estimate \(\Lambda(t)\sim 5\times 10^{-30}\) W cm\({}^{3}\) based on temperature and approximate metal abundance. Then assuming \(n_{\rm e}=n_{\rm i}=n_{\rm H}\), we can estimate the cooling timescale for various possible geometries. To get a lower limit, we first assume that emission is confined to a volume whose depth is equal to its angular width so that the dimensions of region \(\beta\) are \(0.1^{\circ}\times 0.01^{\circ}\times 0.01^{\circ}\), giving us an estimated cooling timescale in this region of \(\gtrsim 200\) yr. This estimate assumes a filamentary feature in the simplest possible case. If, however, we are looking at a cylindrical tunnel with a diameter of about \(0.1^{\circ}\), then the depth of region \(\beta\) is represented by a chord on that cylinder lying approximately \(0.025^{\circ}\) from the center. Thus, the depth is \(\sim 0.087^{\circ}\), and we calculate an estimated cooling timescale of \(\gtrsim 2000\) yr. To maintain the observed structure, the non-equilibrium plasma of the IBDF must be continually replenished or heated on a timescale shorter than this cooling time, and the Sgr A\({}^{*}\) hundred-year events therefore provide a plausible mechanism for sustaining the heating and outflow of the IBDF plasma and counteracting this relatively short cooling timescale.

Table 2: Simultaneous Fit

| Parameter | \(\alpha\) | \(\beta\) | \(\gamma\) | \(\delta\) | \(\epsilon\) | \(\zeta\) | \(\eta\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| n\(_{H}\) (\(10^{22}\) cm\({}^{-2}\)) | \(0.79\pm 0.04\) | \(0.86\pm 0.04\) | \(0.92\pm 0.07\) | \(0.75\pm 0.07\) | \(0.95\pm 0.04\) | \(0.92\pm 0.03\) | \(0.93\pm 0.03\) |
| RP kT\({}_{\rm init}\) (keV) | \(0.35\pm 0.01\) (linked across regions) | | | | | | |
| RP kT (keV) | \(0.183\pm 0.009\) | \(0.18\pm 0.01\) | \(0.20\pm 0.02\) | \(0.19\pm 0.01\) | \(0.24\pm 0.02\) | \(0.148\pm 0.009\) | \(0.15\pm 0.01\) |
| RP \(\tau\) (\(10^{10}\) s cm\({}^{-3}\)) | \(9\pm 3\) (linked across regions) | | | | | | |
| APEC kT (keV) | \(2.4\pm 0.4\) | \(2.8\pm 0.8\) | \(1.8\pm 0.4\) | \(2.8\pm 0.8\) | \(1.05\pm 0.06\) | \(1.22\pm 0.09\) | \(1.49\pm 0.08\) |
| APEC EM (\(10^{52}\) cm\({}^{-3}\)) | \(0.31\pm 0.02\) | \(0.36\pm 0.03\) | \(0.27\pm 0.04\) | \(0.31\pm 0.03\) | \(0.42\pm 0.05\) | \(0.60\pm 0.06\) | \(0.94\pm 0.05\) |
| RP EM (\(10^{52}\) cm\({}^{-3}\)) | \(8.9\pm 0.8\) | \(12\pm 1\) | \(7\pm 1\) | \(6\pm 1\) | \(3.9\pm 0.9\) | \(35\pm 3\) | \(25\pm 2\) |

Figure 3: IBDF regions used for analysis. Galactic coordinates shown.
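For readers who want to experiment with the scaling of Equation 2, a minimal helper is sketched below. It simply evaluates \(\tau_{\rm cool}\) for a given temperature, hydrogen density, and cooling function; the density itself must be derived from the fitted emission measures in Table 2 together with an assumed path length and geometry, as discussed above, which is where most of the uncertainty enters. The function and variable names are ours.

```python
KEV_TO_ERG = 1.602e-9   # 1 keV in erg
SEC_PER_YR = 3.156e7

def cooling_time_yr(kT_keV, n_H_cm3, Lambda_erg_s_cm3=5e-23, ni_over_ne=1.0):
    """Equation (2): tau_cool = 1.5 * (1 + n_i/n_e) * kT / (Lambda * n_H).

    kT_keV            : plasma temperature in keV
    n_H_cm3           : hydrogen number density in cm^-3 (from EM and geometry)
    Lambda_erg_s_cm3  : cooling function; 5e-30 W cm^3 = 5e-23 erg s^-1 cm^3,
                        the approximate value adopted in the text
    """
    kT_erg = kT_keV * KEV_TO_ERG
    tau_s = 1.5 * (1.0 + ni_over_ne) * kT_erg / (Lambda_erg_s_cm3 * n_H_cm3)
    return tau_s / SEC_PER_YR

# The density follows from the fitted emission measure EM = n_e * n_H * V
# (with n_e ~ n_H) and an assumed emitting volume, so the result depends
# directly on the adopted depth of the region, as illustrated in the text.
# Example call (density left as a placeholder):
# tau = cooling_time_yr(kT_keV=0.18, n_H_cm3=n_H_from_EM_and_geometry)
```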
### Recombining Plasma

Nakashima et al. (2013) report _Suzaku_ results indicating the presence of a recombining plasma south of the Galactic plane in a larger region that contains the IBDF. Our spectral analysis of the IBDF is indeed consistent with this result, as a model containing a recombining plasma and a thermal APEC component produced acceptable fits with the lowest \(\chi^{2}\) values. We find that no single-plasma model alone is able to sufficiently fit the data, which leads us to introduce the additional APEC component in our RP+APEC and APEC+APEC models. The improvement to the fit afforded by the additional APEC component to the RP model suggests that there may be an associated thermalized plasma mixed in with or adjacent to the bulk recombining plasma, or that the plasma may have a more-or-less continuous distribution of temperatures. Comparing the temperatures in Table 2, we see that the APEC component is at a higher temperature than the RP component for all 7 of the sub-regions in the fit. This may be an indication that the plasma responsible for the observed bright features is distinct from another plasma component sitting within or moving through the tunnel, or that the plasma of the IBDF has a highly variable temperature distribution. Further spectral analysis of the IBDF will be required to provide a more accurate picture. Follow-up high-resolution spectroscopy with XRISM may be especially helpful in this effort. The presence of a recombining plasma in the IBDF is also consistent with the Galactic outflow hypothesis. The morphology of the IBDF features, along with the trends in hardness ratio, suggests the existence of shocks in the region. Strong shocks stemming from the outflow would propagate into the ambient interstellar medium at high velocity, compacting the gas and sustaining the recombining plasma environment.

Figure 4: Multi-wavelength image of the southern radio lobe and chimney. The IBDF X-ray features are shown in green (creating yellow in the left panel when mixed with red; 0.5-2.0 keV), the chimney diffuse X-ray emission is shown in red (1.5-2.6 keV), and the 1.284 GHz MeerKAT radio map is shown in blue. Emission from the chimney is omitted from the right panel to show the distribution of radio emission within the IBDF X-ray features.

Figure 5: Hardness ratio of 1.2-2.0 keV to 0.5-1.2 keV emission.

### Relationship to Smaller- and Larger-Scale Features

A number of features observed on different scales merit consideration as possibly being related to the X-ray features in the IBDF. As mentioned above, the X-ray chimneys, in which the IBDF is embedded, have been invoked as the channel through which plasma and cosmic rays generated in the vicinity of the Galactic center transit out to the Galactic halo, potentially provoking the \(\gamma\)- and X-ray emission arising in the large-scale Fermi and eRosita Bubbles (Ponti et al., 2019, 2021). We also note that on the scale of a few parsecs, a linear X-ray feature close to the Galactic black hole has been identified and interpreted as a jet from the black hole (Baganoff et al., 2003; Muno et al., 2008; Li et al., 2013; Zhu et al., 2019). That putative jet is oriented in the same direction as the southern chimney and the linear features in the IBDF, but it has not been detected at distances exceeding a few parsecs from the black hole, apparently because its nonthermally-emitting particles have lost sufficient energy beyond that point to continue emitting a detectable flux of X-rays (Zhu et al., 2019). However, at very low radio frequencies, Yusef-Zadeh et al.
(1986) (see also Kassim et al., 1986) reported the presence of a ridge of radio continuum emission extending from the Galactic center out to a distance of about 25 pc normal to the Galactic plane, that is, in the same direction as the hypothetical X-ray jet, but far short of the 220 pc distance of the IBDF from the Galactic black hole. Continued energy loss by the electrons in the hypothetical jet could lead to a fossil jet of low-energy particles responsible for the observed low-frequency synchrotron emission. Eventually, if those particles are moving through the plasma in the chimney, they will thermalize with the plasma, and thus have no observable excess manifestation of their presence at the distance of the IBDF. While the placement and morphology of all the features mentioned here are very suggestive, further investigation is clearly needed to establish and elucidate the physical links between them.

Figure 6: Spectral data from relatively bright ridges \(\zeta\) (top), \(\beta\) (middle), and \(\delta\) (bottom) rescaled to account for differences in area.

## 5 Summary

The 1 Ms Chandra field centered at \(l=0.08^{\circ},\ b=-1.42^{\circ}\) contains a bright X-ray structure having a striated linear morphology and hardness ratio trends suggestive of a cylindrical tunnel. The linear strands of X-ray emission are approximately perpendicular to the Galactic plane and sit neatly within the diffuse emission of the southern Galactic Center chimney, which is itself situated inside a wider shell of radio emission. Because the roughly cylindrical chimney is centered on the Galactic black hole, the natural hypothesis for the overall structure is that an X-ray emitting plasma generated at or in a broad region within \(\sim\)100 pc of the black hole is flowing out of the central region through the chimney, and we hypothesize that the apparent tunnel that we report here is the central conduit for that plasma. While our spectral analysis of the linear X-ray features does not conclusively constrain the metallicity or show an obvious trend in temperature, it does point toward the presence of a two-temperature plasma. Such a plasma, consisting of a thermal and a recombining component, could be sustained by shocks due to the continuous outflow proposed in the chimney hypothesis. The inferred tunnel-like morphology is best supported by the X-ray hardness ratios, which demonstrate a clear difference between the brighter "walls" of the tunnel and the dimmer region in-between. The bright outer regions coincide with the lowest hardness ratios, a possible indicator of higher density and more rapid cooling as a result of shock compression from outflowing plasma. The presumably outflowing plasma in the tunnel is a strong candidate for the source of the Galactic wind, and perhaps for providing the particles and energy that are responsible for the gamma-rays emanating from the Fermi Bubbles and the X-rays arising in the eRosita Bubbles. The biggest remaining open question is whether those galaxy-scale structures were created predominantly in a past major black-hole accretion event or whether they are the result of a sequence of frequent episodic energy releases within the Galaxy's central region. In the latter case, the observed plasma structures (chimneys and tunnel) are enduring features that carry the intermittent spurts of energy out to the large-scale bubbles.
One very useful future investigation would be to determine the plasma velocity (and gradient) within the chimneys, either by carrying out long-term X-ray proper motion studies or by measuring the Doppler shifts of X-ray lines with a high-resolution X-ray spectrometer. The challenge with the former measurement is the limited resolution of X-ray imagers (with Chandra resolution, it would take about a decade to make a proper motion measurement of an unresolved feature moving at 1000 km s\({}^{-1}\) at the Galactic center, and the features reported here are partially resolved), while the challenge with the latter measurement is that the predominant direction of the plasma flow is likely to be perpendicular to our line of sight. **Acknowledgements:** The work carried out for this project by UCLA participants was supported by NASA/SAO grant GO1-22138X. GP acknowledges financial support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program "HotMilk" (grant agreement No. 865637) and support from Bando per il Finanziamento della Ricerca Fondamentale 2022 dell'Istituto Nazionale di Astrofisica (INAF): GO Large program.
2301.12318
Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering
Most existing methods to detect backdoored machine learning (ML) models take one of the two approaches: trigger inversion (aka. reverse engineer) and weight analysis (aka. model diagnosis). In particular, the gradient-based trigger inversion is considered to be among the most effective backdoor detection techniques, as evidenced by the TrojAI competition, Trojan Detection Challenge and backdoorBench. However, little has been done to understand why this technique works so well and, more importantly, whether it raises the bar to the backdoor attack. In this paper, we report the first attempt to answer this question by analyzing the change rate of the backdoored model around its trigger-carrying inputs. Our study shows that existing attacks tend to inject the backdoor characterized by a low change rate around trigger-carrying inputs, which are easy to capture by gradient-based trigger inversion. In the meantime, we found that the low change rate is not necessary for a backdoor attack to succeed: we design a new attack enhancement called \textit{Gradient Shaping} (GRASP), which follows the opposite direction of adversarial training to reduce the change rate of a backdoored model with regard to the trigger, without undermining its backdoor effect. Also, we provide a theoretic analysis to explain the effectiveness of this new technique and the fundamental weakness of gradient-based trigger inversion. Finally, we perform both theoretical and experimental analysis, showing that the GRASP enhancement does not reduce the effectiveness of the stealthy attacks against the backdoor detection methods based on weight analysis, as well as other backdoor mitigation methods without using detection.
Rui Zhu, Di Tang, Siyuan Tang, Guanhong Tao, Shiqing Ma, Xiaofeng Wang, Haixu Tang
2023-01-29T01:17:46Z
http://arxiv.org/abs/2301.12318v2
# Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering ###### Abstract Most existing methods to detect backdoored machine learning (ML) models take one of the two approaches: trigger inversion (aka. reverse engineer) and weight analysis (aka. model diagnosis). In particular, the gradient-based trigger inversion is considered to be among the most effective backdoor detection techniques, as evidenced by the TrojAI competition [1], Trojan Detection Challenge [2] and backdoorBench [3]. However, little has been done to understand why this technique works so well and, more importantly, whether it raises the bar to the backdoor attack. In this paper, we report the first attempt to answer this question by analyzing the change rate of the backdoored model around its trigger-carrying inputs. Our study shows that existing attacks tend to inject the backdoor characterized by a low change rate around trigger-carrying inputs, which are easy to capture by gradient-based trigger inversion. In the meantime, we found that the low change rate is not necessary for a backdoor attack to succeed: we design a new attack enhancement called _Gradient Shaping_ (GRASP), which follows the opposite direction of adversarial training to reduce the change rate of a backdoored model with regard to the trigger, without undermining its backdoor effect. Also, we provide a theoretic analysis to explain the effectiveness of this new technique and the fundamental weakness of gradient-based trigger inversion. Finally, we perform both theoretical and experimental analysis, showing that the GRASP enhancement does not reduce the effectiveness of the stealthy attacks against the backdoor detection methods based on weight analysis, as well as other backdoor mitigation methods without using detection. ## 1 Introduction Critical to trustworthy AI is the trustworthiness of machine learning (ML) models, which can be compromised by malevolent model trainers, evil-minded training data providers, or any parties with access to any link on the ML supply chain (e.g., pre-trained models) to inject a backdoor (aka., trojan). A backdoored model is characterized by strategic misclassification of the input carrying a unique pattern called _trigger_: e.g., special glasses worn by a masquerader to impersonate an authorized party against a compromised facial-recognition system. So the assurance of ML models can only be upheld by effectively detecting those backdoored models, which have been intensively studied in recent years. Existing backdoor defense methods have been reviewed by an SoK paper [4]: among seven general defense strategies, two are based on backdoor detection, which uses either the trigger inversion (aka. trigger synthesis) or weight analysis techniques (aka. model diagnosis) [5][6][7][8][9]. The most concrete progress in the backdoor detection has been at least partially attributed to _trigger inversion_ related techniques, as evidenced in the TrojAI competition [1] (9 out of 11 rounds won by inversion approaches, the rest two won by weight analysis) and the BackdoorBench project [10] (leading performers are mostly gradient-based trigger inversion). However, little has been done to understand whether these approaches raise the bar to the backdoor attacks or are just another porous defense line permeable by the knowledgeable adversary. **Achilles' heel of gradient-based optimization**. Trigger inversion is a technique that automatically recovers a pattern causing an ML model to misclassify the pattern-carrying input. 
Such a pattern is considered a putative trigger and utilized to determine whether the model is indeed backdoored. This reverse-engineering step mostly relies on gradient descent, which seeks the greatest tendency towards misclassification following the opposite direction of the model's gradient with regard to its input. A prior study shows that almost all proposed trigger inversion approaches are gradient-based [4]. Although gradient-based optimization can converge to a local optimum, this convergence is contingent upon selecting a proper size for each search step and a proper initialization. In the presence of a function with low robustness around trigger-inserted inputs (e.g., the one having a steep slope (large changing rate), as shown in Figure 5), a large step size could overshoot the local minimum for the trigger that leads to misclassification. On the other hand, a small step size could render the convergence process exceedingly slow and increase the probability that the optimizer converges to another local minimum, practically thwarting any trigger inversion attempt. So a fundamental question not asked before is why gradient-based reverse engineering works so well on the backdoors injected using today's techniques and whether a more powerful backdoor capable of defeating the inversion can be injected under practical threat models. **Analysis and findings**. To answer this question, we conducted the first study to understand the limitations of trigger inversion. Our research shows that today's backdoor injection techniques, both loss-function manipulation, and data poisoning, turn out to be quite amenable to gradient-based optimization. Actually, given the relatively simple features that characterize today's triggers (e.g., geometric shapes), a backdoor learned could be more robust to the noise added to its trigger than the benign task the infected model claims to perform, as observed in our experiments: we found that oftentimes, backdoors tend to be more resilient to the noise than the primary task to the perturbation on its features (Section 3). This observation indicates that the backdoor can be invoked by not only the trigger but a wide range of its variations. Therefore, the average change rate of the back-doored model around trigger-inserted inputs for recognizing a trigger cannot be too high, which can be easily captured with a relatively larger scope of search step size and initialization selection. This explains why trigger inversion works so well in backdoor detection. However, a slow change rate (or high trigger robustness) is _not_ required for a backdoor attack to succeed. Our research shows that the change rate can be increased through data contamination without undermining the effectiveness of the backdoor attack. In our research, we designed a simple algorithm that could enhance the backdoor attack, called gradient shaping (GRASP) that utilizes both mislabeled data and correctly labeled data with noised triggers to contaminate the training set, in an opposite way to the adversarial training [11], so as to narrow down the variation of the trigger pattern capable of invoking the backdoor. We theoretically analyze this approach and show that it effectively raises the change rate, thereby weakening the detection ability from trigger inversion. It is worth noting that GRASP represents a different type of backdoor attack compared with the stealthy backdoors proposed recently (e.g., [12][13]). 
Existing stealthy backdoor methods attempt to devise specific triggers, often dependent on the target neural network model so that they are hard to detect and mitigate by defense methods. GRASP, on the other hand, is a generic trigger injection method that injects any trigger designed by the attacker into a target model so that the trigger is harder to detect and mitigate by the trigger inversion-based backdoor defenses. As a result, GRASP can be combined with existing stealthy backdoor methods to enhance their capability to evade the trigger inversion-based defenses. Our studies show that existing backdoor attacks less capable of evading trigger inversion can be boosted by GRASP to easily defeat most representative inversion protection, including Neural Cleanse (NC) [5], tabor [14], k-arm [7], pixel [15], rendering them incapable of capturing any trigger of a backdoored model. We also perform a theoretical and experimental analysis (Section 5) to show that GRASP does not make the backdoor more vulnerable to weight analysis, which is the other mainstream technique for backdoor detection. In particular, our experiment shows that the GRASP enhancement does not decrease the effectiveness of the backdoor attacks such as DFST [16], AB [17], and DEFEAT [18] against the weight analysis-based detection. Finally, our study demonstrates that the effectiveness of GRASP against trigger inversion does not make the enhanced attacks more vulnerable to other backdoor mitigation or unlearning techniques, such as Fine-purning [19], NAD [20], Gangsweep [21], DBD [22], and RAB [23]. **Contributions**. The contributions of the paper are outlined below: \(\bullet\)_First in-depth analysis on trigger inversion_. We report the first in-depth analysis that explains why trigger inversion works so well on backdoor detection. This leads to the discovery of the fragility of the advance we made in this area, given the observation that the weakness of today's trigger injection can be addressed without undermining the effectiveness of the backdoor. \(\bullet\)_New backdoor injection technique_. Our new understanding of trigger inversion has been made possible by a new backdoor injection technique, which exploits the fundamental limitation of gradient-based optimization and works under realistic threat models. As such, this method can enhance existing backdoor attacks, making it more effective in evading trigger inversion, but not less effective in evading the weight analysis-based detection and other defenses. ## 2 Background ### _Backdoor Attack Modeling_ In a backdoor attack, the adversary intends to inject a backdoor (Trojan) into the target ML model for the purpose of causing the model to produce desired outputs for trigger-inserted inputs. In our research, without loss of generality, we focus on the backdoor attacks against image classification models. **Classification model**. We model a classification model as a composition of two functions, i.e., \(z(f(\cdot)):\mathcal{X}\mapsto\bar{\mathcal{Y}}\mapsto\mathcal{Y}\). Specifically, \(\mathcal{X}\subseteq\mathbb{R}^{m}\), \(\bar{\mathcal{Y}}\subseteq\mathbb{R}^{m}\), \(\mathcal{Y}=\{0,1,...,K\}\) and \(K\) is the number of classes. We refer to \(f_{D}\) as a model trained on dataset \(D\). Generally, we consider a dataset \(D\) that contains \(n\) independent training samples, i.e., \(D=\{x_{i},y_{i}\}_{i=1}^{n}\), where \(x_{i}\in\mathcal{X}\) and \(y_{i}\in\mathcal{Y}\). 
**Backdoor injection.** We model the backdoor injection as a process that injects the backdoor into the target model so that this backdoored model will produce adversary desired outputs for those trigger-inserted inputs. Formally, following the definition of Neural Cleanse [5], we model the trigger as a pair \((\mathbf{M},\mathbf{\Delta})\) of trigger mask \(\mathbf{M}\) and trigger pattern \(\mathbf{\Delta}\). A trigger-inserted input \(A(x,\mathbf{M},\mathbf{\Delta})\) is the output of applying the amending function \(A\) on a benign input \(x\) with a given trigger pair \((\mathbf{M},\mathbf{\Delta})\). Specially, we consider a well-acccepted amending function \(A(x,\mathbf{M},\mathbf{\Delta})=(1-\mathbf{M})\cdot x+M\cdot\mathbf{\Delta}\). And we refer to \(m^{*}\) as the \(l_{1}\) norm of the trigger mask \(\mathbf{M}\), i.e., \(m^{*}=\|\mathbf{M}\|_{1}\). In this paper, we only consider the targeted backdoor scenarios where adversaries want to mislead the target model to predict the target labels for the trigger-inserted inputs. Specially, we refer to \(y_{t}\) and \(y_{s}\) as the target label and the source label (the true label) of an input \(x\), respectively. ### Trigger Inversion Modeling Trigger inversion aims to recover a putative trigger for a backdoor (Section 2.1) and then evaluate the trigger on benign inputs in an attempt to verify its backdoor effect (misclassifying such inputs to a target label). Here we model this trigger recovery process as an optimization problem: for a given model, finding the trigger that optimizes an objective function. **Objective optimization function**. Formally, following our trigger modeling (Section 2.1), we model the problem of trigger inversion as finding a trigger pair \((\mathbf{M},\mathbf{\Delta})\) that minimizes the following objective function over a set of inputs \(\mathbf{X}\) for a given classification model \(z(f(\cdot))\) : \[\min_{\mathbf{M},\mathbf{\Delta}}\sum_{\mathbf{x}\in\mathbf{X}}\ell(y_{t},z(f(A(\mathbf{x},\mathbf{M}, \mathbf{\Delta}))))+\lambda\cdot\zeta(\mathbf{M},\mathbf{\Delta}) \tag{1}\] where \(\ell(\cdot,\cdot)\) is a loss function, \(y_{t}\) is the target label, \(A\left(\mathbf{x},\mathbf{M},\mathbf{\Delta}\right)\) is the amending function, \(\zeta(\cdot,\cdot)\) is a regularization penalty function for the trigger pair \((\mathbf{M},\mathbf{\Delta})\) and \(\lambda\) is the weight of the regularization penalty. For example, Neural Cleanse (NC) uses square loss as the loss function and \(l_{1}\) norm of \(\mathbf{M}\) as the regularization penalty function. **Gradient-based solution**. The objective function (Eq. 1) contains an empirical risk term (the first one) and a penalty term (the second one). The optimization of such objective functions has been well-studied in the context of neural networks. Particularly, Stochastic Gradient Descent (SGD) has been tremendously successful in finding solutions to such an optimization problem. Hence, it is not surprising that SGD has demonstrated its power in trigger inversion [5, 14, 7, 6]. However, in general, SGD finds local minima because the objective function is non-convex and may have many local minima. To overcome this limitation, multiple initiations of SGD are often used to improve its chance of finding the optimal solution. In the context of trigger inversion, the Attack Success Rate (ASR) is used to measure the effectiveness of a reconstructed trigger, and the one with the highest ASR is selected as the most plausible trigger. 
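As a concrete illustration of this recipe, the sketch below implements a minimal Neural Cleanse-style inversion loop for Eq. 1 in PyTorch. It is meant as a hedged example rather than the exact procedure of any particular tool: the mask and pattern are parameterized through a sigmoid so they remain in \([0,1]\), the loss is cross-entropy toward the candidate target label plus an \(l_{1}\) penalty on the mask, and the model, data loader, and hyper-parameters are placeholders.

```python
import torch
import torch.nn.functional as F

def invert_trigger(model, data_loader, target_label, image_shape,
                   epochs=30, lr=0.1, lam=1e-3, device="cpu"):
    """Gradient-based trigger inversion in the spirit of Eq. (1): recover a
    mask M and pattern Delta that push benign inputs toward `target_label`
    while keeping the mask small (l1 penalty)."""
    model.eval()
    c, h, w = image_shape
    # Unconstrained parameters; a sigmoid keeps M and Delta inside [0, 1].
    m_raw = torch.zeros(1, 1, h, w, device=device, requires_grad=True)
    d_raw = torch.zeros(1, c, h, w, device=device, requires_grad=True)
    opt = torch.optim.Adam([m_raw, d_raw], lr=lr)

    for _ in range(epochs):
        for x, _ in data_loader:
            x = x.to(device)
            M, Delta = torch.sigmoid(m_raw), torch.sigmoid(d_raw)
            x_adv = (1 - M) * x + M * Delta          # amending function A(x, M, Delta)
            y_t = torch.full((x.size(0),), target_label,
                             dtype=torch.long, device=device)
            loss = F.cross_entropy(model(x_adv), y_t) + lam * M.abs().sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return torch.sigmoid(m_raw).detach(), torch.sigmoid(d_raw).detach()
```

In practice, the loop is repeated for every candidate target label, and a model is flagged when the recovered mask for some label is anomalously small while the corresponding attack success rate on held-out benign inputs is high.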
### Threat Model We consider a black-box threat model similar to that used in the BadNet project [24], as elaborated below: **Attacker's goal**. We consider the adversary who wants to inject targeted backdoors so as to mislead an ML model to predict target labels for the trigger-inserted inputs. **Attacker's capabilities**. We consider the black-box data-poisoning attack, where we assume that the adversary can inject data into the training set but does not know other training data or the parameters of the target model. An example is federated learning [25], in which some data contributors may be untrusted. **Defender's goal**. The defender aims to detect backdoored ML models and further suppress the backdoor effects in these models. The focus of our research is detection based on trigger inversion. **Defender's capabilities**. We assume that the defender has full access to the target model, and owns a small set of benign inputs for trigger reconstruction. Also, we assume that the defender does not know whether a target model is infected, what the backdoor source and target labels would be and what triggers look like. ## 3 Observations Before we describe our key observation, we need to define some terms used throughout the rest of the paper. First, given a backdoored model \(z(f^{\prime}(\cdot))\) and the corresponding trigger insert function \(A(x,\Delta,M)\), we define the sample-specific trigger robustness and obstructed robustness of a backdoored model in Definition 1, and the overall trigger robustness and obstructed robustness of a backdoored model in Definition 2. Informally, the sample-specific trigger robustness is the smallest perturbation on the trigger area from a trigger-inserted input that can flip the prediction of this input. The sample-specific obstructed robustness is the smallest perturbation on the corresponding trigger area in a benign input that can flip the prediction of this input. The overall trigger robustness and obstructed robustness of a backdoored model are approximated by averaging the sample-specific trigger robustness and obstructed robustness over all samples in a dataset. 
**Definition 1** (Sample-specific trigger robustness and obstructed robustness): _Given a benign input \(x\in\mathcal{X}^{m}\), and the corresponding trigger-inserted input \(x^{\prime}=A(x,\Delta,M)\), for each entry in \(x^{\prime}\):_
\[x^{\prime(i)}=\begin{cases}x^{(i)}&\mathbf{M}^{(i)}=0\\ \mathbf{\Delta}^{(i)}&\mathbf{M}^{(i)}=1\end{cases} \tag{2}\]
_where \(i\in[1,..,m]\) and \(\mathbf{M}\) is the trigger mask matrix. In \(z(f^{\prime}(\cdot))\), the sample-specific trigger robustness is measured on a trigger-carrying input \(x^{\prime}\) (denoted as \(r_{t}^{x^{\prime}}\)) and is defined as the smallest perturbation \(\epsilon\) on the trigger-containing subspace (\(\{x^{\prime(i)}|\mathbf{M}^{(i)}=1\}\)) such that \(z(f^{\prime}(x^{\prime}))\neq z(f^{\prime}(x^{\prime}+\epsilon))\)._ _Similarly, in \(z(f^{\prime}(\cdot))\), the obstructed robustness is measured on a benign input \(x\) (denoted as \(r_{b}^{x}\)) and is defined as the smallest perturbation \(\epsilon\) on the trigger-containing subspace (\(\{x^{\prime(i)}|M^{(i)}=1\}\)) such that \(z(f^{\prime}(x))\neq z(f^{\prime}(x+\epsilon))\)._ Similarly, we can approximate the overall trigger robustness and obstructed robustness as below: **Definition 2** (Overall trigger robustness and obstructed robustness): _Given a dataset \(X\in\mathcal{X}^{n\times m}\), let \(X^{\prime}\in\mathcal{X}^{n\times m}\) denote the dataset after inserting a trigger into each input in \(X\). The overall trigger robustness of \(z(f^{\prime}(\cdot))\) (denoted as \(r_{t}\)) is approximated by averaging \(r_{t}^{x^{\prime}_{i}}\) over all \(x^{\prime}_{i}\in X^{\prime}\):_
\[r_{t}\approx\frac{\sum_{i=1}^{n}r_{t}^{x^{\prime}_{i}}}{n} \tag{3}\]
_Similarly, the overall obstructed robustness of \(z(f^{\prime}(\cdot))\) (denoted as \(r_{b}\)) is approximated by:_
\[r_{b}\approx\frac{\sum_{i=1}^{n}r_{b}^{x_{i}}}{n} \tag{4}\]
In what follows, we use the terms trigger robustness and obstructed robustness to refer to the overall trigger robustness and the overall obstructed robustness, respectively. **Main observation**. Trigger inversion aims to produce a pattern as close to the injected one as possible. The tolerance of the injected trigger precision, measured by trigger robustness, denotes how close the putative trigger must be to the injected one to induce the targeted misbehavior in a subject model. We evaluated the trigger robustness of ten typical backdoor attacks (BadNet (BN) [24], low-c (LC) [26], Adap (Ad) [27], blend (AB) [17], sig [28], LIRA [12], WaNet (WN) [29], Composite (Co) [30], SIM [31], smooth (LSBA) [32]) on CIFAR-10. More specifically, we utilized the entire dataset (training data and testing data) for the robustness evaluation and used VGG-16 and ResNet-18 as the model architectures. To present the extent of trigger robustness, we compare the trigger robustness with the primary task robustness in the corresponding position, i.e., the obstructed robustness. We evaluated the ratio \(\frac{r_{t}}{r_{b}}\) between trigger robustness and obstructed robustness; the results show a clear gap between the trigger robustness and the obstructed robustness in the backdoored models: the overall trigger robustness is always significantly higher than the overall obstructed robustness. The blue bars in Fig. 4 show the ratio \(\frac{r_{t}}{r_{b}}\) between the trigger robustness and the obstructed robustness of the backdoored models poisoned by the ten different backdoor attacks. SIM has the lowest ratio of 1.97. For the rest of the attacks, the ratios are all greater than 2.
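For completeness, the quantities in Definitions 1 and 2 can be approximated with a simple search over noise magnitudes restricted to the trigger region. The sketch below (assuming PyTorch, a fixed mask/pattern trigger, inputs scaled to \([0,1]\), and uniform noise) is one straightforward way to do it; it is an illustration, not the exact protocol behind the numbers above.

```python
import torch

@torch.no_grad()
def sample_trigger_robustness(model, x, mask, delta, eps_grid, n_trials=20):
    """Approximate r_t for one input (Definition 1): the smallest noise
    magnitude on the trigger region that flips the prediction of the
    trigger-carrying input x' = (1 - M) * x + M * Delta."""
    x_trig = (1 - mask) * x + mask * delta
    base_pred = model(x_trig.unsqueeze(0)).argmax(1).item()
    for eps in eps_grid:                                  # increasing magnitudes
        for _ in range(n_trials):
            noise = (torch.rand_like(x) * 2 - 1) * eps    # uniform in [-eps, eps]
            x_pert = torch.clamp(x_trig + mask * noise, 0.0, 1.0)
            if model(x_pert.unsqueeze(0)).argmax(1).item() != base_pred:
                return float(eps)
    return float(eps_grid[-1])   # never flipped: report the largest magnitude tried

def overall_trigger_robustness(model, dataset, mask, delta, eps_grid):
    """Equation (3): average the sample-specific values over (x, y) pairs."""
    vals = [sample_trigger_robustness(model, x, mask, delta, eps_grid)
            for x, _ in dataset]
    return sum(vals) / len(vals)
```

The obstructed robustness \(r_{b}\) is estimated in the same way, except that the search starts from the benign input \(x\) rather than from the trigger-carrying input.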
Next, we investigate the performance of trigger inversion on the models poisoned by these ten different backdoor attacks, respectively. We found that generally when a backdoor attack has higher trigger robustness, this attack is more easily detected by trigger inversion, while the backdoor attacks with lower trigger robustness are less likely to be detected by trigger inversion. Fig. 1 shows the relationship between the trigger robustness (x-axis) and the effectiveness of the ten attacks to evade the trigger inversion (y-axis), which shows a clear correlation. Here, the effectiveness of each attack is measured by the detection accuracy (AUC) of NC[5]. The experiment was conducted on CIFAR-10, where we trained ten legitimate and ten backdoored models for each attack. In the section 5, we will give a theoretical explanation of why trigger inversion works well when this robust ratio is large. ## 4 Defeating Trigger Inversion Our analysis shows that gradient-based trigger inversion works well on existing backdoor attacks since a backdoored model tends to have high trigger robustness comparing with the obstructed robustness. However, there is no evidence that such high robustness is _essential_ to the success of a backdoor attack. Instead, our research shows that it is completely feasible to increase the changes around these trigger-inserted inputs to defeat SGD, without undermining the backdoor effect at all. For this purpose, we developed a new backdoor attack called GRASP to enhance backdoor stealthiness through training data poisoning when the defender tries to detect by gradient-based trigger inversion. We further show that this simple approach is not only theoretically sound (Section 5) but also effective when used to enhance existing backdoor attacks which are designed to evade other backdoor defenses. This is because GRASP is a generic trigger injection method that can be implemented through data poisoning and thus can be combined with any other stealthy backdoor attacks. Finally, our experiment shows the GRASP-enhanced backdoor attacks are effective in defeating all known gradient-based trigger inversion solutions (Section 6.2), indicating that our current gain on backdoor detection could actually be rather fragile. Figure 1: The scatter plot shows the relationship between the trigger robustness and the effectiveness of ten attacks to evade the NC backdoor detection (measured by AUC). The X-axis represents the trigger robustness, and the y-axis represents the AUC score when using NC[5] to detect the backdoored models under these attacks. The high correlation between the trigger robustness and the AUC (\(r^{2}=0.60\)) indicates the backdoored models with high trigger robustness are easier to be detected by the trigger inversion technique than those with low robustness. Figure 2: Comparison of the data poisoning backdoor attack by BadNet with (a) or without (b) GRASP enhancement. The GRASP enhancement contaminates trigger-inserted samples (labeled as the target class) along with the noise-added, trigger-inserted samples (labeled as the source class) into the training set, whereas the BadNet attack only contaminates the trigger-inserted samples. ### _When Trigger Inversion Fails_ Based on the observation illustrated in the Fig. 
4 and 1, we have the hypothesis: **Hypothesis 1**.: _Given a backdoored victim model, the trigger robustness of this model is positively correlated with the effectiveness of gradient-based trigger inversion methods._ Here, we aim to offer an intuitive explanation of the hypothesis, and in section 5, we will give a formal theoretical analysis. Fig. 3 illustrates the idea using a 1D schematic example. We consider the trigger-inserted point \(x^{\prime}\) with the trigger robustness with \(\epsilon\), i.e., any 1D data point within a small perturbation \(\epsilon\) from \(x^{\prime}\) in the trigger area is predicted to be in the same (target) class as \(x^{\prime}\), while a 1D data point outside the perturbation \(\epsilon\) from \(x^{\prime}\) may be predicted to a different class. A backdoor is considered to be _perfect_ if \(\epsilon\to 0\), i.e., any small perturbation \(\epsilon\) added on \(x^{\prime}\) will change the predicted label (from the target label into another one) by the model. Ideally, the infected model always has 100% confidence in predicting trigger-inserted inputs as the target label, as observed from the performance of most SOTA backdoor attacks [33][34]. 1 The perfect trigger in such an ideal attack will cause the infected model to have an infinite change rate (trigger robustness equals zero) around trigger-inserted inputs. Such a trigger, however, cannot be reconstructed by trigger inversion because all inversion algorithms rely on the gradient to search for the trigger as the local optimum of the loss function. In practice, however, such a perfect trigger does not exist in the neural network because the neural network is a continuous function. Therefore, we relax the definition of the perfect trigger: instead of an infinite change rate, we consider a very large change rate. Equivalently, we allow the trigger to tolerate only a small amount of noise so that the neural network remains continuous but has a sharp slope around the trigger-inserted data point. Intuitively, if we decrease the trigger robustness, we will make it more difficult to optimize Eq.1, due to the following constraints: Footnote 1: In Section 9, however, we discuss the case that this hypothesis does not hold up. \(\bullet\) It requires the gradient-based optimization to initiate from more random points to find an optimum near the trigger-inserted data point; \(\bullet\) When the optimization process comes close to the trigger-inserted point, it needs to use a small updating step to ensure that the gradient-based search does not jump over the optimum. (see Fig.5). In Sections 4.2, we will describe the method to implement the backdoor attack based on this intuition. Our method, GRASP, follows a general data poisoning threat model as assumed by BadNet [24], in which the adversary does not need to access (or even control) the training process but only needs to contaminate a small fraction of poisoning data (containing the trigger) into the training dataset. Both the theoretical analysis (Section 5) and the evaluation results (Section 6) show that our method can introduce backdoors that are more likely to evade state-of-the-art backdoor defense methods using trigger inversion algorithms. ### _Gradient Shaping (GRASP)_ Consider a typical adversarial training, which adds a new augmented data point \((x_{new},y)\) w.r.t the original training data point \((x,y)\), where \(x_{new}=x+c\cdot\epsilon\) with \(\epsilon\) being a white noise (normally or uniformly distributed), and keeps the label of \((x,y)\). 
While this adversarial training enhances the robustness of the entire input, intuitively, it could also be leveraged to improve trigger robustness by adding noise to the trigger while retaining the intended target label. Our objective, however, is the opposite: to weaken the robustness of the trigger on the inputs it is attached to. For this purpose, we develop a _gradient shaping_ technique. Specifically, we consider two types of triggers: fixed triggers and sample-specific triggers. For a given poisoning data point \((x,y)\) where \(y\) is the target class, we add a white noise \(\epsilon\) only on the trigger: \(x_{new}=\{x_{new}^{(i)}=x^{(i)}+c\cdot\epsilon|M^{(i)}\neq 0\}\), where \(c\) is a hyper-parameter to control the magnitude of the added noise. Unlike robust training, we label \(x_{new}\) as the source class instead of the target class assigned to noise-free poisoning data. An example of how GRASP works is presented in Fig. 2b. Note that, as we will discuss in Section 5.2, \(c\) can be adjusted to reduce the magnitude of the noise. When this happens, even a slightly perturbed trigger-inserted input is predicted as the source class, ensuring that it cannot activate the backdoor. As a result, the robustness of the trigger is weakened. In the meantime, if \(c\) becomes too small, the trigger robustness will be degraded below that of the primary task (estimated by the obstructed robustness) that the target model is meant to perform. This would subject GRASP to backdoor mitigation techniques such as RAB [23], which add noise to training data to nullify the effect of the trigger (Section 9). Hence, we need to choose an appropriate value of \(c\) (see Section 5 and Section 13.3 in the Appendix) for the best performance of GRASP.

Figure 3: A perfect trigger. Here, \(\epsilon\) describes the maximum perturbation that is allowed by the trigger without changing the label from the target class to another one. In the extreme case, when \(\epsilon\to 0\), the trigger is _perfect_. In reality, however, no trigger is perfect because any neural network represents a continuous function.

GRASP is designed as a data poisoning method and can work on any trigger. In practice, we may enhance existing backdoor attacks by first generating the trigger using these attack methods and then injecting the trigger into the training dataset using GRASP. Algorithm 1 provides the pseudo-code of this data-poisoning approach for a generic trigger generated by backdoor attacks such as those in [24] and [26]. For sample-specific triggers generated by attacks such as those in [16] and [18], the algorithm may be modified by replacing the trigger amending function \(A(X_{i},\mathbf{M},\mathbf{\Delta})\) with a backdoor generator \(G(X_{i})\). More specifically, consider a sample-specific trigger with the trigger generator \(G(\cdot):\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}\), which takes as input a clean sample and outputs the corresponding trigger-inserted sample. Algorithm 1 should then be modified by removing lines 10-14, and for lines 15 and 17, \(A(X_{i},\mathbf{M},\_\mathbf{\Delta})\) should be replaced by \(G(X_{i})\). Algorithm 1 works with three parameters: the poisoning rate \(\alpha\), i.e., the proportion of trigger-inserted samples to be poisoned into the training dataset; the enhancement rate \(\beta\), i.e., the proportion of noise-added samples among all poisoned data; and the noise scale \(c\), i.e., the magnitude of the perturbation on the trigger. In our experiments, we typically set \(\alpha=6\%\), \(\beta=5\%\), and \(c=0.1\).
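Before turning to the evaluation, note that the poisoning step itself reduces to a few lines of array manipulation. The sketch below is a minimal NumPy illustration of the procedure for a fixed trigger (the full pseudo-code appears as Algorithm 1 further below); the helper name, the use of uniform noise, and the per-sample noise draws are our own simplifications.

```python
import numpy as np

def grasp_poison(X, Y, mask, delta, y_target, alpha=0.06, beta=0.05, c=0.1, rng=None):
    """GRASP data poisoning for a fixed trigger (mask, delta), images in [0, 1].

    Returns the poisoned samples to be added to the training set:
    - an alpha fraction of inputs carries the clean trigger and the target label;
    - a further alpha*beta fraction carries a noise-perturbed trigger but keeps
      its original (source) label, which sharpens the model around the trigger.
    """
    rng = rng or np.random.default_rng(0)
    n = len(X)
    idx = rng.permutation(n)
    X_new, Y_new = [], []

    # Standard poisoning samples: trigger inserted, labeled as the target class.
    for i in idx[: int(alpha * n)]:
        X_new.append((1 - mask) * X[i] + mask * delta)
        Y_new.append(y_target)

    # GRASP enhancement samples: noisy trigger, original (source) label kept.
    for i in idx[: int(alpha * beta * n)]:
        noisy_delta = np.clip(delta + c * rng.uniform(-1, 1, size=delta.shape), 0, 1)
        X_new.append((1 - mask) * X[i] + mask * noisy_delta)
        Y_new.append(Y[i])

    return np.stack(X_new), np.array(Y_new)
```

For sample-specific triggers, the two lines that apply \((\mathbf{M},\mathbf{\Delta})\) would instead call the trigger generator \(G(\cdot)\), as noted above.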
We evaluate the robustness ratio of existing backdoor attacks before and after enhanced by GRASP. As shown in Fig.4, the GRASP enhancement can indeed reduce the robustness ratio and thus move the trigger robustness closer to the obstructed robustness. ``` 0: Trigger magnitude matrix \(\Delta\in\mathbb{R}^{m}\), trigger mask matrix \(M\in\mathbb{R}^{m}\), noise scale \(c\in\mathbb{R}\), training data inputs \(X\in\mathbb{R}^{n\times m}\), training data label \(Y\in\{1,...,k\}^{n}\), target label \(y_{t}\in\{1,...,k\}\), poisoning rate \(\alpha\), enhancement rate \(\beta\), Noise_type 0:\((\tilde{X},\widetilde{Y})\) 1:\(\tilde{X}\leftarrow\{\}\) 2:\(\tilde{Y}\leftarrow\{\}\) 3:if\(\textsc{Noise\_type}=\textsc{Normal}\)then 4:\(\epsilon\leftarrow\mathcal{N}(0,1)\) 5:elseif\(\textsc{Noise\_type}=\textsc{Uniform}\)then 6:\(\epsilon\gets uniform(-1,1)\) 7:endif 8:for\(i\in\{0,...,n-1\}\)do 9:\(\_\mathbf{\Delta}=\mathbf{\Delta}\) 10:for\(j\in\{0,...,m-1\}\)do 11:if\(M_{j}\neq 0\)then 12:\(\_\mathbf{\Delta}_{j}=\_\mathbf{\Delta}+c\cdot\epsilon\) 13:endif 14:endfor 15:if\(i\in\alpha\cdot\beta\cdot n\)then 16:\(\tilde{X}.\textsc{add}(A(X_{i},\mathbf{M},\_\mathbf{\Delta}))\) 17:\(\tilde{Y}.\textsc{add}(Y_{i})\) 18:endif 19:if\(i\in\alpha\cdot n\)then 20:\(\tilde{X}.\textsc{add}(A(X_{i},\mathbf{M},\mathbf{\Delta}))\) 21:\(\tilde{Y}.\textsc{add}(y_{t})\) 22:endif 23:endfor ``` **Algorithm 1** GRASP data poisoning for fixed trigger ## 5 Theoretical Analysis of GRASP In this section, we present the theoretical analysis of GRASP to answer two questions: 1) why gradient-based trigger inversion methods are effective on the triggers with high robustness; and 2) why GRASP can render trigger inversion ineffective, even though these techniques perform exceedingly well on existing backdoor attacks. More specifically, in section 5.1, we attempt to bridge the relationship between trigger robustness and the efficiency of gradient-based trigger inversion methods. Because the theoretical analyses for the optimization of a generic target function (approximated by a deep neural network) are very challenging, our analysis is focused on the optimization of three types of functions, each under different constraints. First, when we approximate our target high-dimensional function by a convex relaxation that Lemma 2 borrowed from [35], shows that the convergence of the gradient-based optimizations are faster when the function has lower Lipschitz constant. Because a convex function with the low Lipschitz constant around the trigger-inserted points indicates the high robustness of the trigger (Theorem 1), our analysis explains the good performance of the gradient-based trigger inversion methods on the triggers with high robustness. Second, when the target function is a one-dimensional non-convex piece-wise linear function, which is one of the most regular types of neural network function (such as neural network with ReLU activation function), we prove in Theorem 2 that the probability that the gradient descent algorithm converges to the desired optimum (i.e., the trigger-inserted point) is greater when the convex hull is larger. Because the larger convex hull around the trigger-inserted points indicates the higher robustness of the trigger, our analysis explains the good performance of the gradient-based trigger inversion methods on the robust triggers under this condition. 
Finally, when the target function is high dimensional non-convex but satisfies the PL condition [36], we prove in Theorem 3 that the gradient-based optimization algorithms converge faster to the desirable optimum (i.e., the trigger-inserted point) if the local Lipschitz constant near the optimum is lower. As shown in recent research [37], [38], the neural network with high robustness tends to have a lower Lipschitz constant, our analyses again showed the high correlation between the trigger robustness and the efficiency of the gradient-based trigger inversion methods. Next in section 5.2, we attempt to answer the second question: by proving Theorem 4, we showed that when using GRASP to inject a trigger, the backdoored model will have greater local Lipschitz constant around the trigger-inserted points, thus reducing the robustness of the backdoor, which can render trigger inversion ineffective. ### Why Inversion Works on Robust Triggers Before we elaborate our theorem, we need to first formally define some concepts and a Lemma from [39]: **Definition 3** (**Astuteness**).: _A classifier \(f:\mathcal{X}\rightarrow\mathcal{Y}\) is astute at an input sample \(x\), if the predicted label by \(f\) is the same as the true label: \(\hat{y}=z(f(x))=y\)._ **Definition 4** (**r-local minimum**).: _A function \(f:\mathcal{X}\rightarrow\mathbb{R}\) has a (unique) \(r\)-local minimum at \(x^{\star}\), if there is no other \(x\) on which \(f\) gets lower or equal value than what can get on \(x^{\star}\), within the ball centered on \(x^{\star}\) with radius \(r\), i.e., \(f(x)>f(x^{\star}),\forall x,\|x-x^{\star}\|_{2}\leq r\)._ **Definition 5** (**Increasing rate and relaxation function**).: _Given a function \(f:\mathcal{X}\rightarrow\mathbb{R}\) with a \(r\)-local minimum at \(x^{\star}\), we define that \(f\) has an increasing rate of \(\kappa\) at \(x^{\star}\), if there exists some \(\kappa\geq 0\) and \(c_{\kappa}\geq 0\), such that \(f(x)-f\left(x^{\star}\right)\geq\sup_{c_{\kappa,\kappa}}c_{\kappa}\cdot\|x-x^ {\star}\|_{2}^{\kappa}\), when \(\|x-x^{\star}\|\leq r\). Accordingly, we refer the function \(\bar{g}(x)=c_{\kappa}\cdot\|x-x^{\star}\|_{2}^{\kappa}\) as the relaxation function of \(f\) at \(x\)._ **Definition 6** (**Local Lipschitz constant**).: _For a function \(f:\mathcal{X}\rightarrow\mathcal{Y}\), a given input \(x\) and a pre-defined radius \(r\), if \(L(f,\mathcal{X}_{x,r})\) exists and is finite, where \(\mathcal{X}_{x,r}=\{x^{\prime}:\|x^{\prime}-x\|_{2}<r\}\) and_ \[L(f,\mathcal{X}_{x,r})=\sup_{x_{1},x_{2}\in\mathcal{X}_{x,r}}\frac{\|f(x_{2})- f(x_{1})\|_{2}}{\|x_{2}-x_{1}\|_{2}}, \tag{5}\] _we define \(L(f,\mathcal{X}_{x,r})\) as the local Lipschitz constant of \(x\) with radius \(r\) for function \(f\)._ **Lemma 1**.: _Consider the data distribution \(X\), and assume the minimum \(l_{2}\) norm between any two different class data is \(r\). If a function is astuteness in \(X\), then \(f\) has a local Lipschitz constant of \(r^{\prime}\) around any \(x\in X\) such that \(r^{\prime}\geq r\)_ Fig. 6 illustrates the concept of the increase rate by using three one-dimensional functions with \(c=1,\kappa=0.5\), \(1\), and \(2\), respectively, at the \(r\)-local minimum \(x^{\star}\). Note that, in the first two conditions, we consider the optimization within the trigger area, which means that we assume the trigger area is known, so we only need to consider the first term in the Eq 1. In the third condition, we do not have such an assumption; we consider both terms in Eq 1. 
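Although Definition 6 has no closed form for a deep network, a lower bound on the local Lipschitz constant around a given input can be probed numerically, which is how we build intuition for the analysis that follows. The sketch below is such a hedged Monte-Carlo estimate (assuming PyTorch and a model \(f\) that returns logits); it only bounds \(L(f,\mathcal{X}_{x,r})\) from below and is not part of the formal proofs.

```python
import torch

@torch.no_grad()
def local_lipschitz_lower_bound(f, x, radius, n_pairs=1000):
    """Monte-Carlo lower bound on the local Lipschitz constant of Definition 6:
    sample pairs of points inside the ball of the given radius around x and
    take the largest ratio ||f(x2) - f(x1)|| / ||x2 - x1||."""
    best = 0.0
    for _ in range(n_pairs):
        d1 = torch.randn_like(x)
        d1 = radius * torch.rand(()) * d1 / d1.norm()   # random point in the ball
        d2 = torch.randn_like(x)
        d2 = radius * torch.rand(()) * d2 / d2.norm()
        x1, x2 = x + d1, x + d2
        num = (f(x2.unsqueeze(0)) - f(x1.unsqueeze(0))).norm().item()
        den = (x2 - x1).norm().item()
        if den > 0:
            best = max(best, num / den)
    return best
```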
We attempt to connect the local Lipschitz constant of the target function used by trigger inversion with the increasing rate of the relaxation function near the trigger-inserted data point, so that later we can exploit the Lipschitz constant of the relaxation function for the convergence analysis of gradient-based trigger inversion.

**Theorem 1**.: _Consider a function \(f:\mathcal{X}\rightarrow\mathbb{R}\) that has a unique \(r\)-local minimum around \(x^{\star}\), i.e., \(\forall x\in\mathcal{B}=\{x:\|x-x^{\star}\|<r\},f(x^{\star})<f(x)\) holds. Assume the increasing rate of \(f(x)\) is \(\kappa\) at \(x^{\star}\). If \(\kappa<1\) and \(c_{\kappa}>1\), then for any \(x\) satisfying \(\|x-x^{\star}\|_{2}>1\), we have:_

\[L(f,\mathcal{X})\geq c_{\kappa}\left\|x_{1}-x_{2}\right\|_{2}^{\kappa-1} \tag{6}\]

_where \(x_{1},x_{2}\in\mathcal{X}\), and \(L(f,\mathcal{X})\) is the local Lipschitz constant of \(f(x)\) at \(x\)._

The proof of Theorem 1 is given in 13.4 of the online document [40]. Theorem 1 studies a local sphere region with its center at an optimum point (i.e., the trigger-inserted sample) and its radius equal to the distance between the center and the initial point (typically a clean sample) of the optimization, and relaxes the target function used by trigger inversion into a convex function (i.e., the _relaxation function_) within the local sphere region, so that the Lipschitz constant of the relaxation function can be used to approximate the local Lipschitz constant of the original target function. Here, the relaxation is used as a tool to study the convergence of gradient-based optimization on a non-convex function like the target function in Eq. 1. Even though the relaxation may not resemble the target function, it approximates an essential property (i.e., the local Lipschitz constant) of the target function near the optimum point. Finally, as shown in Lemma 2, which is borrowed from [35], gradient-based optimization algorithms, including the three most commonly used optimizers in deep learning: stochastic gradient descent (SGD), projected gradient descent (PGD), and accelerated gradient descent (AGD), converge more slowly to a local optimum of a convex function when the change rate (i.e., the Lipschitz constant) of the function is greater.
**Lemma 2**.: _For a convex function \(f:\mathcal{X}\rightarrow\mathcal{Y}\), let \(g(x)\) be the sub-gradient set of \(f\) at \(x\in\mathcal{X}\), and let \(B=\max\|g(x)\|_{2}\) be the largest \(l_{2}\) norm that \(g(x)\) can achieve._

_The expected number of steps (\(\mathbb{E}_{SGD}\)) needed by an SGD learning algorithm to learn \(f\) within the error rate of \(\epsilon\) is \(\mathbb{E}_{SGD}=\frac{B^{2}L^{2}}{\epsilon^{2}}\), where \(L\) is the Lipschitz constant of \(f(x)\), i.e., \(L=\sup\{L(f,\mathcal{X}_{x,\infty}),x\in\mathcal{X}\}\)._

_Similarly, the expected number of steps (\(\mathbb{E}_{PGD}\)) needed by a PGD learning algorithm to learn \(f\) within the error rate of \(\epsilon\) is \(\mathbb{E}_{PGD}=\frac{L^{2}}{\alpha\epsilon}\), where \(L\) is the Lipschitz constant of \(f(x)\), and \(\alpha\) is the step size._

_Finally, the expected number of steps (\(\mathbb{E}_{AGD}\)) needed by an AGD learning algorithm to learn \(f\) within the error rate of \(\epsilon\) is \(\mathbb{E}_{AGD}=R\sqrt{\frac{\beta}{\epsilon}}\), where \(\beta\) is the smoothness of \(f(x)\)._

Combining Lemma 2 and Theorem 1, we conclude that gradient-based optimization converges faster on a target function with a lower Lipschitz constant, which indicates that gradient-based trigger inversion algorithms perform better on triggers with higher robustness, because the local Lipschitz constant around these triggers is lower. Note that here we utilize a convex relaxation to approximate the loss function around the trigger area. For a convex function, as long as the step size is proper, the optimizer always converges to the global minimum (the trigger), and techniques such as second-order optimizers or momentum can boost the convergence speed. However, in the analysis of the next two cases, we consider the non-convex setting, in which we show that a second-order optimizer or an optimizer utilizing momentum does not make it easier for the optimizer to find the trigger. Furthermore, we provide an empirical evaluation of the effectiveness of trigger inversion [15, 6, 7] when using different types of optimizers: a second-order optimizer (AdaHessian [41]), an optimizer utilizing momentum (Adam [42]), and SGD. The results in 13.7 of the online document [40] show that optimizers utilizing momentum, such as Adam, are comparable with SGD, while second-order optimizers, such as AdaHessian, perform relatively poorly when optimizing the trigger inversion objective.

Next, we study the gradient-based optimization of a non-convex function, starting from a one-dimensional function. Here, we consider the target function to be a piece-wise linear function, representing a neural network with activation functions such as ReLU. Theorem 2 shows the positive relationship between the robustness of a trigger (as the global optimum of the target function) and the probability that a gradient-based optimizer converges to the trigger.

**Theorem 2**.: _Given a piece-wise linear function \(\ell(\cdot):[a,b]\rightarrow[0,1]\) with its global optimum on a convex hull, after \(n\) iterations, a gradient-based optimizer starting from a random initialization converges to the optimum with the probability:_

\[1-B_{1}^{-1}(b-a)^{-1}\left[(4-B_{1}B_{2})^{n}(1-B_{1}B_{2})\right] \tag{7}\]

where \(B_{1}>0\) is a component indicating the area under the desired convex hull and \(B_{2}>0\) is a component indicating the likelihood of the linear pieces outside the convex hull jumping into the convex hull during a gradient-based iteration (for details, see the proof of Theorem 2 in 13.5 of the online document [40]).
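For concreteness, the gradient-based optimization analyzed above can be sketched as a generic Neural-Cleanse-style inversion loop. The PyTorch sketch below is only illustrative: the loss follows the general shape of Eq. 1 (a target-class loss plus an \(l_{1}\) penalty on the mask), while the optimizer choice, hyper-parameters, and the `model`/`loader` objects are assumptions rather than the exact settings used by NC, TABOR, K-arm, or Pixel.

```python
import torch
import torch.nn.functional as F

def invert_trigger(model, loader, target_label, epochs=50, lam=1e-2, lr=0.1,
                   img_shape=(3, 32, 32), device="cpu"):
    """Optimize a mask m and pattern delta so that stamped inputs flip to target_label."""
    m = torch.zeros(img_shape[1:], device=device, requires_grad=True)   # mask logits
    delta = torch.zeros(img_shape, device=device, requires_grad=True)   # pattern logits
    opt = torch.optim.Adam([m, delta], lr=lr)
    model.eval()
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            mask = torch.sigmoid(m)           # keep the mask in [0, 1]
            pattern = torch.sigmoid(delta)    # keep the pattern in a valid pixel range
            stamped = (1 - mask) * x + mask * pattern
            y_t = torch.full((x.size(0),), target_label, device=device, dtype=torch.long)
            # Target-class loss plus an l1 penalty that shrinks the recovered mask.
            loss = F.cross_entropy(model(stamped), y_t) + lam * mask.abs().sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return torch.sigmoid(m).detach(), torch.sigmoid(delta).detach()
```

The convergence behavior of exactly this kind of loop is what Lemma 2 and Theorems 2-3 characterize in terms of the Lipschitz constant around the trigger-inserted points.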
Notably, in the gradient-based optimization for trigger inversion, the optima represent the desirable trigger-inserted points, and thus the size of the convex hull is positively correlated with the robustness of the trigger. As a result, gradient-based trigger inversion has a higher probability of identifying the trigger when the trigger is more robust.

Finally, we consider a target function that is high-dimensional and non-convex but satisfies the proximal-PL condition [36], which is often considered in the theoretical analysis of neural networks. Formally, the proximal-PL condition is defined below.

**Definition 7** (Proximal-PL condition).: _We consider the optimization problem in the form:_

\[\underset{x\in\mathbb{R}^{d}}{\operatorname{argmin}}F(x)=f(x)+g(x), \tag{8}\]

_where \(f\) is a differentiable function with an \(L\)-Lipschitz continuous gradient and \(g\) is a simple but potentially non-smooth convex function 2. To analyze proximal-gradient algorithms (i.e., a more general form of Projected Gradient Descent (PGD)), a natural generalization of the PL inequality is that there exists \(\mu>0\) satisfying:_

Footnote 2: Typical examples of the simple function \(g\) include a scaled \(\ell_{1}\)-norm of the parameter vectors (the size of the trigger), \(g(x)=\lambda\|x\|_{1}\), and indicator functions that are zero if \(x\) lies in a simple convex set, and are infinity otherwise.

\[\frac{1}{2}\mathcal{D}_{g}(x,L)\geq\mu\left(F(x)-F^{*}\right) \tag{9}\]

_where_

\[\mathcal{D}_{g}(x,\alpha)\equiv-2\alpha\min_{y}\left[\langle\nabla f(x),y-x\rangle+\frac{\alpha}{2}\|y-x\|^{2}+g(y)-g(x)\right].\]

Theorem 3 from [36] shows that the proximal-PL condition is sufficient for the proximal-gradient method to achieve a global linear convergence rate.

**Theorem 3**.: _Consider the optimization problem in Eq. 8, where \(f\) has an \(L\)-Lipschitz continuous gradient, \(F\) has a non-empty solution set \(\mathcal{X}^{*}\), \(g\) is convex, and \(F\) satisfies the proximal-PL inequality (Eq. 9). Then the proximal gradient method with a step size of \(1/L\) converges linearly to the optimal value \(F^{*}\):_

\[F\left(x_{k}\right)-F^{*}\leq\left(1-\frac{\mu}{L}\right)^{k}[F\left(x_{0}\right)-F^{*}] \tag{10}\]

Theorem 3 also indicates a negative relationship between the Lipschitz constant (\(L\)) of the target function and the convergence rate toward the optimum (the trigger), i.e., the rate at which the gap between the target function values at the current point and at the trigger-inserted point shrinks. Previous research [43] showed that second-order optimizers have exactly the same lower bound on the convergence rate as first-order optimizers under the PL condition. Many existing studies [39, 37, 38] showed that, in neural networks, a lower Lipschitz constant implies higher robustness of the model. Therefore, combining with Theorem 3, we conclude that gradient-based trigger inversion methods perform well on triggers with high robustness.

### _Why Inversion Fails under GRASP_

Now we are ready to analyze the local Lipschitz constant around the trigger-inserted samples, in particular how it is influenced by the noise level (measured by the parameter \(c\); see Algorithm 1) in the GRASP algorithm. Specifically, Theorem 4 shows that when either of the two typical noise distributions is used, the GRASP-poisoned model will have a greater local Lipschitz constant around \(x\) than the model under the data poisoning attack without using GRASP, for example by BadNet [24].
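Before the formal statement, the sketch below makes the construction of the GRASP poisoning pair concrete, following Algorithm 1; the helper `apply_trigger` is a hypothetical stand-in for the stamping function \(A(x,\mathbf{M},\mathbf{\Delta})\), and the per-pixel noise draw is an illustrative choice.

```python
import numpy as np

def apply_trigger(x, M, Delta):
    # Hypothetical stand-in for A(x, M, Delta): overwrite the masked region with the trigger.
    return (1 - M) * x + M * Delta

def grasp_poison_one(x, y, y_t, M, Delta, c, noise_type="normal", rng=None):
    """Return the two GRASP poisoning samples derived from a single clean input (x, y)."""
    rng = np.random.default_rng() if rng is None else rng
    if noise_type == "normal":
        eps = rng.normal(size=x.shape)            # white noise on the trigger region
    else:
        eps = rng.uniform(-1, 1, size=x.shape)    # uniform noise alternative
    x_trig = apply_trigger(x, M, Delta)                  # x': trigger-inserted, labeled y_t
    x_noisy = apply_trigger(x, M, Delta) + c * eps * M   # x*: noisy trigger, keeps source label y
    return (x_trig, y_t), (x_noisy, y)
```

These two samples, \((x^{\prime},y_{t})\) and \((x^{*},y)\), are exactly the pair analyzed in Theorem 4 below.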
Formally, consider a single case in GRASP data poisoning: a trigger \((\mathbf{M},\mathbf{\Delta})\) is injected into a single normal data point \((x,y)\), resulting in the trigger-inserted data point \((x^{\prime},y_{t})\), where \(x^{\prime}=A(x,\mathbf{M},\mathbf{\Delta})\). Let \((x^{*},y)\) be the trigger-inserted data point with noise \(\epsilon\) added on the trigger part, where \(x^{*}=A(x,\mathbf{M},\mathbf{\Delta})+c\cdot\epsilon\cdot\mathbf{M}\). Let \(f\) be the GRASP-poisoned classification model, which we assume is astute at \((x,y)\), \((x^{*},y)\) and \((x^{\prime},y_{t})\).

**Theorem 4**.: _If the noise \(\epsilon\sim\mathcal{N}(0,1)\) (i.e., white noise), and \(c<\|x^{\prime}-x\|_{2}\cdot\frac{\Gamma\left(\frac{|m^{*}|}{2}\right)}{\sqrt{2}\,\Gamma\left(\frac{|m^{*}|+1}{2}\right)}\), where \(|m^{*}|\) is the \(l_{1}\) norm (i.e., the size) of the trigger and \(\Gamma\) is Euler's gamma function, then a model attacked by a backdoor attack and enhanced by GRASP using the training data points \((x,y),(x^{\prime},y_{t})\) and \((x^{*},y)\) has a greater local Lipschitz constant around \(x\) than the model backdoored by the same attack without the enhancement by GRASP using the training data points \((x,y),(x^{\prime},y_{t})\)._

_Similarly, if \(\epsilon\sim uniform(-1,1)\) and \(c<\|x^{\prime}-x\|_{2}\), the GRASP-enhanced model has a greater local Lipschitz constant around \(x\) than the model without the enhancement._

The proof of Theorem 4 is given in 13.6 of the online document [40]. The theorem indicates that if the level of the noise used in GRASP is bounded by the \(l_{2}\)-distance between the normal data point \(x\) and the trigger-inserted point \(x^{\prime}\), the GRASP-poisoned model will have a greater local Lipschitz constant, or intuitively, a steeper output around \(x^{\prime}\), compared with the model poisoned by existing backdoor attacks like BadNet. Combining Theorem 4 with Theorems 1, 2 and 3 in Section 5.1, we conclude that GRASP can render trigger inversion less effective.

## 6 Against Inversion-based Detection

In this section, we evaluate the effectiveness of GRASP by comparing inversion-based backdoor defenses against different attacks before and after the enhancement by GRASP, respectively.

### _Datasets and Settings_

**Datasets**. We analyzed backdoor attacks on models trained using three public datasets: MNIST [44], CIFAR10 [45], and GTSRB [46], as summarized in Table 1. Our experiments were conducted on a server with one AMD Ryzen 3980X 3.2 GHz 48-core processor and one NVIDIA RTX 3090 GPU.

**Backdoor attacks**. We considered seven existing backdoor attacks: BadNet [24], LSBA [32], Composite [30], clean label [34], DEFEAT [18], IMC [33] and adaptive-blend [17]. These backdoor attacks fall into four general categories: patch trigger, clean label, imperceptible, and latent space inseparable. For each attack, we generate and evaluate 24 backdoored models: for each of the three datasets (MNIST, CIFAR-10, and GTSRB), we generate two models using each of four different neural network structures (VGG-16, ResNet-101, ShuffleNet, and ResNet18).

\(\bullet\)_Patch trigger_. Patch triggers usually utilize a small pattern as the trigger for the backdoor attack. We select BadNet [24], LSBA [32], and Composite [30] in this category and used two patterns (shown in 13.8 of the online document [40]) as the patch triggers in the backdoor attacks.

\(\bullet\)_Clean label_.
The clean-label backdoor attacks contaminate the training dataset with clean-label data. We select Latent [34] to represent the attacks in this category.

\(\bullet\)_Imperceptible_. An imperceptible backdoor attack aims to design a backdoor trigger that can evade human inspection. Most of these attacks enhance the backdoor stealthiness through universal adversarial perturbation (UAP). We select DEFEAT [18] and IMC [33] in this category in our experiments.

\(\bullet\)_Latent space inseparable_. A latent space inseparable backdoor attack aims to design a backdoor trigger so that, in the latent space of the target model, the trigger-inserted samples are close to the clean samples of the target class. We select Adaptive-Blend [17] in this category in our experiment.

\(\bullet\)_Attack parameters_. We inserted 3000, 2352, and 3000 poisoning data samples (i.e., a 6% poisoning rate) into the training datasets of CIFAR-10, MNIST, and GTSRB, respectively. Following the original papers, the trigger in IMC was synthesized [33], and the trigger in Latent was randomly initialized [34].

\(\bullet\)_GRASP_. For each attack mentioned above, we combine it with GRASP via Algorithm 1. More specifically, we inserted 6% poisoning data into the training datasets, among which 3% are trigger-inserted samples, and the other 3% are the same trigger-inserted samples with noise and source-class labels (Algorithm 1).

**Trigger inversion**. We implemented and tested four backdoor countermeasures based upon trigger inversion: Neural Cleanse [5], TABOR [14], K-arm [7], and Pixel [15]. In our experiments, we utilized 10% of the training data and the default hyper-parameters provided in the original papers for trigger reconstruction.

### _Putative Trigger Effectiveness_

Existing methods measure the effectiveness of trigger inversion by computing the similarity between the reconstructed and real triggers, e.g., based on the \(l_{1}\) distance, which is insufficient since a similar pattern may not have a similar backdoor effect (i.e., ASR). We therefore propose a set of metrics to measure trigger accuracy. Below we present our experimental results on the effectiveness of backdoor detection by four trigger inversion algorithms (NC [5], TABOR [14], Pixel [15], and K-arm [7]), comparing the effectiveness of the backdoor attacks before and after the enhancement by GRASP. More specifically, after the trigger is generated by each backdoor attack method, we use GRASP to enhance this trigger as described in Section 4.2. Here, we append a symbol "*" to the name of each backdoor attack to indicate the respective attack enhanced by GRASP. For example, "BadNet*" indicates BadNet enhanced by GRASP.

**Metrics**. In our experimental study, we utilize four quantitative metrics to measure the effectiveness of a backdoor in evading a gradient-based inversion algorithm (for reconstructing a trigger \((\mathbf{\Delta},\mathbf{M})\) in a model \(\bar{f}\)):

\(\bullet\)\(\epsilon_{1}\): The difference between the real trigger's ASR on the backdoored model and that on the "sanitized" model retrained to unlearn the reconstructed trigger, that is, \(\epsilon_{1}=|ASR_{unlearn}-ASR|\). A smaller difference indicates that the reconstructed trigger is less accurate and thus unlearning is less effective.
\begin{table}
\begin{tabular}{c|c|c|c}
 & MNIST & GTSRB & CIFAR10 \\
\hline
Training samples (\#) & 60,000 & 39,209 & 50,000 \\
Testing samples (\#) & 10,000 & 12,630 & 10,000 \\
\end{tabular}
\end{table}
Table 1: Datasets statistics

\(\bullet\)\(\epsilon_{2}\): The Jaccard similarity between the trigger mask of the reconstructed trigger \(M^{\prime}\) and that of the real trigger \(M\), calculated as \(J(M^{\prime},M)=\frac{|M^{\prime}\cap M|}{|M^{\prime}|+|M|-|M^{\prime}\cap M|}\).

\(\bullet\)\(\epsilon_{3}\): The ASR of the reconstructed trigger \((\mathbf{M^{\prime}},\mathbf{\Delta^{\prime}})\) on a clean model \(f^{*}\): \(\epsilon_{3}=ASR^{\prime}_{f^{*}}\). A large \(ASR^{\prime}_{f^{*}}\) indicates that the reconstructed trigger is likely a natural trigger [47], not the real one meant to be recovered.

\(\bullet\)\(\epsilon_{4}\): The \(AUC\) score of backdoor detection. The trigger inversion methods often use the \(l_{0}\) norm of the reconstructed trigger as the measurement to distinguish backdoored models from benign models: the lower the \(l_{0}\) norm, the more probable the model has been backdoored.

Notably, for a trigger inversion algorithm with ideal performance, \(\epsilon_{3}\) is anticipated to be close to 0, while \(\epsilon_{1}\), \(\epsilon_{2}\) and \(\epsilon_{4}\) are anticipated to be close to 1.

**Experimental results**. Here we present our results as measured by the aforementioned metrics. Due to the space limit, we defer our complete experimental results to Table 5 in the Appendix and only report representative results (\(\epsilon_{4}\)) in this section.

\(\bullet\)\(\epsilon_{1}\)_: effectiveness of unlearning_. The reconstructed trigger can be used for backdoor unlearning [5, 14, 7]. After reconstructing the trigger for a given backdoored model, in the unlearning procedure we first built an unlearning dataset consisting of a randomly selected 10% of the training data (6,000 in MNIST, 5,000 in CIFAR-10, and 3,920 in GTSRB). Then, we added the reconstructed trigger onto 10% of the unlearning dataset (600 in MNIST, 500 in CIFAR-10, and 392 in GTSRB) while keeping their class labels intact (the original source class). After that, we fine-tuned the model on this unlearning dataset. We used SGD as the optimizer in the experiment and set the learning rate to \(0.01\) and the momentum to \(0.9\). As shown in Table 5, after unlearning with the triggers reconstructed by various trigger inversion algorithms, most models poisoned by the GRASP-enhanced attacks still preserve much higher ASRs (almost identical to those before unlearning), so that the GRASP-enhanced attacks achieve lower \(\epsilon_{1}\) than the respective backdoor attacks. Table 5 shows that on CIFAR-10, BadNet achieves the worst performance against the trigger inversion defense of Tabor (\(\epsilon_{1}=97.5\%\)), which is significantly improved by the GRASP enhancement (\(\epsilon_{1}=1.5\%\)). Among the other attacks, LSBA* has the best performance under Pixel (\(\epsilon_{1}=0.6\%\)).

\(\bullet\)\(\epsilon_{2}\)_: distance between trigger masks_. We observed that the reconstructed triggers from the models poisoned by GRASP-enhanced attacks have very low similarity with the real triggers (i.e., the overlap between the real and the reconstructed triggers is less than 20%). By comparison, the reconstructed triggers from the models under the backdoor attacks without GRASP enhancement overlap with the real triggers by about 10% - 60%. On CIFAR-10, when enhanced by GRASP, DEFEAT* has the worst performance against Pixel (\(\epsilon_{2}=0.13\)),
while BadNet* has the best performance against NC (\(\epsilon_{2}=0.00\)) (Table 5).

\(\bullet\)\(\epsilon_{3}\)_: \(ASR\) of the reconstructed triggers on a clean model_. We also computed \(\epsilon_{3}\), the ASR of the triggers reconstructed from the poisoned models on a clean model for the same task. In our experiment, we used CIFAR-10, MNIST, and GTSRB as the clean datasets to train the clean models. After a trigger is reconstructed from a poisoned model, we randomly select 500 images from the source class of the clean dataset and insert the trigger on them. The ASR was then measured on this set of trigger-inserted samples on the clean model. As shown in Table 5, the triggers reconstructed from the models poisoned by GRASP-enhanced attacks have relatively high ASRs on the clean model, almost comparable with their ASRs on the poisoned models, whereas the triggers reconstructed from the models poisoned by the attacks without GRASP enhancement have much lower ASRs. This indicates that any useful trigger recovered from the models poisoned by GRASP-enhanced attacks is likely to be a natural trigger introduced by the legitimate learning process that has nothing to do with the injected trigger. On CIFAR-10, when enhanced by GRASP, LSBA* has the worst performance against Pixel (\(\epsilon_{3}=42.1\%\)), while Adaptive-Blend has the best performance (\(\epsilon_{3}=28.3\%\)).

\(\bullet\)\(\epsilon_{4}\)_: AUC_. As mentioned earlier, our research shows that trigger inversion algorithms are unlikely to effectively reconstruct and remove the triggers injected by GRASP, even though they are largely successful on the triggers injected by existing backdoor attacks. In some cases, however, the backdoor defense methods just need to detect the infected models (and discard them afterward), even though they cannot accurately reconstruct the real trigger. Our research evaluated how successfully these trigger inversion methods can detect the models poisoned by GRASP-enhanced attacks. Specifically, we train 24 clean models: for each of the three datasets (MNIST, CIFAR-10, and GTSRB), we generate two clean models using each of four different neural network structures (VGG-16, ResNet-101, ShuffleNet, and ResNet18) to analyze the detection accuracy. Similarly, we train 24 models for each attack on the three datasets using the four neural network structures. In our research, we measured the AUC score of NC, TABOR, K-arm, and Pixel on these 48 models (24 clean models and 24 backdoored models). As shown in Table 2, generally speaking, the AUC scores of different defense strategies on the models poisoned by backdoor attacks with GRASP enhancement are significantly smaller than those on the models poisoned by the same attacks without GRASP enhancement, which indicates that the GRASP-enhanced attacks better evade detection by all tested trigger inversion algorithms. In particular, LSBA enhanced by GRASP successfully evades detection by the trigger inversion algorithms, with AUCs below 65% for all of them.

## 7 Against Weight Analysis Detection

Weight analysis aims to distinguish backdoored and benign models by analyzing the signals in model parameters. Specifically, the distinguishable signals within the parameters of backdoored and benign models are extracted, often by training a classifier on the parameters of sample models, and then utilized to predict whether any given model is backdoored [48][8][49].
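To make the weight analysis setting concrete before the formal treatment, the following is a minimal sketch of such a meta-classifier \(g(\cdot)\) trained on flattened parameter vectors; the raw-weight features and the logistic-regression classifier are illustrative assumptions, not the specific designs of [48], [8], or [49].

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

def flatten_params(model):
    """Concatenate all parameters of a PyTorch model into one feature vector theta."""
    return torch.cat([p.detach().flatten() for p in model.parameters()]).cpu().numpy()

def train_weight_analyzer(benign_models, backdoored_models):
    """Fit g: R^K -> [0, 1] on labeled parameter vectors (0 = benign, 1 = backdoored)."""
    X = np.stack([flatten_params(m) for m in benign_models + backdoored_models])
    y = np.array([0] * len(benign_models) + [1] * len(backdoored_models))
    return LogisticRegression(max_iter=1000).fit(X, y)

# Usage sketch: g = train_weight_analyzer(benign, backdoored)
# score = g.predict_proba(flatten_params(suspect_model)[None])[0, 1]
```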
In this section, we present theoretical and experimental studies to show that backdoored models poisoned by a GRASP-enhanced attack are not further away from the benign models of the same primary task than the backdoored models poisoned by the same attack without GRASP enhancement. Formally, given a training dataset \(D\) and a backdoor attack, we train \(t\) benign ML models \(\{f_{\theta_{(i)}}|i\in\{1,2,...,t\}\}\) on \(D\), and \(t\) backdoored ML models \(\{\hat{f}_{\theta_{(i)}}|i\in\{1,2,...,t\}\}\) on \(D\) with the given backdoor attack. Each model defines a classifier \(z(f_{\theta}(\cdot)):\mathcal{X}^{m}\rightarrow\mathcal{Y}\), where \(\theta\in\mathbb{R}^{K}\) represents the \(K\) parameters of the model \(f_{\theta}\). A weight analysis method then builds a classifier \(g(\cdot):\mathbb{R}^{K}\rightarrow[0,1]\), which is trained on the dataset \(D_{\theta}=\{(\theta_{(1)},0),(\theta_{(2)},0),...,(\theta_{(t)},0)\}\cup\{(\hat{\theta}_{(1)},1),(\hat{\theta}_{(2)},1),...,(\hat{\theta}_{(t)},1)\}\), where the label "0" indicates the parameters of benign models and "1" indicates the parameters of backdoored models.

Next, we show why GRASP does not reduce the attack's effectiveness in evading weight analysis. Consider a neural network with any initialization \(f_{\theta_{0}}\) that is trained on the benign dataset \(D_{benign}\), the backdoor dataset \(D_{backdoor}\), and the GRASP-enhanced backdoor dataset \(D_{GRASP}\), respectively. Specifically, we denote \(D_{benign}=\{D_{ori},D_{troj}^{*},D_{Aug}^{*}\}\), \(D_{backdoor}=\{D_{ori},\hat{D}_{troj},\hat{D}_{Aug}\}\), and \(D_{GRASP}=\{D_{ori},\hat{D}_{troj},D_{Aug}^{*}\}\), where \(D_{ori}\) is the legitimate training dataset used for training all three models, \(\hat{D}_{troj}\) and \(D_{troj}^{*}\) represent the sets of trigger-inserted samples labeled by the target and the source (legitimate) class, respectively, and \(\hat{D}_{Aug}\) and \(D_{Aug}^{*}\) represent the sets of augmented samples (i.e., the trigger-inserted samples with added noise) labeled by the target and the source (legitimate) classes, respectively. Theorem 5 below indicates that the models trained on \(D_{GRASP}\) (enhanced by GRASP) are not easier to distinguish, by weight analysis, from the benign models (trained on \(D_{benign}\)) than the models trained on \(D_{backdoor}\) (by the backdoor attack without GRASP enhancement).

**Theorem 5**.: _Consider an \(L\)-layer neural network \(f_{\theta}(\cdot):\mathcal{X}^{m}\rightarrow\mathcal{\bar{Y}}\). Given an input \(x\in\mathcal{X}^{m}\), in the \(l^{th}\) layer with \(K_{(l)}\) neurons, where \(1<l<L\), let \(\phi(x)_{k}^{(l)}\) denote the output of the \(k^{th}\) neuron before activation, and \(\sigma(x)_{k}^{(l)}\) denote the output after activation. Let \(\theta_{(p,q)}^{(l)}\) denote the weight connecting the \(q^{th}\) neuron in the \((l-1)^{th}\) layer and the \(p^{th}\) neuron in the \(l^{th}\) layer._

_We assume:_

\[\sum\limits_{i}^{D_{ori}}\sigma(x_{i})_{k}^{(l)}\cdot\sum\limits_{i}^{D_{troj}^{*}}\sigma(x_{i})_{k}^{(l)}\cdot\sum\limits_{i}^{D_{Aug}^{*}}\sigma(x_{i})_{k}^{(l)}\neq 0 \tag{11}\]

_and the square loss function \(C(\theta)=\frac{1}{2n}\sum\limits_{i}^{n}(f(x_{i};\theta)-y_{i})^{2}\) is used for training._
_Then for any set of parameters \(\theta\), the gradients of the loss function w.r.t. any parameter \(\theta_{(p,q)}^{(l)}\) in the model \(f_{\theta}\) on the three datasets satisfy:_

\[\nabla_{D_{benign}}\,\theta_{(p,q)}^{(l)}-\nabla_{D_{backdoor}}\,\theta_{(p,q)}^{(l)}>\nabla_{D_{benign}}\,\theta_{(p,q)}^{(l)}-\nabla_{D_{GRASP}}\,\theta_{(p,q)}^{(l)} \tag{12}\]

The proof of Theorem 5 is given in the Appendix. This theorem shows that the difference between the gradients on the parameters of the backdoored models poisoned by a GRASP-enhanced attack and those of the benign models is always smaller than the corresponding difference for the backdoored models poisoned by the same attack without GRASP enhancement, which implies that it is not easier to distinguish the GRASP-poisoned models from benign models than to distinguish the backdoored models without GRASP enhancement from the benign models.

We then evaluated the effectiveness of GRASP against weight analysis-based backdoor detection, which has been adopted by some teams and performed well in some cases in recent backdoor competitions [1, 2]. Here we selected Trojan Signature (TS) [48], MNTD [9], Activation Clustering (AC) [50] and ABS [6], the representative methods based on weight analysis. We computed their AUCs on 20 models using VGG16, including ten clean models and ten backdoored models, respectively, trained on each of the three datasets (CIFAR-10, MNIST, and GTSRB). The backdoored models were poisoned by the three backdoor attacks with or without the GRASP enhancement, respectively.

\begin{table}
\begin{tabular}{l|cccc|cccc|cccc}
 & \multicolumn{4}{c|}{CIFAR-10} & \multicolumn{4}{c|}{MNIST} & \multicolumn{4}{c}{GTSRB} \\
 & NC & Tabor & K-arm & Pixel & NC & Tabor & K-arm & Pixel & NC & Tabor & K-arm & Pixel \\
\hline\hline
BadNet & 79.1\% & 83.5\% & 84.6\% & 92.3\% & 77.9\% & 81.6\% & 83.2\% & 89.7\% & 79.9\% & 82.0\% & 85.1\% & 89.6\% \\
BadNet* & 54.2\% & 55.3\% & 59.2\% & 79.4\% & 53.3\% & 54.3\% & 59.9\% & 83.4\% & 53.4\% & 56.2\% & 58.2\% & 81.3\% \\
LSBA & 67.2\% & 67.4\% & 71.4\% & 81.6\% & 68.2\% & 68.9\% & 71.0\% & 79.0\% & 69.1\% & 69.5\% & 71.0\% & 87.2\% \\
LSBA* & 54.4\% & 56.0\% & 59.1\% & 63.3\% & 52.6\% & 57.7\% & 56.1\% & 63.5\% & 53.9\% & 52.9\% & 58.9\% & 65.4\% \\
Composite & 67.1\% & 65.3\% & 69.2\% & 84.5\% & 65.5\% & 64.2\% & 69.0\% & 83.3\% & 66.2\% & 68.3\% & 69.5\% & 84.9\% \\
Composite* & 53.2\% & 59.3\% & 61.4\% & 72.4\% & 53.1\% & 52.0\% & 59.3\% & 71.1\% & 55.3\% & 53.1\% & 59.1\% & 72.1\% \\
Latent & 78.5\% & 76.3\% & 79.5\% & 87.4\% & 80.1\% & 79.5\% & 81.6\% & 89.1\% & 76.3\% & 79.3\% & 76.1\% & 85.8\% \\
Latent* & 53.1\% & 55.2\% & 59.2\% & 75.5\% & 53.9\% & 55.1\% & 58.8\% & 74.1\% & 53.1\% & 54.1\% & 57.3\% & 70.6\% \\
DEFEAT & 64.5\% & 63.9\% & 77.5\% & 69.3\% & 67.3\% & 69.0\% & 80.2\% & 71.5\% & 64.2\% & 68.1\% & 79.4\% & 66.1\% \\
DEFEAT* & 59.5\% & 59.4\% & 72.4\% & 62.1\% & 58.7\% & 58.2\% & 71.1\% & 59.4\% & 59.1\% & 58.2\% & 71.2\% & 62.3\% \\
IMC & 68.3\% & 65.0\% & 76.1\% & 79.5\% & 67.3\% & 69.0\% & 74.7\% & 79.5\% & 70.1\% & 72.3\% & 76.1\% & 77.0\% \\
IMC* & 55.5\% & 54.7\% & 72.2\% & 71.5\% & 54.4\% & 53.6\% & 74.3\% & 74.1\% & 66.3\% & 61.0\% & 71.5\% & 77.3\% \\
Adaptive-Blend & 66.3\% & 67.3\% & 67.4\% & 77.4\% & 59.3\% & 63.1\% & 65.1\% & 81.2\% & 64.2\% & 63.4\% & 65.0\% & 78.9\% \\
Adaptive-Blend* & 55.2\% & 54.0\% & 54.4\% & 68.7\% & 52.3\% & 59.1\% & 61.0\% & 75.2\% & – & – & – & – \\
\end{tabular}
\end{table}
Table 2: AUC scores (\(\epsilon_{4}\)) of trigger inversion-based backdoor detection on models poisoned by each attack, without and with ("*") GRASP enhancement.
Here, the three attacks were selected because they were shown to be effective against the weight analysis-based backdoor defense. Due to the space limit, we only present the most important results in Table 3; the complete results are presented in Table 6. In general, the detection ability (AUC) of the weight analysis methods is lower or comparable on the three attacks when they are enhanced by GRASP, indicating that the GRASP enhancement does not reduce the effectiveness of these attacks against the weight analysis backdoor defense.

## 8 Resilience to Backdoor Mitigation

In this section, we evaluated the resilience of GRASP to backdoor defense methods that do not rely on backdoor detection. In our experiments, we considered four types of backdoor defenses (mitigation or unlearning) as summarized in [4]: Preprocessing-based Defenses, Model Reconstruction, Poison Suppression, and Certified Backdoor Defense, which are not based on backdoor detection such as trigger inversion or weight analysis. We selected a total of six typical defense methods across these four types: DeepSweep (DS) [51], Fine-pruning (FP) [19], NAD [20], GangSweep (GS) [21], DBD [49], and RAB [23], and compared their performance in deterring the selected backdoor attacks before and after the GRASP enhancement. We measured the ASRs of backdoors after backdoor mitigation on the models (VGG16) poisoned by the selected backdoor attacks with or without the GRASP enhancement, respectively. Here, for each defense method, we selected a backdoor attack that has been shown to effectively evade the respective defense in previous studies, to demonstrate that the enhancement by GRASP does not reduce its effectiveness in evading the respective defense method. We summarize the results from both experiments in Table 4. Except for special notes, all backdoored models before mitigation achieve an ASR above 95%. Here, the notations are the same as used in Section 6: a symbol "*" is appended to the name of the backdoor attack to indicate the respective attack enhanced by GRASP. Below, we discuss the results of different defense methods in detail.

Adaptive-Blend (AB) with or without GRASP enhancement: the ASR on the model attacked by GRASP-enhanced AB (AB*) is comparable to (for the GTSRB dataset) or lower than (for the CIFAR and MNIST datasets) the ASRs on the models attacked by AB, indicating that the GRASP enhancement does not make the attack easier to mitigate by GangSweep.

**Poison Suppression**. For poison suppression defenses, most methods (e.g., DBD [22]) learn a backbone of a DNN model via self-supervised learning based on training samples without their labels, to capture suspicious training data during the training process. We tested the performance of the DBD [22] defense against models attacked by IMC [33] with and without GRASP enhancement. As shown in Table 4, the ASRs after DBD on the IMC*-attacked models are higher than on the IMC-attacked models, indicating that the GRASP enhancement does not make the attack easier to mitigate by DBD.

**Certified Backdoor Defense**. RAB [23] is a certified defense method that aims to eliminate the backdoor in the target model. We performed an experimental study to confirm that the GRASP enhancement does not produce backdoors that are more easily mitigated by RAB. Table 4 shows the ASR of the injected trigger after RAB on the same models attacked by DFST [16] and by DFST enhanced by GRASP (DFST*).
The ASR on DFST* is comparable to that on the DFST-attacked models (GTSRB has the most significant difference, where DFST* is 3.8% lower than DFST), indicating that models backdoored with the GRASP enhancement are not easier to mitigate by RAB than the models infected by DFST. In summary, we find that for the attacks that effectively evade backdoor mitigation, the GRASP enhancement does not make these attacks easier to mitigate. This means that GRASP enhancement can effectively fend off the existing backdoor defenses even though it is designed for evading trigger inversion. For some mitigation methods, such as DBD against the IMC attack, the GRASP enhancement in fact increases the attack's effectiveness.

## 9 Mitigation and Limitation

GRASP can successfully increase the change rate around the trigger-inserted inputs, effectively reducing the trigger robustness of these inputs. Note that the trigger robustness should not be reduced below the obstructed robustness, since otherwise the backdoor attack may be defended by a straightforward strategy during inference: one can add noise at a level above the robust radius of the backdoor task but below the robust radius of the primary task to each input; as such, the trigger-inserted inputs will not be predicted as the target class, while the prediction on benign inputs will not be changed, indicating the backdoor is removed without affecting the primary task. In practice, we found it easy to reduce the trigger robustness while keeping it above the obstructed robustness, as we observed that in the BadNet backdoor attack the trigger robustness is always much greater than the obstructed robustness (as shown in Fig. 4). Therefore, it is always possible for GRASP to generate backdoors that more effectively evade the trigger-inversion algorithms while not affecting the performance of the primary task.

In Section 4, we assume the model will always give an approximately 100% confident prediction (as the target class) on all trigger-inserted inputs. In practice, when this assumption does not hold, for example in [52], where a low-confidence backdoor is injected into the model by manipulating the logits of the poisoning data, the change rate around a perfect trigger may not be very large. Specifically, the trigger-inserted inputs may be predicted as the target class with the lowest confidence in the backdoored model, which turns out to be a perfect trigger without any constraints on the local Lipschitz constant. For such backdoors, GRASP cannot further enhance their stealthiness.

## 10 Discussion

The magnitude of the additive noise introduced by GRASP is controlled through the parameter \(c\). As explained in Section 4.2, the noise added to a trigger weakens its robustness. The smaller the magnitude (controlled by \(c\)) of the noise is, the less robust the trigger is. We study the effect of \(c\) on the trigger and the backdoored models in Appendix 13.3. We observe that as the noise level \(c\) increases, the trigger robustness increases, and so does the detection effectiveness of NC. This echoes our observation in Section 3 that the detection performance positively correlates with trigger robustness. However, existing inversion-based detection methods are less effective against GRASP-enhanced attacks, as the detection performance remains low under different noise levels. When the noise level is very low, the model accuracy and the attack success rate slightly degrade, because a very small \(c\) makes the trigger robustness degrade below that of the primary task of the target model.
When this occurs, GRASP becomes subject to backdoor mitigation methods such as RAB [23] that nullify the effect of the trigger. Please see more discussion in the Appendix (13.3).

## 11 Related Work

In the literature, many backdoor attacks aim to make the backdoor more stealthy. For data-contaminating backdoor attacks, most recent works focus on utilizing generative models and/or data-specific information to enhance the backdoor stealthiness [13][53][54], e.g., to minimize the difference in the feature representations between the trigger-inserted input (with the target label) and the corresponding benign input (with the source label). Moreover, much research has focused on generating adaptive triggers. For example, [55] employed a backdoor generation network to generate an invisible backdoor pattern for a specific input. On the other hand, many studies focus on contaminating datasets with clean-label data [56][57][58] to evade manual dataset reviews.

## 12 Conclusion

In this paper, we studied why trigger inversion algorithms are so effective in backdoor defense and found that this is because current backdoor attacks inject triggers that are highly robust to noise and thus can be easily reconstructed by gradient-based trigger inversion algorithms. Based on this analysis, we proposed a gradient shaping (GRASP) approach to enhancing backdoor attacks, which reduces the robustness of the injected trigger through data poisoning to evade backdoor defenses that use trigger inversion algorithms. We conducted both theoretical and experimental analyses to show that GRASP enhances the effectiveness of state-of-the-art stealthy backdoor attacks against trigger inversion algorithms while not reducing their effectiveness against other backdoor defenses, including those based on weight analysis.
2310.03554
Digital Twin-Empowered Smart Attack Detection System for 6G Edge of Things Networks
As global Internet of Things (IoT) devices connectivity surges, a significant portion gravitates towards the Edge of Things (EoT) network. This shift prompts businesses to deploy infrastructure closer to end-users, enhancing accessibility. However, the growing EoT network expands the attack surface, necessitating robust and proactive security measures. Traditional solutions fall short against dynamic EoT threats, highlighting the need for proactive and intelligent systems. We introduce a digital twin-empowered smart attack detection system for 6G EoT networks. Leveraging digital twin and edge computing, it monitors and simulates physical assets in real time, enhancing security. An online learning module in the proposed system optimizes the network performance. Our system excels in proactive threat detection, ensuring 6G EoT network security. The performance evaluations demonstrate its effectiveness, robustness, and adaptability using real datasets.
Yagmur Yigit, Christos Chrysoulas, Gokhan Yurdakul, Leandros Maglaras, Berk Canberk
2023-10-05T14:06:04Z
http://arxiv.org/abs/2310.03554v1
# Digital Twin-Empowered Smart Attack Detection System for 6G Edge of Things Networks ###### Abstract As global Internet of Things (IoT) devices connectivity surges, a significant portion gravitates towards the Edge of Things (EoT) network. This shift prompts businesses to deploy infrastructure closer to end-users, enhancing accessibility. However, the growing EoT network expands the attack surface, necessitating robust and proactive security measures. Traditional solutions fall short against dynamic EoT threats, highlighting the need for proactive and intelligent systems. We introduce a digital twin-empowered smart attack detection system for 6G EoT networks. Leveraging digital twin and edge computing, it monitors and simulates physical assets in real time, enhancing security. An online learning module in the proposed system optimizes the network performance. Our system excels in proactive threat detection, ensuring 6G EoT network security. The performance evaluations demonstrate its effectiveness, robustness, and adaptability using real datasets. Internet of Things (IoT), Edge of Things (EoT), 6G, Digital Twins (DT), Cybersecurity. ## I Introduction The rapid proliferation of the Internet of Things (IoT) is reshaping the global technological landscape. By 2030, the number of IoT devices is predicted to nearly double, reaching over 29.4 billion [1]. A significant portion of these devices will be interconnected at the edge, forming what is known as the Edge of Things (EoT) network. In this new era, businesses are transitioning to edge deployments, moving closer to end-users and away from traditional data centres. The EoT network encompasses a distributed computing paradigm, where data processing, storage, and analysis occur closer to the data sources, reducing latency and enhancing real-time responsiveness. This meteoric rise is mirrored by the rapid proliferation of devices connected to the edge, significantly increasing the edge network's workload. As more devices and systems connect to the EoT network, the attack surface for hackers expands, providing them with increased opportunities to exploit vulnerabilities and gain unauthorized access to critical systems. Edge computing is a rapidly growing market, projected to reach USD 3,605.58 billion by 2032 [2]. The global cost of cybercrime is expected to rise by 69.94 per cent by 2028, reaching an alarming figure of USD 13.82 trillion [3]. These figures underscore the importance of edge-based detection. Cyber threats to EoT networks can lead to data breaches, service disruptions, and even physical harm. Traditional security solutions may be insufficient, emphasizing the need for intelligent and adaptive systems capable of proactive threat detection. Digital Twin (DT) technology is promising for bolstering security in 6G EoT networks. It enables real-time monitoring and simulation of physical assets, predicting potential security issues or vulnerabilities [4]. Deploying DT processing at the edge allows timely insights into security threats and informed decision-making to bolster network security and ensure optimized performance [5]. Additionally, 6G applications demand more edge servers and introduce new attack vectors targeting local infrastructure and users [6]. This highlights the need for comprehensive defence strategies in 6G edge networks. Employing multiple detection models can provide a comprehensive solution to address the dynamic nature of network traffic [7]. 
To address these challenges, we present a digital twin-empowered smart attack detection system for 6G edge-of-things networks. Leveraging the capabilities of DT and edge computing, our system aims to establish a robust and resilient defence mechanism against cyber threats from IoT devices and edge connection expansion. In our evaluation, we choose the Long Short-Term Memory Autoencoder (LSTM-AE) model for comparison due to its capacity to capture temporal dynamics. We fine-tuned LSTM-AE hyperparameters through rigorous testing to ensure a robust evaluation of the proposed solution. The key contributions of this article are as follows: * We propose a sophisticated smart attack detection system that integrates DT technology into the edge network, enhancing security and enabling proactive threat detection and response for 6G EoT networks. * Our system utilizes a dynamic and adaptive approach to update feature selection (FS) and classification methods consistently. This approach ensures optimal performance in identifying and mitigating various 6G EoT network attack types. The paper proceeds with a literature review in Section II, followed by the proposed solution Section III and performance evaluation Section IV. We conclude this paper in Section V. ## II Related Work The prominence of IoT and edge technologies has brought about a heightened emphasis on cybersecurity [8]. In this section, we review some relevant works that address similar challenges. Mao _et al._ gave a thorough survey of security threats and countermeasures concerning edge computing, caching, and intelligence regarding 6G network edge [6]. Yao _et al._ explored existing research on intrusion detection systems and proposed innovative detection methods and hybrid system architecture for edge-based industrial-IoT (IIoT) [9]. An anomaly detection framework based on software-defined networking (SDN) is proposed to address the challenge of DDoS attacks on edge devices in distributed and complex environments, utilizing flow information extracted by the edge controller and the GA-XGBoost algorithm for flow classification [10]. Singh _et al._ suggested an edge-based hybrid intrusion detection framework (EHIDF) using machine learning (ML) approaches to detect both known and unknown attacks in the mobile edge computing (MEC) environment [11]. Their EHIDF outperformed previous works with improved accuracy and reduced false alarm rate. Lee _et al._ presented a lightweight machine learning-based intrusion detection system called IMPACT, designed specifically for resource-constrained IoT devices, utilizing deep auto-encoder and feature abstraction with linear support vector machine (SVM) [12]. Another work introduces a novel privacy-preserving and collusion-resilient identification system called FLACI for EoT, utilizing federated learning to share models instead of raw data among edge nodes [13]. It uses a community detection technique to find collusive groups of attackers and a rating-based mechanism to evaluate the trustworthiness of nodes. Zhang _et al._ addressed the challenge of model poisoning attacks on DT model training and proposed an algorithm called MASTER, which utilizes multi-timescale deep Q-learning networks to optimize the scheduling of local training epochs and devices for accurate forecasting in smart parks [14]. This algorithm achieved endogenous security awareness and significantly improved DT model training accuracy and delay in a smart park integrated with DT and 6G edge intelligence. 
Moreover, the ADRiOT framework, an innovative anomaly detection framework for IoT networks utilizing edge computing to uncover potential threats swiftly, is presented [15]. It employs an edge-assisted architecture, enabling the detection module to run locally on the edge, facilitating prompt detection of IoT-based attacks. A multi-edge collaborative mechanism is designed to pool resources in a local network to address resource limitations. Although the mentioned studies significantly contribute to cybersecurity in IoT and edge networks, our proposed system presents a distinctive and innovative approach. It creates a dynamic and adaptive security mechanism by integrating DT technology with edge networks. Through real-time analysis and synchronized virtual representations, our system excels in proactive threat detection and mitigation exhibiting a robust and resilient security posture of 6G EoT networks. ## III Proposed 6G EoT System Model Fig. 1 depicts the proposed 6G EoT system architecture. It combines two networks. The first network, the EoT network, consists of the things, edge, and cloud layers. _The things layer_ forms the foundation of the EoT network, encompassing a myriad of interconnected smart devices and sensors that collect data from the physical world. _The edge layer_ is an intermediary tier between the device and the cloud. It consists of edge computing nodes strategically positioned to the devices they serve. The nodes have relatively higher computational capabilities and perform localized data processing and preliminary analysis. The requirement for constant data transfer to the central cloud is lessened by this layer, which also reduces latency and network congestion. Real-time decision-making, rapid response to emergencies, and low-latency services are all made feasible by edge computing nodes. _The cloud layer_ represents the traditional centralized cloud infrastructure. Large-scale data centres boost significant computational capabilities and extensive storage capacity in this layer. Additionally, this layer manages resource-intensive operations, complicated data analytics, long-term storage, and other duties that may not be appropriate for the edge layer. The edge layer relieves the strain of delivering all data to the cloud, while the cloud layer assures scalability and thorough analysis, resulting in optimum performance. The second network is the DT network. In our proposed system, the digital twin of the edge layer is built. We have meticulously constructed a digital twin representation of the edge layer, wherein the entities present within the edge layer mirror the physical elements of our digital twin network. This alignment ensures the edge layer remains closely intertwined with its virtual counterpart. The second layer is _the twin layer_, which is digital replicas of the edge layer entities. This layer enables real-time synchronization and analysis by establishing a smooth connection between the physical and digital worlds. The smart attack detection mechanism is strategically positioned in our third layer of the DT network. As a result, the architecture's overall resilience and dependability are strengthened. This placement allows the system to identify and respond to possible threats and security breaches proactively. 
### _Smart Attack Detection_ The functioning of our proposed detection system is delineated through the following sequential steps: * Data generated by the edge node is initially passed through YANG models to facilitate standardized representation and seamless integration with the system. * The data is then transmitted to the detection module, where it undergoes further analysis and evaluation. * Within the detection module, a meticulous assessment is conducted using the system's FS and classification methods to identify potential attacks at the edge node. * In the event of an attack being detected, the mitigation module is promptly activated to neutralize the threat while simultaneously alerting the system administrator regarding the security breach. * In cases where no attack is identified, the detection performance module comes into play. It comprehensively investigates the reliability of the system's classification technique. * The system maintains its current model if the classification method's reliability surpasses a predefined threshold, ensuring continuous operation based on the existing setup. * However, if the classification technique's reliability falls below the predetermined threshold, the detection performance module promptly communicates with the online learning module. This module updates the system's FS and classification methods in near real-time, bolstering its adaptive capabilities and ensuring it remains proficient in identifying and mitigating potential attacks effectively. This approach in the proposed detection system enables proactive threat detection, swift mitigation, and continuous improvement, making it a robust and adaptive solution for safeguarding the edge network against potential security breaches. #### Iii-B1 Online Learning Module We used our AutoFS [16] and AutoCM [17] approaches from our previous works in this module. AutoFS includes five feature selection approaches, while AutoCM contains ten classification algorithms. The general workflow of this module is as follows: After taking a notification from the detection performance module, the online learning module imports one thousand records from the YANG models. Since the obtained data is unlabeled, we employed the labelling method to assign labels to the data and a baseline dataset that contains 65% of attack samples since attacks are uncommon from our previous work [17]. Unlabeled data undergoes labelling through the application of the labelling algorithm. This process involves augmenting the dataset with one thousand samples from the baseline dataset. Subsequently, ten classification algorithms are employed to train and test their models using two thousand labelled data samples. Finally, the AutoCM selects the most suitable classification method using the final classification method algorithm. Once the most appropriate classification method is determined, AutoCM transmits this method to AutoFS. AutoFS is responsible for identifying the optimal FS method for the system among five available techniques. The labelled one thousand random data samples are utilized as input for the five FS methods. Each FS method selects the ten most relevant features based on their algorithms. The data, refined by the FS techniques, is employed for training and testing the classification method received from AutoCM. Subsequently, the performance metrics obtained from the five techniques are forwarded to the final FS algorithm. 
This algorithm, in turn, determines the best FS method by optimizing the performance metrics for each technique. Once the best FS method is identified, it serves as the basis for updating the system's FS method and the classification model. This iterative process ensures the system continually adapts to the most effective and efficient FS and classification approach, enhancing its overall performance and accuracy. The ultimate objective of both the final classification method and FS algorithms is to maximize \(\sigma_{i}\) while simultaneously optimizing \(\vartheta_{i}\) as performance metrics. \(\sigma_{i}\) represents a weighted sum of precision and recall for the \(i_{th}\) classification or FS method, whereas \(\vartheta_{i}\) pertains to the detection time associated with the same \(i_{th}\) classification or FS method.

Fig. 1: The digital twin-empowered 6G EoT smart attack detection system architecture.

\[\begin{split} arg\,max\,(\alpha_{i}\sigma_{i}+\beta_{i}\vartheta_{i}),&\ i\in[1,10]\vee[1,5]\\ \sigma_{i}=(0.6)\frac{TP}{TP+FN}+(0.4)\frac{TP}{TP+FP},&\ i\in[1,10]\vee[1,5]\\ \vartheta_{i}=t_{i}^{end}-t_{i}^{start},&\ i\in[1,10]\vee[1,5]\end{split} \tag{1}\]

In Equation 1, _TP_ refers to true positives, _FN_ represents false negatives, and _FP_ stands for false positives. Additionally, \(t_{end}\) denotes the finishing time, while \(t_{start}\) represents the starting time.

#### Iii-A2 Attack Mitigation Module

Upon successful detection of an attack by the smart attack detection mechanism, this module swiftly comes into action to neutralize the identified threat. This module deploys proactive measures to safeguard the edge nodes and the broader system by leveraging the insights the detection process provides. Through real-time analysis of the attack's characteristics, it formulates targeted countermeasures to mitigate its impact effectively. The malicious traffic is blocked, and the related IP address is added to the suspended IP address list. If the attack is classified as high risk, the affected edge node is isolated from the system. If the attack is classified as mid-high risk, the affected edge node is isolated after obtaining system admin approval. By integrating such swift and adaptive mitigation strategies, the system can swiftly respond to emerging threats, preserving the integrity and uninterrupted functionality of the 6G edge-of-things network.

#### Iii-A3 Detection Performance Module

This module is vital in assessing the classification method's efficacy within the digital twin-empowered 6G EoT smart attack detection system. Essential metrics like TP and FN are used to evaluate the detection performance. Determining FN and TP is difficult in real-world scenarios, where ground truth is often unavailable or challenging to establish. We address this concern by leveraging our labelling method in the online learning module, which combines labelled and unlabeled data to estimate these values. The following equation is used to measure the reliability of the classification method:

\[\varphi=1-\frac{FN}{TP+FN} \tag{2}\]

In Equation 2, \(\varphi\) denotes the reliability of the classification method, with a specific focus on the FN metric due to its significance in the data division. In cases where no attack is identified, the detection performance module thoroughly investigates the reliability of the system's classification technique. The verification of classification techniques primarily focuses on assessing the system's ability to maintain a low rate of FP.
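A minimal Python sketch of how the scores in Eq. 1 and the reliability \(\varphi\) of Eq. 2 could be computed is shown below; the candidate-method interface and the sign convention used to combine \(\sigma_{i}\) and \(\vartheta_{i}\) into a single argmax score are illustrative assumptions, not the exact implementation of AutoCM or AutoFS.

```python
import time

def sigma(tp, fn, fp):
    """Weighted recall/precision score from Eq. 1: 0.6 * recall + 0.4 * precision."""
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return 0.6 * recall + 0.4 * precision

def reliability(tp, fn):
    """Classifier reliability from Eq. 2: phi = 1 - FN / (TP + FN)."""
    return 1.0 - fn / (tp + fn) if tp + fn else 0.0

def select_best_method(candidates, X_train, y_train, X_test, y_test, alpha=1.0, beta=-0.01):
    """Score each candidate and return the argmax of alpha*sigma_i + beta*theta_i.

    `candidates` maps a name to an object with fit/predict; a negative beta penalizes
    slow methods (an assumed convention for trading off accuracy against detection time).
    """
    best_name, best_score = None, float("-inf")
    for name, clf in candidates.items():
        start = time.perf_counter()
        pred = clf.fit(X_train, y_train).predict(X_test)
        elapsed = time.perf_counter() - start                 # theta_i in Eq. 1
        tp = sum(1 for p, t in zip(pred, y_test) if p == 1 and t == 1)
        fn = sum(1 for p, t in zip(pred, y_test) if p == 0 and t == 1)
        fp = sum(1 for p, t in zip(pred, y_test) if p == 1 and t == 0)
        score = alpha * sigma(tp, fn, fp) + beta * elapsed
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```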
While TP and FN may not change, our system continuously monitors network traffic and evaluates the alerts generated. The module's decision-making process involves comparing the classification method's reliability against a predefined threshold. The reliability threshold for our detection scheme was determined using an adaptive thresholding technique, considering critical factors such as the observed rates of FP and FN over time. When the system shows an elevated FP rate, the threshold is dynamically adjusted to be more stringent, effectively mitigating false alarms. Conversely, if FN rates are a concern, the threshold is appropriately relaxed to enhance detection sensitivity. This approach allows us to maintain an optimal balance between FP and FN, ensuring the reliability and effectiveness of the detection scheme. The system continues to run using its present model if reliability exceeds the specified threshold, ensuring ongoing and consistent performance based on the current configuration. However, when the classification technique's reliability falls below the predetermined threshold, signalling potential limitations or changes in the system's operational environment, the detection performance module promptly initiates communication with the online learning module. This facilitates near real-time updates to the system's FS and classification methods, empowering the system with adaptive capabilities to identify and mitigate potential attacks effectively. By dynamically altering its defence measures, the system improves its overall security and resilience in the constantly changing EoT environment and remains adept at responding to new threats.

## IV Performance Evaluation

We built a simple edge network architecture using NS-3 [18]. This network has twelve edge devices in the things layer and two edge nodes in the edge layer. We used the Microsoft Azure DT (ADT) platform to build twin graphs of the edge nodes [19]. We investigated the performance of our system using the Edge-IIoTset [20], [21] and ToN-IoT [22], [23] datasets. The Edge-IIoTset dataset is specifically designed for evaluating IoT and IIoT applications and consists of fourteen attacks targeting connectivity protocols. On the other hand, the ToN-IoT dataset was created to assess the effectiveness and efficiency of AI-based cybersecurity applications tailored for next-generation IoTs and industrial IoTs. We randomly selected a specific number of samples from these datasets, as shown in Table I. LSTM networks are well-suited for capturing temporal dynamics; therefore, we chose an LSTM-based network as the comparison baseline for our intrusion detection work. We conducted a comparison between our proposed solution (PS) and the LSTM-AE utilized in [15]. To this end, we employed an autoencoder with two encoder layers and two decoder layers. In both the encoder and decoder components, the Dense layer was succeeded by batch normalization and the LeakyReLU activation function. Subsequently, the decoder output features were passed to the LSTM model for further processing. The selection of parameters for the LSTM-based model was a result of systematic experimentation and optimization. We conducted tests, cross-validation, and performance evaluations to arrive at the configurations that provided the best trade-off between model complexity and predictive accuracy. This ensured that the LSTM-based baseline was both effective and efficient, enabling a fair comparison with our PS.
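For reference, the baseline's structure might be sketched roughly as follows in PyTorch; the layer sizes, latent dimension, and the final classification head are illustrative assumptions and not the exact configuration used in our experiments or in [15].

```python
import torch
import torch.nn as nn

def block(in_dim, out_dim):
    # Dense layer followed by batch normalization and LeakyReLU,
    # as used in both the encoder and decoder components.
    return nn.Sequential(nn.Linear(in_dim, out_dim),
                         nn.BatchNorm1d(out_dim),
                         nn.LeakyReLU())

class LSTMAE(nn.Module):
    """Sketch of an LSTM-AE baseline: 2-layer encoder, 2-layer decoder,
    decoder features passed to an LSTM, then a binary attack/benign head."""
    def __init__(self, n_features, latent=16, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(block(n_features, 64), block(64, latent))
        self.decoder = nn.Sequential(block(latent, 64), block(64, n_features))
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, n_features)
        recon = self.decoder(self.encoder(x))   # reconstructed features
        out, _ = self.lstm(recon.unsqueeze(1))  # treat features as a length-1 sequence
        return torch.sigmoid(self.head(out[:, -1]))

model = LSTMAE(n_features=40)
scores = model(torch.randn(8, 40))              # attack probabilities for 8 flows
```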
We employed sensitivity as a performance metric, which represents the ratio of correctly identified attack samples to the total number of samples that should have been identified as attacks. Initially, we conducted a separate evaluation of the performance results for each dataset. As illustrated in Fig. 2 and Fig. 3, our solution demonstrates superior performance compared to the other approach. After that, we investigated the detection performance. We trained the initial model on one entire dataset and then tested it on the other dataset. We sent the different attacks in the order given in Table II to test the performance of our solution's AutoCM and AutoFS.

Fig. 3: The performance comparison of the ToN-IoT dataset.

Fig. 2: The performance comparison of the Edge-IIoT dataset.

Table II clearly indicates that our solution exhibits enhanced robustness and adaptability to different attack types. Moreover, it outperforms LSTM-AE in terms of attack detection rate, indicating its heightened effectiveness and accuracy in identifying potential threats. These achievements underscore our system's ability to swiftly identify and neutralize potential threats, bolstering the overall security posture of 6G Edge of Things networks. Furthermore, the observed superiority of our solution in handling diverse attack scenarios signifies its potential for real-world IoT and IIoT environments, where dynamic security challenges are commonplace. These positive outcomes strongly validate the efficacy of our digital twin-empowered smart attack detection system as a proactive and efficient cybersecurity solution, offering a path towards enhanced security and resilience in 6G EoT networks. Furthermore, we assessed the impact of the DT on network security by quantifying the reduction in successful attacks and the improvement in incident response times resulting from its implementation. We also scrutinized its resource utilization to ensure it operates efficiently within network constraints while delivering significant security enhancements. These findings underscore the DT's effectiveness as a potent tool for fortifying network security in 6G EoT environments.

## V Conclusion

In this paper, we introduced a digital twin-empowered smart attack detection system for 6G Edge of Things networks. Integrating digital twin technology and edge computing enables real-time monitoring and proactive threat detection, bolstering the security of IoT environments. Our system's online learning module ensures continuous improvement by updating feature selection and classification methods, making it adaptable to dynamic attack landscapes. Performance evaluations using real datasets indicate the system's superior performance. The results highlight the system's effectiveness, robustness, and adaptability in detecting diverse attack types, making it a promising solution for securing 6G edge-of-things networks.

## Acknowledgment

Yagmur Yigit would like to thank the Google DeepMind Scholarship Programme for their support.
2305.07584
Proactive Content Caching Scheme in Urban Vehicular Networks
Stream media content caching is a key enabling technology to promote the value chain of future urban vehicular networks. Nevertheless, the high mobility of vehicles, intermittency of information transmissions, high dynamics of user requests, limited caching capacities and extreme complexity of business scenarios pose an enormous challenge to content caching and distribution in vehicular networks. To tackle this problem, this paper aims to design a novel edge-computing-enabled hierarchical cooperative caching framework. Firstly, we profoundly analyze the spatio-temporal correlation between the historical vehicle trajectories and user requests and construct the system model to predict the vehicle trajectory and content popularity, which lays a foundation for mobility-aware content caching and dispatching. Meanwhile, we probe into privacy protection strategies to realize a privacy-preserved prediction model. Furthermore, based on the trajectory and popular content prediction results, a content caching strategy is studied, and adaptive and dynamic resource management schemes are proposed for hierarchical cooperative caching networks. Finally, simulations are provided to verify the superiority of our proposed scheme and algorithms. The results show that the proposed algorithms effectively improve the performance of the considered system in terms of hit ratio and average delay, and narrow the gap to the optimal caching scheme compared with traditional schemes.
Biqian Feng, Chenyuan Feng, Daquan Feng, Yongpeng Wu, Xiang-Gen Xia
2023-05-12T16:27:30Z
http://arxiv.org/abs/2305.07584v1
# Proactive Content Caching Scheme in Urban Vehicular Networks

###### Abstract

Stream media content caching is a key enabling technology to promote the value chain of future urban vehicular networks. Nevertheless, the high mobility of vehicles, intermittency of information transmissions, high dynamics of user requests, limited caching capacities and extreme complexity of business scenarios pose an enormous challenge to content caching and distribution in vehicular networks. To tackle this problem, this paper aims to design a novel edge-computing-enabled hierarchical cooperative caching framework. Firstly, we profoundly analyze the spatio-temporal correlation between the historical vehicle trajectories and user requests and construct the system model to predict the vehicle trajectory and content popularity, which lays a foundation for mobility-aware content caching and dispatching. Meanwhile, we probe into privacy protection strategies to realize a privacy-preserved prediction model. Furthermore, based on the trajectory and popular content prediction results, a content caching strategy is studied, and adaptive and dynamic resource management schemes are proposed for hierarchical cooperative caching networks. Finally, simulations are provided to verify the superiority of our proposed scheme and algorithms. The results show that the proposed algorithms effectively improve the performance of the considered system in terms of hit ratio and average delay, and narrow the gap to the optimal caching scheme compared with traditional schemes.

## I Introduction

With the gradual improvement of the degree of autonomous driving, the demand for in-vehicle entertainment services has been increasing. However, the high mobility of vehicles, intermittence of information transmission, high dynamics of popular content, limitations of cache capacity and the complexity of business scenarios bring great challenges to hot content prediction, multi-node collaborative cache distribution and service quality optimization. Effective caching systems have attracted numerous scholars' attention in terms of vehicle trajectory prediction, popular content prediction, content placement and content delivery strategies.

### _Related Works_

First of all, vehicle trajectory prediction plays a critical role in caching systems due to the high speed of vehicles and the limited communication range of vehicle-to-infrastructure (V2I) links. In [1], a Gaussian model and a Long Short-Term Memory (LSTM) model are proposed to predict vehicle trajectories. Extending [1], many variants of the Markov model have been proposed for location prediction, including the \(N\)-order Markov model, the hidden Markov model, and the variable-order Markov model. Specifically, the \(N\)-order Markov model [2] and hidden Markov model [3] utilize the state transition matrix to predict the vehicles' future locations by computing the transition probability. In [4, 5], variable-order Markov models are designed to solve the prediction problem with the help of Prediction by Partial Match (PPM) and Probabilistic Suffix Tree (PST) algorithms. However, the above-mentioned algorithms fail to intelligently distinguish the importance of the trajectory data in different historical periods.
As traditional passive caching schemes are becoming more and more unsuitable for the era of information explosion, proactive caching based on popular content prediction has been proposed as a promising solution, in which recommendation systems [22, 28] are used to model the relationship between users and content and improve the prediction accuracy of user preferences [6]. Recently, a federated learning-based method has been used to improve the performance of a context-aware popularity prediction scheme [7]. However, the above-mentioned works ignore the impact of user mobility. In addition, an LSTM-based two-tier cache architecture is proposed to cope with user mobility, in which high-speed and low-speed users are served by macro stations and small base stations, respectively, so as to avoid frequent switching of highly dynamic users [8]. Although existing research has made good progress on the popular content prediction problem, it ignores the protection of user privacy. As for the content caching mechanism in vehicular networks, improving the utilization of the storage space of road-side units (RSUs) has attracted the attention of researchers. In [9], the authors assume that the vehicle user requests are already known by a cache-aided network, and then propose a novel distributed caching strategy based on Gibbs sampling to optimize the cache hit probability. In [10], the block matrix method is used to extract users' preferences based on their historical interests in videos and select an appropriate RSU to cache the corresponding content. Besides, deep learning methods, such as the Q-learning algorithm, are also proposed to effectively improve the quality of service (QoS) within limited resources [11, 29, 30]. In [12], multi-agent reinforcement learning (MARL) is adopted by all wireless network nodes to collaboratively optimize the distributed caching strategy and maximize the network performance, which is measured by the average cache hit probability. Finally, existing works related to content distribution mechanisms in vehicular networks can be divided into three categories: mechanisms based on Vehicle-to-Infrastructure (V2I) communications [13], on Vehicle-to-Vehicle (V2V) communications [14], and on collaborative V2I and V2V communications [15]. It is worth noting that most related works assume that vehicle trajectory data and user requests are known, and lack consideration of video coding characteristics, such as coding structure and bit rate. In brief, many state-of-the-art works have been carried out to improve the performance of multimedia content distribution; however, they lack comprehensive consideration of the inherent characteristics of vehicular networks, video coding characteristics, user service demands, and the difference analysis of business scenarios.

### _Motivation and Contributions_

Motivated by these issues, we aim to integrate edge computing into vehicular networks, and propose a framework of content caching and distribution to improve the quality of service (QoS), protect user privacy and also achieve a high resource utilization efficiency. To this end, we first build an integrated service framework for vehicle trajectory prediction and privacy-preserved popular content prediction based on deep analysis of the spatio-temporal correlation between vehicle trajectories and user requests. Furthermore, we design a mobility-aware and business-adaptive algorithm for a collaborative caching scheme based on optimization algorithms.
The main contributions of this paper are summarized as follows.

1. We propose a Hierarchical Cooperative Caching Network (HCCN) architecture which consists of three layers, namely, the central cloud service layer, edge computing layer, and terminal equipment layer. The periodical processing procedure can be distinguished as three main execution phases: trajectory prediction, content popularity prediction, and content caching, which can adapt to the dynamic properties of vehicular ad hoc network (VANET) topologies, provide real-time content popularity prediction, and reduce communication costs. Furthermore, a pipeline scheduling mechanism is proposed for parallel execution of prediction and transmission, which can reduce the service delay and improve the quality of experience (QoE).
2. We make the most of the spatio-temporal correlation of historical trajectory data and design an LSTM-based model to predict the future residence time in each RSU. Specifically, the model extracts daily features from the daily trajectory, and fuses the daily features into the historical feature information. Finally, the future trajectory prediction module aims at predicting the future residence time in each RSU by combining the intraday trajectory and the historical feature information.
3. Since recent behavior can reveal vehicles' future preferences to a certain extent, we modify the self-attentive sequential recommendation (SASRec) model to predict future content requests. Furthermore, with the growing concern about data privacy and the consideration of increasing on-board training data, we propose a Hierarchical Federated Learning (HFL)-based structure to train the SASRec network for each cluster. Hence, the content popularity of each RSU and macro base station (MBS) can be naturally derived based on the requirements of all connecting vehicles.
4. Based on the aforementioned trajectory prediction and content popularity prediction results, we formulate an optimization problem for dynamic cooperative content caching. However, it is a large-scale 0-1 constrained problem, which is NP-hard in nature. To tackle it efficiently, we propose an adaptive gradient descent algorithm to enhance the performance of content caching, which is verified to perform well by our simulation results.

The rest of this paper is organized as follows. Section II introduces our proposed HCCN architecture, which is utilized to establish the low-latency network model in Section III. Section IV depicts a vehicle trajectory prediction scheme. Section V proposes an HFL-based SASRec network to predict content popularity. Section VI integrates trajectory prediction and content popularity prediction into a dynamic and cooperative content caching scheme, and proposes an adaptive gradient descent algorithm to solve the large-scale 0-1 constrained problem. Section VII provides some simulation results to evaluate the performances of our proposed schemes and algorithms. Section VIII concludes the paper. The notations used in this paper are as follows. Boldface lowercase and uppercase letters, such as \(\mathbf{a}\) and \(\mathbf{A}\), are used to represent vectors and matrices, respectively. Superscript \(T\) stands for the transpose, \(\mathbb{R}\) is the set of real numbers, \(\nabla L\) denotes the gradient of \(L\), and \(\left(\nabla L\right)_{\mathbf{x}}\) represents its \(\mathbf{x}\)-component.
## II Overall Design of the HCCN Architecture

In this section, we will introduce the overall design of our proposed HCCN architecture and the periodical processing procedure in detail.

### _Content Retrieving Scheme_

A novel network-level content caching protocol is designed first. As shown in Fig. 1, the hierarchical architecture consists of the following three layers:

* Cloud layer: it contains content providers, such as TikTok and YouTube, and cloud computing servers that provide contents and computing services.
* Edge computing layer: it contains all edge nodes, namely, RSUs, macro base stations (MBSs) and baseband unit (BBU) pools, where each MBS and the multiple RSUs within its coverage area form a cluster. In terms of communication, all MBSs can connect to each other and to the central cloud through optical fibers, and communicate with the RSUs in their cluster through wireless links. As for caching, each RSU sends a caching list including the identification and location of its cached contents to its connecting MBS; all MBSs merge the collected caching lists and exchange them with each other. By this means, all MBSs hold the same caching content lists containing the identification and location of contents cached by all RSUs and MBSs, which conveniently facilitates mutual retrieval of cached content.
* Terminal equipment layer: it contains all the vehicles and intelligent devices that need to be served along the road.

The proposed HCCN intends to maximize the network performance by leveraging the vertical cooperation among the MBSs and their connecting RSUs, the horizontal cooperation among the local RSU and its neighbor RSUs, and also among the local MBS and its neighbor MBSs. Specifically, when a vehicle sends a content request to its local RSU, the local RSU looks through its own cache list to check whether the requested content is stored or not. If cached, the requested content is transmitted directly to the vehicle from the local RSU. Otherwise, the local RSU asks the local MBS to check its caching list, which contains the identification and location of contents cached by all RSUs and MBSs. The local MBS searches for the required content in the following order: firstly, the local MBS and the RSUs in the same cluster; then, other MBSs and RSUs outside the current cluster; lastly, the cloud. If cached, the local MBS fetches the requested content from the source node and then forwards it to the target RSU. Once received, the local RSU sends it to the target vehicle. The requested content can thus be provided from caches at either MBSs or RSUs, which greatly reduces the congestion between the target vehicles and the core network. Otherwise, the local MBS needs to send the request to the Internet and obtain the content from the source (i.e., the content provider) in the cloud. In our HCCN framework, serving massive requests for the same hot contents at the edge not only greatly relieves the burden on the core network, but also reduces the vehicles' service delay and improves their QoE.

### _Pipeline Scheduling Mechanism_

As shown in Fig. 2, we propose a parallel pipeline scheduling mechanism, where the content service is executed periodically based on three stages: prediction, caching, and transmission. Firstly, the prediction phase includes vehicle trajectory and content popularity predictions.
Trajectory prediction intends to predict the residence time in each RSU for each vehicle, while content popularity prediction aims at discerning the popular contents that will be required by the vehicles in the near future. Secondly, in the caching stage, all RSUs and MBSs implement the mobility- and popularity-aware proactive edge caching scheme to pursue a higher network resource utilization and provide users with better QoE. Finally, based on the cached contents and the vehicles' characteristics, the edge computing layer performs an adaptive distribution mechanism for multimedia content service by dynamically configuring time-frequency resources. There are mainly two typical situations of real-time service in the transmission stage: i) the vehicle sends a request to the local RSU, and the local RSU then has the ability to obtain and send the requested content as soon as possible; ii) the local RSU can proactively provide some personalized contents for each vehicle based on its cached contents. Based on the results of the prediction stage and the caching stage, the content retrieval delay in both situations is greatly reduced in the transmission stage. After making predictions and cache deployment decisions, edge nodes can execute the prediction phase of the next episode in parallel with the content transmission phase of the current episode, which takes full advantage of the spatio-temporal correlation among trajectory data and content popularity. By this means, the proposed mechanism achieves a shorter service delay and higher efficiency compared with the traditional serial scheduling manner.

Fig. 1: An example of the edge computing-enabled content caching system in hierarchical vehicular networks; each edge node holds a caching content list and performs collaborative caching with its connecting instructions and devices in a federated learning manner.

## III System Model

To effectively leverage the advantages of the HCCN architecture described in Section II, in this section we formulate a cooperative caching problem to minimize the content prefetching latency on urban roads.

### _Network Architecture_

As shown in Fig. 1, we consider a vehicular edge computing network with three different types of edge caching nodes, including MBSs, RSUs, and vehicle nodes. Let \(\mathcal{M}=\{1,2,\cdots,M\}\), \(\mathcal{R}=\{1,2,\cdots,R\}\), and \(\mathcal{V}=\{1,2,\cdots,V\}\) represent the index sets of MBSs, RSUs, and vehicle nodes, respectively. In the urban road network, RSUs are densely deployed to serve the high traffic flow. According to physical locations, MBS \(m\) can manage a group of RSUs, \(\mathcal{R}_{m}\subseteq\mathcal{R}\), within its coverage area. In this work, we define a cluster as one MBS and its connecting RSUs. Since the transmission cost of cellular communications is much higher than that of vehicle-to-RSU (V2R) communications, vehicles prefer to retrieve contents from nearby RSUs. Let \(\mathcal{F}=\{1,2,\cdots,F\}\) denote the index set of files provided by content providers, where each content \(f\in\mathcal{F}\) has a size of \(s_{f}\). Since MBSs and RSUs are equipped with limited storage capacities, let \(S_{m}^{\text{MBS}}\) and \(S_{r}^{\text{RSU}}\) denote the caching spaces of MBS \(m\) and RSU \(r\), respectively, and let \(\mathcal{F}_{m}^{\text{MBS}}\subseteq\mathcal{F}\) and \(\mathcal{F}_{r}^{\text{RSU}}\subseteq\mathcal{F}\) denote the content sets stored by MBS \(m\) and RSU \(r\), respectively.
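Before formalizing the caching decisions, the retrieval order of Section II can be summarized in a short Python sketch; the per-node cache sets and the cluster map below are illustrative stand-ins for the caching content lists exchanged among edge nodes.

```python
def retrieval_source(f, r, m, rsu_cache, mbs_cache, cluster):
    """Return where RSU r (in the cluster of MBS m) fetches content f, following
    the lookup order of Section II: local RSU -> local MBS -> RSUs in the same
    cluster -> other MBSs -> RSUs outside the cluster -> cloud."""
    if f in rsu_cache[r]:
        return "local_rsu"
    if f in mbs_cache[m]:
        return "local_mbs"
    if any(f in rsu_cache[r2] for r2 in cluster[m] if r2 != r):
        return "rsu_same_cluster"
    if any(f in mbs_cache[m2] for m2 in mbs_cache if m2 != m):
        return "other_mbs"
    if any(f in rsu_cache[r2] for r2 in rsu_cache if r2 not in cluster[m]):
        return "rsu_other_cluster"
    return "cloud"

# Illustrative topology: two clusters, each MBS managing two RSUs.
cluster   = {0: [0, 1], 1: [2, 3]}
rsu_cache = {0: {5}, 1: {7}, 2: {9}, 3: set()}
mbs_cache = {0: {3}, 1: {5, 9}}
print(retrieval_source(9, 0, 0, rsu_cache, mbs_cache, cluster))  # -> "other_mbs"
```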
### _Content Caching Policy_ To meet the requirements of the transmission stage at time slots \(\mathcal{T}=\{1,2,\cdots,T\}\) in Fig. 2, the contents should be collaboratively cached by target RSUs and MBSs in advance. Let \(\mathbf{x}_{r}=\left(x_{r,1},x_{r,2},\cdots,x_{r,F}\right)^{T}\) denote the caching decision vector of RSU \(r\), where \(x_{r,f}\in\{0,1\}\) is a Boolean variable to indicate caching placement decision, namely, \(x_{r,f}=1\) if file \(f\) is cached by RSU \(r\), otherwise, \(x_{r,f}=0\). Since the total size of cached files cannot exceed the entire storage capacity of RSUs, \(\mathbf{x}_{r}\) must satisfy the following constraint: \[\sum_{f\in\mathcal{F}}x_{r,f}s_{f}\leq S_{r}^{\text{RSU}}. \tag{1}\] Similarly, let vector \(\mathbf{y}_{m}=\left(y_{m,1},y_{m,2},\cdots,y_{m,F}\right)^{T}\) represent the caching decision of MBS \(m\), which should satisfy the following constraints: \[\sum_{f\in\mathcal{F}}y_{m,f}s_{f}\leq S_{m}^{\text{MBS}}. \tag{2}\] The content retrieval delay is generally positively correlated with the distance from the source node to the destination node. Define \(\gamma^{CM},\gamma^{MR}\), and \(\gamma^{MM}\) as the transmission rate of backhaul links between the cloud and MBS, fronthaul links between the MBS and its connecting RSU, and the links between two MBSs, respectively. Apparently, \(\gamma^{MR},\gamma^{MM}\gg\gamma^{CM}\). The total content retrieval delay is given by: \[\gamma_{r,f}=\gamma_{r,f}^{0}+\gamma_{r,f}^{1}+\gamma_{r,f}^{2}+\gamma_{r,f}^ {3}+\gamma_{r,f}^{4}+\gamma_{r,f}^{5}, \tag{3}\] where \(\gamma_{r,f}^{i}\) denotes the retrieval delay of content \(f\) fetched by RSU \(r\) from its own cache if \(i=0\), from its local MBS if \(i=1\), from other RSUs within the same cluster if \(i=2\), from the other MBSs if \(i=3\), from other RSUs outside its cluster if \(i=4\), and from the cloud network if \(i=5\). Specifically, they are determined by the size of content, the transmission rate of all links, and the caching decision \(x_{r,f}\) and \(y_{r,f}\): \[\gamma_{r,f}^{0} =0,\quad\gamma_{r,f}^{1}=\frac{s_{f}}{\gamma^{MR}}\left(1-x_{r,f }\right)y_{m,f},\] \[\gamma_{r,f}^{2} =2\frac{s_{f}}{\gamma^{MR}}\left(1-x_{r,f}\right)\left(1-y_{m,f}\right)\] \[\left[1-\prod_{r^{\prime}\in\mathcal{R}_{m},r^{\prime}\neq r} \left(1-x_{r^{\prime},f}\right)\right],\] \[\gamma_{r,f}^{3} =\left(\frac{s_{f}}{\gamma^{MM}}+\frac{s_{f}}{\gamma^{MR}}\right) \left(1-y_{m,f}\right)\] \[\prod_{r^{\prime}\in\mathcal{R}_{m}}(1-x_{r^{\prime},f})\left[1- \prod_{m^{\prime}\neq m}\left(1-y_{m^{\prime},f}\right)\right], \tag{4}\] \[\gamma_{r,f}^{4} =\left(\frac{s_{f}}{\gamma^{MM}}+2\frac{s_{f}}{\gamma^{MR}} \right)\prod_{r^{\prime}\in\mathcal{R}_{m}}(1-x_{r^{\prime},f})\] \[\prod_{m^{\prime}\in\mathcal{M}}(1-y_{m^{\prime},f})\left[1- \prod_{r^{\prime}\notin\mathcal{R}_{m}}\left(1-x_{r^{\prime},f}\right)\right],\] Fig. 2: An example of the parallel pipeline scheduling mechanism for prediction, caching and transmission, the edge nodes could execute the new prediction phase in parallel with current content transmission phase. 
\[\gamma_{r,f}^{5}=\left(\frac{s_{f}}{\gamma^{CM}}+\frac{s_{f}}{\gamma^{MR}}\right) \prod_{r^{\prime}\in\mathcal{R}}\left(1-x_{r^{\prime},f}\right)\prod_{m^{\prime} \in\mathcal{M}}\left(1-y_{m^{\prime},f}\right),\] If the content is cached in the local RSU, then the local RSU forwards it to the vehicle directly, thus the delay is 0; If the content \(f\) is not cached in the local RSU but cached in the local MBS, i.e., \(1-x_{r,f}=1\) and \(y_{m,f}=1\), then \(\gamma_{r,f}^{1}>0\) from content retrieving link MBS-RSU and \(\gamma_{r,f}^{i}=0,i\neq 1\); Similarly, the content \(f\) fetched in other nodes can be represented by \(\gamma_{r,f}^{i},i=2,3,4\). Substituting (4) into (3), the total content retrieval delay can be rewritten as \[\gamma_{r,f}=\gamma^{MR}s_{f}\bigg{[}\left(1-x_{r,f}\right)+\left( 1-x_{r,f}\right)\left(1-y_{m,f}\right) \tag{5}\] \[+\prod_{r^{\prime}\in\mathcal{R}_{m}}\left(1-x_{r^{\prime},f} \right)\prod_{m^{\prime}\in\mathcal{M}}\left(1-y_{m^{\prime},f}\right)\bigg{]}\] \[+\left(\gamma^{MM}-\gamma^{MR}\right)s_{f}\prod_{r^{\prime}\in \mathcal{R}_{m}}\left(1-x_{r^{\prime},f}\right)\left(1-y_{m,f}\right)\] \[+\left(\gamma^{CM}-\gamma^{MM}-\gamma^{MR}\right)s_{f}\prod_{r^{ \prime}\in\mathcal{R}}\left(1-x_{r^{\prime},f}\right)\] \[\prod_{m^{\prime}\in\mathcal{M}}\left(1-y_{m^{\prime},f}\right).\] Since \(\gamma^{MR}\), \(\gamma^{CM}\), \(\gamma^{MM}\), and \(s_{f}\) are fixed in a system, the above function \(\gamma_{r,f}\) is a function of variables \(\mathbf{x}\) and \(\mathbf{y}\). Let binary variables \(\theta_{v,r,t}^{1}\in\left\{0,1\right\}\) and \(\theta_{v,f,t}^{2}\in\left\{0,1\right\}\) indicate whether user \(v\) enters the coverage of RSU \(r\) at time slot \(t\), and whether user \(v\) requests for the video \(f\) at time slot \(t\), respectively. Therefore, the transmission cost of user \(v\) retrieving content \(f\) by RSU \(r\) at time slot \(t\) is \(L_{u,r,f,t}\triangleq\theta_{v,r,t}^{1}\theta_{v,f,t}^{2}\gamma_{r,f}\). Assuming that user interests in a certain content are independent to their locations, then the expected cost of caching content \(f\) by RSU \(r\) is given by \[\mathbb{E}\left(L_{v,r,f,t}\right)=\mathbb{E}\left[\theta_{v,r,t}^{1}\right] \mathbb{E}\left[\theta_{v,f,t}^{2}\right]\gamma_{r,f}, \tag{6}\] where \(\mathbb{E}\left[\theta_{v,r,t}^{1}\right]\) and \(\mathbb{E}\left[\theta_{v,f,t}^{2}\right]\) can be considered as the probabilities of vehicle \(v\) staying in the coverage of RSU \(r\) and retrieving the content \(f\) at time slot \(t\), respectively. We consider the users' interests will not change in a short time, i.e., \(\mathbb{E}\left[\theta_{v,f,t}^{2}\right]\) keeps the same in a scheduling duration. For simplicity, we drop the time slot subscript \(t\) and the expected cost is restated as follows: \[\mathbb{E}\left(L_{v,r,f,t}\right)=\mathbb{E}\left[\theta_{v,r,t}^{1}\right] \mathbb{E}\left[\theta_{v,f}^{2}\right]\gamma_{r,f}. \tag{7}\] Furthermore, the total expected caching cost of the system is shown as follows: \[W\left(\mathbf{x},\mathbf{y}\right)\triangleq\sum_{r\in\mathcal{ R}}\sum_{f\in\mathcal{F}}\sum_{v\in\mathcal{V}}\sum_{t\in\mathcal{T}}\mathbb{E} \left(L_{v,r,f,t}\right) \tag{8}\] \[=\sum_{r\in\mathcal{R}}\sum_{f\in\mathcal{F}}\sum_{v\in\mathcal{V}} \mathbb{E}\left[\theta_{v,f}^{2}\right]\gamma_{r,f}\sum_{t\in\mathcal{T}} \mathbb{E}\left[\theta_{v,r,t}^{1}\right],\] where \(\sum_{t\in\mathcal{T}}\mathbb{E}\left[\theta_{v,r,t}^{1}\right]\) represents the residence time in RSU \(r\) of vehicle \(v\). 
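For intuition, the sketch below evaluates the per-content retrieval delay following the six cases of (4) and accumulates the expected cost of (8) for a given caching decision; the dictionary-based inputs (content sizes, link rates, residence times, and request probabilities) are illustrative stand-ins for the quantities estimated in Sections IV and V.

```python
def retrieval_delay(f, r, m, s, x, y, cluster, rate_mr, rate_mm, rate_cm):
    """Delay of RSU r (managed by MBS m) fetching content f, following the six
    cases of Eq. (4): own cache, local MBS, same-cluster RSU, other MBS,
    other-cluster RSU, and finally the cloud."""
    if x[r][f]:
        return 0.0
    if y[m][f]:
        return s[f] / rate_mr
    if any(x[r2][f] for r2 in cluster[m] if r2 != r):
        return 2 * s[f] / rate_mr
    if any(y[m2][f] for m2 in y if m2 != m):
        return s[f] / rate_mm + s[f] / rate_mr
    if any(x[r2][f] for r2 in x if r2 not in cluster[m]):
        return s[f] / rate_mm + 2 * s[f] / rate_mr
    return s[f] / rate_cm + s[f] / rate_mr

def expected_cost(files, x, y, cluster, mbs_of, s, residence, p_req, rates):
    """Expected caching cost W(x, y) of Eq. (8): request probability times
    retrieval delay, weighted by each vehicle's residence time in each RSU.
    `rates` is the tuple (rate_mr, rate_mm, rate_cm)."""
    total = 0.0
    for r in x:
        m = mbs_of[r]
        for f in files:
            d = retrieval_delay(f, r, m, s, x, y, cluster, *rates)
            total += d * sum(p_req[v][f] * residence[v][r] for v in p_req)
    return total
```

Any feasible binary placement, for example one obtained by rounding the relaxed solution of Section VI, can be plugged into these helpers to compare candidate cache deployments.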
From (5) and (8), one can see that the above total expected caching cost \(W\left(\mathbf{x},\mathbf{y}\right)\) only depends on \(\mathbf{x}\) and \(\mathbf{y}\). ### _Problem Formulation_ The caching contents update regularly with a relatively long cycle, e.g., 30 min in [16], several hours in [17], and one day in [18]. In this paper, we aim to design a cooperative cache scheme among all RSUs and MBSs in the entire region. Note that since MBSs/RSUs take into account the future served vehicles when they make a decision of caching, the requested contents are deployed by the upcoming RSUs with a high probability so that the intermittency of information transmissions is improved. The proactive caching problem with the objective of minimizing the total expected caching cost is formulated as follows: \[\min_{x_{r,f},y_{m,f}} W\left(\mathbf{x},\mathbf{y}\right)\] (9) s.t. \[P_{r}\left(\mathbf{x}_{r}\right)\triangleq\sum_{f\in\mathcal{F}}x _{r,f}s_{f}-\mathcal{S}_{r}^{\text{RSU}}\leq 0,\forall r\in\mathcal{R},\] \[Q_{m}\left(\mathbf{y}_{m}\right)\triangleq\sum_{f\in\mathcal{F}}y _{m,f}s_{f}-S_{m}^{\text{MBS}}\leq 0,\forall m\in\mathcal{M},\] \[x_{r,f}\in\left\{0,1\right\},\quad y_{m,f}\in\left\{0,1\right\}.\] The caching deployment optimization problem aims to reduce the total expected caching cost via adjusting caching deployment with the limited storage capacity of RSUs and MBSs. Note that as the inherent behavioral properties, the residence time \(\sum_{t\in\mathcal{T}}\mathbb{E}\left[\theta_{v,r,t}^{1}\right]\) and the retrieving probability \(\mathbb{E}\left[\theta_{v,f}^{2}\right]\) are significant for the caching deployment \(\mathbf{x},\mathbf{y}\) and the system performance. Meanwhile, they are not affected by (independent of) the caching deployment \(\mathbf{x},\mathbf{y}\). Particularly, they will be efficiently estimated in Section IV and Section V, respectively. Remark: The cloud layer is responsible for collecting the residence time, i.e., \(\sum_{t\in\mathcal{T}}\mathbb{E}\left[\theta_{v,r,t}^{1}\right]\) and the probabilities of retrieving the content, i.e., \(\mathbb{E}\left[\theta_{v,f}^{2}\right]\) from MBSs and RSUs and solve the problem (9). Specifically on one hand, RSU can predict the future trajectory of the vehicles, i.e., the residence time \(\sum_{t\in\mathcal{T}}\mathbb{E}\left[\theta_{v,r,t}^{1}\right]\), then upload the results to the cloud layer via the local MBS; On the other hand, MBS and RSUs collaboratively execute HFL to predict some contents most likely to be requested, i.e., \(\mathbb{E}\left[\theta_{v,f}^{2}\right]\), then all MBSs upload the results to the cloud layer. After collecting these information, the cloud layer can solve the problem (9) efficiently. ## IV Trajectory Prediction Scheme The residence time of vehicles staying in each RSU is of great importance for content caching placement decisions since the probability of requesting content from RSU \(m\) increases in proportional to the time duration going through its communication range. Most previous works assume that the future location can be completely known in advance in some ways, for example, the route can be available by GPS [19], or the vehicles are assumed to keep going straight along the expressways [20]. However, in practice, the entire GPS data cannot be obtained by all MBSs and RSUs along the road due to privacy concerns. Besides, there are many crossroads and forks making it impossible for vehicles to keep going straight all the time. 
To compensate for this issue, we propose a trajectory prediction scheme in this section.

### _Overall Framework_

Urban road networks contain a large number of intricate road types, such as straights, curves, ramps, bridges, tunnels, and crosses/T-junctions. Meanwhile, to alleviate the dependence on GPS data and reduce the computational complexity, we adopt the connection order of surrounding RSUs to describe the vehicle trajectory, which is conveniently obtained by recording the identifications of all the vehicles served by each RSU. As shown in Fig. 3, the overall framework contains historical feature extraction and future trajectory prediction. The historical features are extracted from two aspects, namely, daily feature extraction and feature fusion. Specifically, daily features are first extracted from the daily trajectory, and then fused into the historical feature information. Finally, the future trajectory prediction module aims at predicting the future residence time in each RSU by combining the intraday trajectory and the historical feature information.

### _LSTM-based Trajectory Prediction_

In this subsection, an LSTM-based algorithm [21] is proposed to extract historical features and make predictions of the future trajectory. In the setting of the LSTM-based trajectory prediction model for vehicle \(v\), given its historical trajectory in the last \(L\) days \(\mathcal{Z}^{v}=\left(\mathcal{Z}_{1}^{v},\mathcal{Z}_{2}^{v},\cdots,\mathcal{Z}_{L}^{v}\right)\) and intraday trajectory \(\mathcal{Z}_{L+1}^{v}\), we construct an LSTM-based trajectory prediction scheme to map the historical trajectory to its corresponding residence time vector over all the RSUs \(\mathbf{o}^{v}=(o_{1}^{v},o_{2}^{v},\cdots,o_{R}^{v})\). In this work, for the target range and the period of time, the location of each vehicle is recorded at every interval. By this means, the trajectory sequence of vehicle \(v\) on the \(\ell\)-th day can be expressed as \(\mathcal{Z}_{\ell}^{v}\triangleq\left(z_{\ell,1}^{v},z_{\ell,2}^{v},\cdots,z_{\ell,N}^{v}\right)\), whose element \(z_{\ell,i}^{v}\in\mathcal{R}\) represents the location of vehicle \(v\) on the \(\ell\)-th day at timestamp \(i\); a longer run of consecutive identical positions implies a longer time spent in the same RSU. Note that the daily trajectory has a fixed length of \(N\) via truncation or padding.

#### Iv-B1 Embedding Layer

For all the \(R\) RSUs, we use the zero padding method to create the RSU embedding matrix \(\mathbf{R}\in\mathbb{R}^{d_{\text{RSU}}\times(R+1)}\), which is a linear map from the RSU set to a \(d_{\text{RSU}}\)-dimensional vector space. Note that the matrix contains \(R+1\) columns since we consider an additional virtual RSU whose element is padded as 0. The embedding matrix for vehicle \(v\) on the \(\ell\)-th day is given by \[\hat{\mathbf{R}}_{\ell}^{v}=\left[\mathbf{R}_{z_{\ell,1}^{v}},\mathbf{R}_{z_{\ell,2}^{v}},\cdots,\mathbf{R}_{z_{\ell,N}^{v}}\right], \tag{10}\] where \(\mathbf{R}_{j}\) is the \(j\)-th column of the embedding matrix \(\mathbf{R}\).

#### Iv-B2 Daily Feature Extraction

The information of the daily trajectory is extracted by the first LSTM structure [21, Section 10.10], i.e., LSTM\({}_{1}\): \[\mathbf{h}_{\ell}^{v},\mathbf{c}_{\ell}^{v}=\text{LSTM}_{1}\left(\hat{\mathbf{R}}_{\ell}^{v}\right),\ \ell=1,2,\cdots,L, \tag{11}\] where \(\mathbf{h}_{\ell}^{v},\mathbf{c}_{\ell}^{v}\) are the hidden state vector and cell state vector, respectively.
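A minimal PyTorch sketch of the embedding lookup (10) and the daily feature extraction (11) might look as follows; the embedding dimension \(d_{\text{RSU}}\), the hidden size, and the batch shapes are illustrative choices, with index 0 reserved for the zero-padded virtual RSU.

```python
import torch
import torch.nn as nn

class DailyFeatureExtractor(nn.Module):
    """Embeds one day's RSU sequence (Eq. 10) and summarizes it with LSTM_1 (Eq. 11)."""
    def __init__(self, num_rsus, d_rsu=32, hidden=64):
        super().__init__()
        # R + 1 rows: one per RSU plus the zero-padded "virtual" RSU at index 0.
        self.rsu_embedding = nn.Embedding(num_rsus + 1, d_rsu, padding_idx=0)
        self.lstm1 = nn.LSTM(d_rsu, hidden, batch_first=True)

    def forward(self, daily_traj):
        # daily_traj: (batch, N) integer RSU ids recorded at fixed intervals.
        emb = self.rsu_embedding(daily_traj)   # (batch, N, d_rsu), Eq. (10)
        _, (h, c) = self.lstm1(emb)            # final hidden and cell states, Eq. (11)
        return h[-1], c[-1]                    # (batch, hidden) each

extractor = DailyFeatureExtractor(num_rsus=20)
h_day, c_day = extractor(torch.randint(0, 21, (4, 48)))  # 4 vehicles, N = 48 timestamps
```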
#### Iv-B3 Feature Fusion

The information of periodic behavioral characteristics is extracted by the second LSTM structure, i.e., LSTM\({}_{2}\): \[\mathbf{h}_{\text{his}}^{v},\mathbf{c}_{\text{his}}^{v}=\text{LSTM}_{2}\left(\mathbf{h}_{1}^{v},\mathbf{c}_{1}^{v},\mathbf{h}_{2}^{v},\mathbf{c}_{2}^{v},\cdots,\mathbf{h}_{L}^{v},\mathbf{c}_{L}^{v}\right), \tag{12}\] where \(\mathbf{h}_{\text{his}}^{v},\mathbf{c}_{\text{his}}^{v}\) are the final historical feature information.

#### Iv-B4 Residence Time Prediction

Finally, after extracting the historical information of the previous trajectory, the prediction of the residence time is given by \[\bar{\mathbf{h}}^{v},\bar{\mathbf{c}}^{v}=\text{LSTM}_{3}\left(\hat{\mathbf{R}}_{L+1}^{v},\mathbf{h}_{\text{his}}^{v},\mathbf{c}_{\text{his}}^{v}\right), \tag{13}\] \[\hat{\mathbf{o}}^{v}=\text{ReLU}\left(\bar{\mathbf{h}}^{v}\mathbf{W}+\mathbf{b}\right),\] where \(\mathbf{W}\) and \(\mathbf{b}\) are trainable parameters, and ReLU is an activation function defined as \(\text{ReLU}(x)\triangleq\max\left\{0,x\right\}\). Moreover, all learnable parameters are denoted by \(\boldsymbol{\theta}^{\text{traj}}\), including the parameters of all LSTM structures and \(\mathbf{W},\mathbf{b}\). In the model training, the input is the sequence \(\mathcal{Z}_{\ell}^{v},\ell=1,2,\cdots,L+1\), the expected output is the corresponding residence time vector \(\mathbf{o}^{v}\), and the mean squared error loss is adopted as the objective function: \[\mathcal{L}^{\text{traj}}=\sum_{v\in\mathcal{V}}\frac{1}{2}\|\mathbf{o}^{v}-\hat{\mathbf{o}}^{v}\|^{2}. \tag{14}\] We adopt offline learning to train the trajectory prediction model, which is deployed in all RSUs after training for prediction tasks. Only forward propagation is performed in the prediction stage, so there is no time limit for the training phase. After prediction, the final \(\hat{\mathbf{o}}^{v}\) is regarded as an estimation of \(\sum_{t\in\mathcal{T}}\mathbb{E}\left[\theta_{v,r,t}^{1}\right]\) that can be used in \(W\left(\mathbf{x},\mathbf{y}\right)\) in Problem (9). Note that the residence time can be computed directly.

## V Recommendation System Scheme

Since the sequential dynamics are a key feature to capture the context of vehicles' recent activities, in this section we adopt the SASRec-based network to predict the future content requests of vehicles. Furthermore, considering the increasing privacy concerns and the ever-growing distributed training data, we propose a Hierarchical Federated Learning (HFL) structure to train the SASRec network for each cluster.

### _SASRec Model_

As shown in Fig. 4(a), in the sequential recommendation setting, since the lengths of the requested content sequences of different vehicles might differ, it is not desirable to predict the future requirements with all previous contents. We consider a fixed size for all user requested content sequences by truncating or padding, and the maximum length is set as \(I\) for vehicle \(v\), i.e., \(\mathcal{F}^{v}=(F_{1}^{v},F_{2}^{v},\cdots,F_{I}^{v})\). During training, at the \(i\)-th requested file, the model utilizes the previous \(i\) files, i.e., \((0,0,\cdots,0,F_{1}^{v},F_{2}^{v},\cdots,F_{i}^{v})\), to predict the next \(I^{\prime}>1\) files, where \(I-I^{\prime}-i\) files are padded with \(0\). In this paper, we extend the original SASRec model with \(I^{\prime}=1\) in [22] to \(I^{\prime}>1\), allowing for the recommendation of multiple contents of interest for each vehicle so as to meet the requirements of the vehicle to a great extent.
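The fixed-length sequence construction just described could be sketched as follows; the values of \(I\) and \(I^{\prime}\) and the use of content id 0 for padding are illustrative.

```python
def build_training_pairs(requests, I=5, I_prime=3, pad=0):
    """For each step i, take the i most recent requests (left-padded with 0 up to
    length I - I_prime) as input and the next I_prime requested contents as targets."""
    pairs = []
    for i in range(1, len(requests) - I_prime + 1):
        history = requests[:i][-(I - I_prime):]               # truncate to the window
        history = [pad] * (I - I_prime - len(history)) + history
        pairs.append((history, requests[i:i + I_prime]))
    return pairs

# Illustrative request sequence of content ids for one vehicle.
print(build_training_pairs([4, 7, 2, 9, 5], I=5, I_prime=3))
# -> [([0, 4], [7, 2, 9]), ([4, 7], [2, 9, 5])]
```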
#### V-A1 Embedding Layer

For the total of \(F\) available files, we use the zero padding method to create two embedding matrices \(\mathbf{M}\in\mathbb{R}^{d\times(F+1)}\) and \(\mathbf{P}\in\mathbb{R}^{d\times(I-I^{\prime})}\) to denote the content embedding matrix and the positional embedding matrix, respectively, where \(d\) is the latent dimensionality of both embedding matrices. Note that the content embedding matrix contains \(F+1\) columns since we consider an additional virtual file whose element is padded as 0. Then, the embedding matrix for vehicle \(v\) is given by \[\hat{\mathbf{E}}^{v}=\left[\mathbf{M}_{F_{1}^{v}}+\mathbf{P}_{1},\mathbf{M}_{F_{2}^{v}}+\mathbf{P}_{2},\cdots,\mathbf{M}_{F_{I-I^{\prime}}^{v}}+\mathbf{P}_{I-I^{\prime}}\right], \tag{15}\] where \(\mathbf{M}_{j}\) and \(\mathbf{P}_{j}\) are the \(j\)-th columns of the embedding matrices \(\mathbf{M}\) and \(\mathbf{P}\), respectively.

#### V-A2 Self-Attention Block

The information of previously consumed contents is extracted by a self-attention layer and a point-wise feed-forward network. Specifically, the self-attention operation takes the embedding matrix as input, converts it to three matrices through linear projections, and feeds them into an attention layer: \[\hat{\mathbf{S}}^{v}\triangleq\text{SA}\left(\hat{\mathbf{E}}^{v}\right)=\text{Attention}\left(\hat{\mathbf{E}}^{v}\mathbf{W}^{Q},\hat{\mathbf{E}}^{v}\mathbf{W}^{K},\hat{\mathbf{E}}^{v}\mathbf{W}^{V}\right) \tag{16}\] \[=\text{Softmax}\left(\frac{\hat{\mathbf{E}}^{v}\mathbf{W}^{Q}\left(\hat{\mathbf{E}}^{v}\mathbf{W}^{K}\right)^{T}}{\sqrt{d}}\right)\hat{\mathbf{E}}^{v}\mathbf{W}^{V},\] where \(\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V}\in\mathbb{R}^{d\times d}\) denote the projection matrices, Attention denotes the scaled dot-product attention mechanism, and \(\sqrt{d}\) is the scale factor used to avoid overly large values of the inner product. In addition to attention sub-layers, our model contains a fully connected feed-forward network, which is applied to each position separately and identically. It consists of two linear transformations with a ReLU activation in between: \[\hat{\mathbf{F}}^{v}\triangleq\text{FFN}\left(\hat{\mathbf{S}}^{v}\right)=\text{ReLU}\left(\hat{\mathbf{S}}^{v}\mathbf{W}_{1}+\mathbf{b}_{1}\right)\mathbf{W}_{2}+\mathbf{b}_{2}, \tag{17}\] where \(\mathbf{W}_{1},\mathbf{W}_{2}\) are \(d\times d\) matrices and \(\mathbf{b}_{1},\mathbf{b}_{2}\) are \(d\)-dimensional vectors. Besides, three efficient policies can also be considered: i) stacking self-attention blocks to make the model learn more complex content transitions; ii) adopting a dropout operation to avoid overfitting; iii) using residual connections and layer normalization to stabilize and accelerate the network training process.

Fig. 3: Architecture of the trajectory prediction model, which consists of historical feature extraction and fusion, as well as future trajectory prediction.

Fig. 4: (a) A simplified diagram showing the training process of SASRec with \(I=5,I^{\prime}=3\). At each time step, the model considers all previous items to predict the next requested contents. (b) SASRec model structure.
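A compact PyTorch sketch of one self-attention operation (16) followed by the point-wise feed-forward network (17) is given below; stacking, dropout, residual connections, and layer normalization mentioned above are omitted for brevity, and the latent dimension is an illustrative choice.

```python
import torch
import torch.nn as nn

class SASRecBlock(nn.Module):
    """One self-attention operation (Eq. 16) followed by the point-wise FFN (Eq. 17)."""
    def __init__(self, d=64):
        super().__init__()
        self.wq = nn.Linear(d, d, bias=False)   # W^Q
        self.wk = nn.Linear(d, d, bias=False)   # W^K
        self.wv = nn.Linear(d, d, bias=False)   # W^V
        self.ffn1 = nn.Linear(d, d)             # W_1, b_1
        self.ffn2 = nn.Linear(d, d)             # W_2, b_2
        self.d = d

    def forward(self, e):                       # e: (batch, I - I', d) embedded requests
        q, k, v = self.wq(e), self.wk(e), self.wv(e)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d ** 0.5, dim=-1)
        s = attn @ v                            # scaled dot-product attention, Eq. (16)
        return self.ffn2(torch.relu(self.ffn1(s)))  # point-wise feed-forward net, Eq. (17)

block = SASRecBlock()
features = block(torch.randn(2, 8, 64))         # 2 vehicles, 8 positions, d = 64
```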
#### V-A3 Output Layer

Finally, after multiple self-attention blocks extract the information of previously requested contents, the prediction of the next contents is given by \[\hat{r}_{i,f}^{v}=\text{Sigmoid}\left(\hat{\mathbf{F}}_{i}^{v}\mathbf{M}_{f}\right), \tag{18}\] where \(\hat{\mathbf{F}}_{i}^{v}\) is the \(i\)-th row of the matrix \(\hat{\mathbf{F}}^{v}\) and denotes the feature vector of vehicle \(v\) after the \(i\)-th request, and \(\mathbf{M}_{f}\) is the \(f\)-th column of matrix \(\mathbf{M}\) and denotes the item embedding vector of content \(f\). The goal of the original SASRec model [22] is to identify which items are 'relevant' in a user's historical behavior and use them to predict the next item. Since it is limited to accurately predicting only the next single item, it is not effective at predicting multiple consecutive future items with a recursive multi-step forecasting method. As shown in Fig. 4(b), the redesigned SASRec model adds a Sigmoid layer to represent the probability of a vehicle requesting each content over a period of time in the future. In the network training, let \(\boldsymbol{\theta}^{\text{rec}}=\left\{\mathbf{M},\mathbf{P},\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V},\mathbf{W}_{1},\mathbf{W}_{2},\mathbf{b}_{1},\mathbf{b}_{2}\right\}\) denote all the learnable parameters, and we adopt the following binary cross entropy loss as the objective function: \[\mathcal{L}^{\text{rec}}\triangleq\sum_{v\in\mathcal{V}}\mathcal{L}_{v}^{\text{rec}}\triangleq-\sum_{v\in\mathcal{V}}\sum_{i=1}^{I}\left[\sum_{f\in\mathcal{F}_{i}^{v}}\log\left(\hat{r}_{i,f}^{v}\right)+\sum_{f\notin\mathcal{F}^{v}}\log\left(1-\hat{r}_{i,f}^{v}\right)\right]. \tag{19}\] Different from the trajectory prediction in the last section, the training of the recommendation system by minimizing \(\mathcal{L}^{\text{rec}}\) can only be executed when the vehicle stays in the coverage of one certain RSU. The distributed training method will be introduced in the next subsection. After training, the estimation of \(\mathbb{E}\left[\theta_{v,f}^{2}\right]\) for the next requested items can be computed via the same steps as (15)-(18), which can be used in \(W\left(\mathbf{x},\mathbf{y}\right)\) in Problem (9).

### _HFL-based SASRec System_

In order to protect the privacy of user data, we adopt an HFL-based structure to train the SASRec systems instead of the centralized training in [22]. Compared with traditional federated learning designed for a single cluster, the HFL architecture is better suited to the larger datasets provided by massive numbers of vehicles in a larger network, which can also improve the accuracy of the model. On the other hand, a vehicle stays within the coverage of an MBS for a longer time, which provides more time to stabilize the model's performance. Thus, as shown in Fig. 5, we consider an HFL system that has one MBS \(m\), \(R^{m}\) RSUs indexed by \(\mathcal{R}_{m}=\left\{1,2,\cdots,R^{m}\right\}\), and \(V^{m}\) vehicles indexed by \(\mathcal{V}_{m}=\left\{1,2,\cdots,V^{m}\right\}\). RSU \(r\in\mathcal{R}_{m}\) manages \(V_{r}^{m}\) vehicles indexed by \(\mathcal{V}_{r}^{m}=\left\{1,2,\cdots,V_{r}^{m}\right\}\). The key steps of the HFL proceed as follows. After every \(\kappa_{1}\) local updates at each vehicle, each RSU will collect and aggregate the local models from its vehicles, and then distribute the aggregated model to them.
After every \(\kappa_{2}\) edge model aggregations at each RSU, the MBS will collect and aggregate all edge models from all RSUs in its cluster, and distribute the latest global model to them, which means the global model aggregations at MBSs happen every \(\kappa_{1}\kappa_{2}\) local updates. Let \(\boldsymbol{\theta}_{v}^{\text{rec}}\left(\kappa\right)\) denote the local model parameters of vehicle \(v\in\mathcal{V}_{r}^{m}\) after the \(\kappa\)-th local update. The evolution of local model parameters \(\boldsymbol{\theta}_{v}^{\text{rec}}\left(\kappa\right)\) is given by \[\boldsymbol{\theta}_{v}^{\text{rec}}\left(\kappa\right)=\begin{cases} \boldsymbol{\theta}_{v}^{\text{rec}}\left(\kappa-1\right)-\eta^{\text{rec}} \nabla\mathcal{L}_{v}^{\text{rec}},&\kappa|\kappa_{1}\neq 0\\ \dfrac{\sum_{v\in\mathcal{V}_{r}^{m}}\boldsymbol{\theta}_{v}^{\text{rec}}\left( \kappa-1\right)}{V_{r}^{m}},&\kappa|\kappa_{1}=0,\kappa|\kappa_{1}\kappa_{2} \neq 0\\ \dfrac{\sum_{v\in\mathcal{V}^{m}}\boldsymbol{\theta}_{v}^{\text{rec}}\left( \kappa-1\right)}{V^{m}},&\kappa|\kappa_{1}\kappa_{2}=0\end{cases} \tag{20}\] where \(\eta^{\text{rec}}\) is the step size of gradient descent. Different from offline training for trajectory prediction model, the training for recommendation system is time-sensitive due to the high dynamics of vehicles. All vehicles continuously interact with all RSUs and MBSs along the road to transmit gradients other than raw historical user preferences, so as to protect user privacy. Once a vehicle leaves the current RSU before accomplish local training, it fails to upload the updated model to the target RSU, which might lead to waste of computation capacity. The authors in [20] propose a simple measure by selecting slow-moving vehicles that can finish local training and uploading before leaving as the participants in edge model training. The details of HFL-based SASRec system are presented in Algorithm 1. Note that in the HFL-based SASRec system, the convergence is guaranteed in [23, SSIII]. In our future work, we will further improve the performance of the content popularity prediction scheme to better adapt to the characteristics of vehicular networks, including establishing more effective prediction models for special areas and developing faster training schemes. ``` 0: Initial model parameter \(\boldsymbol{\theta}_{v}^{\text{rec}}\left(0\right)\). Output: \(\left\{\mathbb{E}\left[\theta_{v,f}^{2}\right]\right\}_{v\in\mathcal{V},f\in \mathcal{F}}\) 1:for\(\kappa=1,2,...\)do 2:for each cluster in parallel do 3:if\(\kappa|\kappa_{1}\neq 0\)then 4: Each vehicle updates its local model with \(\boldsymbol{\theta}_{v}^{\text{rec}}\left(\kappa\right)=\boldsymbol{\theta}_{v}^ {\text{rec}}\left(\kappa-1\right)-\eta\nabla\mathcal{L}_{v}^{\text{rec}}\) in parallel. 5:elseif\(\kappa|\kappa_{1}\kappa_{2}\neq 0\)then 6: Each RSU in cluster \(m\) aggregates models: \(\boldsymbol{\theta}_{v}^{\text{rec}}\left(\kappa\right)=\frac{\sum_{v\in \mathcal{V}_{r}}\boldsymbol{\theta}_{v}^{\text{rec}}\left(\kappa-1\right)}{V_{r}^ {m}}\) and downloads to all served vehicles in parallel. 7:else 8: MBS \(m\) aggregates models: \(\boldsymbol{\theta}_{v}^{\text{rec}}\left(\kappa\right)=\frac{\sum_{v\in \mathcal{V}_{r}}\boldsymbol{\theta}_{v}^{\text{rec}}\left(\kappa-1\right)}{V^{m}}\) and downloads to all vehicles. 9:endif 10:endfor 11:endfor 12:Estimate the probability of vehicles \(v\) retrieving content \(f\), i.e., \(\mathbb{E}\left[\theta_{v,f}^{2}\right]=\text{Sigmoid}\left(\hat{r}_{I,f}^{v}\right)\). 
``` **Algorithm 1** HFL-based SASRec ## VI Dynamic Content Caching Scheme In this section, we propose an adaptive gradient descent-based content caching policy to tackle the large-scale 0-1 constrained problem, including continuous relaxation and penalty coefficient adaptation. Moreover, dynamic content caching is to integrate the proposed caching policy with the aforementioned trajectory prediction and content popularity prediction for the dynamic topology and the real-time request. ### _Penalty Method_ Solving the original problem (9) faces two main challenges: i) the number of binary decision variables \(x_{r,f}\) and \(y_{m,f}\) are tremendous due to abundant contents; and ii) the number of constraints are enormous due to the high density of edge nodes. Both issues severely hinder the efficacy of tackling this large-scale optimization problem. Inspired by [24], we first relax \((M+R)F\) constraints about binary decision variables to \[0\leq x_{r,f}\leq 1,\quad 0\leq\quad y_{m,f}\leq 1. \tag{21}\] Since tackling the original caching problem with numerous constraints that may greatly exceed nowadays computing capability, the Sigmoid function is introduced to remove these constraints as: \(x_{r,f}=\text{Sigmoid}(\tilde{x}_{r,f}),\tilde{x}_{r,f}\in\mathbb{R}\) and \(y_{m,f}=\text{Sigmoid}(\tilde{y}_{m,f}),\tilde{y}_{m,f}\in\mathbb{R}\). More formally, the binary decision variables are relaxed as follows: \[\begin{split} x_{r,f}&\triangleq h\left(\tilde{x} _{r,f}\right)=\text{Sigmoid}\left(\tilde{x}_{r,f}\right),\\ y_{m,f}&\triangleq h\left(\tilde{y}_{m,f}\right) =\text{Sigmoid}\left(\tilde{y}_{m,f}\right).\end{split} \tag{22}\] With the proposed approximation, the original problem can be reformulated as follows: \[\begin{split}\min_{\tilde{x}_{r,f},\tilde{y}_{m,f}}W\left(h \left(\tilde{\mathbf{x}}\right),h\left(\tilde{\mathbf{y}}\right)\right)\\ \text{s.t.}&\quad P_{r}\left(h\left(\tilde{\mathbf{ x}}_{r}\right)\right)\leq 0,\quad\forall r\in\mathcal{R},\\ Q_{m}\left(h\left(\tilde{\mathbf{y}}_{m}\right)\right)\leq 0, \quad\forall m\in\mathcal{M}.\end{split} \tag{23}\] This relaxation reduces the number of constraints from \((M+R)(F+1)\) to \(M+R\). For ease of notations in this section, we simply use \(W\left(\mathbf{\tilde{x}},\mathbf{\tilde{y}}\right)\), \(P_{r}\left(\tilde{\mathbf{x}}_{r}\right)\), and \(Q_{m}\left(\tilde{\mathbf{y}}_{m}\right)\) to represent \(W\left(h\left(\tilde{\mathbf{x}}\right),h\left(\tilde{\mathbf{y}}\right)\right)\), \(P_{r}\left(h\left(\tilde{\mathbf{x}}_{r}\right)\right)\), and \(Q_{m}\left(h\left(\tilde{\mathbf{y}}_{m}\right)\right)\), respectively, and use \(\mathbf{w}=\left(\mathbf{w}_{\tilde{\mathbf{x}}_{r}},\mathbf{w}_{\tilde{ \mathbf{y}}_{r}}\right)\), \(\mathbf{p}_{r}\), and \(\mathbf{q}_{m}\) to represent their gradients w.r.t. \(\mathbf{\tilde{x}},\mathbf{\tilde{y}}\), respectively. The penalty method in [25, Eq. 17.2] enables us to rewrite (23) as follows: \[\begin{split}\min_{\tilde{\mathbf{x}},\tilde{\mathbf{y}}}& L\triangleq W\left(\mathbf{\tilde{x}},\mathbf{\tilde{y}}\right)+\frac{1}{2} \beta\sum_{r\in\mathcal{R}}\text{ReLU}\left[P_{r}\left(\tilde{\mathbf{x}}_{r} \right)\right]^{2}\\ &+\frac{1}{2}\beta\sum_{m\in\mathcal{M}}\text{ReLU}\left[Q_{m} \left(\tilde{\mathbf{y}}_{m}\right)\right]^{2},\end{split} \tag{24}\] where \(L\) is the extended objective function and \(\beta>0\) is the penalty coefficient. 
We utilize the gradient descent method to optimize \(L\) by \[\mathbf{\tilde{x}}_{r}\leftarrow\mathbf{\tilde{x}}_{r}-\eta\left(\nabla L \right)_{\mathbf{\tilde{x}}_{r}},\quad\mathbf{\tilde{y}}_{m}\quad\gets \mathbf{\tilde{y}}_{m}-\eta\left(\nabla L\right)_{\mathbf{\tilde{y}}_{m}}, \tag{25}\] where \(\eta>0\) is a sufficiently small stepsize (or say learning rate). \(\nabla L\) is the gradient of \(L\) and its \(\mathbf{\tilde{x}}_{r}\)-component and \(\mathbf{\tilde{y}}_{m}\)-component are given by \[\begin{split}\left(\nabla L\right)_{\mathbf{\tilde{x}}_{r}}& =\mathbf{w}_{\tilde{\mathbf{x}}_{r}}+\beta\text{ReLU}\left[P_{r} \left(\tilde{\mathbf{x}}_{r}\right)\right]\mathbf{p}_{r},\\ \left(\nabla L\right)_{\mathbf{\tilde{y}}_{m}}&= \mathbf{w}_{\mathbf{\tilde{y}}_{m}}+\beta\text{ReLU}\left[Q_{m}\left(\tilde{ \mathbf{y}}_{m}\right)\right]\mathbf{q}_{m}.\end{split} \tag{26}\] The role of \(\beta\) is essential for the convergence of the optimization process. From experience, a large \(\beta\) is preferred for a fast convergence rate, however, an overlarge \(\beta\) will disturb the gradient descent algorithm once faced a small cost overrun. Therefore, we propose the following adaptive gradient descent algorithm that dynamically controls \(\beta\) in the next subsection, where the coefficient is adjusted based on information from the solution obtained at last iteration. ### _Adaptive Gradient Descent Algorithm_ We first introduce the following preliminary result that the objective function and constraints have Lipschitz continuous gradients. **Proposition 1**.: _The gradients of the objective function and the constraints are all Lipschitz continuous, i.e.,_ \[\begin{split}\|\nabla W\left(\mathbf{\tilde{x}}_{1},\mathbf{ \tilde{y}}_{1}\right)-\nabla W\left(\mathbf{\tilde{x}}_{2},\mathbf{\tilde{y}}_ {2}\right)\|\leq\lambda_{w}\|(\mathbf{\tilde{x}}_{1},\mathbf{\tilde{y}}_{1})- (\mathbf{\tilde{x}}_{2},\mathbf{\tilde{y}}_{2})\|,\\ \|\nabla P_{r}\left(\mathbf{\tilde{x}}_{r,1}\right)-\nabla P_{r} \left(\mathbf{\tilde{x}}_{r,2}\right)\|\leq\lambda_{r}\|\mathbf{\tilde{x}}_{ r,1}-\mathbf{\tilde{x}}_{r,2}\|,\\ \|\nabla Q_{m}\left(\mathbf{\tilde{y}}_{m,1}\right)-\nabla Q_{m} \left(\mathbf{\tilde{y}}_{m,2}\right)\|\leq\lambda_{m}\|\mathbf{\tilde{y}}_{m,1 }-\mathbf{\tilde{y}}_{m,2}\|.\end{split} \tag{27}\] Fig. 5: Framework of the HFL-based SASRec system. Proof.: See Appendix A. The key idea of the adaptive gradient descent algorithm is to balance all the descent directions of the objective function and the budget constraints, which ensures the objective function continue to decline under the premise of guaranteed constraints. Specifically, when \(P_{r}(\tilde{\mathbf{x}}_{r})\leq 0,Q_{m}(\tilde{\mathbf{y}}_{m})\leq 0\), i.e., no cost overrun, there is no need to control \(\beta\) thanks to the rectifier ReLU. Otherwise, we need simply to set \(\beta=0\) to enable a more aggressive search for the objective without considering the budget constraint. When \(P_{r}(\tilde{\mathbf{x}}_{r})>0,Q_{m}(\tilde{\mathbf{y}}_{m})>0\), we need to adjust \(\eta\) and \(\beta\) for its steepest gradient descent direction. Next, we give some conditions about \(\eta\) and \(\beta\) to decrease the objective and constraints. 
**Proposition 2**.: _The value of objective function does not increase after one gradient descent update, i.e., \(W\left(\tilde{\mathbf{x}}-\eta\left(\nabla L\right)_{\tilde{\mathbf{x}}}, \tilde{\mathbf{y}}-\eta\left(\nabla L\right)_{\tilde{\mathbf{y}}}\right)\leq W \left(\tilde{\mathbf{x}},\tilde{\mathbf{y}}\right),\) if the following sufficient conditions hold:_ \[0\leq\eta\leq\eta_{w}\triangleq\frac{2\left(\|\mathbf{w}\|^{2}+\beta\phi \right)}{\lambda_{w}\left(\|\left(\nabla L\right)_{\tilde{\mathbf{x}}_{r}}\| ^{2}+\|\left(\nabla L\right)_{\tilde{\mathbf{y}}_{m}}\|^{2}\right)}, \tag{28}\] \[\beta\left\{\begin{aligned} &\geq 0,\qquad\qquad\phi\geq 0,\\ &\leq-\frac{\|\mathbf{w}\|^{2}}{\phi},\qquad\phi<0,\end{aligned}\right. \tag{29}\] _where \(\phi\) is defined as \(\phi\triangleq\sum_{r\in\mathcal{R}}\text{ReLU}\left[P_{r}\left(\tilde{ \mathbf{x}}_{r}\right)\right]\mathbf{p}_{r}^{T}\mathbf{w}_{\tilde{\mathbf{x}}_ {r}}+\sum_{m\in\mathcal{M}}\text{ReLU}\left[Q_{m}\left(\tilde{\mathbf{y}}_{m} \right)\right]\mathbf{q}_{m}^{T}\mathbf{w}_{\tilde{\mathbf{y}}_{m}}\)._ Proof.: See Appendix B. **Proposition 3**.: _The caching constraint in RSU, i.e., \(P_{r}\left(\tilde{\mathbf{x}}_{r}\right)>0\), does not increase after one gradient descent update, i.e. \(P_{r}\left(\tilde{\mathbf{x}}_{r}-\eta\left(\nabla L\right)_{\tilde{\mathbf{x }}_{r}}\right)<P_{r}\left(\tilde{\mathbf{x}}_{r}\right)\) if the sufficient condition holds:_ \[0\leq\eta\leq\frac{2\left(\mathbf{w}_{\tilde{\mathbf{x}}_{r}}^{T }\mathbf{p}_{r}+\beta\text{ReLU}\left[P_{r}\left(\tilde{\mathbf{x}}_{r} \right)\right]\|\mathbf{p}_{r}\|^{2}\right)}{\lambda_{r}\|\left(\nabla L\right) _{\tilde{\mathbf{x}}_{r}}\|^{2}}\triangleq\eta_{r}^{1}, \tag{30a}\] \[\beta\geq-\frac{\mathbf{w}_{\tilde{\mathbf{x}}_{r}}^{T}\mathbf{p}_{r} }{\text{ReLU}\left[P_{r}\left(\tilde{\mathbf{x}}_{r}\right)\right]\|\mathbf{p} _{r}\|^{2}}\triangleq\beta_{r}^{\text{RSU}}. \tag{30b}\] _On the other hand, the caching constraint in RSU, i.e., \(P_{r}\left(\tilde{\mathbf{x}}_{r}\right)\leq 0\), still holds after one gradient descent update, i.e. \(P_{r}\left(\tilde{\mathbf{x}}_{r}-\eta\mathbf{w}_{\tilde{\mathbf{x}}_{r}} \right)\leq 0\) and \(\mathbf{w}_{\tilde{\mathbf{x}}_{r}}\neq 0\), if the following sufficient condition holds:_ \[0\leq\eta\leq\frac{\mathbf{w}_{\tilde{\mathbf{x}}_{r}}^{T} \mathbf{p}_{r}+\sqrt{\left(\mathbf{w}_{\tilde{\mathbf{x}}_{r}}^{T}\mathbf{p}_ {r}\right)^{2}-2\lambda_{r}\|\mathbf{w}_{\tilde{\mathbf{x}}_{r}}\|^{2}P_{r} \left(\tilde{\mathbf{x}}_{r}\right)}}{\lambda_{r}\|\mathbf{w}_{\tilde{ \mathbf{x}}_{r}}\|^{2}}\triangleq\eta_{r}^{2},\] \[\beta=0. \tag{31}\] Proof.: See Appendix C. Similarly, to decrease \(Q_{m}\left(\tilde{\mathbf{y}}_{m}\right)\) when the constraint in MBS is violated, \(\beta\) and \(\eta\) should be adjusted as follows \[0\leq\eta\leq\frac{2\left(\mathbf{w}_{\tilde{\mathbf{y}}_{m}}^{T} \mathbf{q}_{m}+\beta\text{ReLU}\left[Q_{m}\left(\tilde{\mathbf{y}}_{m}\right) \right]\|\mathbf{q}_{m}\|^{2}\right)}{\lambda_{r}\|\left(\nabla L\right)_{ \tilde{\mathbf{y}}_{m}}\|^{2}}\triangleq\eta_{m}^{1}, \tag{32a}\] \[\beta\geq-\frac{\mathbf{w}_{\tilde{\mathbf{y}}_{m}}^{T}\mathbf{q}_{m} }{\text{ReLU}\left[Q_{m}\left(\tilde{\mathbf{y}}_{m}\right)\right]\|\mathbf{q }_{m}\|^{2}}\triangleq\beta_{m}^{\text{MBS}}. \tag{32b}\] When \(Q_{m}\left(\tilde{\mathbf{y}}_{m}\right)\leq 0\), the caching constraint still holds after one gradient descent update, i.e. 
\(Q_{m}\left(\tilde{\mathbf{y}}_{m}-\eta\mathbf{w}_{\tilde{\mathbf{y}}_{m}} \right)\leq 0\), if \(\eta\) and \(\beta\) satisfy the following constraints: \[0\leq\eta\leq\frac{\mathbf{w}_{\tilde{\mathbf{y}}_{m}}^{T}\mathbf{q}_{m}+\sqrt{ \left(\mathbf{w}_{\tilde{\mathbf{y}}_{m}}^{T}\mathbf{q}_{m}\right)^{2}-2 \lambda_{r}\|\mathbf{w}_{\tilde{\mathbf{y}}_{m}}\|^{2}Q_{m}\left(\tilde{ \mathbf{y}}_{m}\right)}}{\lambda_{m}\|\mathbf{w}_{\tilde{\mathbf{y}}_{m}}\|^{2}}\] \[\triangleq\eta_{m}^{2}, \tag{33}\] \[\beta=0.\] Note that if all constraints are satisfied, \(\beta\) and \(\eta\) should be set as follows to decrease the objective function: \[\beta=0,\quad 0\leq\eta\leq\min\left\{\eta_{r}^{2},\eta_{m}^{2}\right\}. \tag{34}\] If all of (29), (30b), and (32b) can be achieved, while other constraints are violated, the best outcome can be obtained by setting \(\beta\) and \(\eta\) as follows to decrease the objective function and constraints at the same time: \[\beta\left\{\begin{aligned} &\geq\max\left\{0,\beta_{r}^{\text{RSU}}, \beta_{m}^{\text{MBS}}\right\},&\phi\geq 0,\\ &\in\left[\max\left\{\beta_{r}^{\text{RSU}},\beta_{m}^{\text{MBS}} \right\},-\frac{\|\mathbf{w}\|^{2}}{\phi}\right],\\ &\phi<0\&\xi-\frac{\|\mathbf{w}\|^{2}}{\phi}>\beta_{r}^{ \text{RSU}},\beta_{m}^{\text{MBS}},\end{aligned}\right. \tag{35a}\] \[0\leq\eta\leq\min\left\{\eta_{w},\eta_{r}^{1},\eta_{m}^{1}\right\}, \tag{35b}\] However, the non-increase in the objective function and the decrease in the constraints may not be fulfilled at the same time. Considering these conditions with some violated constraints, after one gradient descent update, \(\beta\) and \(\eta\) should be set as follows to satisfy the constraints: \[\beta\geq\max\left\{\beta_{r}^{\text{RSU}}+\epsilon_{r},\beta_{m}^{\text{ MBS}}+\epsilon_{m}\right\}, \tag{36}\] \[\eta>0\text{ is sufficiently small},\] where \(\epsilon_{r}\) and \(\epsilon_{m}\) are positive values that can analytically decided by the following Prop. 4. **Proposition 4**.: _There will be no cost overrun, i.e. \(P_{r}\left(\tilde{\mathbf{x}}_{r}\right)>0,Q_{m}\left(\tilde{\mathbf{y}}_{m} \right)>0\), after one gradient ascent update if_ \[\epsilon_{r}=\frac{1}{\eta\|\mathbf{p}_{r}\|^{2}},\qquad\epsilon_{m}=\frac{1 }{\eta\|\mathbf{p}_{m}\|^{2}}. \tag{37}\] Proof.: See Appendix D. Finally, We present the convergence analysis and computational complexity analysis on our proposed adaptive gradient descent algorithm. _Convergence Analysis:_ The convergence is guaranteed by the following two facts. First, the objective value of Problem (23) is non-increasing over iterations, even with arbitrarily initialization, all the constraints in Problem (23) can be satisfied by tuning \(\beta\) and \(\eta\). Second, the optimal value of problem (23) is bounded from below due to the cache constraint. Thus, the objective value is guaranteed to converge. Furthermore, since each binary decision variable is bounded, thus there must exist a convergent subsequence. _Complexity Analysis:_ The complexity of updating \(\mathbf{x}\) and \(\mathbf{y}\) mainly depends on the computation of \(\beta\) and the optimization of (24) with gradient descent method. Firstly, it is necessary to compute the penalty coefficient \(\beta\) with gradient computation and matrix multiplication according to (30b) and (32b), whose complexity is \(\mathcal{O}\left(F(R+M)\right)\). Secondly, we need to optimize the extended objective function with gradient descent method, whose complexity is \(\mathcal{O}\left(F(R+M)\right)\). 
Suppose that the proposed algorithm requires \(T\) iterations to converge in total. Therefore, the complexity of evaluating \(\mathbf{x}\) and \(\mathbf{y}\) is \(\mathcal{O}(TF(R+M))\). ### _Practical Adaptive Gradient Descent Algorithm_ Although the convergence can be guaranteed as stated in the last subsection, some parameters are difficult to calculate accurately, especially for all the Lipschitz constants. If the stepsize is too small, the convergence rate will be very slow, which is unadaptive to the characteristics of rapid changes in the vehicular networks. In this section, we propose a practical scheme to optimize cooperative cache problem. Eqs. (34), (35), and (36) offer a fresh insight into the update of \(\beta\), and we give one realization as follows: \[\beta=\left\{\begin{aligned} &\max\left\{0,\beta_{r}^{\text{RSU}}, \beta_{m}^{\text{MBS}}\right\},&\phi\geq 0,\\ &-\frac{\left\|\mathbf{w}\right\|^{2}}{2\phi}+\frac{1}{2}\max\left\{ \beta_{r}^{\text{RSU}},\beta_{m}^{\text{MBS}}\right\},&\\ &\phi<0\xi-\frac{\left\|\mathbf{w}\right\|^{2}}{\phi}>\beta_{r}^{ \text{RSU}},\beta_{m}^{\text{MBS}},&\\ &\max\left\{\beta_{r}^{\text{RSU}}+\epsilon_{r},\beta_{m}^{\text{ MBS}}+\epsilon_{m}\right\},&\text{otherwise}.\end{aligned}\right. \tag{38}\] Besides, for the purpose of reducing computational complexity and accelerating convergence, we take a slightly larger constant stepsize \(\eta\). Note that when there is only one constraint, our realization of \(\beta\) is same as the scheme in [24, Eq. 6]. ### _Dynamic Content Caching_ For adapting to the dynamic topology and the real-time request, dynamic content caching mainly integrates the proposed adaptive gradient descent-based caching policy with the trajectory prediction and content popularity predictions. The overall algorithm is outlined in Algorithm 2. Generally speaking, prediction and caching are executed periodically. In Step 2, RSUs first recognize all vehicles in their coverage, and then predict the future trajectory with the proposed algorithm in Section IV. In Step 3, MBSs and RSUs collaboratively execute the HFL-based SASRec algorithm proposed in Section V to predict the future content requests for each vehicles. Step 4 combines trajectory prediction with content prediction to get the content popularity for each RSU. Steps 5-11 optimize the target caching problem by relaxing it to an approximate problem. Specifically, Steps 6-7 select a suitable \(\beta\) to balance all the descent directions; Steps 8-9 compute gradient and update the caching. Finally, Step 12 aims to cache contents for each RSU and MBS according to the caching decisions. **Input:** Stepsize \(\eta\), initial \(\beta=0\). ``` 1:for episode=1, 2,...do 2:Prediction: Predict the time of each vehicle entering each RSU \(\mathbf{P}_{1}\) in Eq. (13). 3:Prediction: Predict the probability of each vehicle requesting each content \(\mathbf{P}_{2}\) in Alg. 1. 4: Compute the content popularity of each RSU, \(\mathbf{P}=\mathbf{P}_{1}^{T}\mathbf{P}_{2}\). 5:repeat 6: Compute \(\beta_{r}^{\text{RSU}},\beta_{m}^{\text{MBS}}\) with (30b), (32b) when constraints have been violated. 7: Compute \(\beta\) with (38). 8: Compute gradient \(\nabla L\) of extend objective function in (24) with given \(\beta\). 9: Update the optimization variables \(\tilde{\mathbf{x}},\tilde{\mathbf{y}}\) with (25). 10:until Convergence 11:Caching: Sort \(\tilde{\mathbf{x}},\tilde{\mathbf{y}}\), and cache contents in turn until the constraint is violated. 
12:endfor ``` **Algorithm 2** Dynamic Caching Algorithm ## VII Results and Discussions ### _Simulation Setup_ In this section, we numerically evaluate the performance of the proposed proactive content caching scheme. In this simulation, there are 1107 vehicles served by the hierarchical cooperative caching network, the cached contents of RSUs and MBSs are determined based on the prediction of the vehicle mobility and content preference. As shown in Fig. 6, the map is roughly \(3km\times 3km\) range in Shenzhen. For the purpose of covering the main streets entirely, the urban area is divided into 8 clusters, each of which consists of 1 MBS and 10 RSUs. The mobility trajectory over the road network is generated by SUMO simulator to imitate behaviors of vehicles [26]. The dataset of the recommendation system used in our experiments is MovieLens 1M dataset collected from the MovieLens website [27]. About 1 million ratings are contained in this dataset, which came from 6040 anonymized users on 3416 movies. To simulate the process of vehicles' requests, the rated movies are assumed as request contents from vehicles. The system parameters used for our simulations are listed in Table I. ### _Baseline Schemes and Metrics_ We adopt four baseline schemes for comparison. For baseline 1, we adopt the least recently used (LRU) scheme which is a common caching strategy. In this case, RSUs and MBSs firstly remove the least recently used content in the cache when the limit of cache capacity is reached. For baseline 2, we adopt the random scheme which caches the files randomly. For baseline 3, we evaluate the noncooperative caching performance to indicate the impact of cooperation on the whole system. In this case, all the RSUs and MBSs independently determine its deployment with the prediction of vehicles' future trajectory and content popularity. For baseline 4, it takes cooperative content caching scheme given the prior knowledge of the exact future trajectory and content requests from vehicles, which can be treated as the optimal solution for all the cache schemes. In the simulation, we mainly compare the performance of different caching strategies in terms of the hit ratio and average delay. #### Vii-B1 Hit Ratio Hit ratio is the essential metric to evaluate the proposed scheme that measures the effectiveness of a cache decision in fulfilling content requests [20]. To show the impact of the cooperative caching, one cache hit means the requested content is delivered by the cache within the cluster, whereas a cache miss means the requested content is not stored in the cluster. Hit ratio is calculated as follows: \[\text{hit ratio}=\frac{\text{cache hits}}{\text{cache hits + cache miss}}. \tag{39}\] #### Vii-B2 Average Delay The latency for each content shown in (4) is determined by the cache schemes, the retrieval process of HCCN and the unit delays \(\gamma_{CM},\gamma_{MR},\gamma_{MM}\). Furthermore, as another important metric to evaluate the user experience, the average delay is defined as \[\text{average delay} \tag{40}\] \[=\frac{\text{Total prefetching delay of all requested contents}}{\text{Total number of requested contents}}.\] In the following, we investigate the convergence of cooperative content caching in Subsection VII-C while the impact of the different system parameters, i.e. MBS size and RSU size, is studied to evaluate hit ratio in Subsection VII-F and average delay in Subsection VII-G. ### _Convergence of Cooperative Content Caching_ In Fig. 
7, we investigate the impact of different RSU size \(S_{r}^{\text{RSU}}\) on the convergence of the proposed adaptive gradient descent method for large-scale optimization problem. Fig. 7 sketches the number of iterations versus the objective function (delay) by considering two cases with configuration given by : i) RSU size = 140, MBS size = 800; 2) RSU size = 410, MBS size = 800. As shown in Fig. 7, the objective presents a tendency to decrease by adjusting the penalty parameter \(\beta\) all the time in a practical adaptive gradient descent algorithm. Through subsequent simulations, it can optimize cache deployment efficiently, and thus guarantee a high hit ratio and low delay. ### _Accuracy of the Trajectory Prediction_ In Fig. 8, we compare the proposed trajectory prediction method with the classical prediction by partial matching (PPM) that conducts prediction of the next location by computing the frequency [5]. As Fig. 8 shows, the accuracy increases by \(6\%\) on average by using our proposed method. The performance gain can be explained as follows. The probable path can be represented by a different RSU sequence because of large coverage area overlap among nearby RSUs. The probability is set as 0 when a given RSU location sequence never occurs in the PPM model, while our method has ability to identify the importance of different location sequences efficiently even a given sequence never occurs. On the other hand, due to the influence of traffic lights, too long residue time may exceed the depth of the tree, which may cause inaccurate predictions with PPM. In addition, it is difficult to obtain the optimal length of paths of the Trie structure in the PPM model in practice. In general, our proposed method can guarantee the effectiveness of trajectory prediction. ### _Effectiveness of the HFL-based SASRec System_ In Fig. 9, we compare the centralized training with HFL to train the SASRec network. We adopt the commonly-used recommendation performance metric, namely, Top-\(N\) hit ratio with \(N=50\) and 100, which can be denoted as HR@50 and HR@100. For each vehicle, we randomly select 500 negative files, and rank these files with the ground-truth files. As Fig. 9 shows, HR@50 and HR@100 decrease by only \(6\%\) and \(4\%\), respectively. However, the data privacy can not be guaranteed and the communication overhead is large since the raw data needs to be sent to RSUs and MBSs in the centralized learning manner. Fig. 6: Simulation setup of RSUs placement in Shenzhen. Fig. 7: Convergence of our adaptive gradient descent method algorithms for different values of \(S_{r}^{RSU}\) when \(S_{m}^{MBS}=800\). ### _Hit Ratio Evaluation_ To investigate the impact of caching capacities of RSUs and MBS, we plot Fig. 10(a) to depict the cache hit ratio for varying RSU cache sizes from 230 to 350 contents given 800 contents in MBSs cache, and plot Fig. 10(b) to depict the cache hit ratio for varying MBS cache sizes from 300 to 1200 contents with cached 300 contents in RSUs. The results demonstrate that our proposed algorithm outperforms the LRU, random, and non-cooperative caching schemes. With the increase of cache size, the cache hit ratios of all the caching schemes rise. As expected, the lowest cache hit ratio is presented by the classical LRU and random scheme (baseline 1 and baseline 2). 
The noncooperative caching scheme (baseline 3) outperforms the LRU and random schemes because it extracts historical trajectory features to predict the future residence time in each RSU and extracts features from the content request history of connected vehicles to predict precise content popularity. The random scheme does not consider any feature of the current environment, and LRU only follows static rules without considering the dynamically changing content popularity. Since the proposed cooperative caching scheme jointly optimizes the cached contents of all the RSUs, a content that is not cached by the local RSU is more likely to be fetched from a neighboring edge node instead of from the Internet. Therefore, it can significantly improve resource utilization and shows a better performance than the noncooperative caching scheme. Baseline 4 provides the best cache hit ratio since it has prior knowledge of the future content requests and trajectories of the vehicles and leverages the advantages of cooperative caching. Compared with the LRU caching scheme, the hit ratio is increased by \(18.8\%\) and \(14.7\%\) by using our proposed method in the cases of RSU size = 220, MBS size = 800 and RSU size = 300, MBS size = 550, respectively.

### _Average Delay Evaluation_

To further evaluate the performance of the cooperative caching scheme, we plot Fig. 11(a) to depict the average delay for varying RSU cache sizes from 230 to 350 contents given 800 contents in the MBS cache, and plot Fig. 11(b) to depict the average delay for varying MBS cache sizes from 300 to 1200 contents with 300 contents cached in the RSUs. The results also demonstrate that our proposed algorithm outperforms the LRU, random, and non-cooperative caching schemes in terms of average delay. With the increase of cache size, the average delay of all the caching schemes declines. As expected, the cooperative cache with prior information performs best, our proposed scheme is the next best, and the classical LRU and random schemes perform worst. The performance gain can be explained by the superiority of the cooperative caching scheme. Although the noncooperative caching scheme based on prediction (baseline 3) is slightly better than LRU in terms of average latency, our proposed cooperative caching scheme gains a much larger advantage over LRU. Compared with the LRU caching scheme, the average latency is reduced by \(19.1\%\) and \(16.1\%\) by using our proposed method in the cases of RSU size = 220, MBS size = 800 and RSU size = 300, MBS size = 550, respectively.

## VIII Conclusions

In this paper, we propose an HCCN architecture to adapt to the dynamic properties of the VANET topology, provide real-time content popularity prediction, and reduce communication costs. In addition, a pipeline scheduling mechanism is utilized to execute prediction and transmission tasks in parallel. To verify the effectiveness of the proposed framework, we simulate the urban roads around Shenzhen University. We first exploit the spatio-temporal correlation of historical trajectory data and design an LSTM-based model to predict the residence time of vehicles in each RSU in the near future. With the growing concern about data privacy, we propose an HFL-based structure to train the SASRec network for each cluster so as to predict the future content popularity in each RSU.
Finally, based on the aforementioned trajectory prediction and content popularity prediction results, we propose an adaptive gradient descent-based algorithm to solve the large-scale 0-1 constrained problem and enhance the performance of content caching. Numerical results demonstrate that our proposed cooperative caching scheme achieves a performance close to that of the ideal cooperative caching scheme with prior information. Furthermore, the results confirm the potential of the proposed hierarchical cooperative caching network architecture and pipeline scheduling mechanism to provide high hit ratios and low latency in future streaming media content caching systems.

Fig. 8: Comparison of the proposed trajectory method with PPM.

Fig. 9: Comparison of centralized training with the HFL method. We predict the next 5 files for 100 vehicles in the coverage of a BS; 10 RSUs are deployed in the edge layer.

## Appendix A Proof of Proposition 1

Due to the multivariate polynomial form of the function \(\gamma_{r,f}\) in (5), its first- and second-order partial derivatives are bounded for variables in the region \([0,1]\). Based on these bounded partial derivatives, the Hessian of \(\gamma_{r,f}\) is a bounded matrix, and its largest eigenvalue is bounded. Therefore, \(\gamma_{r,f}\) has a locally Lipschitz continuous gradient w.r.t. \(\left(\mathbf{x},\mathbf{y}\right)\). Since the Sigmoid function also has a Lipschitz continuous gradient, \(\gamma_{r,f}\) has a locally Lipschitz continuous gradient w.r.t. \(\left(\tilde{\mathbf{x}},\tilde{\mathbf{y}}\right)\). Furthermore, the objective function, as a linear combination of \(\gamma_{r,f}\), also has a locally Lipschitz continuous gradient w.r.t. \(\left(\tilde{\mathbf{x}},\tilde{\mathbf{y}}\right)\). Since \(P_{r}\) and \(Q_{m}\) are linear functions w.r.t. \(\mathbf{x}\) and \(\mathbf{y}\), they have locally Lipschitz continuous gradients w.r.t. \(\mathbf{x},\mathbf{y}\). Furthermore, following similar steps, they have locally Lipschitz continuous gradients w.r.t. \(\tilde{\mathbf{x}},\tilde{\mathbf{y}}\).
## Appendix B Proof of Proposition 2 From the fact that the gradient of \(W\left(\tilde{\mathbf{x}},\tilde{\mathbf{y}}\right)\) is Lipschitz continuous, we have \[W\left(\tilde{\mathbf{x}}-\eta\left(\nabla L\right)_{\tilde{ \mathbf{x}}},\tilde{\mathbf{y}}-\eta\left(\nabla L\right)_{\tilde{\mathbf{y}} }\right)-W\left(\tilde{\mathbf{x}},\tilde{\mathbf{y}}\right)\] \[\leq -\eta\left[\left(\nabla L\right)_{\tilde{\mathbf{x}}_{r}}^{T} \mathbf{w}_{\tilde{\mathbf{x}}_{r}}+\left(\nabla L\right)_{\tilde{\mathbf{y}} _{m}}^{T}\mathbf{w}_{\tilde{\mathbf{y}}_{m}}\right]\] \[\qquad\quad+\frac{\lambda_{w}}{2}\eta^{2}\bigg{[}\left\|\left( \nabla L\right)_{\tilde{\mathbf{x}}_{r}}\right\|^{2}+\left\|\left(\nabla L \right)_{\tilde{\mathbf{y}}_{m}}\right\|^{2}\bigg{]}\] \[= -\eta\bigg{[}\|\mathbf{w}\|^{2}+\beta\bigg{(}\sum_{r\in\mathcal{ R}}\text{ReLU}\left[P_{r}\left(\tilde{\mathbf{x}}_{r}\right)\right]\mathbf{p}_{r}^{T} \mathbf{w}_{\tilde{\mathbf{x}}_{r}}\] \[\qquad\quad+\sum_{m\in\mathcal{M}}\text{ReLU}\left[Q_{m}\left( \tilde{\mathbf{y}}_{m}\right)\right]\mathbf{q}_{m}^{T}\mathbf{w}_{\tilde{ \mathbf{y}}_{m}}\bigg{)} \tag{41}\] \[\qquad\quad-\frac{\lambda_{w}}{2}\eta\left(\left\|\left(\nabla L \right)_{\tilde{\mathbf{x}}_{r}}\right\|^{2}+\left\|\left(\nabla L\right)_{ \tilde{\mathbf{y}}_{m}}\right\|^{2}\right)\bigg{]}\] \[= -\eta\left[\|\mathbf{w}\|^{2}+\beta\phi-\frac{\lambda_{w}}{2} \eta\left(\left\|\left(\nabla L\right)_{\tilde{\mathbf{x}}_{r}}\right\|^{2}+ \left\|\left(\nabla L\right)_{\tilde{\mathbf{y}}_{m}}\right\|^{2}\right)\right]\] \[\leq 0,\] where the last inequality holds because of (28) and (29), and \(W\left(\tilde{\mathbf{x}},\tilde{\mathbf{y}}\right)\) in non- increasing. Fig. 11: (a) Average delay versus RSU size in the range 230–350 when MBS size is 800. (b) Average delay versus MBS size in the range 550–750 when RSU size is 300. Fig. 10: (a) Hit ratio versus RSU size in the range 230–350 when MBS size is 800. (b) Hit ratio versus MBS size in the range 550–750 when RSU size is 300. ## Appendix C Proof of Proposition 3 From the fact that the gradient of \(P_{r}\) is Lipschitz continuous, we have \[\begin{split}& P_{r}\left(\tilde{\mathbf{x}}_{r}-\eta\left(\nabla L \right)_{\tilde{\mathbf{x}}_{r}}\right)-P_{r}\left(\tilde{\mathbf{x}}_{r} \right)\\ &\leq-\eta\left(\nabla L\right)_{\tilde{\mathbf{x}}_{r}}^{T} \mathbf{p}_{r}+\frac{\lambda_{r}}{2}\eta^{2}\|\left(\nabla L\right)_{\tilde{ \mathbf{x}}_{r}}\|^{2}\\ &=-\eta(\mathbf{w}_{\tilde{\mathbf{x}}_{r}}^{T}\mathbf{p}_{r}+ \beta\text{ReLU}\left[P_{r}\left(\tilde{\mathbf{x}}_{r}\right)\right]\| \mathbf{p}_{r}\|^{2}-\frac{\lambda_{r}}{2}\eta\|\left(\nabla L\right)_{\tilde {\mathbf{x}}_{r}}\|^{2})\\ &\leq 0,\end{split} \tag{42}\] where the last inequality holds because of (30a) and (30b), and the objective \(W\left(\tilde{\mathbf{x}},\tilde{\mathbf{y}}\right)\) does not increase. On the other hand, the sufficient condition for the constraint holds is given by \[P_{r}\left(\tilde{\mathbf{x}}_{r}-\eta\mathbf{w}_{\tilde{\mathbf{x}}_{r}} \right)\leq P_{r}\left(\tilde{\mathbf{x}}_{r}\right)-\eta\mathbf{w}_{\tilde{ \mathbf{x}}_{r}}^{T}\mathbf{p}_{r}+\frac{\lambda_{r}}{2}\eta^{2}\|\mathbf{w}_{ \tilde{\mathbf{x}}_{r}}\|^{2}\leq 0. \tag{43}\] Therefore, the desired condition of \(\eta\) in (31) can be derived. 
## Appendix D Proof of Proposition 4 From the first order Taylor series expansion around \(\tilde{\mathbf{x}}_{r}\) in RSU caching constraint, we have \[\begin{split} 0&\geq P_{r}\left(\tilde{\mathbf{x}}_{r}- \eta\left(\nabla L\right)_{\tilde{\mathbf{x}}_{r}}\right)\\ &=P_{r}\left(\tilde{\mathbf{x}}_{r}\right)-\eta\left(\nabla L \right)_{\tilde{\mathbf{x}}_{r}}^{T}\mathbf{p}_{r}+o\left(\eta\|\left(\nabla L \right)_{\tilde{\mathbf{x}}_{r}}\|\right)\\ &=P_{r}\left(\tilde{\mathbf{x}}_{r}\right)-\eta\mathbf{w}_{\tilde {\mathbf{x}}_{r}}^{T}\mathbf{p}_{r}-\beta\eta P_{r}\left(\tilde{\mathbf{x}}_{r }\right)\|\mathbf{p}_{r}\|^{2}\\ &\qquad\qquad\qquad+o\left(\eta\|\left(\nabla L\right)_{\tilde{ \mathbf{x}}_{r}}\|\right),\end{split} \tag{44}\] which implies that so for positive but sufficiently small \(\eta\), \[\beta\geq\frac{1}{\eta\|\mathbf{p}_{r}\|^{2}}-\frac{\mathbf{w}_{\tilde{ \mathbf{x}}_{r}}^{T}\mathbf{p}_{r}}{P_{r}\left(\tilde{\mathbf{x}}_{r}\right) \|\mathbf{p}_{r}\|^{2}}. \tag{45}\] Similarly, in MBS caching constraint, \(\beta\) should be set \[\beta\geq\frac{1}{\eta\|\mathbf{q}_{m}\|^{2}}-\frac{\mathbf{w}_{\tilde{ \mathbf{y}}_{m}}^{T}\mathbf{q}_{m}}{Q_{m}\left(\tilde{\mathbf{y}}_{m}\right)\| \mathbf{q}_{m}\|^{2}}. \tag{46}\] Therefore, the desired condition of \(\epsilon_{r}\) and \(\epsilon_{m}\) in (37) can be derived.
2308.01109
Signed double Roman domination on cubic graphs
The signed double Roman domination problem is a combinatorial optimization problem on a graph asking to assign a label from $\{\pm{}1,2,3\}$ to each vertex feasibly, such that the total sum of assigned labels is minimized. Here feasibility is given whenever (i) vertices labeled $\pm{}1$ have at least one neighbor with label in $\{2,3\}$; (ii) each vertex labeled $-1$ has one $3$-labeled neighbor or at least two $2$-labeled neighbors; and (iii) the sum of labels over the closed neighborhood of any vertex is positive. The cumulative weight of an optimal labeling is called signed double Roman domination number (SDRDN). In this work, we first consider the problem on general cubic graphs of order $n$ for which we present a sharp $n/2+\Theta(1)$ lower bound for the SDRDN by means of the discharging method. Moreover, we derive a new best upper bound. Observing that we are often able to minimize the SDRDN over the class of cubic graphs of a fixed order, we then study in this context generalized Petersen graphs for independent interest, for which we propose a constraint programming guided proof. We then use these insights to determine the SDRDNs of subcubic $2\times m$ grid graphs, among other results.
Enrico Iurlano, Tatjana Zec, Marko Djukanovic, Günther R. Raidl
2023-08-02T12:37:23Z
http://arxiv.org/abs/2308.01109v1
# Signed Double Roman Domination on Cubic Graphs ###### Abstract The signed double Roman domination problem is a combinatorial optimization problem on a graph asking to assign a label from \(\{\pm 1,2,3\}\) to each vertex feasibly, such that the total sum of assigned labels is minimized. Here feasibility is given whenever (i) vertices labeled \(\pm 1\) have at least one neighbor with label in \(\{2,3\}\); (ii) each vertex labeled \(-1\) has one 3-labeled neighbor or at least two 2-labeled neighbors; and (iii) the sum of labels over the closed neighborhood of any vertex is positive. The cumulative weight of an optimal labeling is called signed double Roman domination number (SDRDN). In this work, we first consider the problem on general cubic graphs of order \(n\) for which we present a sharp \(n/2+\Theta(1)\) lower bound for the SDRDN by means of the discharging method. Moreover, we derive a new best upper bound. Observing that we are often able to minimize the SDRDN over the class of cubic graphs of a fixed order, we then study in this context generalized Petersen graphs for independent interest, for which we propose a constraint programming guided proof. We then use these insights to determine the SDRDNs of subcubic \(2\times m\) grid graphs, among other results. keywords: Signed Double Roman domination, Cubic graphs, Discharging method, Generalized Petersen graphs Msc: 05C78, 05C35, 90C27 + Footnote †: journal: Computer Science ## 1 Introduction The signed double Roman domination problem (SDRDP) is a natural combination of the classical _signed domination problem_[7] and the so-called _double Roman domination problem_[5]. The latter, in turn, is a variant of the _Roman domination problem_ (RDP) [20; 6] well-known from contexts, where it is required to economically distribute resources over a network while still ensuring to have a locally available backup resource; practical application scenarios are, e.g. optimal placement of servers [15], or the reduction of energy consumption in wireless sensor networks [9]. Originally, the RDP was motivated by a strategy of the Roman emperor Constantine (c.f. [20]) on how to secure his empire with minimum amount of legions. In [11], it is pointed out that one can use signed domination to model winning strategies for problems where it is required to locally obtain majority votes. From the perspective of classical domination, studying cubic graphs has a long tradition. In fact, it was already shown in 1980 by Kikuno et al. [13] that the problem is NP-complete on planar cubic graphs. Another influential work was done by Reed [17] in 1996, who derived a sharp upper bound for graphs of minimum vertex degree three; one of his conjectures about the improvability on connected cubic graphs was later falsified and updated in [14]. Apart from the famous dominating set problem, during the last decades, considerable interest has emerged in solving also such more constrained variants of domination problems, in particular their restrictions on specific graph classes: Another important class studied under these aspects is the one of grid graphs for which the dominating set problem [10], the \(2\)-domination problem [16], and the RDP [16] have been solved to optimality. In the following, we consider undirected simple graphs. For such a graph \(G=(V,E)\) and a vertex \(v\in V\), we denote by \(N(v):=\{w\in V\mid vw\in E\}\) the open neighborhood of \(v\) and by \(N[v]:=N(v)\cup\{v\}\) its closure. 
The order of a graph \(G\) refers to the cardinality \(|V|\) of its set of vertices. Graph \(G\) is called \(d\)-regular, if \(|N(v)|=d\), for any \(v\in V\). A _cubic_ graph is a \(3\)-regular graph. Given a graph \(G=(V,E)\) and a labeling function \(f:V\to\mathbb{R}\), for any subset \(S\subseteq V\), we define the _cumulative weight_ of \(f\) restricted to \(S\) as \(w_{f}(S):=\sum_{s\in S}f(s)\). We also write \(w_{f}(G)\) for \(w_{f}(V)\), and when the function \(f\) is clear from the context, we omit \(f\) in the subscript. Often we directly identify a function \(f:V\to\{-1,1,2,3\}\) with its associated preimages \(V_{i}:=f^{-1}(\{i\})=\{v\in V\mid f(v)=i\}\), \(i\in\{\pm 1,2,3\}\). We denote \(\mathbb{N}=\{0,1,2,\ldots\}\). In some definitions, for simplicity, the vertices will be indexed by \(\mathbb{Z}_{m}\), the residue class ring modulo \(m\). For a set \(A\), by \(\mathbb{1}_{A}(x)\), we refer to its indicator function. Following [1], for a given graph \(G=(V,E)\), a function \(f:V\to\{\pm 1,2,3\}\) is called _signed double Roman domination function_ (SDRDF) on \(G\) if the following conditions (1a)-(1c) are met. For all \[u\in V_{-1},\] there exists \[v\in N(u)\cap V_{3}\] or there exist distinct \[v_{1},v_{2}\in N(u)\cap V_{2}\] . (1a) For all \[u\in V_{1},\] there exists \[v\in N(u)\cap(V_{2}\cup V_{3})\] . (1b) For all \[u\in V,\ w_{f}(N[u])\geqslant 1,\] i.e., the cumulative weight of \[N[u]\] is positive. (1c) We call \(\gamma_{\mathrm{sdR}}(G):=\min\{w_{f}(V)\}\mid f\) is a SDRDF on \(G\}\)_signed double Roman domination number of_\(G\) (SDRDN). Existing vertices \(v\), \(v_{1}\) and \(v_{2}\) in (1a) and (1b) are said to _defend_ the respective vertex \(u\). A generalization of the SDRDP is the signed double Roman \(k\)-domination problem (SD\(k\)RDP), originally proposed in [3] (\(k\in\mathbb{N}\setminus\{0\}\) fixed), requiring the fulfillment of the conditions (1a)-(1c) plus the additional restriction \(w_{f}(N[u])\geqslant k\) for all vertices \(u\in V\). The minimum weight taken over all labelings satisfying the latter property determines the so-called _SD\(k\)RD number_\(\gamma_{\mathrm{sdR},k}(G)\). We introduce notation for special classes of (sub)cubic graphs in what follows: On the one hand, for \(m\in\mathbb{N}\setminus\{0,1,2\}\) and \(k\in\mathbb{Z}_{m}\setminus\{0\}\), the _generalized Petersen graph_\(P_{m,k}\) comprises vertex set \(\{u_{i},v_{i}\mid i\in\mathbb{Z}_{m}\}\) and has edge set \(\{u_{i}u_{i+1},v_{i}v_{i+k},u_{i}v_{i}\mid i\in\mathbb{Z}_{m}\}\). We refer to the value \(k\in\mathbb{Z}_{m}\) as _shift parameter_ and remark that \(P_{m,1}\) is isomorphic to the _\(m\)-prism graph_. On the other hand, we define the \(\ell\times m\)_grid graph_\(G_{\ell,m}\) on the set of vertices \(\{0,\ldots,\ell-1\}\times\{0,\ldots,m-1\}\subseteq\mathbb{R}\times\mathbb{R}\), for which two vertices are adjacent if their Euclidean distance equals one [6]. For \(\ell=2\) we introduce a briefer notation which identifies \((0,i)\in\mathbb{R}^{2}\) with the symbol \(u_{i}\) and \((1,i)\in\mathbb{R}^{2}\) with \(v_{i}\), \(i=0,\ldots,m-1\). Finally, a flower snark FS\({}_{m}\) (\(m\geqslant 5\)) is a graph with vertex set \(V=\{a_{i},b_{i},c_{i},d_{i}\mid i\in\mathbb{Z}_{m}\}\) and edge set \(E\) formed by the union of the three sets \(\{a_{i}b_{i},a_{i}c_{i},a_{i}d_{i}\mid i\in\mathbb{Z}_{m}\}\), \(\{b_{i}b_{i+1}\mid i\in\mathbb{Z}_{m}\}\), and \(\{c_{0}c_{1},c_{1}c_{2},\ldots,c_{m-2}c_{m-1},c_{m-1}d_{0},d_{0}d_{1},d_{1}d_{2 },\ldots,d_{m-2}d_{m-1},c_{0}d_{m-1}\}\). 
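For experimentation with these definitions, the following small Python sketch (using the networkx package for graph handling; the helper names are ours and not part of the paper) builds \(P_{m,k}\) and checks conditions (1a)-(1c) for a given labeling. The closing example verifies the alternating labeling that reappears later in Theorem 4 on \(P_{8,3}\), which has cumulative weight \(m=8\).

```python
import networkx as nx

def generalized_petersen(m, k):
    """Builds P_{m,k} on vertices ('u', i) and ('v', i), indices taken modulo m."""
    G = nx.Graph()
    for i in range(m):
        G.add_edge(('u', i), ('u', (i + 1) % m))  # outer cycle u_i u_{i+1}
        G.add_edge(('v', i), ('v', (i + k) % m))  # inner edges v_i v_{i+k}
        G.add_edge(('u', i), ('v', i))            # spokes u_i v_i
    return G

def is_sdrdf(G, f):
    """Checks whether f: V -> {-1, 1, 2, 3} satisfies (1a)-(1c) on G."""
    for u in G:
        nbrs = list(G[u])
        if f[u] == -1 and not (any(f[w] == 3 for w in nbrs)
                               or sum(f[w] == 2 for w in nbrs) >= 2):
            return False                                   # violates (1a)
        if f[u] == 1 and not any(f[w] in (2, 3) for w in nbrs):
            return False                                   # violates (1b)
        if f[u] + sum(f[w] for w in nbrs) < 1:
            return False                                   # violates (1c)
    return True

# Alternating labeling on P_{8,3}: u_i, v_i get -1 for even i and 2 for odd i.
G = generalized_petersen(8, 3)
f = {(s, i): (-1 if i % 2 == 0 else 2) for s in ('u', 'v') for i in range(8)}
assert is_sdrdf(G, f) and sum(f.values()) == 8
```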
These three specific graph classes are visualized in Figure 1. The main contributions of this work are as follows. * A lower bound for \(\gamma_{\mathrm{sdR}}\) on cubic graphs twice as high as the so far best known one is derived via the discharging method. It turns out to even be optimally sharp, settling the missing case \(k=1\) of the collection of optimal lower bounds for the SD\(k\)RDP pointed out in [3]. * Tight or even optimal bounds on \(\gamma_{\mathrm{sdR}}\) are established and proved for * selected subclasses of generalized Petersen graphs, * \(2\times m\) grid graphs, * and flower snarks. For some results we design an inductive proof relying on constraint programming [18]. * Additionally, best known upper bounds for \(\gamma_{\mathrm{sdR}}\) and \(\gamma_{\mathrm{sdR},2}\) on (connected) cubic graphs are improved. In the remainder of this introduction, we give an overview of relevant recent results from the literature. For the SDRDP, it is shown that calculating \(\gamma_{\mathrm{sdR}}\) on bipartite as well as on chordal graphs is NP-hard [1]. Moreover, exact values of \(\gamma_{\mathrm{sdR}}\) are established for special classes of graphs, including complete graphs, paths, cycles, and complete bipartite graphs. In [2], lower bounds for \(\gamma_{\mathrm{sdR}}\) are obtained in dependence of the minimum respectively maximum vertex degree; furthermore, it is shown that in the absence of isolated vertices \(\gamma_{\mathrm{sdR}}(G)\geqslant(19n-24m)/9\), where \(n\) and \(m\) denote the order of \(G\) and the number of edges in \(G\), respectively. For trees, in [1], it is shown that \(\gamma_{\mathrm{sdR}}\geqslant 4\sqrt{n/3}-n\) and that trees attaining the bound can be characterized. Calculating \(\gamma_{\mathrm{sdR}}\) on digraphs is addressed in [4]. Results concerning upper bounds for the SD\(k\)RD number \(\gamma_{\mathrm{sdR},k}\) on general graphs as well as on specific graph classes such as regular graphs and bipartite graphs are given in [3]. More specifically, we are interested in improving the following result. **Theorem 1** ([3, Theorem 3.4]).: _In the setting of connected cubic graphs1, the following bounds for \(\gamma_{\mathrm{sdR},k}\) apply. Moreover, the lower bounds are optimal for \(k\in\{2,3,4,5\}\)._ Footnote 1: The lower bound also applies for non-connected cubic graphs [3, Proposition 2]. \[\frac{kn}{4}\leqslant\gamma_{\mathrm{sdR},k}\leqslant\frac{13n}{8}. \tag{2}\] In contrast to the trivial worst-case upper bound \(\gamma_{\mathrm{sdR}}\leqslant 2n\) on general graphs, this shows that a smaller upper bound can be achieved on cubic graphs. In fact, for \(k=1\) the latter result just affirms (for connected cubic graphs) \[\frac{n}{4}\leqslant\gamma_{\mathrm{sdR}}\leqslant\frac{13n}{8}. \tag{3}\] Figure 1: Exemplary graphs for the special graph classes considered in this work. As an auxiliary tool, we will fall back on the following concept from [12], the so-called _\(\alpha\)-total domination number_\(\gamma_{\alpha,\mathrm{t}}(G)\). For \(0<\alpha<1\), \(\gamma_{\alpha,\mathrm{t}}(G)\) is defined as the minimum cardinality of an _\(\alpha\)-total dominating set of \(G\)_, i.e., a total dominating set \(S\subseteq V\) satisfying that any vertex \(v\in V\setminus S\) fulfills \(|N(v)\cap S|\geqslant\alpha|N(v)|\). **Theorem 2** ([12, Theorem 10.b]).: _Let \(G\) be a cubic graph of order \(n\). 
For \(1/3<\alpha\leqslant 2/3\), we have \(n/2\leqslant\gamma_{\alpha,\mathrm{t}}(G)<3n/4\)._ ## 2 Main results We employ \(\alpha\)-total domination to improve the upper bound in Theorem 1 (for \(k=1\) and \(k=2\)) by a factor of approximately \(0.77\). **Proposition 1**.: _We have \(\gamma_{\mathrm{sdR},2}(G)<5n/4\) and \(\gamma_{\mathrm{sdR}}(G)<5n/4\) for cubic graphs \(G\) of order \(n\)._ Proof.: For \(G=(V,E)\), we select a totally dominating subset \(S\subseteq V\) such that each vertex \(v\in V\setminus S\) has at least two neighbors in \(S\), which corresponds to an \(\alpha\)-total dominating set in \(G\) with \(\alpha=2/3\). Pick the labeling \(f\) satisfying \(V_{2}=S\) and \(V_{-1}=V\setminus S\). We check that the cumulative weight of any closed neighborhood is at least \(2\): In the neighborhood of any vertex \(v\in V_{2}\), at least one neighbor must be labeled \(2\) by total domination. Consequently \(w_{f}(N[v])\geqslant 2\). On the other hand, each \(v\in V_{-1}\) has at least two neighbors in \(V_{2}\) (by the \(\alpha\)-domination property), again verifying \(w_{f}(N[v])\geqslant 2\). Adding up all labels, according to Theorem 2 we obtain \[w_{f}(V)=2|V_{2}|-|V_{-1}|<2\cdot\frac{3n}{4}-\frac{n}{4}=\frac{5n}{4}.\] Since we managed to reduce the upper bound (2), as in [3], we pose ourselves the question if \(\gamma_{\mathrm{sdR}}\leqslant n\) for connected cubic graphs; see Section 3 for further thoughts. Let us add an observation stating that in the setting of _cubic graphs_, formulating that a labeling \(f\) is a SDRDF, is expressible in an arithmetic-free manner. It will be useful to abbreviate the verification of the SDRDF property in many situations. **Observation 1**.: _Condition (1c) can be replaced by the following equivalent one._ \[\text{For all }v\in V\text{, there are distinct }v_{1},v_{2}\in N[v]\text{ such that }-1\not\in\{f(v_{1}),f(v_{2})\}.\] (1c') _More precisely, it is possible to replace (1a)-(1c) by the conjunction of (1a)-(1b) and (1c')._ Proof.: We start by showing that our altered condition implies the original one (1a)-(1c). Firstly, if \(v\in V_{2}\cup V_{3}\) and there is at least one further positively labeled vertex in \(N(v)\), positivity of \(w_{f}(N[v])\) ensues. Secondly, any \(v\in V_{1}\) verifying (1b) and (1c') implies the existence of a vertex in \((V_{2}\cup V_{3})\cap N[v]\) allowing to conclude \(w_{f}(N[v])\geqslant 1+2+2\cdot(-1)=1\). Thirdly, any \(v\in V_{-1}\) with two distinct vertices \(v_{1},v_{2}\in N(v)\setminus V_{-1}\) satisfying (1a) must fulfill \(\{f(v_{1}),f(v_{2})\}\in\{\{2\},\{3,1\},\{3,2\},\{3\}\}\) implying (1c). Now we address the other proof direction by proving its contrapositive: Suppose \(|N[v]\cap V_{-1}|\geqslant 3\) for some \(v\in V\). This automatically implies, for some \(x\in\{\pm 1,2,3\}\), that \(w_{f}(N[v])=-3+x\leqslant 0\). We can therefore certify invalidity of (1c) for the labeling. We come up with the subsequent lower bound on cubic graphs, which improves upon (3) by a factor of two. Later, in Remark 1, we will show this lower bound to even be optimal. **Theorem 3**.: _For any cubic graph \(G\) of order \(n\) we have_ \[\gamma_{\mathrm{sdR}}(G)\geqslant\begin{cases}n/2&\text{if }n\equiv 0\pmod{4} \\ n/2+1&\text{if }n\equiv 2\pmod{4}.\end{cases} \tag{4}\] Proof.: First, note that odd values for \(n\) in (4) are irrelevant, as it is well known that vertex sets of cubic graphs have even cardinality, according to the Handshaking Lemma. The proof is divided into two steps. _Step 1. 
The lower bound \(n/2\) applies._ Let \(f\) be an arbitrary SDRDF on \(G\). We define the function \(g\) as the final product of the following discharging rules 1-1, executed one by one in succession; cf. [19]. In these discharging rules, we think of the vertex \(v\) as transmitting the charge quantity \(1/4\), \(3/4\), respectively \(5/4\) to each of its specified neighbors. 1. For each \(v\in V\), let \(g(v)=f(v)\) at the beginning of the procedure. 2. Update \(g(v)\,\leftarrow\,g(v)-|N(v)\cap V_{-1}|/4\), for all \(v\in V_{1}\), and update \(g(u)\,\leftarrow\,g(u)+1/4\), for all \(u\in N(v)\cap V_{-1}\). 3. Update \(g(v)\,\leftarrow\,g(v)-3|N(v)\cap V_{-1}|/4\), for all \(v\in V_{2}\), and update \(g(u)\,\leftarrow\,g(u)+3/4\), for all \(u\in N(v)\cap V_{-1}\). We note that in this procedure, after any rule application, the equality \(w_{g}(V)=w_{f}(V)\) is preserved. Observe that after the termination of this procedure, we have \(g(v)\geqslant 1/2\) for each vertex \(v\in V\): By cubicity, condition 1 ensures that each \(v\not\in V_{-1}\) is adjacent to at most two vertices labeled \(-1\) and each \(v\in V_{-1}\) is adjacent to at most one vertex labeled \(-1\). Hence, after application of all the rules 1-1 on \(f\), we obtain the subsequent implications. \[v\in V_{1} \implies\,g(v)\geqslant f(v)-2\cdot\frac{1}{4}=\frac{1}{2}, \tag{5}\] \[v\in V_{2} \implies\,g(v)\geqslant f(v)-2\cdot\frac{3}{4}=\frac{1}{2},\] (6) \[v\in V_{3} \implies\,g(v)\geqslant f(v)-2\cdot\frac{5}{4}=\frac{1}{2},\] (7) \[v\in V_{-1}\wedge N(v)\cap V_{3}=\emptyset \implies\,g(v)\geqslant f(v)+2\cdot\frac{3}{4}=\frac{1}{2},\] (8) \[v\in V_{-1}\wedge N(v)\cap V_{3}\neq\emptyset \implies\,g(v)\geqslant f(v)+\frac{1}{4}+\frac{5}{4}=\frac{1}{2}. \tag{9}\] Bound (8) applies since the implication's premise enforces that \(v\) must have at least two neighbors labeled \(2\). On the other hand, bound (9) applies because, apart from one \(3\)-labeled neighbor of \(v\) given by the premise, there must be one more neighbor from \(V\setminus V_{-1}\) (the minimum value of \(g(v)\) is obtained in the situation when this neighbor is labeled \(1\), and the remaining third neighbor is labeled \(-1\), yielding \(g(v)=f(v)+1/4+5/4=1/2\)). Consequently, at the end of this procedure, we have \(g(v)\geqslant 1/2\), for each \(v\in V\), implying \(w_{f}(V)=w_{g}(V)=\sum_{v\in V}g(v)\geqslant|V|/2\). _Step 2. The lower bound is refinable for \(n\equiv 2\pmod{4}\)._ Let \(g:V\to\mathbb{R}\) be the function arising from \(f\) via the discharging method in Step 1. We make a case distinction. _Case 1._ There is a vertex \(s\in V_{1}\cup V_{2}\cup V_{3}\) having less than two neighbors in \(V_{-1}\). We show that the bound \(n/2\) cannot be attained by \(f\): In fact, \[\sum_{v\in V_{1}\cup V_{2}\cup V_{3}}g(v) =g(s)+\sum_{v\in V_{1}\cup V_{2}\cup V_{3}\setminus\{s\}}g(v)\] \[\geqslant g(s)+\frac{|V_{1}\cup V_{2}\cup V_{3}|-1}{2}\] \[\geqslant\mathbb{1}_{\,V_{1}}(s)(1-\tfrac{1}{4})+\mathbb{1}_{\,V _{2}}(s)(2-\tfrac{3}{4})+\mathbb{1}_{\,V_{3}}(s)(3-\tfrac{5}{4})+\frac{|V_{1} \cup V_{2}\cup V_{3}|-1}{2}\] \[>\frac{|V_{1}\cup V_{2}\cup V_{3}|}{2},\] and therefore \(w_{f}(V)=\sum_{v\in V}g(v)>n/2\). _Case 2._ Assume all vertices in \(V_{1}\cup V_{2}\cup V_{3}\) have two neighbors in \(V_{-1}\). Let \(n=4\ell+2\) where \(\ell\in\mathbb{N}\setminus\{0\}\). For \(v\in V_{-1}\) having three neighbors in \(V_{1}\cup V_{2}\cup V_{3}\), in (8) and (9), we face even strict majorization \(g(v)>\tfrac{1}{2}\). 
Therefore, there exists \(\varepsilon>0\) such that we can estimate via (5)-(9), \[\sum_{v\in V}g(v) =\sum_{v\in V_{1}\cup V_{2}\cup V_{3}}g(v)+\sum_{\begin{subarray} {c}v\in V_{-1}\\ |N(v)\cap(V_{1}\cup V_{2}\cup V_{3})|=2\end{subarray}}g(v)+\sum_{ \begin{subarray}{c}v\in V_{-1}\\ |N(v)\cap(V_{1}\cup V_{2}\cup V_{3})|=3\end{subarray}}g(v) \tag{10}\] \[\geqslant\tfrac{1}{2}|V_{1}\cup V_{2}\cup V_{3}|+\tfrac{1}{2}\, \left|\{v\in V_{-1}:|N(v)\cap(V_{1}\cup V_{2}\cup V_{3})|=2\}\right|\] \[\qquad+(\tfrac{1}{2}+\varepsilon)\left|\{v\in V_{-1}:|N(v)\cap(V_ {1}\cup V_{2}\cup V_{3})|=3\}\right|. \tag{11}\] From (11) we obtain that whenever \(\left|\{v\in V_{-1}:|N(v)\cap(V_{1}\cup V_{2}\cup V_{3})|=3\}\right|\neq\emptyset\), then we have even more strongly \(w_{f}(V)=w_{g}(V)=\sum_{v\in V}g(v)>|V|/2\). Indeed, in our considered case, this non-emptiness occurs: An edge-counting argument applied to the fact that the vertices in \(V_{1}\cup V_{2}\cup V_{3}\) have precisely two neighbors in \(V_{-1}\) and the fact that each vertex in \(V_{-1}\) must have _at least_ two neighbors in \(V_{1}\cup V_{2}\cup V_{3}\) shows that \(|V_{1}\cup V_{2}\cup V_{3}|\geqslant|V_{-1}|\). The set \(V_{1}\cup V_{2}\cup V_{3}\) must be of even cardinality, as for each of its vertices--apart from the two edges connecting the vertex with \(V_{-1}\)--the third edge must be incident to a vertex in \(V_{1}\cup V_{2}\cup V_{3}\). Moreover, this implies that \(|V_{1}\cup V_{2}\cup V_{3}|>2\ell+1>|V_{-1}|\). The pigeonhole principle shows that at least one vertex labeled \(-1\) must have three neighbors in \(V_{1}\cup V_{2}\cup V_{3}\). **Remark 1**.: _As we will see, the lower bound (4) for cubic graphs is optimally sharp, as, e.g., \(P_{n/2,3}\) are (connected) cubic graphs attaining the bound._ ### Cubic graphs with extremal properties: Generalized Petersen graphs Let us start our considerations with the following result. **Theorem 4**.: _We have \(\gamma_{\mathrm{sdR}}(P_{m,k})=m\) whenever \(m\geqslant 4\) is even and \(k\) is odd._ Proof.: Choose the labeling with \(V_{-1}=\{u_{2i},v_{2i}\mid i=0,\dots,m/2-1\}\) and \(V_{2}=V\setminus V_{-1}\). Then \(w(V)=m\), and the SDRDF constraints are met. In fact, this function has for each vertex \(u\in\{u_{2i}\mid i=0,\dots,m/2-1\}\) the two 2-labeled defenders \(u_{2i-k}\), \(u_{2i+k}\). By the same index shift \(i\mapsto i\pm k\), we see that \(v\in\{v_{2i}\mid i=0,\dots,m/2-1\}\) has two defenders. Recalling (1c'), the existence of these defenders also guarantees that the vertices \(u\) and \(v\) have positive cumulative weight on their closed neighborhoods. For the vertices \(w\in V\setminus V_{-1}=V_{2}=\{u_{2i+1},v_{2i+1}\mid i=0,\dots,m/2-1\}\), the positivity is guaranteed by the fact that \(\{u_{2i+1},v_{2i+1}\}\subseteq N[w]\cap V_{2}\). Finally, as the weight of the constructed SDRDF coincides with the lower bound of the previous Theorem 3, the SDRDF is optimal. **Theorem 5**.: _For the generalized Petersen graph \(P_{m,3}\), \(m\geqslant 8\), we have_ \[\gamma_{\rm sdR}(P_{m,3})=\begin{cases}m&\text{if }m\equiv 0\pmod{2},\\ m+1&\text{else}.\end{cases} \tag{12}\] Proof.: For even \(m\), optimal constructions proving (12) have already been found, cf. Theorem 4 for \(k=3\). To show that the right-hand side of (12) is an _upper bound_ for \(\gamma_{\rm sdR}(P_{m,3})\) for odd \(m\), we distinguish two cases, both constructing a particular SDRDF on \(P_{m,3}\); in Figure 2 supportive visualizations of the underlying scheme for both are given. _Case 1. 
\(m\equiv 1\pmod{4}\)_. Let \(f\) be the labeling with \(V_{2}=\{u_{4i},u_{4i+1},v_{4i+2},v_{4i+3}\mid i=0,\ldots,\frac{m-9}{4}\}\cup\{u _{m-5},u_{m-4},u_{m-2},v_{m-2}\}\), \(V_{1}=\{v_{m-3},v_{m-1}\}\), and \(V_{-1}=V\setminus(V_{2}\cup V_{1})=\{u_{4i+2},u_{4i+3},v_{4i},v_{4i+1}\mid i=0,\ldots,\frac{m-9}{4}\}\cup\{u_{m-3},u_{m-1},\\ v_{m-5},v_{m-4}\}\). The satisfaction of all SDRDF constraints by \(f\) is argued in Table A.1 in the appendix. This implies \(\gamma_{\rm sdR}(P_{m,3})\leqslant w_{f}(P_{m,3})=2|V_{2}|+|V_{1}|-|V_{-1}|=2( m-1)+2-(m-1)=m+1\). _Case 2. \(m\equiv 3\pmod{4}\)_. We construct a labeling \(f\) satisfying \(V_{3}=\{v_{m-3}\}\), \(V_{2}=\{u_{4i+2},u_{4i+3},\)\(v_{4i},v_{4i+1}\mid i=0,\ldots,\frac{m-15}{4}\}\cup\{u_{m-9},u_{m-7},u_{m-5},u_{m-1}, v_{m-11},v_{m-10},v_{m-5},v_{m-4}\}\), \(V_{1}=\{u_{m-2},v_{m-9},\)\(v_{m-7}\}\), and \(V_{-1}=V\setminus(V_{2}\cup V_{1})=\{u_{4i},u_{4i+1},v_{4i+2},v_{4i+3}\mid i=0, \ldots,\frac{m-15}{4}\}\cup\{u_{m-11},u_{m-10},u_{m-8},\\ u_{m-6},u_{m-4},u_{m-3},v_{m-8},v_{m-6},v_{m-2},v_{m-1}\}\). We check that \(f\) is a SDRDF in Table A.2 in the appendix. Therefore, we conclude \(\gamma_{\rm sdR}(P_{m,3})\leqslant w_{f}(P_{m,3})=3|V_{3}|+2|V_{2}|+|V_{1}|-| V_{-1}|=3+2(m-3)+3-(m-1)=m+1\). Finally, it remains to show that the right-hand side of (12) is also a _lower bound_ for \(\gamma_{\rm sdR}(P_{m,3})\) when \(m\) is odd. However, this follows directly from Theorem 3 and concludes our proof. In the following, we point out that the graph \(P_{m,1}\)--with the exception of \(m\equiv 1\pmod{4}\)--attains the lower bound in (4), too. For tackling the aforementioned exceptional case, we state in the following two technical results as Lemma 1 and Lemma 2. These results incorporate an approach to determine \(\gamma_{\rm sdR}\) for a sufficiently structured rotationally symmetric graph. The method relies on a computer-aided exhaustive search for optima on fixed small subgraphs. It seems applicable to other domination-like problems, too. Figure 2: Optimal SDRDFs for \(P_{m,3}\) when \(m=4\ell+1\) respectively \(m=4\ell+3\). In both cases, a label pattern of width 4 is periodically repeated \(\ell-1\) respectively \(\ell-2\) times to finally be flanked by a termination pattern of width 5 respectively 11. The labeling is exemplarily illustrated for \(m=13\) respectively \(m=19\). **Lemma 1**.: _We consider vertex sets \(L:=\{\ell_{b},\ell_{b,i},\ell_{t},\ell_{t,i}\}\), \(R:=\{r_{b},r_{b,i},r_{t},r_{t,i}\}\), \(C:=\{u_{i},v_{i}\mid i=0,\ldots,7\}\), and \(C^{\prime}:=\{u_{i},v_{i}\mid i=0,\ldots,3\}\). Let \(G\) and \(G^{\prime}\) be the grid graphs having vertex sets \(V:=L\cup C\cup R\) and \(V^{\prime}:=L\cup C^{\prime}\cup R\), respectively, and edges as depicted in Figures 2(a) and 2(b). Let \(f:V\to\{\pm 1,2,3\}\), respectively \(f^{\prime}:V^{\prime}\to\{\pm 1,2,3\}\) satisfy the SDRDP constraints (1a)-(1c) in all vertices except possibly for those in \(\{\ell_{t},\ell_{b},r_{t},r_{b}\}\). Moreover, let us assume that \(f\) attains minimal cumulative weight on \(C\) and \(f^{\prime}\) attains minimal cumulative weight on \(C^{\prime}\).2 Then, the following properties hold._ Footnote 2: I.e., \(f\) and \(f^{\prime}\) can both not be improved by updating their values just on \(C\) and \(C^{\prime}\), respectively. 
* _For_ \(k\leqslant 5\)_,_ \(w_{f}(C)\neq k\)_._ * _For_ \(k\in\{6,7,9\}\)_, whenever_ \(w_{f}(C)=k\)_, then_ \(w_{f^{\prime}}(C^{\prime})=k-4\)_._ Proof.: Exhaustively, per given parameter choice \(d=(\ell_{\mathrm{b}},\ell_{\mathrm{b},\mathrm{i}},\ell_{\mathrm{t}},\ell_{ \mathrm{t},\mathrm{i}},r_{\mathrm{b}},r_{\mathrm{b},\mathrm{i}},r_{\mathrm{t} },r_{\mathrm{t},\mathrm{i}})\in\{\pm 1,2,3\}\)8, i.e., by fixing the labels on the delimiting vertices in \(L\cup R\), we can determine a SDRDF being minimal with respect to the cumulative weight restricted to \(C\) (respectively to \(C^{\prime}\)). Footnote 8: I.e., \(f\) and \(f^{\prime}\) can both not be improved by updating their values just on \(C\) and \(C^{\prime}\), respectively. Algorithm 1 in the appendix explains how we carried this out computationally. After symmetry breaking (see Remark 2), the algorithm exhaustively examines several cases, ultimately showing that the smallest attainable optimal weight is \(6\), which proves claim (i). Furthermore, (ii) is valid, as we observe that all hereby obtained minima over \(C\) attaining the value \(k\in\{6,7,9\}\) are accompanied by a respective minimum of \(k-4\) on the smaller center \(C^{\prime}\) in \(G^{\prime}\) with the same delimiting constellation \(d\). **Remark 2** (Symmetry breaking).: _We employ vertical and horizontal flipping and point reflection through the center, i.e., a labeling for \(\begin{bmatrix}\ell_{t}&\ell_{t,i}&r_{t,i}&r_{t}\\ \ell_{b}&\ell_{b,i}&r_{b,i}&r_{b}\end{bmatrix}\) is oftentimes represented by a respective labeling for \(\begin{bmatrix}\ell_{b}&\ell_{b,i}&r_{b,i}&r_{b}\\ \ell_{t}&\ell_{t,i}&r_{t,i}&r_{t}\end{bmatrix}\), \(\begin{bmatrix}r_{t,i}&r_{t}&\ell_{t,i}&\ell_{t,i}\\ r_{b,i}&r_{b}&\ell_{b}&\ell_{b,i}\end{bmatrix}\), or \(\begin{bmatrix}r_{b}&r_{b,i}&\ell_{b,i}&\ell_{b}\\ r_{t}&r_{t,i}&\ell_{t,i}&\ell_{t}\end{bmatrix}\). Instead of the \(4^{8}=65536\) constellations, it is herewith sufficient to fall back to only a fraction of them, which, after removal of the constellations placing more than two (\(-1\))-labels inside \(\langle\ell_{t},\ell_{t,i},\ell_{b},\ell_{b,i}\rangle\) or inside \(\langle r_{t,i},r_{t},r_{b,i},r_{b}\rangle\) (hence violating (1c)), contains \(14940\) cases. To keep the argument conceptually simple, we did not eliminate further parameter constellations, which a priori might indicate non-optimality._ Given a fixed \(P_{m,1}\), \(m\geqslant 13\) with an optimal SDRDF function \(f\) defined on it, we say that a \(2\times 12\) subblock of \(P_{m,1}\), i.e., a subset of vertices \(\{v_{i+j},u_{i+j}\mid j=0,\ldots,11\}\) for some \(i\in\mathbb{Z}_{m}\), has the _quality-transferring property w.r.t._\(f\), if the vertices \(\{\ell_{\mathrm{b}},\ell_{\mathrm{b},\mathrm{i}},\ell_{\mathrm{t}},\ell_{ \mathrm{t},\mathrm{i}},r_{\mathrm{b}},r_{\mathrm{b},\mathrm{i}},r_{\mathrm{t} },r_{\mathrm{t},\mathrm{i}}\}\cup\{v_{0},u_{0},\ldots,v_{3},u_{3}\}\) of the Figure 3: The graph \(G^{\prime}\) in (2(b)) is the result of deleting four of the vertical edges from \(G\) in (2(a)) and successively performing eight edge contractions. graph \(G^{\prime}\) in Figure 2(b) can be labeled by a function \(\tilde{f}\) in such a way that3 Footnote 3: Entry-wise equality of \(2\times 4\) arrays is meant in (13). 
\[\begin{bmatrix}\tilde{f}(\ell_{\text{t}})&\tilde{f}(\ell_{\text{t,i}})&\tilde{f}(r_{\text{t,i}})&\tilde{f}(r_{\text{t}})\\ \tilde{f}(\ell_{\text{b}})&\tilde{f}(\ell_{\text{b,i}})&\tilde{f}(r_{\text{b,i}})&\tilde{f}(r_{\text{b}})\end{bmatrix}=\begin{bmatrix}f(v_{i})&f(v_{i+1})&f(v_{i+10})&f(v_{i+11})\\ f(u_{i})&f(u_{i+1})&f(u_{i+10})&f(u_{i+11})\end{bmatrix}, \tag{13}\] \[w_{\tilde{f}}(\{u_{0},\ldots,u_{3}\}\cup\{v_{0},\ldots,v_{3}\})\leqslant w_{f}(\{u_{i+2},\ldots,u_{i+9}\}\cup\{v_{i+2},\ldots,v_{i+9}\})-4, \tag{14}\] \[\text{and }\tilde{f}\text{ satisfies (1a)-(1c) in all vertices of }G^{\prime}\text{ except possibly those in }\{\ell_{\text{t}},\ell_{\text{b}},r_{\text{t}},r_{\text{b}}\}. \tag{15}\] In particular, for an optimal SDRDF \(f\) on \(P_{m,1}\), Lemma 1 (ii) shows that a \(2\times 12\) subblock has, whenever its \(2\times 8\) center attains cumulative
It is therefore impossible that any \(2\times 8\) subblock of \(P_{m+4}\) attains the cumulative weight \(6\), \(7\), or \(9\). In particular, we have shown that necessarily

\[w_{f}(M_{i})\geqslant 8,\text{ for all }i\in\mathbb{Z}_{m+4}. \tag{23}\]

By Theorem 3 we know \(\gamma_{\mathrm{sdR}}(P_{m+4,1})\geqslant m+4+1\). Hence, there must exist an index \(i^{\prime}\) such that \(w_{f}(M_{i^{\prime}})\geqslant 9\); otherwise, we would have \(w_{f}(M_{i})=8\) for all \(i\), implying \(w_{f}(V)=\sum_{i\in\mathbb{Z}_{m+4}}w_{f}(M_{i})/8=8(m+4)/8=m+4\) and contradicting Theorem 3. However, for \(i^{\prime}\) we must even have \(w_{f}(M_{i^{\prime}})\geqslant 10\), according to the previously observed impossibility of attaining weight \(9\). To conclude that \(w_{f}(P_{m+4,1})<m+4+2\) always leads to a contradiction, we distinguish two cases.

_Case 1._ Suppose \(m+4=8\ell+5\), \(\ell\in\mathbb{N}\). Observation 3 (i) tells us that either \(f\) on \(P_{m+4,1}\) has the quality-transferring property (immediate contradiction to (22)) or there exists a suitable index \(i(5)\in\mathbb{Z}_{m+4}\) for which \(A:=\{u_{i(5)},v_{i(5)},\ldots,u_{i(5)+12},v_{i(5)+12}\}\) induces a \(2\times 13\) subblock of cumulative weight not smaller than \(15\), leading to a lower bound exceeding the upper bound in (17), as can be seen via the following argument: Partition the vertices of \(V\setminus A\) into \(\ell-1\) subblocks of dimensions \(2\times 8\) and apply (23) to them. Then \(w_{f}(V)=w_{f}(V\setminus A)+w_{f}(A)\), which can be bounded from below by \(8(\ell-1)+15=8\ell+5+2=m+4+2\) and contradicts (17).

_Case 2._ Suppose \(m+4=8\ell+1\), \(\ell\in\mathbb{N}\). Observation 3 (ii) guarantees that either \(f\) on \(P_{m+4,1}\) has the quality-transferring property (immediate contradiction to (22)) or there exists a suitable index \(i(1)\in\mathbb{Z}_{m+4}\) for which \(\{u_{i(1)},v_{i(1)},\ldots,u_{i(1)+8},v_{i(1)+8}\}\) induces a \(2\times 9\) subblock of cumulative weight not smaller than \(11\), leading to a lower bound exceeding the upper bound in (17), as can be seen via the following argument: Similarly as before, we can estimate \(w_{f}(P_{m+4,1})\geqslant 8(\ell-1)+11=8\ell+1+2=m+4+2\), yielding again a contradiction to (17).

**Theorem 6**.: _For the generalized Petersen graph \(P_{m,1}\), \(m\geqslant 3\), we have_

\[\gamma_{\mathrm{sdR}}(P_{m,1})=\begin{cases}m&\text{if }m\equiv 0\pmod{2}\\ m+1&\text{if }m\equiv 3\pmod{4}\\ m+2&\text{if }m\equiv 1\pmod{4}.\end{cases} \tag{24}\]

Proof.: For even \(m\), \(\gamma_{\mathrm{sdR}}(P_{m,1})=m\) follows directly from Theorem 4 for \(k=1\). For \(m=4\ell+1\) the claim has been shown in Lemma 2. The _upper bound_ for the case \(m=4\ell+3\) is given in Figure 3. For the _lower bound_ for \(\gamma_{\mathrm{sdR}}(P_{4\ell+3,1})\), we apply Theorem 3 to \(n=8\ell+6\) (the number of vertices in \(P_{m,1}\)) and conclude \(\gamma_{\mathrm{sdR}}(P_{4\ell+3,1})\geqslant n/2+1=4\ell+4=m+1\).

Figure 4: Schemes for optimal labelings given in Theorem 6 for the graph \(P_{m,1}\).

### Consequences for the grid graph \(G_{2,m}\)

As a byproduct of the results on cubic graphs, particularly on \(P_{m,1}\), we obtain the following result about optimal SDRDFs on \(2\times m\) grid graphs.
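For later reference, and purely as an illustration, formula (24) evaluates for the smallest cases as

\[\gamma_{\mathrm{sdR}}(P_{5,1})=7,\quad\gamma_{\mathrm{sdR}}(P_{6,1})=6,\quad\gamma_{\mathrm{sdR}}(P_{7,1})=8,\quad\gamma_{\mathrm{sdR}}(P_{8,1})=8,\quad\gamma_{\mathrm{sdR}}(P_{9,1})=11;\]

the extension arguments in the following proof repeatedly invoke Theorem 6 in this way for \(P_{m+4,1}\) and \(P_{m+6,1}\), and the value \(\gamma_{\mathrm{sdR}}(P_{5,1})=7\) reappears in the proof of Observation 2.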
**Theorem 7**.: _For \(m\geqslant 5\), we have_

\[\gamma_{\mathrm{sdR}}(G_{2,m})=\begin{cases}m+1&\text{if }m\equiv 1\pmod{4}\\ m&\text{otherwise.}\end{cases} \tag{25}\]

Proof.: For values \(m=1,\ldots,13\), the sequence of respective \(\gamma_{\mathrm{sdR}}(G_{2,m})\)-values can be calculated by exhaustion and corresponds to \(\langle 2,4,2,5,6,6,7,8,10,10,11,12,14\rangle\). This confirms (25) for \(5\leqslant m\leqslant 13\). For higher values of \(m\), the fact that the right-hand side of (25) majorizes \(\gamma_{\mathrm{sdR}}(G_{2,m})\) can be read off the labeling schemata given in Figure 15 in the appendix. In all four cases, it is easy to recognize that the respective labelings give valid SDRDFs; thus, the respective upper bounds apply.

To show the optimality of the derived upper bounds, we use the subsequent principle, which extends the graph \(G_{2,m}\) to a cubic graph. For even \(m\), the argumentation is less subtle and is better suited to understand the principle. Let \(m\equiv 0\pmod{2}\). Starting from an optimal SDRDF labeled graph \(G_{2,m}\), counting \(2m\equiv 0\pmod{4}\) vertices, we construct a SDRDF labeled cubic graph \(\tilde{G}=(\tilde{V},\tilde{E})\) with six additional fresh vertices having collective weight \(4\); see Figure 4(a). In total, eleven fresh edges are added during this construction. As only new vertices labeled \(1\), both already defended by new vertices labeled \(2\), are neighbored to the initial graph \(G_{2,m}\), the SDRDF requirements are satisfied. By cubicity and using the bound (4), this implies that \(\gamma_{\mathrm{sdR}}(G_{2,m})+4=w(V)+4=w(\tilde{V})\geqslant|\tilde{V}|/2+1=(2m+6)/2+1\), and consequently \(\gamma_{\mathrm{sdR}}(G_{2,m})\geqslant m\).

The rest of the proof is now dedicated to the case \(m\not\equiv 0\pmod{2}\). Let \(R:=\{u_{m-2},u_{m-1},v_{m-2},v_{m-1}\}\). In the following, we consider the sequence of vertices \(s_{p}:=\langle w_{0},\ldots,w_{p-1};x_{0},\ldots,x_{p-1}\rangle\), \(p=4,6\), to which we want to associate a respective sequence of labels. These vertices will be part of a \(2\times p\) grid graph \(H_{p}\), which will be connected to our studied grid graph \(G_{2,m}\), see Figure 4(b). The argumentation for the lower bound \(m\) respectively \(m+1\) of \(\gamma_{\mathrm{sdR}}(G_{2,m})\) is split into several cases, depending on the distribution of the vertices labeled \(-1\) inside \(R\), in which we extend \(G_{2,m}\) to a suitably labeled version of \(P_{m+p,1}\) when needed. In each of the following cases, the claimed bound holds. Note that it is enough, by condition (1c), to consider at most two vertices in \(V_{-1}\cap R\).

Figure 5: Extending \(G_{2,m}\) to a cubic graph via different constructions.

_Case 1._ \(|V_{-1}\cap R|=1\).

_Subcase 1.1._ \(V_{-1}\cap R\in\{\{u_{m-1}\},\{v_{m-1}\}\}\). W.l.o.g. \(V_{-1}\cap R=\{v_{m-1}\}\).

* If \(v_{m-1}\) is defended by its lower \(3\)-labeled neighbor \(u_{m-1}\), then in the extended graph in Figure 4(b), for \(p=4\), we choose for \(s_{4}\) the sequence of labels \(\langle-1,\,-1,2,1;1,2,-1,2\rangle\), yielding additional weight \(5\). By this we get a SDRDF on \(P_{m+4,1}\) with weight of \(\gamma_{\mathrm{sdR}}(G_{2,m})+5\), which implies the inequality \(\gamma_{\mathrm{sdR}}(G_{2,m})+5\geqslant\gamma_{\mathrm{sdR}}(P_{m+4,1})\). Since, for \(m\equiv 1\pmod{4}\) by Theorem 6 we have \(\gamma_{\mathrm{sdR}}(P_{m+4,1})=(m+4)+2=m+6\), we obtain \(\gamma_{\mathrm{sdR}}(G_{2,m})+5\geqslant m+6\), i.e., \(\gamma_{\mathrm{sdR}}(G_{2,m})\geqslant m+1\).
On the other hand, for \(m\equiv 3\pmod{4}\) also by Theorem 6 we have \(\gamma_{\mathrm{sdR}}(P_{m+4,1})=(m+4)+1=m+5\), which yields \(\gamma_{\mathrm{sdR}}(G_{2,m})+5\geqslant m+5\), i.e., \(\gamma_{\mathrm{sdR}}(G_{2,m})\geqslant m\). * If \(v_{m-1}\) is defended by its left \(3\)-labeled neighbor \(v_{m-2}\), the labeling of \(G_{2,m}\) even cannot be optimal: either \(u_{m-1}\) is an unnecessary defender, or \(u_{m-1}\) is labeled \(1\) which implies that \(u_{m-2}\) has a label from \(\{2,3\}\) in turn implying that \(\langle u_{m-2},u_{m-1}\rangle\) should have received labels \(\langle 3,-1\rangle\) to reduce weight. * Also the scenario of purely \(2\)-labeled neighbors of \(v_{m-1}\) has to be considered: Recall that the label of \(u_{m-2}\) is positive by assumption. Hence, we can relabel \(R\) such that \(\{u_{m-2},v_{m-2}\}\subseteq V_{3}\) and \(\{u_{m-1},v_{m-1}\}\subseteq V_{-1}\). By Observation 4 in the appendix, we know that the latter boundary constraints imply that \(w_{f}(G_{2,m})\) cannot under-run the bound \(m+1\) respectively \(m\) when \(m\equiv 1\pmod{4}\) respectively \(m\equiv 3\pmod{4}\). Therefore, we do not need to come up with another construction here. _Subcase 1.2._\(V_{-1}\cap R\in\{\{u_{m-2}\},\{v_{m-2}\}\}\). W.l.o.g. let \(V_{-1}\cap R=\{v_{m-2}\}\). We can just add the connecting edges \(u_{m-1}u_{0}\) and \(v_{m-1}v_{0}\). By positivity of the righter-most labels in \(R\), this fulfills all SDRDF constraints at no additional weight cost. _Case 2._\(|V_{-1}\cap R|=0\). Replicate the construction of Subcase 1.2. _Case 3._\(|V_{-1}\cap R|=2\). _Subcase 3.1._ Horizontal occurrences, i.e., \(V_{-1}\cap R\in\{\{u_{m-2},u_{m-1}\},\{v_{m-2},v_{m-1}\}\}\). W.l.o.g. assume \(V_{-1}\cap R=\{v_{m-2},v_{m-1}\}\). In this subcase, we proceed as in the first paragraph of Subcase 1.1 (in the extended graph in Figure 4(b) the sequence \(s_{4}\) shall have associated labels \(\langle-1,\,-1,2,1;1,2,-1,2\rangle\)). Note that \(\langle u_{m-2},u_{m-1}\rangle\) must necessarily have the labels \(\langle x,3\rangle\) where \(x\geqslant 1\). Clearly, despite \(w_{3},x_{3}\) in \(H_{4}\) are joined potentially both with vertices labeled \(-1\), they will not violate condition (1c) as abundantly defended. Therefore, the entire labeling is a SDRDF having an additional weight cost of \(5\) due to the vertices in \(H_{4}\). _Subcase 3.2._ Vertical occurrences (interior), i.e., \(V_{-1}\cap R=\{u_{m-2},v_{m-2}\}\). We note that at least one label of the necessarily positively labeled vertices \(u_{m-1}\), \(v_{m-1}\) must further have assigned label \(2\) or \(3\) - w.l.o.g. assume \(v_{m-1}\in V_{2}\cup V_{3}\) and \(u_{m-1}\in V_{1}\cup V_{2}\cup V_{3}\). We now consider the extended graph in Figure 4(b), where for the sequence of vertices \(s_{4}\), we pick the sequence of labels \(\langle-1,3,-1,1;\,-1,2,\,-1,2\rangle\), costing additional weight \(4\). We now prove this subcase using Theorem 6 as in Subcase 1.1. _Subcase 3.3._ Vertical occurrences (righter-most), i.e., \(V_{-1}\cap R=\{u_{m-1},v_{m-1}\}\). Necessarily, we have that \(u_{m-2},v_{m-2}\in V_{3}\). This situation is observed in the third paragraph of Subcase 1.1 and concluded by Observation 4. _Subcase 3.4._ Diagonal occurrences, i.e., \(V_{-1}\cap R\in\{\{u_{m-2},v_{m-1}\},\{v_{m-2},u_{m-1}\}\}\). W.l.o.g. assume \(V_{-1}\cap R=\{u_{m-2},v_{m-1}\}\). Note that \(\langle v_{m-2},u_{m-1}\rangle\) must necessarily have associated label sequence \(\langle x,3\rangle\) where \(x\geqslant 1\). 
For \(x\geqslant 2\), in the extended graph in Figure 4(b), for \(p=4\), pick for \(s_{4}\) the sequence of labels \(\langle 1,\,-1,-1,3;\,-1,3,1,1\rangle\), costing additional weight \(6\). Finally, we update the label value of \(u_{m-1}\) to \(2\) (not violating the SDRDF constraints). Hence, finally, we obtain a graph \(P_{m+4,1}\) costing additional weight \(5\) and conclude this subcase again as in the first part of Subcase 1.1. For the case \(x=1\) we observe how vertices \(v_{0},v_{1},u_{0},u_{1}\) are labeled. * If neither \(\{v_{1},u_{0}\}\subseteq V_{-1}\) nor \(\{v_{0},u_{1}\}\subseteq V_{-1}\), i.e., we do not have a diagonal of vertices in \(V_{-1}\) on the left side of \(G_{2,m}\), then for the horizontally flipped labeling4 the claim follows directly from one of the previously settled (sub)cases 1, 2, 3.1, 3.2, or 3.3 of this proof. * If \(\{u_{0},v_{1}\}\subseteq V_{-1}\), then necessarily \(v_{0}\in V_{3}\) and \(u_{1}\not\in V_{-1}\). Hence, making use of the construction given in Figure 5b (\(p=6\)) to extend the graph \(G_{2,m}\) to \(P_{m+6,1}\), where we associate the sequence of labels \(\langle 1,-1,-1,3,3,-1;\)\(-1,3,1,-1,-1,1\rangle\) to \(s_{6}\), we obtain a SDRDF on \(P_{m+6,1}\) of total weight \(\gamma_{\mathrm{sdR}}(G_{2,m})+6\). For \(m\equiv 1\pmod{4}\) by Theorem 6 we have \(\gamma_{\mathrm{sdR}}(P_{m+6,1})=(m+6)+1=m+7\), which implies \(\gamma_{\mathrm{sdR}}(G_{2,m})+6\geqslant m+7\), i.e. \(\gamma_{\mathrm{sdR}}(G_{2,m})\geqslant m+1\). On the other hand, for \(m\equiv 3\pmod{4}\) also by Theorem 6 we have \(\gamma_{\mathrm{sdR}}(P_{m+6,1})=(m+6)+2=m+8\), which yields \(\gamma_{\mathrm{sdR}}(G_{2,m})+6\geqslant m+8\), i.e., \(\gamma_{\mathrm{sdR}}(G_{2,m})\geqslant m+2>m\). * If \(\{u_{1},v_{0}\}\subseteq V_{-1}\), then necessarily \(u_{0},v_{1}\not\in V_{-1}\). Hence we can add the edges \(u_{m-1}u_{0}\), \(v_{m-1}v_{0}\) to \(G_{2,m}\) obtaining a SDRDF on \(P_{m,1}\). **Proposition 2**.: _For \(\mathrm{FS}_{m}\), \(m\geqslant 5\), we have \(2m\leqslant\gamma_{\mathrm{sdR}}(\mathrm{FS}_{m})\leqslant 2m+1\)._ Proof.: Let us first show the validity of the _upper bound_, i.e. \(\gamma_{\mathrm{sdR}}(\mathrm{FS}_{m})\leqslant 2m+1,m\geqslant 5\). _Case 1. \(m\equiv 0\pmod{3}\)._ We choose the labeling with \(V_{1}=\{a_{m-1},c_{m-1}\}\), \(V_{2}=\{b_{3i},b_{3i+1},c_{3i+1},d_{3i},d_{3i+2}\mid i=0,1,\ldots,\frac{m-3}{ 3}\}\)\(\cup\{c_{3i+2}\mid i=0,1,\ldots,\frac{m-6}{3}\}\), and \(V_{-1}=V\setminus(V_{1}\cup V_{2})=\{b_{3i+2},c_{3i},d_{3i+1}\mid i=0,1,\ldots,\frac{m-3}{3}\}\cup\{a_{i}\mid i=0,1,\ldots m-2\}\); for \(m=9\), this is illustrated in Figure 6. One can easily check that the SDRDF properties are satisfied. Consequently, we have \(\gamma_{\mathrm{sdR}}(\mathrm{FS}_{m})\leqslant w_{f}(\mathrm{FS}_{m})=2|V_{ 2}|+|V_{1}|-|V_{-1}|=2(2m-1)+2-(2m-1)=2m+1\). _Case 2. \(m\equiv 1\pmod{3}\)._ We pick the labeling with \(V_{1}=\{a_{m-1},b_{m-1}\}\), \(V_{2}=\{b_{3i},b_{3i+2},c_{3i},c_{3i+1},d_{3i+1},d_{3i+2}\mid i=0,1,\ldots, \frac{m-4}{3}\}\)\(\cup\{c_{m-1}\}\), and \(V_{-1}=V\setminus(V_{1}\cup V_{2})=\{a_{i}\mid i=0,1,\ldots,m-2\}\cup\{b_{3i+1}, c_{3i+2},d_{3i}\mid i=0,1,\ldots\frac{m-4}{3}\}\cup\{d_{m-1}\}\); for \(m=13\), this is illustrated in Figure 7. Again one can quickly check that \(f\) is indeed a SDRDF. Therefore, \(w_{f}(\mathrm{FS}_{m})=2|V_{2}|+|V_{1}|-|V_{-1}|=2(2m-1)+2-(2m-1)=2m+1\) is an upper bound for \(\gamma_{\mathrm{sdR}}(\mathrm{FS}_{m})\). _Case 3. 
\(m\equiv 2\pmod{3}\)._ Choose the labeling with \(V_{1}=\{a_{m-2},d_{m-2}\}\), \(V_{2}=\{b_{3i},b_{3i+1},c_{3i+1},c_{3i+2},d_{3i},d_{3i+2}\mid i=0,1,\ldots,\frac{m-5}{3}\}\cup\{b_{m-2},c_{m-1},d_{m-1}\}\), and \(V_{-1}=V\setminus(V_{1}\cup V_{2})=\{a_{i}\mid i=0,1,\ldots,m-3,m-1\}\cup\{b_{3i+2},c_{3i},d_{3i+1}\mid i=0,1,\ldots,\frac{m-5}{3}\}\cup\{a_{m-1},b_{m-1},c_{m-2}\}\); for \(m=11\), this is illustrated in Figure 6. One can easily see that \(f\) is indeed a SDRDF, implying \(\gamma_{\mathrm{sdR}}(\mathrm{FS}_{m})\leqslant w_{f}(\mathrm{FS}_{m})=2|V_{2}|+|V_{1}|-|V_{-1}|=2(2m-1)+2-(2m-1)=2m+1\).

Figure 6: SDRDFs for \(\mathrm{FS}_{m}\) when \(m=9\) (left) respectively \(m=11\) (right). Thinking of the vertices as placed on a grid, a labeling pattern of dimensions \(4\times 3\), periodically repeated and finally flanked by an individual termination pattern of dimensions \(4\times 3\) (left), respectively \(4\times 2\) (right), can be read off. These labeling patterns generalize to higher values of \(m\) of congruency \(m\equiv 0\pmod{3}\) and \(m\equiv 2\pmod{3}\), respectively.

Concerning the _lower bound_, we obtain \(\gamma_{\mathrm{sdR}}(\mathrm{FS}_{m})\geqslant 2m\) from Theorem 3, which concludes the proof.

## 3 Conclusions and future work

In this work, we studied the signed Roman domination problem on cubic graphs in detail. The discharging method turned out to be a powerful tool allowing us to come up with a sharp lower bound. In this context, we were able to take advantage of some findings on \(\alpha\)-total domination and thus improve the upper bound. Moreover, we emphasized the importance of generalized Petersen graphs as paramount examples of cubic graphs attaining this best possible lower bound. We have presented a constraint programming driven approach that seems adaptable to several other classes of rotationally symmetric graphs, and furthermore can easily be applied to other forms of domination.

The achieved results form the foundation for several interesting future research questions. In addition to the obtained sharp lower bound for \(\gamma_{\mathrm{sdR}}\) on cubic graphs, it would be interesting to find a sharp upper bound. Proving a sharp asymptotic upper bound might be interesting, too. We here mean to study, given a class of graphs \(\mathcal{G}\) of unbounded order, the quantity

\[c_{\mathrm{sdR}}(\mathcal{G}):=\limsup_{\begin{subarray}{c}G\in\mathcal{G},G=(V,E)\\ |V|\to\infty\end{subarray}}|V|^{-1}\gamma_{\mathrm{sdR}}(G). \tag{26}\]

Slightly differing from a related quantity studied by Egunjobi and Haynes (8, p. 72), the latter captures the behavior of the maximum per-vertex average weight when graph sizes are supposed to grow, therefore neglecting all small graphs of high average weight. By Proposition 1, we already know that \(c_{\mathrm{sdR}}(\mathcal{C})\leqslant 5/4\) for the class \(\mathcal{C}\) of cubic graphs; this bound is, however, unlikely to be sharp. Identifying subclasses \(\mathcal{C}^{\prime}\) of cubic graphs having maximum \(c_{\mathrm{sdR}}(\mathcal{C}^{\prime})\)-value seems challenging. In this regard, we make the following observation.

**Observation 2**.: _There are subclasses \(\mathcal{C}^{\prime}\) of cubic graphs, for which \(c_{\mathrm{sdR}}(\mathcal{C}^{\prime})\geqslant 7/10\).
In particular, \(c_{\mathrm{sdR}}(\mathcal{C})\geqslant 7/10\)._

Proof.: Let \(\mathcal{C}^{\prime}\) contain all graphs \(G_{k}\), \(k\in\mathbb{N}\setminus\{0\}\), where \(G_{k}\) is made up by \(k\) connected components all being isomorphic to \(P_{5,1}\) (cubic). Each graph \(G_{k}\) consists of \(n=10k\) vertices and has SDRDF weight \(7k\). Consequently, \(c_{\mathrm{sdR}}(\mathcal{C}^{\prime})=7/10\).

Figure 7: A SDRDF for the graph \(\mathrm{FS}_{13}\) (information displayed as in Figure 6). Again the labeling scheme, consisting of a periodically repeating \(4\times 3\) pattern of labels, which is flanked by a terminating \(4\times 4\) pattern of labels, naturally generalizes to higher values of \(m\equiv 1\pmod{3}\).

If we set our attention on the class \(\mathcal{C}_{\text{conn}}\) of _connected_ cubic graphs, the dynamic might change, and we pose ourselves the following question.

**Problem 1**.:
1. _How large can_ \(\rho>1/2\) _be chosen such that_ \(c_{\text{\rm sdR}}(\mathcal{C}_{\text{conn}})\geqslant\rho\)_?_
2. _Is it possible that_ \(c_{\text{\rm sdR}}(\mathcal{C}_{\text{conn}})\geqslant 9/16\)_?_
3. _Do the graphs_ \(P_{m,2}\) _attain the bound in (_ii_) (such an average weight is attained for_ \(m=8,16\)_)?_

In preliminary work, we constructed optimal SDRDFs for \(2\times m\) grid graphs, and for paths of length \(m\) such graphs have been determined in [1]. This naturally raises the following challenge concerning general \(\ell\times m\) grid graphs.

**Problem 2**.: _Determine \(\gamma_{\text{\rm sdR}}\) on \(\ell\times m\) grid graphs for further (small) values \(\ell\in\mathbb{N}\) and general \(m\in\mathbb{N}\)._

For solving Problem 2 it might be a reasonable strategy to obtain sharp bounds for \(\gamma_{\text{\rm sdR}}\) on \(4\)-regular graphs. Moreover, the fact that the signed domination problem is NP-hard on grids [21] leads to the following question when \(\ell\) is kept general.

**Problem 3**.: _Is it NP-hard to determine the existence of a SDRDF on an \(\ell\times m\) grid graph with a weight not exceeding a given limit?_

From our experience in the setting of the SDRDP, the requirement of a particular "balance" of defenders and defendants, as well as the higher flexibility on how to defend, make it challenging in comparison to the domination-type problems mentioned earlier.

## Acknowledgments

Enrico Iurlano and Gunther Raidl are supported by Austria's Agency for Education and Internationalization under grant BA05/2023. Tatjana Zec and Marko Djukanovic are supported by the bilateral project between Austria and Bosnia and Herzegovina funded by the Ministry of Civil Affairs of Bosnia and Herzegovina under grant no. 1259074. Moreover, this project is partially funded by the Doctoral Program "Vienna Graduate School on Computational Optimization", Austrian Science Fund (FWF), grant W1260-N35.
2310.14028
GASCOM: Graph-based Attentive Semantic Context Modeling for Online Conversation Understanding
Online conversation understanding is an important yet challenging NLP problem which has many useful applications (e.g., hate speech detection). However, online conversations typically unfold over a series of posts and replies to those posts, forming a tree structure within which individual posts may refer to semantic context from higher up the tree. Such semantic cross-referencing makes it difficult to understand a single post by itself; yet considering the entire conversation tree is not only difficult to scale but can also be misleading as a single conversation may have several distinct threads or points, not all of which are relevant to the post being considered. In this paper, we propose a Graph-based Attentive Semantic COntext Modeling (GASCOM) framework for online conversation understanding. Specifically, we design two novel algorithms that utilise both the graph structure of the online conversation as well as the semantic information from individual posts for retrieving relevant context nodes from the whole conversation. We further design a token-level multi-head graph attention mechanism to pay different attentions to different tokens from different selected context utterances for fine-grained conversation context modeling. Using this semantic conversational context, we re-examine two well-studied problems: polarity prediction and hate speech detection. Our proposed framework significantly outperforms state-of-the-art methods on both tasks, improving macro-F1 scores by 4.5% for polarity prediction and by 5% for hate speech detection. The GASCOM context weights also enhance interpretability.
Vibhor Agarwal, Yu Chen, Nishanth Sastry
2023-10-21T14:45:26Z
http://arxiv.org/abs/2310.14028v1
# GASCOM: Graph-based Attentive Semantic Context Modeling for Online Conversation Understanding ###### Abstract. Online conversation understanding is an important yet challenging NLP problem which has many useful applications (e.g., hate speech detection). However, online conversations typically unfold over a series of posts and replies to those posts, forming a tree structure within which individual posts may refer to semantic context from higher up the tree. Such semantic cross-referencing makes it difficult to understand a single post by itself; yet considering the entire conversation tree is not only difficult to scale but can also be misleading as a single conversation may have several distinct threads or points, not all of which are relevant to the post being considered. In this paper, we propose a **G**raph-based **A**tentive **S**emantic **CO**ntext **M**odeling (GASCOM) framework for online conversation understanding. Specifically, we design two novel algorithms that utilise both the graph structure of the online conversation as well as the semantic information from individual posts for retrieving relevant context nodes from the whole conversation. We further design a _token-level_ multi-head graph attention mechanism to pay different attentions to different tokens from different selected context utterances for fine-grained conversation context modeling. Using this semantic conversational context, we re-examine two well-studied problems: polarity prediction and hate speech detection. Our proposed framework significantly outperforms state-of-the-art methods on both tasks, improving macro-F1 scores by 4.5% for polarity prediction and by 5% for hate speech detection. The GASCOM context weights also enhance interpretability. graph-based models, online conversation understanding, hate speech, polarity prediction + Footnote †: [leftmargin=*] attention weights to different tokens from different selected context utterances via a further _token-level multi-head graph attention mechanism_ when aggregating their semantic meanings (Step _ii)_. We highlight our contributions as follows: * We propose a general deep learning framework -- GASCOM -- for online conversation understanding by effectively utilizing conversational context to augment the semantic meanings of the target utterance. * We design two novel _semantic-aware graph-based conversation context selection algorithms_ for retrieving relevant context nodes from an online conversation which consider both the graph structure and semantic meanings of the conversation context. * We design a _token-level multi-head graph attention mechanism_ to pay different attentions to different tokens from different selected context utterances for fine-grained conversation context modeling. * We show that our proposed framework significantly outperforms state-of-the-art methods on two very important online conversation understanding tasks, including polarity prediction and hate speech detection by 4.5% and 5% in macro-F1, respectively. Experimental results also show that our proposed framework has good interpretability. ## 2. Related Work ### Online Conversation Understanding Online conversation understanding is an important yet challenging NLP problem which has many useful applications (e.g., hate speech detection). Like conventional threaded conversations, context of an utterance might refer to previous utterances in online conversations. 
But instead of alternating sequence of utterances, online conversations usually form a tree structure where a comment can get multiple replies. Previous works have either utilized the semantic meanings of the conversation context (Han et al., 2015; Chen et al., 2016; Chen et al., 2017) or its tree structure (Bos and Markert, 2017) to sample the relevant utterances. Ashraf et al. (Han et al., 2015) used replies to short Youtube comments as additional conversational context for abusive language detection. Agarwal et al. (Agarwal et al., 2017) introduced GraphNLI that utilizes the tree structure to capture the conversational context through a biased root-seeking random walk. It selects the conversational context probabilistically with a probability \(p\) without looking at the semantic meanings. Furthermore, it uses weighted average aggregation to discount and aggregate the conversational context with a discount factor \(\gamma\). These hyper-parameters depend upon the dataset and therefore, needs to be fine-tuned. Moreover, the discount factor gives exponentially decreasing weights as the graph walk moves away from the target comment node, which may not be always true and depends upon the context of the neighbouring nodes such as ancestors and sibling nodes. Unlike previous works, our proposed model utilizes both the semantic meaning and tree-structure to capture the conversational context. We propose novel semantic-aware random walk algorithms for retrieving relevant context nodes from the conversation which are driven by the strength of their semantic relationship. Furthermore, we propose a multi-head graph attention mechanism to learn different attention weights to different tokens from different selected context utterances when aggregating their semantic meanings. #### 2.1.1. **Polarity Prediction** Polarity prediction aims to identify the argumentative relations of _attack_ and _support_ between natural language arguments in online debates and conversations wherein one comment replies to the other comment. The polarity prediction task is one of the many tasks in the field of _argument mining_ (e.g. Cabrio and Villata (Cabrio and Villata, 2017), Lawrence and Reed (Lawrence and Reed, 2017), Lippi and Torroni (Torroni, 2017)). The polarity prediction task is important because it helps to deduce the stance of a comment with respect to the other. Once we have classified all the replies in a debate, we can apply ideas from argumentation theory to reason about which arguments should be justified. _Argumentation theory_ is a branch of AI that is concerned with the transparent and rational resolution of disagreements (e.g. Rahwan and Simari (Rahwan and Simari, 2017)). The polarity prediction task has been discussed in the literature. For example, Cabrio and Villata (Cabrio and Villata, 2017) has reviewed the task in the context of persuasive essays or political debates. An early example of this work is Cabrio and Villata (Cabrio and Villata, 2018), which applied textual entailment (e.g. Bos and Markert (Bos and Markert, 2017), Dagan et al. (Dagan et al., 2017), MacCartney and Manning (Mancartney and Manning, 2017)) to predict the polarity of replies on the now-defunct Debatepedia dataset. In Cocarascu and Toni (Cocarascu and Toni, 2017), long-short-term memory networks were used to classify polarities. A more recent overview of the polarity prediction task (Cocarascu and Toni, 2017) has provided context-independent neural network baselines. #### 2.1.2. 
**Hate Speech Detection** Internet debates, especially those about controversial topics, can easily spread hate and misinformation (Bos and Markert, 2017; Dasan et al., 2017; Dasan et al., 2017). Hate speech is notoriously difficult to define. A sample of important attempted definitions (e.g. Jahan and Oussalah (Oussalah, 2017)) agree that hate speech is a public language that attacks individuals and groups of people because of protected characteristics, for example, their race, skin colour, religion, ancestry, nationality, gender, disability, sexuality and so on. Hate speech, if left unchallenged, can promote and incite harmful societal consequences against individuals and groups such as (but not limited to) physical attacks, psychological intimidation, properly damage, violence and segregation. Therefore, it is important to be able to detect hate speech in online forums, accurately and at scale, such that appropriate action can be taken by the moderators. Figure 1. An example conversation from _Guest_ dataset. (Warning: Contains misogynistic speech) Figure 1 shows an example conversation. The rectangular text boxes represent posts, and the arrows denote which posts reply to which other posts. Suppose we wish to identify whether the text with the thicker border in the bottom left contains hate speech. At first glance, this text appears to mention that hamsters have tiny brains and as such does not appear to be hate speech. However, upon looking at the neighbouring comments, we can notice that the conversation is actually about women. Therefore, this comment is misogynistic as hamsters actually refer to women. Examples like this demonstrate that although hate exists and should be dealt with accordingly, the accurate detection of hate speech is very important. ### Graph Machine Learning in NLP Many NLP problems can be boiled down to graph-based problems in the end. Classical graph-based algorithms have been successfully applied to numerous NLP applications (Krizhevsky et al., 2014). For example, random walk algorithms have been applied to query expansion (Krizhevsky et al., 2014) and keyword extraction (Krizhevsky et al., 2014) where the node pair similarity is measured by the probability scores in the stationary distribution of the random walk on a graph. Graph clustering algorithms have been applied to solve text clustering (Krizhevsky et al., 2014) where a graph of document nodes is constructed to capture the relationships among documents. Label propagation algorithms have been applied to word-sense disambiguation (Krizhevsky et al., 2014) and sentiment analysis (Krizhevsky et al., 2014) by propagating labels from limited labeled nodes to a large amount of similar unlabeled nodes with the assumption of like attracts like. In the past few years, Graph Neural Networks (GNNs) (Glorot et al., 2010; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) have drawn great attention as a special class of neural networks which can model arbitrary graph-structured data. GNNs have been widely applied to various NLP tasks such as machine translation (Krizhevsky et al., 2014), code summarization (Krizhevsky et al., 2014), natural question generation (Glorot et al., 2010; Krizhevsky et al., 2014) and machine reading comprehension (Glorot et al., 2010). We refer the interested reader to the comprehensive survey by (Krizhevsky et al., 2014) for more details. ## 3. 
Gascom Architecture In this section, we propose our general GASCOM architecture for online conversation understanding which leverages both the semantic meaning of the conversation context and the tree structure of the conversations. In Section 3.1, we discuss how we represent online conversations as discussion trees. In Section 3.2, we propose semantic-aware graph-based algorithms for conversation context selection followed by our model architecture and token-level multi-head graph attention for online conversation understanding in Section 3.3. ### Online Conversations as Discussion Trees For every online discussion D, we construct a discussion tree, where a node represents a post or comment and the edges are directed from a given node to the other (parent) node it is replying to. The discussion forms a tree structure because it starts with a root node, which represents the opening comment or post of the discussion. Every non-root node replies to exactly one other node (out-degree = 1), while all nodes can have zero or more replies to it (in-degree \(\in\) N). Each such node has an associated label depending upon the prediction task. For polarity prediction, the non-root nodes are labelled with support or attack, depending upon whether the post is respectively for or against its parent post. For misogynistic hate speech, each node is labelled as either hate or non-hate. ### Graph-based Semantic Context Graph walk is a principled way of capturing the conversational context for a given comment in a discussion tree for online conversation understanding (Bahdan et al., 2015). Previous works either did not sample the relevant conversational context at all (Chen et al., 2016; Chen et al., 2016) or sampled it probabilistically (Chen et al., 2016) without looking at the semantics. In this section, we propose our semantic-aware random walk strategies that select conversation context through semantic relevance of neighbouring nodes with respect to the target comment node. #### 3.2.1. Similarity-based Random Walk Similarity-based Random Walk is a walk that starts from a target comment node and uses similarity scores to sample the neighbouring nodes in a discussion tree probabilistically. It is a semantic-aware walk and can select relevant neighbouring nodes to sample the conversation context. The similarity score is computed using a pre-trained Sentence-BERT model with cosine similarity (Krizhevsky et al., 2014) between a pair of nodes, as widely used in the literature (Chen et al., 2016; Chen et al., 2016; Krizhevsky et al., 2014). It works well in our use case. However, our approach is agnostic to the choice of text similarity metric. The resultant similarity score is normalised by dividing every score with the sum of all the scores to get their corresponding probability values (between 0 and 1) that sum to 1. Starting from a target comment node, these similarity scores and their corresponding probabilities are calculated for every one-hop neighbouring node directly connected with the target node. Then the random walk selects one of the nodes according to their assigned probabilities and therefore, the walk is non-deterministic. Likewise, the walk continues to move towards neighbours of the neighbours and so on till it achieves a walk-length of \(L\) nodes or reaches one of the root or leaf nodes. \(L\) is the maximum number of distinct nodes sampled by a graph walk including the starting node and therefore, the walk-length is \((L-1)\). 
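To make the procedure concrete, the following is a minimal, illustrative sketch of the similarity-based random walk, not the authors' released implementation. It assumes the discussion tree is given as an adjacency dict and that a sentence embedding has been pre-computed for every node (e.g., with a Sentence-BERT model); the shift applied to negative cosine scores and the simplified stopping rule are our own assumptions.

```python
import numpy as np

def similarity_based_random_walk(start, neighbors, embeddings, L, rng=None):
    """Sample up to L distinct context nodes around `start` (Section 3.2.1 sketch).

    neighbors:  dict mapping a node id to the ids of its parent and replies
                (the undirected adjacency of the discussion tree).
    embeddings: dict mapping a node id to a 1-D numpy array, e.g. a
                pre-computed Sentence-BERT embedding of the post text.
    """
    rng = rng or np.random.default_rng()
    walk, current = [start], start
    while len(walk) < L:
        candidates = [n for n in neighbors.get(current, []) if n not in walk]
        if not candidates:            # reached the root, a leaf, or a dead end
            break
        cur = embeddings[current]
        # Cosine similarity between the current node and each candidate node.
        sims = np.array([
            float(np.dot(cur, embeddings[n]) /
                  (np.linalg.norm(cur) * np.linalg.norm(embeddings[n]) + 1e-9))
            for n in candidates
        ])
        # The paper normalises the scores by their sum to obtain probabilities;
        # shifting negative cosine values is an implementation detail assumed here.
        sims = sims - sims.min() + 1e-6
        probs = sims / sims.sum()
        current = rng.choice(candidates, p=probs)    # non-deterministic step
        walk.append(current)
    return walk
```

Replacing the probabilistic `rng.choice` with an argmax over `sims` would give the deterministic, similarity-based graph walk variant.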
It is important to limit the walk length by defining \(L\) because online conversations can grow at a large scale, and capturing far away nodes through random walks can lead to the over-smoothing problem (Golov calculated by normalising the weights with the sum of all the attention weights. The resultant probability values sum to 1 so that the non-deterministic random walk selects one of the nodes according to their assigned probabilities. Likewise, the walk continues to move towards neighbours of neighbours and so on till it achieves a walk-length \(L-1\). Besides the aforementioned attention-modulated random walk, we also explore its deterministic version called **attention-modulated graph walk** that selects a neighbouring node with the highest attention weight and then moves towards the neighbours of the selected neighbouring node and so on till the walk-length \(L-1\). ### Token-level Multi-Head Graph Attention GASCOM is an attention-based deep learning architecture which uses token-level multi-head graph attention to find the relevant conversational context useful for understanding online conversations. The GASCOM architecture is illustrated in Figure 2. At first, \(L\) comments (nodes) are sampled through one of the proposed semantic-aware graph walk algorithms. Then these comments are input into the RoBERTa (Zhu et al., 2017) model to get their corresponding token-level embeddings. Let \(\vec{E}_{i}\) be the token-level embeddings for comment \(i\) obtained from RoBERTa, where \(i\in[1,L]\). The mean pooling operation is performed on token-level embeddings \(\vec{E}_{1}\) of the target comment (comment 1) to derive its corresponding fixed-sized sentence embedding \(\vec{u}\) as shown in equation 1. \[\vec{u}=MeanPool(\vec{E}_{1}) \tag{1}\] Next we have conversational context embeddings \(E_{i}\), where \(i\in[2,L]\) from which we need to find the relevant context for online conversation understanding. We propose the token-level multi-head graph attention mechanism to pay different attentions to different tokens from different selected context utterances for fine-grained conversation context modeling. The embedding vector \(\vec{v}\) denotes the aggregated conversational context in Figure 2. To find the relevant context nodes for the given target comment node, token-level embeddings of the surrounding nodes, including the parent node (comment 2), are input into the multi-head attention layer. Multi-head attention (Wang et al., 2017) is computed at the token-level between comment 2 (parent comment) and all the other neighbouring Figure 2. GASCOM architecture. Token-level Multi-head Graph Attention (Section 3.3) is computed between comment 2 (parent comment) and all the other neighbouring comments sampled through graph walks (Section 3.2). The token-level attention outputs \(\vec{O}_{i}\) are mean-pooled to get their corresponding sentence embeddings \(\vec{S}_{i}\) which are then aggregated through node-level mean pooling to get the resultant embedding \(\vec{v}\). \(N\), \(T_{i}\), \(E\) represent batch size, target sequence length and dimension size, respectively. Dashed box is optional and is only employed in polarity prediction task, but not in hate speech detection. comments that are input into the model. We apply multi-head attention with the parent node and not the target node because our preliminary experiments show that the parent node is the most important context node. 
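A rough PyTorch sketch of this parent-as-query aggregation (formalised in equations 2 to 8 below) is given next; the module name, tensor shapes, and the use of `torch.nn.MultiheadAttention` are illustrative assumptions rather than the authors' exact implementation. Note that the paper uses \(h=5\) heads, while `nn.MultiheadAttention` requires the embedding dimension to be divisible by the head count, so the sketch defaults to 8 heads for a 768-dimensional RoBERTa embedding.

```python
import torch
import torch.nn as nn

class ParentQueryContextAggregator(nn.Module):
    """Illustrative token-level multi-head graph attention over context comments."""

    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        # batch_first=True -> tensors have shape (batch, tokens, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, parent_tokens: torch.Tensor,
                context_tokens: list) -> torch.Tensor:
        """parent_tokens:  (1, T_parent, dim) token embeddings of comment 2 (the parent).
        context_tokens: list of (1, T_i, dim) token embeddings of the other
                        comments selected by the graph walk.
        Returns the aggregated context vector v of shape (1, dim)."""
        sentence_vecs = []
        for tokens in context_tokens:
            # Parent tokens attend over each context comment's tokens:
            # query = parent, key = value = context comment (cf. equation 2).
            out, _ = self.attn(query=parent_tokens, key=tokens, value=tokens)
            sentence_vecs.append(out.mean(dim=1))          # token-level mean pooling -> S_i
        return torch.stack(sentence_vecs, dim=0).mean(dim=0)  # node-level mean pooling -> v
```

The target comment's own embedding \(\vec{u}\) (equation 1) is computed separately and concatenated with \(\vec{v}\), \(|\vec{u}-\vec{v}|\), and optionally \(\vec{w}\) before the softmax classifier, as described below.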
Therefore, we start random walks from the parent node (see Section 4.6.2) and augment the semantic embedding of the parent node by selecting relevant context nodes and keeping the target node embedding as it is. When applying the multi-head attention mechanism, we set the query matrix \(\vec{Q}\) to the comment 2 embedding \(\vec{E}_{2}\), both key matrix \(\vec{K}_{i}\) and value matrix \(\vec{V}_{i}\) to the comment i embedding \(\vec{E}_{i}\) as shown in equation 2. \[\vec{Q}=\vec{E}_{2};\vec{K}_{i}=\vec{E}_{i};\vec{V}_{i}=\vec{E}_{i} \tag{2}\] These \(\vec{Q}\), \(\vec{K}_{i}\), and \(\vec{V}_{i}\) are input into the multi-head attention (\(h\) attention heads) as shown in equations 3, 4 and 5, which learns attention weights for each of the \(L-1\) comments according to their relevance and returns their corresponding attention outputs. Note that \(\vec{W}_{j}^{Q}\), \(\vec{W}_{j}^{K}\) and \(\vec{W}_{j}^{V}\) are linear projection weight matrices, and \(d_{k}\) is a scaling factor which is set to the dimension size of \(\vec{W}_{j}^{K}\). \[Attention(\vec{Q},\vec{K},\vec{V})=softmax(\frac{\vec{Q}\vec{K}^{T}}{\sqrt{d _{k}}})\vec{V} \tag{3}\] \[\text{head}_{j}^{i}=Attention(\vec{Q}\vec{W}_{j}^{Q},\vec{K}_{i}\vec{W}_{j}^{ K},\vec{V}_{i}\vec{W}_{j}^{V}) \tag{4}\] \[MultiHead(\vec{Q},\vec{K}_{i},\vec{V}_{i})=Concat(\{\text{head}_{j}^{i}\}_{ j=1}^{h}) \tag{5}\] The final representation \(\vec{O}_{i}\) for a comment i is the dot product of multi-head attention output \(MultiHead(\vec{Q},\vec{K}_{i},\vec{V}_{i})\) with linearly projected matrix \(\vec{W}^{O}\) as in equation 6. Mean pooling operation is applied to all the token-level attention outputs \(\vec{O}_{i}\) to get their corresponding fixed-length sentence embeddings \(\vec{S}_{i}\) as in equation 7. Then, the mean pooling operation is performed again on these \(L-1\) sentence embeddings to get a resultant embedding \(\vec{v}\) which represents the conversational context as in equation 8. \[\vec{O}_{i}=MultiHead(\vec{Q},\vec{K}_{i},\vec{V}_{i})\vec{W}^{O} \tag{6}\] \[\vec{S}_{i}=MeanPool(\vec{Q}_{i}) \tag{7}\] \[\vec{v}=MeanPool(\vec{S}_{2},\vec{S}_{3},...,\vec{S}_{L})) \tag{8}\] For polarity prediction, cross-attention between the target (comment 1) and parent (comment 2) is also employed (as shown in the optional dashed box in Figure 2) to further improve the performance. The concatenation of comment 1 and comment 2, separated by the [_SEP_] token, is input into the RoBERTa model to get the resultant token-level embedding \(\vec{E}_{1,2}\) which after mean pooling becomes sentence embedding \(\vec{w}\) as shown in equation 9. \[\vec{w}=MeanPool(\vec{E}_{1,2}) \tag{9}\] Once we have embeddings \(\vec{u}\), \(\vec{v}\) and \(\vec{w}\) (optional), we calculate an element-wise difference vector \(|\vec{u}-\vec{v}|\). We then concatenate all four vectors \(\vec{u}\), \(\vec{v}\), \(|\vec{u}-\vec{v}|\) and \(\vec{w}\) together to get the final embedding vector, which is then fed into a softmax layer for the downstream prediction task. ## 4. Experiments and Results ### Data, Baselines and Evaluation Metrics **Kialo dataset.** We use the publicly available Kialo dataset for polarity prediction task. The Kialo dataset contains data from 1, 560 discussions hosted on Kialo, a debating platform as used by Agarwal et al. (Agarwal et al., 2018), Boschi et al. (Boschi et al., 2019), Young et al. (Young et al., 2019, 2020). Each reply in the debate is clearly labelled as attacking (negative) or supporting (positive). 
Table 1 shows the class frequencies. **Guest dataset.** We use the publicly available Guest dataset previously compiled by Guest et al. (Guest et al., 2019) for hate speech detection task. It is an expert-annotated, hate speech dataset, sourced from Reddit. This dataset looks at the specific type of hate against women - misogyny. Therefore, the positive class is "misogynistic" and negative class is "non-misogynistic". Table 1 shows the class frequencies. For both the datasets, discussions have a tree structure with a root node that represents the start of the conversation. For polarity (hate speech) prediction, every reply is either a support (hate) or attack (non-hate). We randomly sample 80% of the instances into the training set with the remainder 20% serving as the test set for both the datasets. **Baselines.** We compare GASCOM with the relevant baselines including Bag-of-Words + Logistic Regression, Sentence-BERT with classification layer (Wang et al., 2017), BERT (Kalalal et al., 2019) with root-seeking graph walk + MLP (Agarwal et al., 2018), Graph Convolutional Networks (GCNs) (Wang et al., 2017) and GraphNLI (Agarwal et al., 2018). We use two-layered GCN for node classification with S-BERT (Wang et al., 2017) to obtain sentence embeddings for each node. GraphNLI captures the conversational (global) context of the conversations through probabilistic root-seeking random walk without looking at the semantics. **Evaluation Metrics.** We use Accuracy, Macro-F1, Precision and Recall as evaluation metrics. Given the class imbalance nature of Guest dataset, we also use PR AUC score which is area under the precision-recall (PR) curve. ### Model Settings We use a batch size of 8, Adam optimizer with learning rate \(2\times 10^{-5}\), \(h=5\) attention heads, walk-length \(L=6\) and a linear learning rate warm-up over 10% of the training data. We make GASCOM model end-to-end trainable by minimizing the cross-entropy loss computed based on the model predictions and ground-truth labels. We implement the model using Transformers (Zhu et al., 2019) and PyTorch (Zhu et al., 2019) libraries and train it for 4 epochs. ### Experimental Results Tables 2 and 3 show the performance of GASCOM model and various baselines on the _Kialo_ dataset for polarity prediction and _Guest_ dataset for hate speech detection, respectively. GASCOM model performs significantly better than all the baseline models in macro-F1 and PR AUC scores. Graph Convolutional Networks (GCNs) have performed significantly poorer than GASCOM and other baselines due to the incorporation of too much potentially irrelevant and noisy conversational contexts. Specifically, GCNs incorporate all the nodes (broader context) within a given neighborhood (1-hop and 2-hop in our case) of the node being classified, which is not as effective as semantic-aware conversational context selection for capturing deeper context by GASCOM. Furthermore, too much nearby context may result in very noisy node embeddings, which weakens the model's predictive ability. GASCOM with a biased root-seeking random walk performs about 3 and 3.5 percentage points better in macro-F1 than GraphNLI with the same random walk for polarity prediction and hate speech detection, respectively. This shows the better ability of multi-head graph attention in GASCOM to find the relevant conversational context as compared to the weighted average strategy in GraphNLI. 
Similarity-based and Attention-modulated random walks perform better than their corresponding deterministic graph walks. Non-deterministic random walk helps in selecting more diverse set of neighbouring comments instead of always the most relevant ones according to similarity scores or attention weights. The similarity-based random walk with GASCOM gives an overall macro-F1 score of 83.42% for polarity prediction, which is about 4.5 percentage points higher than GraphNLI with random walk and weighted aggregation and a macro-F1 score of 78.69% for hate speech detection, which is about 4 percentage points higher. Attention-modulated random walk with GASCOM slightly outperforms the similarity-based random walk with an overall macro-F1 of 83.46% and 80.03% for polarity prediction and hate speech detection respectively. Our discussion trees are relatively small with a small walk length which limits the memory and time footprint for conversation context selection using random walks. A Kialo discussion tree has a mean of 204 nodes (arguments). ### Hyperparameter Analysis Table 4 shows the effect of walk length \(L\) on the performance of GASCOM model. For polarity prediction, walk length \(L=6\) performs better than \(L=4\) whereas walk length of 6 and 10 perform similarly with \(L=10\) performing slightly better. Since there is no significant performance gain as opposed to the increased complexity of processing 10 sentences, we choose \(L=6\) to be the optimal walk length. For hate speech detection, clearly the walk length \(L=6\) performs the best. On the other hand, Agarwal et al. (Agarwal et al., 2016) found \(L=4\) to be the optimal walk length for GraphNLI. Therefore, semantic-aware graph walks and attention mechanism in GASCOM allows to input larger conversational context. ### Ablation Studies In this section, we discuss various ablation studies to understand different components of GASCOM model and their contributions. #### 4.5.1. **Comparison of Token-level Multi-head Graph Attention with Others** Table 5 shows the performance comparison of token-level multi-head graph attention on GASCOM model with others for the polarity prediction task. First is the average aggregation that weighs all the sampled neighbouring comments equally and computes average of these embeddings to get the resultant embedding \(v\). Sentence-level multi-head attention is for the sentence-level embeddings of the neighbouring comments with mean pooling to get the resultant embedding \(v\). Both of these aggregation strategies perform similarly in terms of their macro-F1 score with a slightly better performance of sentence-level attention. Our proposed token-level multi-head graph attention performs significantly better with an overall gain of about 4 percentage points in macro-F1 as compared to the two baselines strategies. #### 4.5.2. **Comparison of Semantic-aware Conversation Context Selection with Naive Strategies** Our semantic-aware conversation context selection strategies - similarity-based and attention-modulated random walks are compared with naive strategies. The first naive strategy is parent-child nodes which uses just the parent comment as additional context for the target comment. 
Second is \begin{table} \begin{tabular}{l|c c c c} \hline \hline **Task** & **Dataset** & **Positive class** & **Negative class** & **Positive class \%** & **Negative class \%** \\ \hline Polarity prediction & Kialo & 139, 722 & 184, 651 & 43.1 & 56.9 \\ Hate speech detection & Guest & 699 & 5, 868 & 10.6 & 89.4 \\ \hline \hline \end{tabular} \end{table} Table 1. Class frequencies and percentages for each dataset. “Positive” refers to _supportive_ replies in Kialo, and the _presence_ of misogynistic hate speech in the Guest dataset. “Negative” refers to _attacking_ replies in Kialo, and the _absence_ of misogynistic hate speech. \begin{table} \begin{tabular}{l|c c c c c} \hline \hline **Model** & **Accuracy** & **macro-F1** & **Precision** & **Recall** & **PR AUC** \\ \hline Bag-of-Words + Logistic Regression & 67.00 & 62.00 & 62.00 & 62.00 & 65.91 \\ Sentence-BERT with classification layer & 79.86 & 75.81 & 77.86 & 73.86 & 79.32 \\ BERT: Root-seeking Graph Walk + MLP & 70.27 & 52.32 & 44.87 & 64.12 & 55.71 \\ Graph Convolutional Networks & 57.94 & 57.82 & 67.74 & 50.44 & 60.83 \\ GraphNLI: Graph Walk + Weighted Avg. & 81.97 & 76.89 & 76.83 & 76.96 & 80.34 \\ GraphNLI: Random Walk + Weighted Avg. & 81.95 & 78.96 & 78.94 & 78.99 & 81.90 \\ \hline GASCOM: Random Walk & 82.01 & 81.62 & 81.68 & 81.57 & 84.69 \\ GASCOM: Similarity-based Graph Walk & 83.49 & 83.15 & 83.17 & 83.13 & 86.23 \\ GASCOM: Similarity-based Random Walk & **83.73** & 83.42 & 83.45 & 83.39 & 86.73 \\ GASCOM: Attention-modulated Graph Walk & 83.59 & 83.25 & 83.27 & 83.23 & 86.59 \\ GASCOM: Attention-modulated Random Walk & 83.71 & **83.46** & **83.52** & **83.41** & **86.89** \\ \hline \hline \end{tabular} \end{table} Table 2. Performance (in %) on _Kialo_ dataset for polarity prediction. random 6 two-hop neighbours strategy which selects 6 nodes (\(L=6\)) randomly from two-hop neighbors for a target node. Third strategy is biased root-seeking random walk (Beng et al., 2017) which is a non-deterministic walk that selects neighbouring nodes probabilistically by setting the probability \(p\) hyperparameter. Fourth is similarity-based top 6 strategy, a naive semantic-aware strategy which selects top 6 most similar nodes based on the cosine similarity from two-hop neighbours of the target comment. Table 6 shows the performance comparisons. For both tasks, our semantic-aware strategies perform better than naive as well as root-seeking random walk strategies with attention-modulated random walk performing the best. ### Model Analysis In this section, we show that cross-attention improves polarity prediction in Section 4.6.1. We also show that semantic-aware graph walks starting at parent nodes improve the model performance in Section 4.6.2. 
\begin{table} \begin{tabular}{l|c c} \hline \hline **Strategy** & **Accuracy** & **macro-F1** \\ \hline Polarity Prediction & & \\ \hline Parent-Child nodes & 83.48 & 83.12 \\ Random 6 2-hop nhds & 83.55 & 83.24 \\ Biased Root-seeking & & \\ Random Walk & 82.01 & 81.62 \\ Similarity-based top 6 & & \\ 2-hop nhds & 83.53 & 83.22 \\ **Similarity-based** & & \\ **Random Walk** & **83.73** & 83.42 \\ **Attn-modulated** & & \\ **Random Walk** & 83.71 & **83.46** \\ \hline \hline Hate Speech Detection & & \\ \hline Parent-Child nodes & 93.18 & 75.93 \\ Random 6 2-hop nhds & 92.95 & 75.36 \\ Biased Root-seeking & & \\ Random Walk & 93.57 & 78.35 \\ Similarity-based top 6 & & \\ 2-hop nhds & 92.98 & 76.85 \\ **Similarity-based** & & \\ **Random Walk** & 93.73 & 78.69 \\ **Attn-modulated** & & \\ **Random Walk** & **94.14** & **80.03** \\ \hline \hline \end{tabular} \end{table} Table 6. Performance (in %) comparison of proposed semantic-aware Graph Walk strategies with naive strategies for GASCOM model. \begin{table} \begin{tabular}{l|c c c c c} \hline \hline **Model** & **Accuracy** & **macro-F1** & **Precision** & **Recall** & **PR AUC** \\ \hline Bag-of-Words + Logistic Regression & 92.08 & 61.45 & 56.98 & 71.49 & 44.84 \\ Sentence-BERT with classification layer & 92.28 & 68.79 & 74.96 & 65.47 & 54.34 \\ BERT: Root-seeking Graph Walk + MLP & 92.14 & 66.56 & 61.89 & 71.65 & 51.27 \\ Graph Convolutional Networks & 92.17 & 67.11 & 62.44 & 72.45 & 52.35 \\ GraphNLI: Graph Walk + Weighted Avg. & 93.06 & 73.56 & 80.63 & 69.48 & 57.84 \\ GraphNLI: Random Walk + Weighted Avg. & 93.18 & 74.79 & 80.89 & 70.90 & 59.98 \\ \hline GASCOM: Random Walk & 93.57 & 78.35 & 78.60 & 78.10 & 66.68 \\ GASCOM: Similarity-based Graph Walk & 92.95 & 75.59 & 79.50 & 72.79 & 60.63 \\ GASCOM: Similarity-based Random Walk & 93.73 & 78.69 & **83.69** & 75.23 & 66.89 \\ GASCOM: Attention-modulated Graph Walk & 92.63 & 75.50 & 76.95 & 74.25 & 60.57 \\ GASCOM: Attention-modulated Random Walk & **94.14** & **80.03** & 81.45 & **78.75** & **67.19** \\ \hline \hline \end{tabular} \end{table} Table 3. Performance (in %) on _Guest_ dataset for hate speech detection. \begin{table} \begin{tabular}{l|c c c} \hline \hline **Strategy** & **Accuracy** & **macro-F1** \\ \hline Polarity Prediction & & \\ \hline Parent-Child nodes & 83.48 & 83.12 \\ Random 6 2-hop nhds & 83.55 & 83.24 \\ Biased Root-seeking & & \\ Random Walk & 82.01 & 81.62 \\ Similarity-based top 6 & & \\ 2-hop nhds & 83.53 & 83.22 \\ **Similarity-based** & & \\ **Random Walk** & **83.73** & 83.42 \\ **Attn-modulated** & & \\ **Random Walk** & 83.71 & **83.46** \\ \hline Hate Speech Detection & & \\ \hline Parent-Child nodes & 93.18 & 75.93 \\ Random 6 2-hop nhds & 92.95 & 75.36 \\ Biased Root-seeking & & \\ Random Walk & 93.57 & 78.35 \\ Similarity-based top 6 & & \\ 2-hop nhds & 92.98 & 76.85 \\ **Similarity-based** & & \\ **Random Walk** & 93.73 & 78.69 \\ **Attn-modulated** & & \\ **Random Walk** & **94.14** & **80.03** \\ \hline \hline \end{tabular} \end{table} Table 6. Performance (in %) comparison of proposed semantic-aware Graph Walk strategies with naive strategies for GASCOM model. 
Table 4. Impact of walk length \(L\) on GASCOM model (performance in %). #### 4.6.1. **Cross-Attention Improves Polarity Prediction** Table 7 clearly shows the improvement of 1.5 percentage points in macro-F1 in predicting the polarities on the _Kialo_ dataset with the cross-attention embedding \(w\) in the GASCOM model. #### 4.6.2. **Semantic-aware Graph Walks Starting at Parent Nodes Improve Performance** Table 8 shows the performance comparison of semantic-aware graph walks starting from the parent node versus the target node. For both tasks, conversation context selection strategies perform significantly better when they start from the parent node. In online conversations, the target comment (reply) generally uses the context of its immediate parent comment and therefore replies to it directly. Furthermore, in polarity prediction, the polarity of the target node is with respect to the parent node. Hence, deterministic inclusion of parent nodes in random walks is better than their non-deterministic inclusion in the case of a semantic-aware random walk starting from the target node itself. ### Case Study and Interpretability Analysis For interpretability analysis of the semantic-aware graph walks and token-level multi-head graph attention, we consider an example conversation from the _Guest_ dataset, as shown in Figure 3. The bolded (bottom-left) node is actually misogynistic hate speech. Baseline models such as BERT cannot accurately predict it as hate without the conversational context, whereas our GASCOM model can accurately predict it as hate. Every neighbouring node in the conversation has an attention weight (Section 3.2.2) and a similarity score (Section 3.2.1) obtained by comparing its sentence-level embeddings with the parent embeddings of the target node. As seen in Figure 3, token-level graph attention spreads attention weights across the neighbouring nodes and gives higher weights to the relevant nodes. For example, graph attention gives higher weights to the sibling nodes A and B in the parallel thread because they talk about women with some explicit misogynistic words. But the similarity-based approach just gives higher weights to the immediate parent and ancestor nodes instead of actually considering the semantic relevance. Besides node-level analysis, we look at the token-level attention to understand which tokens from the selected nodes receive higher weights. The highlighted part of the context indicates token-level attention weights.
The darker the red shade, the higher the attention weights. We notice that important keywords or words related to women are actually given higher attention weights, especially for the sibling nodes in the right-side thread. This example illustrates the utility of the proposed semantic-aware graph-based conversation context selection algorithms and token-level multi-head graph attention in providing appropriate weights to different nodes and different parts of texts within nodes. Figure 3. An example conversation from the _Guest_ dataset. The bolded (bottom-left) node is hate speech (misogyny). Every neighbouring node has an attention weight and a similarity score obtained with its sentence-level embeddings. The highlighted part of the context indicates token-level attention weights; the darker the shade, the higher the attention weights. \begin{table} \begin{tabular}{l|c} \hline **Model** & **macro-F1** \\ \hline Polarity Prediction & \\ \hline Similarity-based Graph Walk & 78.90 \\ Similarity-based Graph Walk & \\ starting at parent node & 83.15 \\ Similarity-based Random Walk & 78.96 \\ Similarity-based Random Walk & \\ starting at parent node & **83.42** \\ \hline Hate Speech Detection & \\ \hline Similarity-based Graph Walk & 72.01 \\ Similarity-based Graph Walk & \\ starting at parent node & 75.59 \\ Similarity-based Random Walk & 76.20 \\ Similarity-based Random Walk & \\ starting at parent node & **78.69** \\ \hline \end{tabular} \end{table} Table 8. Impact of Semantic-aware Graph Walks starting from the parent node on the performance of the GASCOM model (performance in %). ## 5. Conclusions and Future Work We proposed a general deep learning framework for online conversation understanding. Our proposed framework employs novel semantic-aware graph-based conversation context selection algorithms and a token-level multi-head graph attention mechanism to effectively utilize conversational context to augment the semantic meaning of the target utterance. We demonstrated the superiority of the proposed approach on two important online conversation understanding tasks, namely polarity prediction and hate speech detection. Future directions include jointly training the graph-based conversation context selection module and the remaining modules for improved performance. ## 6. Limitations In forums such as BBC's _Have Your Say_1, there is no explicit threaded reply structure, requiring us to infer from the text of a reply which other post it is replying to in order to construct discussion trees. In this less restrictive user interface, a single post may refer to or reply to multiple other posts, creating more than one edge and a conversation structure that is no longer a tree but a more general graph. We believe that the GASCOM model would work in this more general context as well, with the random walk sampling all the available conversational context. However, this has not been tested empirically. GASCOM makes use of conversation context from surrounding posts. While GASCOM can label each post as a conversation evolves and new posts are added, it becomes more effective only after a reasonable number of replies have been added. At the beginning of a conversation, when not a lot of conversation context is available, GASCOM will likely only perform similar to baseline models such as BERT, which also operate without the additional context. Currently, GASCOM is trained on English conversations.
However, with widespread multilingual online conversations and conversations in low-resource languages, there is a need to build models for online conversation understanding in multilingual and low-resource settings. It is easy to adapt GASCOM to low-resource and multilingual settings by using language models trained on specific languages as encoders instead of RoBERTa trained on an English corpus. ## 7. Ethical Considerations * **Confidentiality:** Access to data is critical to the effectiveness of our work. We only use publicly available de-identified datasets. * **Fairness and Biases:** AI models trained on huge amounts of data can exhibit various kinds of biases arising from the dataset and the proposed model. * **Potential for Harm:** AI models are not 100% accurate. In the case of hate speech detection, false positives (comments falsely labelled as hate) and false negatives (undetected hate) can be an issue on social media platforms. Any prediction of hate speech by GASCOM will need to be manually verified. Our multi-head graph attention mechanism can aid this process through a more nuanced interpretation of the results (as demonstrated in Figure 3).
2302.12890
Edge-Based Detection and Localization of Adversarial Oscillatory Load Attacks Orchestrated By Compromised EV Charging Stations
In this paper, we investigate an edge-based approach for the detection and localization of coordinated oscillatory load attacks initiated by exploited EV charging stations against the power grid. We rely on the behavioral characteristics of the power grid in the presence of interconnected EVCS while combining cyber and physical layer features to implement deep learning algorithms for the effective detection of oscillatory load attacks at the EVCS. We evaluate the proposed detection approach by building a real-time test bed to synthesize benign and malicious data, which was generated by analyzing real-life EV charging data collected during recent years. The results demonstrate the effectiveness of the implemented approach with the Convolutional Long-Short Term Memory model producing optimal classification accuracy (99.4\%). Moreover, our analysis results shed light on the impact of such detection mechanisms towards building resiliency into different levels of the EV charging ecosystem while allowing power grid operators to localize attacks and take further mitigation measures. Specifically, we managed to decentralize the detection mechanism of oscillatory load attacks and create an effective alternative for operator-centric mechanisms to mitigate multi-operator and MitM oscillatory load attacks against the power grid. Finally, we leverage the created test bed to evaluate a distributed mitigation technique, which can be deployed on public/private charging stations to average out the impact of oscillatory load attacks while allowing the power system to recover smoothly within 1 second with minimal overhead.
Khaled Sarieddine, Mohammad Ali Sayed, Sadegh Torabi, Ribal Atallah, Chadi Assi
2023-02-24T20:52:33Z
http://arxiv.org/abs/2302.12890v1
Edge-Based Detection and Localization of Adversarial Oscillatory Load Attacks Orchestrated By Compromised EV Charging Stations ###### Abstract Recent reports indicate that Electric Vehicle Charging Stations (EVCS) are susceptible to remote exploitation through their vulnerable software/cyber components. More importantly, compromised EVCS can be leveraged to perform coordinated oscillatory load attacks against the interconnected power grid, leading to power grid instability, increased operational costs, and power line tripping. In this paper, we investigate an edge-based approach for the detection and localization of coordinated oscillatory load attacks initiated by exploited EV charging stations against the power grid. We rely on the behavioral characteristics of the power grid in the presence of interconnected EVCS while combining cyber and physical layer features to implement deep learning algorithms for the effective detection of oscillatory load attacks at the EVCS. We evaluate the proposed detection approach by building a real-time test bed to synthesize benign and malicious data, which was generated by analyzing real-life EV charging data collected during recent years. The results demonstrate the effectiveness of the implemented approach with the Convolutional Long-Short Term Memory model producing optimal classification accuracy (99.4%). Moreover, our analysis results shed light on the impact of such detection mechanisms towards building resiliency into different levels of the EV charging ecosystem while allowing power grid operators to localize attacks and take further mitigation measures. Specifically, we managed to decentralize the detection mechanism of oscillatory load attacks and create an effective alternative for operator-centric mechanisms to mitigate multi-operator and MitM oscillatory load attacks against the power grid. Finally, we leverage the created test bed to evaluate a distributed mitigation technique, which can be deployed on public/private charging stations to average out the impact of oscillatory load attacks while allowing the power system to recover smoothly within 1 second with minimal overhead. keywords: Electric Vehicle Charging Stations, Cyber-physical Systems, AI-Detection, Oscillatory Load Attacks, Attacks Mitigation, Cyber Attacks, Grid Stability, IoT + Footnote †: journal: IJEPES ## 1 Introduction The increasing demands for Electric Vehicles (EVs) in recent years have been driven by a number of factors such as governmental policies and incentives, environmental concerns, rising gas prices, and a decrease in the prices of EV batteries, to name a few [1; 2; 3; 4; 5; 6]. As a result of the rapid adoption of EVs, several public/private entities have invested heavily to accelerate the deployment of the supporting EV Charging Stations (EVCSs) in major cities. For instance, the Government of Canada has already invested over $1 billion to support the increased zero-emission EV adoption, with a $680 million initiative towards addressing the lack of charging and refueling stations in Canada by 2027 [7]. Moreover, EVCSs have been equipped with remote connection and communication capabilities, which facilitate smart charging and scheduling of sessions by the EV users' and the operators' remote managing capabilities of the infrastructure. Despite these benefits, the remote control/management capabilities instilled on the Internet-enabled EVCSs open doors for exploiting the EV charging infrastructure through various vulnerable components in the cyber-layer. 
The EV ecosystem cyber layer consists of a complex system of interconnected components such as the mobile application, EVCS firmware, the back-end cloud management system (CMS), and the communication links/protocols [8, 9, 10], to name a few. In fact, recent reports indicate that the EVCS ecosystem is vulnerable to remote cyber attacks, which have real-life impacts. For instance, in 2022, two different attacks on the EV ecosystem were confirmed. In March of 2022, EV charging stations in Russia were hacked and used to display anti-war messages [11] while rendering them unavailable to consumers. In the UK as well, EVCSs were hacked and rendered unavailable while inappropriate content was displayed on their screens [12]. More importantly, while these reported incidents demonstrate the insecurity of the EVCS ecosystem, they raise concerns about the possibility of leveraging such vulnerable devices/systems as an entry point to attack the operations of the interconnected critical infrastructure such as the power grid [8]. For instance, malware-infected workstations and Supervisory Control and Data Acquisition (SCADA) systems were leveraged to attack the Ukrainian power grid in 2017 [13, 14], leading to power line tripping and depriving about a quarter of a million consumers of electricity for up to 6 hours. In general, an adversary can exploit the vulnerable components within the EV charging ecosystem (e.g., the CMS or the communication links) to create a botnet of infected EVCSs, which can be remotely controlled to launch detrimental attacks against the power grid. For instance, the adversary can command a swarm of EVCSs to start simultaneous charging operations and then stop them repeatedly to impact the stability of the power grid by altering the generators' speed, tripping power lines, overloading the grid, and/or creating frequency instability [8, 15]. In addition to the possible physical impact, such attacks can also impact the power grid's efficiency and operational costs (e.g., line losses and generation costs) [16, 15]. Considering the costly physical and financial implications of large-scale cyber attacks against the power grid, there is a need for effective detection mechanisms at different levels of the multi-layered heterogeneous ecosystem [17] to improve grid resiliency and ensure fault tolerance. Previous work discussed various ways to detect attacks and malicious activities on the grid. For instance, in [18, 19, 20], the authors proposed deep learning algorithms to create intrusion detection models based on readings obtained from various input sources such as Phasor Measurement Units (PMUs), control panel logs, Snort network alerts, and relay logs. While these mechanisms mainly depend on monitoring the power grid performance and state estimation to detect abnormal behaviors, they may fail to detect covert and stealthy attacks that are hidden or rendered as normal [21]. Moreover, Kabir et al. [21] devised a cyber-layer detection and mitigation mechanism, which is deployed on the CMS [21]. Nevertheless, such centralized detection techniques can be evaded due to the existing vulnerabilities found within the EV charging ecosystem. For instance, adversaries can exploit a vulnerable CMS [8] and/or perform man-in-the-middle (MitM) attacks on the communication protocols (e.g., OCPP [9]) to compromise the integrity of the data and the deployed detection models.
In this multilayered and complex ecosystem, such a centralized mechanism provides a single point of failure for the detection and mitigation mechanisms, since attackers can easily evade them by attacking other components or layers of the ecosystem. In fact, the utility does not have the flexibility to monitor individual consumer loads in real time, which hinders its ability to detect and localize attacks initiated by the EV charging system and forces it to depend on historical data and grid measurements [15]. While an oscillatory load attack might be detected through physical-layer detection if it was not built stealthily, the attack can only be detected through its impact, by which time the damage might have already occurred. Even when the utility is able to locate the bus that is showing abnormal behavior, it would not be able to identify the exact consumer load that was used to alter the grid behavior. This is especially true for the EVCS load, which is privately owned (by companies operating the infrastructure), hindering the utility's observability over the ecosystem. It is worth highlighting that even when the utility has some observability over the ecosystem, it is limited in localizing the source of an attack on the power grid down to the granularity of an individual EVCS (location, operator, etc.). This is further amplified by the wide distribution of the EVCS ecosystem and the lack of standardization in the deployment (e.g., multiple operators), which increases the complexity of detecting such attacks and localizing them. Additionally, we highlight that CMS detection mechanisms (operator-centric) are limited to attacks launched by public EVCSs and cannot protect the grid against attacks initiated by private charging stations [21]. Furthermore, while the proposed detection mechanism by Kabir et al. [21] seems to be effective, it only focused on a specific type of oscillatory load attack. Moreover, it resulted in a high false-negative rate (about 30%) for prompt attacks of 20-second duration. Finally, their centralized mitigation mechanism does not satisfy the fault tolerance requirements needed to ensure a secure operation of the power grid, due to the deployment of the detection and mitigation mechanisms on the CMS. We identify special kinds of adversarial attacks that could be initiated against such centralized and operator-centric mechanisms, such as: * Multi-operator oscillatory attack, where an attacker can initiate stealthy multi-operator attacks (exploiting multiple EVCSs belonging to different operators) that might not be detected or mitigated by the deployed mechanism suggested by Kabir et al. [21], since it does not have a holistic view of the different silo-ed operators. The attacker in this case depends on the collective impact of the compromised EVCSs that belong to different operators to harm the underlying infrastructure (e.g., the power grid). * Slow oscillatory attacks, where the adversary can split the oscillatory attack load into numerous charging station groups to remain stealthy. For example, the attacker can divide the oscillatory attack period in half by launching the attack on double the number of EVCSs. This allows the adversary to initiate a slow oscillatory behavior on the individual charging stations to remain stealthier. The attack can be designed in such a way that the aggregate load seen by the grid is the same, yet the behavior of the individual charging station or management system is inconspicuous, as illustrated in the sketch below.
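To make the second attack vector concrete, the following minimal sketch (our own illustration; the group count, period, and load values are hypothetical and not drawn from the attack scenarios evaluated later) shows how an adversary could stagger a square-wave oscillatory load across several groups of compromised EVCSs so that the aggregate load oscillates at the original rate while each individual group switches only a fraction as often:

```python
import numpy as np

def staggered_attack_load(n_groups, period_s, load_per_group_kw, horizon_s, dt=0.5):
    """Stagger per-group on/off schedules so the summed (aggregate) load keeps the
    original oscillation period while each group toggles n_groups times less often."""
    t = np.arange(0.0, horizon_s, dt)
    slow_period = n_groups * period_s
    per_group = np.zeros((n_groups, t.size))
    for g in range(n_groups):
        # Group g is 'on' only during its own slice of the slow cycle.
        # Each group must be able to supply the full attack magnitude on its own.
        phase = (t - g * period_s) % slow_period
        per_group[g] = np.where(phase < period_s / 2, load_per_group_kw, 0.0)
    return t, per_group, per_group.sum(axis=0)

# Hypothetical values: 4 groups, 1 s aggregate period, 500 kW of EVCS load per group.
t, per_group, total = staggered_attack_load(n_groups=4, period_s=1.0,
                                            load_per_group_kw=500.0, horizon_s=8.0)
print("aggregate load swings between", total.min(), "and", total.max(), "kW")
print("aggregate switch events:", int((np.diff(total) != 0).sum()))
print("switch events of group 0:", int((np.diff(per_group[0]) != 0).sum()))
```

From the grid's perspective the aggregate signal is the same 0.5 s-on/0.5 s-off square wave, but any single group of stations (or operator) only exhibits one on/off pair every 4 seconds, which is far harder to flag from its local activity alone.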
Therefore, due to the failure of centralized detection mechanisms to detect coordinated attacks and the failure of physical-layer detection mechanisms to identify and localize the attacks with high granularity, it is of paramount importance to devise a detection and mitigation mechanism that addresses these gaps. To the best of our knowledge, the attack vectors mentioned above were not considered in previous studies, which renders their detection mechanisms ineffective depending on the source of the attack. Thus, there is a need to design a detection mechanism that detects attacks initiated by the EV ecosystem against the power grid and that can also address multi-operator load attacks, slow stealthy oscillatory attacks, and all oscillatory load attacks initiated at any vulnerable point of entry into the ecosystem. In this paper, we investigate and propose an edge-based (EVCS) deep learning detection mechanism that can detect oscillatory load attacks in a decentralized manner while providing the operator with insight into localizing the source of the attack and attributing it to the EVCS ecosystem with the granularity of an individual charging station. We instill resiliency and fault tolerance into the system by distributing the decision-making. We used a unique set of features to extract and mine the behavioral characteristics of a charging station and its connected infrastructure and achieved 99.4% accuracy in swiftly detecting oscillatory load attacks by viewing only the first 5 seconds of the attack, whether it is launched by public or private charging stations. Moreover, we discuss and evaluate a post-detection distributed mitigation mechanism that attenuates the grid oscillation and helps the system recover smoothly. The proposed cyber-layer mitigation mechanism reduces the impact on the grid by distributing the oscillatory load over a period of time while preserving the quality of service for the customers (non-malicious requests that get classified as malicious). After detecting malicious behavior, a random delay of up to 4 seconds is added to new requests to deprive the adversary of the luxury of coordinating an attack and thus diminish the impact of the attack on the grid. This random delay block is added to each charging station independently, which allows a distributed and lightweight mitigation technique to effectively diminish the attack impact almost instantaneously. To this end, we summarize our contributions as follows: * This work is among the first to propose a practical and effective mechanism for detecting coordinated oscillatory load attacks on the grid using a botnet of EVCSs, which represents a new attack surface that has been shown to be vulnerable at scale. This work sheds light on the need for effective detection mechanisms at various layers of the EV infrastructure to protect the grid by building resiliency and fault tolerance into it. * This work addresses the challenges in collecting EV charging data by exploring real datasets and using our observations to characterize various benign/malicious behaviors and identify the main features. We also created a testbed to synthesize a realistic dataset that represents different benign and malicious behaviors and allows studying the impact of the electric vehicle ecosystem. * We propose and evaluate an AI-enabled detection mechanism that can be deployed on private and public charging stations. The results indicate the accuracy and effectiveness of the approach with high precision.
We tailored deep learning models to attribute suspicious behaviors and achieved about 99.4% accuracy while allowing for distributed and independent decision-making by integrating hybrid (cyber and physical) features into the decision process. Along with that, the deployment location of our mechanism allowed us to mitigate adversarial attacks (e.g., multi-operator attacks and MitM OCPP attacks) that could be launched against centralized detection mechanisms. Along with that, our approach enables the utility to detect attacks launched from private charging stations as well. * We propose and evaluated an automated distributed lightweight-mitigation mechanism that can effectively neutralize the oscillatory attacks on the power grid. We discuss the practical effectiveness of the method even when possible evasion techniques are employed by adversaries, which makes it a robust and feasible approach. We also take into consideration maintaining a high quality of service for customers, in case of a false positive, by minimizing the delay to a maximum of 4 seconds. The remainder of this paper is organized as follows. In Section 2, we present background information and basic concepts related to the EV ecosystem and related work. In Section 3, we discuss the system model and discuss the methodology and details our proposed detection and mitigation mechanisms. In Section 4, we detail the experimental results of the distributed detection and mitigation mechanisms. Finally, we evaluate and provide a discussion of the results of our proposed approach in Section 5 before providing a concluding remark in Section 6. ## 2 Background and Threat Model The EV charging ecosystem represents a cyber-physical system, which is composed of interacting hardware and software components. ### System Overview The software component is composed of the central management system (CMS) and a mobile application. The software component allows remote monitoring and management of the physical counterpart which constitutes the electric vehicle charging station and the vehicle connected to the power grid. The mobile application allows users to control the charging stations remotely and monitor them by sending requests to the CMS. The CMS enables remote communication between the mobile application and the charging station by interpreting commands and controlling the EVCS. The CMS uses the Open Charge Point Protocol (OCPP) [22] which allows it to perform numerous functionalities on the EVCS such as start, stop, update firmware, etc. The OCPP emerged as a result of an effort to standardize the development and deployment of the EVCSs. Moreover, to standardize the communication between the vehicle and the charging station various standards have been put in place such as (SAE J-1772/J-2293/J-2847/J-2836, IEC62196/61851, ISO/IEC 15118, and chAdeMO) [8]. There are several types of AC and DC chargers for EVs having different charging rates. Level 2 chargers, which are the most common public charging station types, are being upgraded with faster charging Level 2 or even replaced with new Level 3 chargers (fast/superchargers) to improve the user experience and decrease charging times [8]. These charging stations can either be deployed in public or privately at residences and office buildings. In this work, we focus on both as we aim to detect oscillatory switching attacks originating from private and public charging stations unlike [21] that focused on public charging stations. 
It is worth highlighting that the EVCSs are also connected to the power grid (critical infrastructure) to draw the power needed for vehicles to charge. The connection to the critical infrastructure highlights the importance of securing the deployment of EVCSs against attacks that might impact the power grid. Figure 1: Overview of the EV charging ecosystem and its interactions. ### Related Work In this section, we survey and discuss previous work that tackled the security of the EV charging ecosystem's components and the attacks that are initiated through the ecosystem against the power grid. Sayed et al. [8] studied the impact of oscillatory switching attacks enabled by several vulnerabilities found in the ecosystem. Moreover, outside of academia, Kaspersky Lab's team [23] analyzed the security of the ChargePoint home charging station and found significant vulnerabilities in its firmware and mobile management application. Moreover, Alcaraz et al. [9] studied the communication protocol between the management system and the charging stations, which allows the adversary to interfere in the communication between the EVCS and the EV resource reservation service. Whereas, in [8], the authors studied the impact of oscillatory load attacks initiated by the EVCS against the power grid. The results showed that an insecure ecosystem poses a great risk to the critical infrastructure to which it is connected. It can lead to system instability, line tripping, etc. The security vulnerabilities in the EV ecosystem and their critical impact on human lives due to its connection to the power grid [8] motivate the need for a detection mechanism for oscillatory load attacks. In [21], Kabir et al. studied oscillatory load attacks and devised a centralized detection mechanism using a backpropagation neural network. The deep learning model developed can be deployed on a central management system and targets switching attacks that are initiated by public charging stations. Moreover, the proposed approach by Kabir et al. resulted in a 30% false-negative rate for swift 20-second attacks, which translates to 30% of the attacks being classified as normal; this uncertainty in the results motivates the need for a more efficient detection mechanism. It is worth mentioning that such operator-centric mechanisms (mechanisms that are deployed on the CMS of each operator) fail to detect multi-operator oscillatory load attacks due to their inability to view the activity of other operators. In our approach, we proposed a convolutional LSTM model that was able to detect swift attacks with a low false-negative rate. Consequently, using our approach we can defend against multi-operator switching attacks by distributing the decision-making and allowing charging stations to make independent decisions based on shared behavioral characteristics. Different detection mechanisms have been proposed in the literature to identify physical-layer attacks, such as False Data Injection, using recurrent neural networks [24], or using Bad Data Detection algorithms that rely on measurement residuals [25]. Moreover, other techniques, such as AdaBoost, random forest, and common path mining, have been studied [26; 27; 28; 29]. The detection of oscillatory load attacks, to the best of our knowledge, has not been widely studied in previous work. It is worth mentioning that previous work also lacks localization methods that link attacks to particular physical locations.
In our approach, because of its portability and our deployment location (the EVCS), the operator can identify and localize attacks down to the granularity of a single charging station, which allows the grid operator to create better defenses against attacks. Furthermore, detection mechanisms that depend on state estimation and knowledge about the power grid using different devices such as PMUs may fail under attacks that target the physical and cyber layers simultaneously [30]. These types of attacks include steps to mislead the control center, similar to the Ukrainian power grid attack. Moreover, in [31] the authors created an LSTM deep learning model to detect DDoS attacks that can violate the availability of EVCSs by targeting the management system. They studied different types of DDoS attacks that will affect the availability of resources. In our work, we do not assume that the attack has changed the attributes of network packets. Also, we do not have access to network packets before, during, and after the attack, unlike [31]. We utilize EVCS logs to deploy a distributed detection mechanism on the charging station. Consequently, Basnet et al. furthered their study to create an IDS to detect FDI and DDoS attacks on photovoltaic controllers [32], whereas we detect oscillatory load attacks on the cyber layer of the EV ecosystem. Moreover, in [33] the authors devised a ransomware detection mechanism while assuming that the ransomware can initiate DDoS and FDI attacks that might alter the state-of-charge thresholds. The detection mechanism is based on assembly instructions that are generated after the ransomware starts executing, whereas in [34], the authors proposed an early detection mechanism based on pre-attack (paranoiac) activity that the ransomware performs before executing. In [33], the authors utilized 561 ransomware samples to train and test their deep-learning model. However, there are various ransomware classes/families; in [34], the authors collected about 3,000 ransomware samples, which makes the dataset created in [33] unrepresentative. ### Oscillatory Attack Vector In [35], the authors exploited publicly available data of EV chargers of the Manhattan, New York, power grid to design a novel data-driven cyberattack strategy using state-feedback-based partial eigenvalue relocation, which targets the frequency stability of the power grid. The current number of EVs is not adequate to create sizable impacts; however, with the increased adoption of EVs and the deployment of charging stations to match the demand, the grid will face such attacks and impacts. To initiate an oscillatory load attack from the EVCS surface, several EVCSs have to alter their charging behavior to follow a repeated on-off behavior within a very short period. The oscillatory attacks are characterized by the EV load, the duration of the attack, and the instant of switching. These characteristics differ based on the power grid and its loading conditions. Two variations of the attack exist: the charging oscillatory attack, which relies on starting and stopping several charging stations, and the discharging oscillatory attack, which relies on charging and discharging connected EVs through several charging stations. Different combinations of the two variations can also be included; however, in our work, we focus on the charging oscillatory attacks, whereas future studies could include the discharging paradigm, vehicle-to-grid (V2G), as it gets rolled out to the public.
The oscillatory load attack takes advantage of load manipulation: it alternates between a surge in demand, which causes a frequency drop on the power grid, and, once the system starts its recovery and the generators start speeding up again, switching off the EVCSs initiated in the first step, causing a frequency increase. This could be amplified by using discharging oscillatory load attacks, which would cause the generators to speed up due to the mismatch between the demand and the extra generation [8]. Different types of oscillatory load attacks can be curated and are summarized as follows: * Switching attacks: * Square wave: synchronizing the compromised load and switching it between on and off [36; 16]. This attack can be made stealthier by distributing the switching behavior over multiple EVCSs to reduce the number of events per EVCS. * Alternating sine wave: synchronizing only small portions of the compromised load every time step \(t\) [37; 21] (stealthier than square wave attacks, and detecting them is not straightforward). * Dynamic attacks: the size and the trajectory of the compromised load are determined by the attacker based on the grid behavior to achieve and maximize the impact on grid instability [38]. From a grid perspective, oscillatory EV loads can be manipulated to have lower power factors [39], thus entailing a larger impact compared to residential loads [8]. Oscillatory load attacks do not require huge loads or injections to cause abnormal behavior on the power grid [16]. Even when the load is not large enough to cause generator tripping, a sustained switching attack can cause frequency and voltage oscillations, which in turn damage the turbines due to the constant acceleration and deceleration [8]. Moreover, it is worth mentioning that a variation of these attacks might target inter-area frequency, as discussed in [21]; these attacks are stealthy and may not be distinguished from the load variations of the grid [21; 36], which makes oscillatory load attacks initiated by the EVCS ecosystem a serious concern. Furthermore, other oscillatory load attacks can be used to force different types of oscillations, such as exciting sub-synchronous resonance [40]. Finally, in the dynamic attack scenario, the adversary induces forced oscillation without the need to excite a specific unstable mode present in the power grid [38]. It is worth noting that the existence of various operators and the wide distribution of the charging stations create stealthy (adversarial) attack vectors that might exploit the charging stations of different operators to create the same impact on the power grid and hinder the utilities' ability to detect and localize attacks, due to the increased complexity in monitoring the consumer loads. Consequently, to locate an attack, a utility might depend on PMU measurements and other artifacts; however, these do not reach the granularity of identifying the exact location of the charging station that was exploited to initiate the attack, due to the wide distribution of the charging stations and the presence of multiple operators. Granular localization information is necessary for the utility to provide adequate countermeasures and create future plans to secure its system. ### Threat Model We consider an adversary that is able to compromise and control a large number of EVCSs. There are multiple attack vectors that can be used by the adversary to impact the power grid, which we take into consideration in our detection mechanism.
Namely, we describe the different attack vectors below: * The internal components of an electric vehicle that have internet connectivity, such as the On-Board Diagnostics (OBD) port, can be accessed physically or wirelessly and grant access to the Controller Area Network (CAN) bus, which could be leveraged by the attacker to control the vehicle and its charging [41]. * The mobile application, which is the component responsible for and the enabler of the commercialization of the EVCS ecosystem, could be used by the adversary by leveraging the lack of end-to-end authentication between the user and his vehicle [16], allowing the adversary to opportunistically take advantage of vehicles connected to the charging station. * CMSs have been found to be vulnerable to remote attacks. By exploiting one or more operators' management systems (multi-operator), the adversary can perform attacks against the power grid [8] by commanding a large distributed EVCS botnet. The adversary could create different combinations of attacks by leveraging multiple CMSs. * The OCPP protocol is also taken into consideration, as it has been found vulnerable to MitM attacks that could be used to initiate attacks and bypass any protection mechanism deployed on the cloud [9]. Consequently, we highlight that unlike Kabir et al. [21], the attacker, after gaining control of the EVCSs, can launch various types of oscillatory load attacks, not limited to inter-area oscillation attacks. Moreover, the adversary does not necessarily take advantage only of public charging stations but could also leverage privately owned charging stations. We assume that the attacker can launch covert attacks [42] as illustrated in Figure 2. The adversary, as shown in Step 2, controls a considerable number of EVCSs and can command a coordinated oscillatory load attack against the grid. However, to thwart the utility operator's detection mechanisms, the adversary intercepts (Step 2) measurements and readings that the operator collects to monitor and estimate the state of the grid and injects false data (Step 3), which deceives the physical-layer detection mechanism hosted by the utility operator and thus renders the utility oblivious to the grid's actual state. It is worth mentioning that the adversary injects data that resembles the normal behavior of the grid. Consequently, the utility operator sends commands to the power grid components (e.g., generators) to perform actions to stabilize the grid based on historical data (e.g., load demand trends); the adversary intercepts these commands (Step 4) and forwards them to the false data injector so that the operator sees expected data trends and would not trigger an alarm at the physical layer (Step 5). The attacker can establish covert channels by injecting malware/ransomware [33] (e.g., the BlackEnergy malware injected into Ukraine's power grid [14], or the Stuxnet malware that infected the Iranian power grid [43]) into the networked controller and arbitrarily alter the control logic. In our work, these threat vectors are addressed using our detection and mitigation mechanisms, where the attacker's main goal is to induce forced oscillations that would impact the frequency of the grid. Now, oscillatory load attacks require the coordination of numerous charging stations simultaneously, and we acknowledge that the current number of EVCSs is not enough to launch the proposed attacks.
However, with the current exponential increase in the adoption of electric vehicles and the rapid deployment of EVCSs to match the adoption rate, such attacks pose a great threat to power grid stability. To demonstrate the feasibility of such attacks, we chose the New South Wales (NSW) grid, whose size is similar to the NE 39-bus grid we use for our dataset collection. The NSW grid has an average load of 6,989 MW [44] and a total number of registered vehicles of 5,892,206 [45]. Scaled to fit the 6,097 MW 39-bus grid, the total number of vehicles in our grid would be 5,155,681. If we assume a future projection of 50% EV penetration, our grid will contain over 2.5 million EVs. As per the International Energy Agency (IEA) [46], based on the mixture of available EVCSs, the average charging rate per EVCS is 24 kW. We highlight that, based on these statistics, our attacks only require a small portion of the available EVs to be successful. Our largest attack magnitude, for instance, constitutes 30% of the grid load. This translates to only 3% of the available EVs. By comparison, our smallest attack magnitude only requires 1% of the available EVs. When this analysis is performed for the 9-bus system, used in our distributed mitigation section, we see that it only requires 2.6% of the available EVs if we assume a 50% penetration level. Figure 2: Overview of the covert attack. ## 3 Methodology and System Model We discuss the methodology and conceptual model of our detection and mitigation mechanisms. We provide a discussion of the system model, followed by a detailed discussion of our distributed detection methodology. We also discuss the dataset curation and collection. Finally, we discuss our distributed mitigation methodology and provide an overview of the real-time co-simulation testbed on which we demonstrate our mitigation mechanism. ### System Model In our approach, we attempt to ensure fault tolerance in the deployment of the detection mechanism while handling oscillatory and adversarial oscillatory attacks. To mitigate covert sophisticated attacks that allow adversaries to deceive traditional physical-layer detection mechanisms, detection should occur at different levels of the interconnected system to build resiliency into it. To address the limitations of previous work, e.g., [21] and other centralized detection mechanisms, we propose to deploy a deep learning model on the EVCS, since the EVCS possesses the ability to collect information about the true operations of the charging stations and power characteristics (e.g., frequency). This presents an advantage over CMS-based detectors, where a compromised OCPP connection allows the adversary to inject bi-directional false data that would affect the detection mechanism deployed there. Moreover, the EVCS is the component that is utilized by adversaries to perform physical attacks by compromising other components (e.g., mobile application, CMS, or OCPP). Thus, securing the EVCSs would prevent attacks initiated from any vulnerable point in the EVCS ecosystem. Finally, centralizing the detection mechanisms creates a single point of failure and maximizes the risk of exposing the deep learning model and polluting the data, since recent incident reports and studies show the vulnerability of the system at scale. In contrast, the deployment of a deep learning model on the EVCS would hinder the ability of the adversary due to the distributed and independent operation of charging stations (increased resiliency).
Consequently, to deploy a deep learning model on the charging station, new features should be derived compared to the work of [21], which utilizes information that only the CMS has access to (e.g., the change in the load of vehicles during a certain \(\delta\) time). Thus, we investigate the usage of various deep learning techniques in detecting attacks against the power grid initiated by the EVCS ecosystem. To the best of our knowledge, we are among the first to investigate a decentralized cyber-detection mechanism deployed on the EVCS ecosystem to protect the grid from the new vulnerabilities of this cyber-physical system. Residential and public charging stations both have log files to record all the operations/events of the station. In Figure 3 we show a sample EVCS log, where each transaction might include the following information: EVCS ID, operation type (e.g., charging or stop), operation date, start operation time, stop operation time, charging rate, type of the charger, and the variation of the frequency of the load bus that the EVCS is connected to over time. Figure 3: EVCS log showing the different features that could be extracted from the charging station logs. It is worth noting that the OCPP protocol provides a functional block that enables charging stations to send periodic meter values (e.g., voltage, reactive power, etc.). Thus, using the telemetry data collected by the charging station, the power grid frequency, which is directly linked to the speed of the generators, can be directly recorded by the EVCS with high granularity by measuring the period of the voltage waveforms that are sampled over time. It is worth highlighting that electric devices (e.g., charging stations) will exhibit the same frequency as the bus they are connected to. Thus, to collect grid frequency measurements, the utility monitors and collects measurements from the buses which incidentally have connected EVCSs. Recent industrial technology advances have increased the connectivity of cyber-physical systems that are monitored and controlled by Supervisory Control and Data Acquisition (SCADA) systems, which use advanced computing, sensors, control systems, and communication networks [47]. SCADA systems allow power grid operators to gather real-time telemetry data about the grid. This information, which the grid operator can acquire from the buses, can be used for training deep learning models, since, when the model is deployed, each charging station should be able to gather this information by itself. Accordingly, the charging station can store each operation in its log file and use it to detect anomalies in the usage of the charging station, which indicate a possible attack initiated from the EVCS ecosystem against the grid. This information can be updated in the log file actively. However, since the utility (power grid operator) is the main entity affected by oscillatory attacks, it will take responsibility for gathering information from different operators and distributing trained global models to the connected EVCSs. Collaboration with the utility by the various EVCS operators is mandatory to allow a collective view of multi-operator attacks. The utility will use past data to train and deploy deep learning models on the charging stations to alleviate any future privacy concerns the operators might have about sharing their data. Figure 4: Flow chart describing our detection mechanism. Detection Mechanism: In Figure 4 we give an overview of the proposed detection mechanism to be deployed at the charging station.
When an EVCS\({}_{e}\) receives a charging request (k\({}^{th}\) request), the EVCS\({}_{e}\) retrieves the events that occurred in the last t\({}_{1}\) seconds from its logs. Similarly, it retrieves the frequency readings that it collected within the same period from its logs. This information is fed into a machine/deep learning model to detect the maliciousness of the events that occurred within the last t\({}_{1}\) seconds. By leveraging the combination of the cyber data (series of events) and physical data (frequency on the power grid), which are tightly coupled in the case of a coordinated oscillatory load attack, we create a deep learning model that extracts the temporal and spatial relationships between the sequences of readings over time. The observed behavior of the charging station and the underlying infrastructure is used to characterize oscillatory load attacks and differentiate them from the normal functioning of a charging station. It is worth highlighting that the t\({}_{1}\)-second window is a rolling window and that the detection mechanism operates in real time. We only require the t\({}_{1}\) window to detect the attack. This also means that no additional extensive data logs are required to be kept on the EVCS\({}_{e}\), since our algorithm will not use any of the data prior to the t\({}_{1}\) rolling window. Mitigation Mechanism: If the deep learning model labels the sequence of events as malicious, the EVCS\({}_{e}\) will create a delay block that randomly delays requests by 0 to 4 seconds to disrupt the synchronization of the oscillatory load attack, and it notifies the CMS and grid operator by sending the EVCS ID and location. The mitigation mechanism allows distributed and independent decision-making for each charging station, thus ensuring fault tolerance in our mitigation mechanism. Consequently, we test our mitigation mechanism on our test bed, which is used to study the impact of the EV ecosystem on the power grid. The results (discussed later) show its effectiveness in neutralizing the impact of an oscillatory load attack on the generators' speed and minimizing the risk and the costs incurred by a successful attack. It is worth highlighting that the independence of our techniques from features or artifacts that require global knowledge of the ecosystem and grid provides us with the flexibility needed to deploy our detection-mitigation mechanism on public and private EVCSs. ### Distributed Detection Mechanism Methodology Given the limited number of previous works that discuss the detection and localization of oscillatory load attacks, along with the limitations of previous detection approaches, we aim to deploy an edge-based AI-enabled detection mechanism on the charging station itself. We leverage cyber and physical characteristics (e.g., charging events and power grid frequency variation) to identify and characterize malicious and benign behaviors. More specifically, the devised methodology attempts to leverage the behavioral characteristics of an oscillatory load attack to propose an effective edge-based, decentralized oscillatory load attack detection mechanism. To achieve our objectives, we start by understanding the normal behavior of charging stations by examining a real-life EVCS dataset. This dataset was obtained from Hydro-Quebec as part of a legal agreement and research collaboration. Hydro-Quebec owns and operates, through a subsidiary, the public EVCSs in Quebec.
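As a concrete, necessarily simplified illustration of the per-station detection and mitigation flow described above (Figure 4), the sketch below assumes a trained classifier exposed as a `predict` callable and a station log object with simple retrieval helpers; all names, signatures, and the event encoding are our own illustrative choices rather than part of any existing EVCS firmware or OCPP API:

```python
import random
import time

T1_SECONDS = 120      # rolling window length used by the detector
SAMPLE_PERIOD = 0.5   # one reading every 0.5 s -> 240 samples per feature

def handle_charging_request(request, station_log, model, notify_operator):
    """Classify the last t1 seconds of coupled (event, frequency) readings on every
    incoming request; if flagged malicious, randomly delay the request by 0-4 s."""
    now = time.time()
    events = station_log.events_since(now - T1_SECONDS)     # cyber feature: on/off state samples
    freqs = station_log.frequency_since(now - T1_SECONDS)   # physical feature: bus frequency samples
    window = list(zip(events, freqs))                       # 240 x 2 feature window

    if model.predict(window) == "malicious":
        # Independent, per-station random delay that breaks attack synchronization.
        time.sleep(random.uniform(0.0, 4.0))
        notify_operator(station_log.evcs_id, station_log.location)

    return start_charging(request)  # the (possibly delayed) request still proceeds

def start_charging(request):
    # Placeholder for the station's normal charging-session handling.
    return {"status": "charging", "request": request}
```

Because each station draws its delay independently, no coordination with the CMS or other stations is needed, and a benign request misclassified as malicious is penalized by at most 4 seconds.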
This dataset is used to understand the normal behavior of the public EVCSs and to extract certain features that allow us to build our own realistic, data-driven model of normal EVCS behavior. First, we identify the state changes of a charging station. The charging station alternates between three states: idle, charging, and discharging. Whenever a charging station receives a charging request it transitions from idle to charging, and when it receives a stop-charge request the charging station goes back to the idle state. However, the duration that the charging station spends in any of the states needs to be understood, since the oscillatory load attack is tightly coupled with the total attack load and the time spent in each state. The dataset is acquired from 6,000 EVCSs located in different geographical locations of Quebec from 2018 to early 2022, to cover all four seasons of Canada and their corresponding influence on charging behavior. The data contains multiple principal metrics about charging sessions (e.g., start time, end time, and duration of charging). The normal behavior of the charging stations falls under two general observations: 1) normal behavior of a charging station with charging > 5 minutes; and 2) switching behavior of an individual charging station that does not impact the grid. Consequently, we analyze the charging behavior of one heavily utilized and one lightly utilized EVCS located around the downtown area in Montreal. The average duration of EVCS 1 (Figure 5a) is 24 minutes with a minimum of 55 seconds, whereas EVCS 2 recorded an average duration of 8 minutes with a minimum of 26 seconds. After further analysis of EVCS 2 (Figure 5b), a switching behavior is observed at 8:33 A.M., which was followed by two other switches at 8:34 and 8:35. Similar behavior was repeated at 9:47, 15:21, 17:11, and 19:15. The two charging station behavior patterns are identified based on their utilization, while their hardware specifications are the same (both provide an 11 kW charging rate). It is worth mentioning that a switching behavior occurring simultaneously on numerous charging stations would be considered a coordinated oscillatory load attack. This observation shows that the behavior of a charging station by itself is not enough to detect oscillatory load attacks, because it might cause numerous false positives and false negatives due to the presence of a switching behavior during the normal operation of a charging station. As the number of charging stations increases, this phenomenon is expected to become more common among charging stations. Moreover, since detection occurs on a charging station that does not have any information about other charging stations, we couple the events happening on the charging station with the frequency readings over the studied t\({}_{1}\) window. The frequency is a global variable shared between all charging stations that are connected to the same bus, which allows the charging station to gain global knowledge of the EVCSs connected to the same bus while keeping the detection local to itself. During a synchronized oscillatory load attack, events that occur on the EVCSs are tightly coupled with grid behavior. Therefore, we couple the events that occur on the charging station (e.g., start time, end time, and duration) and the grid behavior (e.g., frequency) over time.
Hence, using the mentioned features, we aim to detect and localize a synchronized oscillatory load attack with the fine granularity of identifying the charging stations that were compromised to perform such attacks. #### 3.2.1 Data Synthesis and Collection A crucial part of our detection mechanism is creating a comprehensive and realistic dataset that resembles both normal and malicious behaviors. Since such attack data is scarce in real life, and due to the unique features we chose, we create a realistic data-driven EV load profile. To achieve this, we independently simulate a Poisson arrival process of EVs to each EVCS. The charging time of these EVs is then simulated as a truncated Gaussian distribution. The parameters of the arrival and charging time models are specified for different periods during the day and for different seasons. These parameters are tuned based on the Hydro-Quebec EVCS dataset. Finally, we also simulate the impact of normal charging on the power grid along with the behavior of the power grid as a result of the different oscillatory load attacks launched. The simulated dataset will be used to train our detection model due to the lack of real data with the required granularity (0.5 seconds) published online. We will focus on anomaly detection for the detection of synchronized oscillatory load attacks. To this end, we couple the behavior of the EVCSs and the grid under normal and attack conditions. As mentioned above, our method offers the coupling of the cyber events occurring on the charging station to the physical data, i.e., the power grid frequency behavior. The normal arrival of new charging requests at a charging station is coupled with the normal frequency behavior of the bus to which the charging station is connected. The arrival of charging requests during an oscillatory load attack is coupled with the abnormal frequency behavior of the bus to which the charging station is connected. To this end, the IEEE New England 39-bus system [48] was built in MATLAB Simulink to gather the required power grid data. We use the MATLAB-Simulink 2020a Specialized Power Systems Toolbox, which is widely used for system stability studies [49]. The Simulink Power System Toolbox allows us to model all the different components of the power system (i.e., loads, lines, transformers, generators, and generator control systems). The dynamic behavior of the power system is mostly governed by the generators' control systems, and we use the models commonly adopted in stability studies, i.e., the round-rotor synchronous machine block of Simulink, the IEEE T1 generator exciter model, the IEEE G2 turbine speed governor, and a power system stabilizer based on IEEE Std 421.5. The simulations were performed with a simulation step size of 1 ns. Figure 5: Normal charging behavior of two different charging stations. The implemented model allows the study of the transient and steady-state behavior of the system. To simulate the normal frequency fluctuations of a power grid, we added random load blocks to all load buses. IEEE grid models have constant loads, which usually represent the average load of the bus. However, real consumer behavior is random during a short span of a few minutes, such that only its average is what is reported and planned for by utilities. This gives rise to the need to simulate small random perturbations in the loads of our power system, which lead to normal frequency variations.
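For illustration, the benign-session generator described above (Poisson arrivals with truncated-Gaussian charging durations) can be sketched as follows; the arrival-rate and duration parameters here are placeholders, only loosely inspired by the two stations analyzed earlier, and not the period- and season-specific values fitted to the Hydro-Quebec data:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng()

def synthesize_benign_sessions(horizon_min, arrivals_per_hour, mean_dur_min,
                               std_dur_min, min_dur_min, max_dur_min):
    """Poisson arrivals of charging requests with truncated-Gaussian session
    durations, mimicking the data-driven benign EVCS behavior described above."""
    rate_per_min = arrivals_per_hour / 60.0
    t, sessions = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_per_min)   # exponential inter-arrival times
        if t >= horizon_min:
            break
        # Truncation bounds are expressed in standard-deviation units.
        a = (min_dur_min - mean_dur_min) / std_dur_min
        b = (max_dur_min - mean_dur_min) / std_dur_min
        duration = truncnorm.rvs(a, b, loc=mean_dur_min, scale=std_dur_min,
                                 random_state=rng)
        sessions.append((t, t + duration))         # (start, stop) times in minutes
    return sessions

# Placeholder parameters, loosely inspired by the two stations analyzed above.
sessions = synthesize_benign_sessions(horizon_min=120, arrivals_per_hour=3,
                                      mean_dur_min=24, std_dur_min=10,
                                      min_dur_min=1, max_dur_min=60)
print(f"{len(sessions)} benign charging sessions over 2 hours")
```

Using independent per-station Poisson processes keeps benign stations uncorrelated with one another, which is precisely the property that a coordinated oscillatory attack violates.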
The random load blocks we added to all load buses consist of a random number generator and a dynamic load block, both provided in Simulink. The output of the random number generator is scaled by the nominal load of the bus it is connected to, through a multiplication block with a percentage cap that is changed in every simulation run. This setup is used to control the real and reactive power of the dynamic load block. The power factor of the random load block is maintained at 0.8 lagging to simulate benign consumer load variation. In half the simulations, the random source was set to follow a Gaussian distribution, and in the other half it followed a uniform distribution, to increase the randomness in our data and simulate close-to-real-life load perturbations. To avoid the pattern effect of pseudo-random number generators and to ensure true randomness, we utilize the Mersenne Twister algorithm with a period length of \(2^{19937}-1\), and we initiate the shuffle command before every simulation run to randomly select new seeds for the random number generator and guarantee further randomness.

The constructed system is used to create a dataset of 5,000 normal (no attack) scenarios and 5,000 oscillatory load attack scenarios by collecting cyber-layer measurements (EVCS events) and physical-layer measurements (grid frequency) from the simulated system. In our normal (no-attack) dataset, 80% of the scenarios follow a behavior similar to EVCS 1 (Figure 5a), whereas the other 20% follow a behavior similar to EVCS 2 (Figure 5b). Moreover, we identify the following 4 scenarios and classify them as normal behavior:

* Very slow charging station switching (normal charging request start and stop) and normal bus frequency behavior. Charging events with a very low arrival rate (e.g., \(\lambda<6\) events per 60 minutes), while the grid shows a normal frequency fluctuation.
* Very slow charging station switching and abnormal bus frequency behavior. Charging events with a low arrival rate (e.g., \(\lambda<6\) events per 60 minutes), while the grid shows abnormal fluctuation in the frequency. This abnormal grid behavior can either result from some sudden benign disturbance on the grid or from an attack that does not involve the charging station in question.
* Slow charging station switching and normal bus frequency behavior. Charging events with a high arrival rate (\(\lambda>6\) events per 60 minutes), with a normal frequency fluctuation as a result of normal consumer behavior.
* Fast charging station switching and normal bus frequency behavior. Charging events with a very high arrival rate (\(\lambda>6\) per 60 seconds), coupled with normal frequency fluctuation. This case represents a few actual cases we monitored where the EV owner connects and disconnects a few times a minute at the time of arrival. Furthermore, the absence of any abnormal frequency behavior means the absence of any abnormal behavior on the power grid and thus the absence of a synchronized attack.

We identify two cases of attack behavior. The first is fast charging station switching with abnormal bus frequency, where the adversary performs periodic attacks and we record the events and the frequency fluctuation; the attack frequency was as fast as 1 Hz [8]. The second case is slow charging station switching with abnormal bus frequency, where an adversarial attacker with more resources compromises more charging stations than required for the attack and distributes the switching behavior among them to remain stealthier.
If the attack, for example, requires \(n\) charging stations switching at a frequency of \(f\) Hz, the attacker can compromise \(m\times n\) charging stations and switch them at a frequency of \(\frac{f}{m}\) to obtain an identical aggregate effect but with a fraction of the switches on every charging station, and thus remain stealthier. The switching of the charging stations is crafted in a way such that the aggregate load on the buses adheres to the following behavior. For the slow switching attack scenario, we distribute the load such that the aggregate switching load has a duty cycle (the proportion of time during which an electrical device is operated) of 35%, 50%, or 60%, each constituting a third of all cases. Moreover, all three oscillatory load attack variations are crafted with an aggregate period between 1 s and 2 s (0.5-1 Hz) and an aggregate attack magnitude between 10% and 30% of the bus load. This provides us with a comprehensive dataset that simulates the different types of oscillatory load attacks (switching and dynamic) that are discussed in Section 2.3. The different types of attacks differ in their magnitude, periodicity, time of the attack, and stealthiness; however, their collective impact on the power grid is comparable in terms of forcing abnormal frequency oscillations. Consequently, in this work, we aim at detecting oscillatory load attacks to mitigate their impact on the power grid.

Given the obtained dataset of cyber events on the charging station together with frequency data from the grid, we adopt a time-series representation approach. After coupling the two time-varying features, where one feature (events on the charging station) affects the other (frequency), our problem is transformed from a univariate time-series classification into a multivariate time-series classification with two axes of difficulty. Temporal and spatial relationships are learned and mined using deep learning techniques to extract the variation of the features with respect to time and how the features vary with respect to each other. Each charging station monitors the previous 120 seconds in rolling windows whenever it receives a request and fetches readings from its log file. The events and frequency are recorded every 0.5 seconds, which results in 240 readings for each feature per instance. Due to simulation environment limitations, we fix the rolling window size to 120 seconds and the number of readings to 240. In our study, we devise deep learning models to detect attacks and optimize them to detect attacks after 5 seconds and 10 seconds of attack start, which we call 5-Attack and 10-Attack, respectively. This means that the last 5 seconds or 10 seconds of the 120-second window will have attack features, whereas the rest would be normal behavior. Through this approach, our detection mechanism can successfully detect almost all attacks as early as 5 seconds by only viewing the beginning of the attack. Since our methodology is based on a rolling window, this allows the algorithm to identify attacks if any false negatives occur in previous windows. Furthermore, since we are utilizing a rolling window, our algorithm can start detecting the attacks, with some degree of success, as soon as 1 s after the attack starts. Decreasing the size of the attack windows we are optimizing for would decrease the accuracy of the machine/deep learning model, as stealthy attack scenarios would be misclassified.
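A minimal sketch of this windowing (0.5 s sampling, 120 s window, two channels); the array and function names are chosen for illustration only:

```python
import numpy as np

SAMPLE_PERIOD = 0.5          # seconds between log entries
WINDOW = 240                 # 120 s window -> 240 readings per feature

def rolling_windows(events, frequency, step=2):
    """Build 240 x 2 feature matrices from the station log.

    `events` and `frequency` are equal-length arrays sampled every 0.5 s;
    the window rolls in 1 s steps (step=2 samples), as described above.
    """
    x = np.stack([events, frequency], axis=1)        # shape (T, 2)
    return np.stack([x[i:i + WINDOW]
                     for i in range(0, len(x) - WINDOW + 1, step)])

# Example with synthetic logs: 5 minutes of readings -> windows of shape (N, 240, 2)
ev = np.zeros(600); ev[100] = 1; ev[150] = -1
fr = 60 + 0.01 * np.random.randn(600)
print(rolling_windows(ev, fr).shape)
```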
In the stealthy version of the attacks, the attacker launches slow oscillatory attacks and also distributes the switching among multiple charging stations. This means that in short windows of time the charging station behavior looks normal, since only one event, or possibly no event at all, occurs. Consequently, we chose to optimize for 5-second attack windows to detect a wide variety of attacks without compromising on accuracy. Our window rolls in intervals of 1 s (split into two 0.5 s sub-intervals), which means that our detector can start recognizing the attacks within 1 s of the attack start.

#### 3.2.2 Feature Selection

Given the obtained dataset of events coupled with their power grid frequency readings, we adopt two feature selection/representation approaches. We collect a sequence of observations that are taken sequentially in time, which makes our data a time series. Consequently, to use a set of time series \(\mathcal{D}=\{\mathcal{X}_{i}\}_{i=1}^{N}\) as input for the deep learning algorithms, we map each time series \(\mathcal{X}_{i}\) of the set \(\mathcal{D}\) into a matrix of \(\mathcal{N}\) rows and \(\mathcal{M}\) columns by choosing \(\mathcal{M}\) data points of the two variables (events and frequency) from each time series \(\mathcal{X}_{i}\) as elements of the feature vector. This allows the deep learning model to take into account temporal and spatial information and find the correlation between the events and the frequency.

#### 3.2.3 Classification Models

Given the features we selected and the complexity of relating cyber data and physical data to perform the classification, we need to implement a classification algorithm able to handle this data and preserve its properties. To this end, we implement and evaluate different deep learning classification models. Specifically, we use Recurrent Neural Networks (RNN) to capture the order, occurrences, and structure of the events. We leverage a special type of RNN, namely Long Short-Term Memory (LSTM). LSTMs preserve the errors that are backpropagated through layers, which allows them to continue learning over many time steps. LSTM is unique in its capability to learn what information to store in long-term memory. LSTM also allows the neural network to identify patterns and sequences in the data by learning temporal relationships between multiple time steps while utilizing memory gates. These features allow LSTM to capture the temporal relations between events in a multivariate time-series data classification problem. We further explore a special type of Convolutional Neural Network, namely a 2D Convolutional LSTM. In our multivariate time series classification, it is important to capture spatial interpretation and relationships. The events that occur on the charging station, in the case of synchronized switching attacks, are tightly coupled with the power grid behavior. Capturing spatial information between the cyber-layer and physical-layer features allows the algorithm to capture the correlation between events on the EVCS and the power grid frequency behavior. Thus, ConvLSTM nodes possess convolutional capabilities to handle spatial information and LSTM capabilities to handle temporal information, solving our dual-axis data relationships [50, 51]. We use ConvLSTMs to overcome the major limitation of LSTMs in finding spatial relationships between features over multiple time steps [50].
Unlike the LSTM, which flattens the data and loses any spatial relationships, the ConvLSTM replaces the matrix multiplication in each LSTM gate with a convolution operation made up of several filters of square kernels. By doing so, the ConvLSTM captures the underlying spatial features through convolution operations on multi-dimensional data while preserving the temporal relationships in the data as well.

#### 3.2.4 Model Evaluation and Comparison

We follow several standard methods to evaluate the overall effectiveness of the implemented classification models and to compare their outcomes. More specifically, we use metrics such as accuracy, recall, precision, and F-measure. Moreover, we use the confusion matrix, which is a useful method for discussing the effectiveness of the implemented deep learning models. The confusion matrix shows the number of data instances that were classified correctly by the model (true positives and true negatives) and the number of data instances that were misclassified by the model (false negatives and false positives).

### Distributed Mitigation Methodology

In this paper, we also aim to mitigate the impact of oscillatory load attacks in a distributed and lightweight manner and to help the power grid return to its normal state easily. Consequently, in this section, we discuss and evaluate a lightweight and distributed mitigation mechanism against oscillatory load attacks. After locally detecting an attack within 5 seconds on an EVCS, as discussed in the previous sections, a charging station can either discard a request or create a random delay, taking this decision independently. In [21], Kabir et al. proposed a centralized physical-layer mitigation technique that requires the utility to upgrade its existing generators with new control mechanisms. While the oscillations were successfully damped using their method, the oscillations on the power grid were never completely eliminated. Moreover, a centralized cyber-layer mitigation technique can only mitigate attacks launched by public charging stations.

```
Algorithm 1: Conceptual model of the detection and mitigation mechanisms.

Inputs:  CS_log : charging station logs             // events and frequency logs
         M1     : the 5-Attack trained deep learning model
                                                     // detects attacks within the first 5 seconds
         M2     : the 10-Attack trained deep learning model
                                                     // detects attacks within the first 10 seconds
Outputs: L_test : the prediction class for the test sample in CS_log
         d1     : the delay applied to incoming requests

while True do
    foreach event e_i(t) in CS_log do
        do in parallel
            L1 <- Prediction(M1, x_i)                // predict the class of the recorded behavior
            while L1 or L2 is Abnormal do
                d1 <- RandomDelay(0 < d <= 4 s)      // keep delaying new incoming requests
                report abnormal behavior to operator/utility
            end
        do in parallel
            L2 <- Prediction(M2, x_i)                // apply the 10-Attack model to decrease false negatives
            while L1 or L2 is Abnormal do
                d1 <- RandomDelay(0 < d <= 4 s)      // if either label is abnormal, add a delay
                report abnormal behavior to operator/utility
            end
        if L1 and L2 are not Abnormal then
            d1 <- 0                                  // both models report normal: stop the mitigation mechanism
        end
    end
end
```
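A compact Python rendering of this detection-and-mitigation loop is sketched below; the model loading, the log interface (`windows`, `report_abnormal_behavior`, `set_request_delay`), and the `classify` helper are illustrative placeholders rather than the actual EVCS firmware API:

```python
import random

MAX_DELAY = 4.0        # seconds, upper bound of the random delay

def classify(model, window):
    """Placeholder: returns 'Abnormal' or 'Normal' for one 240 x 2 window."""
    return "Abnormal" if model.predict(window[None, ...])[0, 0] > 0.5 else "Normal"

def detection_and_mitigation(cs_log, model_5s, model_10s):
    """Conceptual loop of Algorithm 1: both detectors run on every rolling window."""
    for window in cs_log.windows():                  # rolling 120 s windows
        l1 = classify(model_5s, window)              # early (5-Attack) detector
        l2 = classify(model_10s, window)             # redundancy (10-Attack) detector
        if l1 == "Abnormal" or l2 == "Abnormal":
            delay = random.uniform(0.0, MAX_DELAY)   # delay new incoming requests
            cs_log.report_abnormal_behavior()        # notify operator/utility
        else:
            delay = 0.0                              # both normal: stop mitigating
        cs_log.set_request_delay(delay)
```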
Due to the aforementioned limitations, we propose a lightweight and distributed mitigation mechanism which can be deployed on the charging station itself (public and/or private). After detecting an attack, each station independently creates a random delay for all incoming requests to break the attacker's synchronization ability and hence minimize the impact of a persistent attack on the grid. Our main goal is to create a lightweight and distributed cyber-layer mitigation mechanism that aligns with the EVCS ecosystem deployment. The charging stations are characterized by low computing power, which motivates the need for an independent mitigation mechanism. Consequently, we studied the effectiveness of using the random delay to mitigate the impact of forced oscillations on the power grid. In future work, with the aim of minimizing the delay, we plan to utilize deep learning techniques that tailor the delay to each attack type based on the behavior and the load on the grid.

We describe in Algorithm 1 the conceptual model of the detection algorithm and how it is integrated with the mitigation mechanism. The detection is an online algorithm that runs in real time to detect oscillatory load attacks. We run the 5-Attack detection model to detect attacks early and the 10-Attack detection model to detect any false negative data samples that resulted from the first step. We utilize this two-step continuous detection technique to lower the false negative impact on the power grid and provide continuous monitoring over the ecosystem. When the first detection technique detects an attack, it creates a random delay for any new request, ranging between 0 and 4 seconds. However, this delay is removed if the 5-Attack and 10-Attack deep learning models stop classifying the rolling windows as attacks. Consequently, we can then utilize the 10-Attack model to minimize the number of false negatives that might have been misclassified by the 5-Attack deep learning model.

#### 3.3.1 Test-bed

As part of our efforts to study the EV ecosystem, we create an EV co-simulation platform that integrates the different components to simulate the cyber and physical layers of this ecosystem. The cyber layer is composed of the mobile application, the central management system (OCPP server), and the charging station cyber interface (OCPP client and firmware). The physical layer is composed of the charging station's physical interface and the power grid. We simulate the cyber layer using vSphere [52], which is a VMware virtualization platform. vSphere is built from VMware ESXi, a Type 1 hypervisor that is responsible for abstracting processors, memory, storage, and other resources into multiple virtual machines, and a VMware vCenter Server that allows us to manage and control virtual machines. Moreover, we utilize Hypersim, a real-time power system simulator that helps model power grids and run real-time simulations on dedicated multi-processor hardware (OPAL target), which is connected to the Hypersim VM over a Local Area Network. The architecture of our testbed is illustrated in Figure 6 and the specifications of the testbed elements are stated in Table 1.

Figure 6: Co-simulation architecture

We simulate requests (start and stop) sent to multiple charging stations. The charging stations during operation are modeled as the dynamic load blocks in Hypersim connected to their respective buses in our power grid.
Consequently, the requests sent to charging stations are then aggregated in real time based on the bus that they are connected to. Then we create a load profile that resembles the changes over time. The aggregated load profile is read by the Python interface in real time. The Python interface establishes a session with Hypersim and then controls and executes the load profile to simulate the different loads on the various buses.

## 4 Experimental Results

As described in Section 3.2.1, we focus on oscillatory load attacks initiated by exploiting vulnerabilities in the EV charging ecosystem. We study periodic attacks and stealthy attacks where an attacker with enough resources can group charging stations and alternate the switching between different groups, which leads to inconspicuous behavior on each charging station.

### Distributed Detection Mechanism Results

We use our multivariate time-series representation by taking the temporal and spatial relationship of the two features (events and frequency) into consideration. We implement several deep learning models, such as LSTM for temporal relationships and ConvLSTM for spatio-temporal relationships between the multivariate time series. We test our models on the 5-Attack and 10-Attack datasets.

Data Pre-Processing: To feed our data to the deep learning algorithms, we encoded our features by converting the events (start and stop) to numerical values. Moreover, we normalize the frequency readings by re-scaling the data and fitting all the frequency data points between 0 and 1. To preserve the shape of the original distribution and the information embedded in it, we can represent the normalization as follows:

\[x_{normalized}=\frac{x_{i}-\min(X)}{\max(X)-\min(X)} \tag{1}\]

where \(x_{i}\) is any value of the feature \(x\) (e.g., frequency), \(\min(X)\) is the minimum value of the feature, and \(\max(X)\) is its maximum value. We utilized MinMaxScaler to normalize our frequency feature vectors and obtain \(x_{normalized}\). Finally, we map each class (normal and abnormal), using binary label encoding, to 0 or 1. After that, we split our data into training (80%) and testing (20%) subsets.

\begin{table} \begin{tabular}{c|c} \hline Technology & Specification \\ \hline \hline vSphere ESXi & Version 6.0.0 \\ \hline Hypersim & Version 2022.1 \\ \hline Hypersim Simulation Step Size & 25 μs \\ \hline OpalTarget & OP5707XG - RCP/HIL Virtex-7 FPGA-based Real-Time Simulator \\ \hline EVCS VM & 1GB 1 CPU \\ \hline CMS VM & 1GB 1 CPU \\ \hline Mobile app VM & 1GB 1 CPU \\ \hline Python Interface & Python 3.7 \\ \hline \end{tabular} \end{table} Table 1: Specifications of the real-time co-simulation testbed.

Model Selection and Evaluation: We studied different deep learning models to classify oscillatory load attacks based on behavioral events on the charging station and their consequent effect on the power grid, as represented by the frequency readings. The structure and layers of these models are described in the following sub-sections, along with the evaluation results. Moreover, we chose the Adam optimizer [53] for this classification problem. Adam outperforms other optimizers, such as Root Mean Square Propagation (RMSprop) and Adaptive Gradient Algorithm (AdaGrad), because of its bias correction, which helps Adam towards the end of the optimization as gradients become sparser [54]. We systematically enhance our outcomes by iteratively adding layers to a simple model (fewer layers) until we reach a relatively good fit (no under-fitting or over-fitting).
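The pre-processing steps above can be sketched as follows (a minimal illustration; the array names are placeholders and the event encoding shown is one reasonable choice, not necessarily the authors'):

```python
import numpy as np
from sklearn.model_selection import train_test_split

def preprocess(windows, labels):
    """windows: (N, 240, 2) arrays, channel 0 = encoded events, channel 1 = frequency.
    labels: (N,) array of 'normal' / 'abnormal' strings."""
    X = windows.astype(float).copy()
    freq = X[:, :, 1]
    # Equation (1): re-scale the frequency feature to [0, 1] using its global
    # min/max (scikit-learn's MinMaxScaler applied to the flattened feature
    # gives the same result).
    X[:, :, 1] = (freq - freq.min()) / (freq.max() - freq.min())
    y = (labels == "abnormal").astype(int)          # binary label encoding
    return train_test_split(X, y, test_size=0.2, random_state=0)

# X_train, X_test, y_train, y_test = preprocess(windows, labels)
```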
Consequently, we perform hyper-parameter tuning using the Random Search algorithm. Finally, we identified the hyperparameters that yielded the highest F-measure among the runs. The parameters tuned are the learning rate of the Adam optimizer, the dropout proportion, the number of neurons in a layer, the batch size, the number of epochs, the filter size, and the kernel size. Finally, we evaluate the speed of each model as a measure of its computational performance, and the training time to measure the complexity of the model.

Hyper-Parameter Tuning and Applied Random Search: It is worth mentioning that different techniques were applied to decrease overfitting and achieve a good fit on the training data. We used dropout, which refers to ignoring units (i.e., neurons) during the training phase with a certain probability [55]. Moreover, we use the batch normalization technique to stabilize the distribution of inputs (over a mini-batch) to a given layer during training. This helps in dramatically reducing the number of training epochs required to train deep networks [56]. We applied hyper-parameter tuning to find the best parameters to achieve a good fit, since there exists a large number of variables that can be tuned to enhance the training. Finally, we use binary cross-entropy as a loss function. For each model, we applied a Random Search and a refined Random Search to tune the hyperparameters that yield the best results. In a deep learning model, various parameters contribute to finding the best fit for the training data (e.g., learning rate, decay, batch size, etc.). The Random Search algorithm uses random combinations of these values and trains the deep learning model on each of them. In the first step of our two-step Random Search, we generate 500 different combinations of these hyperparameters, train the deep learning model on each of them, and select the model and parameters that achieve the highest F1 score. For refined results, we apply the second step of (refined) Random Search by generating 100 different combinations of hyperparameters within 10% of the values found in the first step, to try to enhance our results further. Random search performs similarly to meta-heuristics and grid search, however with a lower computational cost [57]. In the following subsections, we present the analysis of the performance of the deep learning algorithms that were implemented to detect switching attacks. We select the deep learning model which shows the highest F-measure score, as it is more representative of the false negatives and false positives in the data. Moreover, we also base our judgment on the number of false negatives, where an attack is misclassified as normal.

Long-Short Term Memory (LSTM): We use the LSTM model as a benchmark against other spatiotemporal deep learning models. The developed LSTM architecture is depicted in Figure 7 and consists of the following layers:

* Input Layer: The input of the network is a 240 x 2 matrix of encoded events and frequency readings collected over time.
* LSTM Layer: This is the main building block of an LSTM deep neural network and is responsible for learning the order dependency in our feature space.
* Fully Connected Layers 1 and 2: After the LSTM layer, we add a fully connected layer with a Leaky ReLU activation function.
To decrease overfitting, we apply batch normalization along with dropout. The output is then fed into another fully connected layer with a Leaky ReLU activation function. Leaky ReLU was developed to overcome one of the major shortcomings of ReLU, the "dead ReLU" issue, which occurs during backpropagation when no learning happens because the new weight remains equal to the old weight. This layer is again followed by batch normalization and a dropout layer. The output of these layers is three-dimensional; consequently, a copy of the output is collapsed into one dimension.
* Fully Connected Layer 3: The one-dimensional output is then fed to a fully connected layer initialized with a truncated normal distribution. The last layer combines the features learned in previous layers and applies a sigmoid function to output a probability between 0 and 1 that indicates the class of the data sample.

Figure 7: Structure of the Long-Short Term Memory Model.

After running our random search (500 runs), we achieve a relatively good accuracy (97.5%) and F-measure score (97.493%) on the 5-Attack dataset. Moreover, we run a 500-run random search to tune the LSTM model on the 10-Attack dataset, which achieved a better accuracy (99.4%) and F-measure score (99.405%), as shown in Table 4. In our experiments, it is crucial to look at how well our model classified attacks. The confusion matrix (Table 3) shows that, using the 5-Attack dataset, 972 attack samples were classified correctly, whereas 40 data samples were misclassified. Moreover, using the 10-Attack dataset, 1002 attacks out of the 1012 attacks were classified correctly using our Long Short-Term Memory deep learning model, and around 1% of the attacks were incorrectly classified. This confirms that oscillatory load attacks need more in-depth analysis to improve the accuracy of the model, especially since the 5-Attack dataset yielded a high false negative rate. It is worth mentioning that the smaller the attack window viewed by the detection system, the earlier we detect attacks. Moreover, we note that the behavior of charging stations in normal conditions has some similarities with charging stations under stealthy attacks. The attacker, in stealthy attacks, tries to mimic the normal behavior of a charging station by dividing the switching behavior across different groups of charging stations and alternating the switching between them, which could be the reason for such misclassification. Although the misclassification rate is not large, successful attacks could have a devastating impact on the power grid, potentially tripping lines and overloading generators.

We evaluate our models based on the training time and the time to make a prediction on the 20% test set provided. The training time of the best-performing LSTM deep learning model is 4 minutes for both datasets. The time to train the model is a good indicator of the complexity of the model and the resources needed for future enhancements. Moreover, the time to predict the labels of our 2000-sample test set for the 5-Attack dataset and the 10-Attack dataset is 6 and 12 seconds, respectively, which equates to 0.003 seconds and 0.006 seconds on average for each data sample. The speed of each model is a measure of its computational performance.

Convolutional Long-Short Term Memory (ConvLSTM2D): We use the ConvLSTM, which is an LSTM variant.
ConvLSTM is a type of recurrent neural network for multivariate spatiotemporal detection. It has convolutional structures that incorporate spatial and temporal correlations into the modeling and automatically capture the shared structures across variables (events and frequency). The developed ConvLSTM architecture, depicted in Figure 8, consists of the following layers:

* Input Layer: The input of the network is a 240 x 2 matrix of encoded events and frequency readings collected over time.
* ConvLSTM Layer 1: This is the main building block of the ConvLSTM deep learning network and is responsible for finding spatio-temporal relationships in the multivariate time series. To decrease overfitting we apply batch normalization and dropout.
* Fully Connected Layer 1: After the ConvLSTM layer, the output is fed into a fully connected layer with a Leaky ReLU activation function, followed by batch normalization and dropout.
* ConvLSTM Layer 2: The output is then fed into a second convolutional LSTM layer to derive further spatio-temporal correlations from the features, followed by batch normalization and a dropout layer. The output is then flattened into a one-dimensional vector of numbers.
* Fully Connected Layer 2: The one-dimensional output after flattening is then fed into a fully connected layer initialized with a truncated normal distribution. The last layer combines the weights learned in previous layers and applies a sigmoid function to output a probability between 0 and 1 that indicates the class of the data samples.

Figure 8: Structure of the Convolutional Long-Short Term Memory Model.

\begin{table} \begin{tabular}{c|c c|c c} \hline \hline & \multicolumn{2}{c|}{5-Attack} & \multicolumn{2}{c}{10-Attack} \\ \hline & LSTM & ConvLSTM\({}_{2D}\) & LSTM & ConvLSTM\({}_{2D}\) \\ \hline **Learning Rate** & 0.014717 & 0.0001939 & 0.00070810 & 0.0001 \\ **Drops** & 0.34 & 0.2 & 0.2 & 0.18 \\ **Batches** & 56 & 30 & 40 & 34 \\ **Units 1** & 146 & 150 & 119 & 176 \\ **Units 2** & 180 & 32 & 104 & 16 \\ **Units 3** & 32 & - & 32 & - \\ **Epochs** & 5 & 6 & 6 & 7 \\ **Filter 1** & - & 4 & - & 5 \\ **Kernel Size 1** & - & 6x6 & - & 5x5 \\ **Filter 2** & - & 8 & - & 8 \\ **Kernel Size 2** & - & 5x5 & - & 5x5 \\ \hline \hline \end{tabular} \end{table} Table 2: Optimized Hyper-Parameters for the Implemented Models

After running our random search (500 runs) on the 5-Attack dataset, we achieve good accuracy (99.4%) and F-measure score (99.393%). Moreover, we also test our deep learning model on the 10-Attack dataset and achieve better accuracy (99.8%) and F-measure score (99.803%). The confusion matrix of the ConvLSTM model for both datasets is depicted in Table 3. The classifier was able to correctly classify 985 out of 988 normal samples and 1002 out of 1012 attack samples. The performance of the classifier improved as we increased the attack window in the 10-Attack dataset. This shows that 0.99% and 0.099% of the attacks were misclassified for the two datasets, as compared to the LSTM, which achieved 3.952% and 0.99% false negative rates. It is important to note that the number of misclassified attacks (false negatives) is an important indicator in choosing the best model to detect oscillatory load attacks. The impact of oscillatory load attacks initiated by the EVCS ecosystem has been studied by Sayed et al. [8]. The risk of oscillatory load attacks is increasing as a result of the rapid deployment of charging stations.
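The ConvLSTM architecture described above can be sketched in Keras roughly as follows. The reshaping of each 240 x 2 window into a 5-D ConvLSTM input (here split into 10 time steps of 24 x 2 patches) is an illustrative assumption, as are the layer sizes; the tuned hyper-parameters are listed in Table 2:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_convlstm(time_steps=10, rows=24, cols=2):
    # Assumed layout: each (240, 2) window is reshaped into
    # (time_steps, rows, cols, channels) = (10, 24, 2, 1).
    model = models.Sequential([
        layers.Input(shape=(time_steps, rows, cols, 1)),
        layers.ConvLSTM2D(filters=4, kernel_size=(6, 6), padding="same",
                          return_sequences=True),          # ConvLSTM layer 1
        layers.BatchNormalization(),
        layers.Dropout(0.2),
        layers.Dense(150),                                  # fully connected layer 1
        layers.LeakyReLU(),
        layers.BatchNormalization(),
        layers.Dropout(0.2),
        layers.ConvLSTM2D(filters=8, kernel_size=(5, 5), padding="same"),
        layers.BatchNormalization(),                        # ConvLSTM layer 2
        layers.Dropout(0.2),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid",
                     kernel_initializer="truncated_normal"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_convlstm(); model.summary()
```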
We acknowledge that the current deployment (number) of charging stations does not allow attackers to impact the power grid; however, with the current advancement and push towards electrifying the transportation system that is being enforced by governments, such attacks will entail great risk. Consequently, we evaluate the training time of the best ConvLSTM model, which accumulated to approximately 2 hours. The time consumed during training is substantial compared to the LSTM. This result is expected due to the increase in the number of tuned training parameters (e.g., filter and kernel sizes) that allow the model to perform convolutional operations on the input data to extract spatiotemporal relationships. This LSTM variant is labor-intensive in terms of training. However, the computational time of our ConvLSTM model is 22 and 28 seconds, which amounts to 0.011 and 0.014 seconds on average per data sample for the 5-Attack and 10-Attack datasets, respectively.

\begin{table} \begin{tabular}{c|c c|c c} \hline \hline & \multicolumn{2}{c|}{5-Attack} & \multicolumn{2}{c}{10-Attack} \\ \hline & LSTM & ConvLSTM\({}_{2D}\) & LSTM & ConvLSTM\({}_{2D}\) \\ \hline Accuracy & 97.500 & 99.400 & 99.400 & 99.800 \\ F-measure & 97.493 & 99.405 & 99.405 & 99.803 \\ Recall & 96.047 & 99.111 & 99.012 & 99.901 \\ Precision & 98.982 & 99.702 & 99.801 & 99.704 \\ \hline \hline \end{tabular} \end{table} Table 4: Classifiers Outcomes

\begin{table} \begin{tabular}{c|c c|c c|c c|c c} \hline \hline & \multicolumn{4}{c|}{5-Attack} & \multicolumn{4}{c}{10-Attack} \\ \hline & \multicolumn{2}{c}{LSTM} & \multicolumn{2}{c}{ConvLSTM\({}_{2D}\)} & \multicolumn{2}{c}{LSTM} & \multicolumn{2}{c}{ConvLSTM\({}_{2D}\)} \\ \hline & N & A & N & A & N & A & N & A \\ N & 978 & 10 & 985 & 3 & 986 & 2 & 985 & 3 \\ A & 40 & 972 & 9 & 1003 & 10 & 1002 & 1 & 1011 \\ \hline \hline \end{tabular} \end{table} Table 3: Confusion Matrices for LSTM and ConvLSTM

### Distributed Mitigation Results

To evaluate our distributed mitigation mechanism, we launch various oscillatory load attacks and study the impact on the grid on the 9-Bus system, which is a simplified abstraction of the Western System Coordinating Council (WSCC) [58] grid in North America. In our test-bed setup, we are restrained to the 9-Bus system; however, the general behavior of different power grids is similar. Thus, our mitigation mechanism is easily reproducible on different power grids. After detecting the attacks within 5 seconds, every charging station adds a random delay to every request with the aim of depriving the adversary of the ability to synchronize attacks on multiple charging stations. Random delays are introduced with a maximum of 4 seconds. We chose 4 seconds based on the sensitivity analysis performed by Galletta et al. [59], which implies that the decrease in user performance, and thus in behavioral intentions, begins to flatten when delays extend to 4 seconds or longer in web interfaces. Kabir et al. [21] suggested discarding a request if an attack is detected. However, if a false positive is raised, the quality of service with respect to a valid customer is affected, which would lead to user frustration. As a consequence, we use a random delay capped at 4 seconds to preserve the quality of service. In what follows, we demonstrate an EV attack equivalent to an 84 MW load on one bus. This attack is equivalent to about 7636 EVs charging at 11 kW Level 2 chargers.
Although such a number might be relatively high, the growth in EV numbers will soon provide a large enough surface to make it possible [8]. Relative to today's average charging rate of 24 kW, the attacker would only need to compromise 3500 EVCSs. Moreover, as the EVCS market moves towards wide adoption of Level 3 chargers, the number of compromised charging stations needed decreases. Level 3 chargers are DC fast chargers that deliver a charging rate of 40 kW to 360 kW, which means that to perform the same attack scenario we need 2100 EVs charging at 40 kW or as few as 233 charging at 360 kW superchargers.

Now, oscillatory load attacks take advantage of load manipulation to impact the frequency stability of the power grid. This attack revolves around the concept of creating a demand surge to cause a frequency drop on the grid, followed by a drop in demand to cause the frequency to overshoot. The adversary uses the compromised load to create an imbalance between the increased load and the generated power, causing the generators to slow down, hence resulting in a frequency drop. Consequently, the attacker switches off the compromised load to cause an increase in the frequency and the generator speed in response to the adversary's actions.

\begin{table} \begin{tabular}{c|c c|c c} \hline \hline & \multicolumn{2}{c|}{5-Attack} & \multicolumn{2}{c}{10-Attack} \\ \hline & LSTM & ConvLSTM\({}_{2D}\) & LSTM & ConvLSTM\({}_{2D}\) \\ \hline Training Time & 0:04:25 & 1:45:28 & 00:04:31 & 02:27:53 \\ Prediction Time (s) & 0.003 & 0.011 & 0.006 & 0.014 \\ \hline \hline \end{tabular} \end{table} Table 5: Classifiers Time

The attacker alternates between charging and stopping or discharging to disturb and impact the grid. The variation of the power generation speed due to an oscillatory load attack with a 2.4-second period (1.2 seconds on and 1.2 seconds off) is demonstrated in Figure 9. The sustained attack hinders the system's recovery, causing fluctuations in the speed that would damage the turbines and decrease their lifetime due to the constant acceleration and deceleration. Different attacks could be launched by an adversary; however, a random delay between 0 and 4 seconds encompasses the wide range of attacks and is enough to mitigate this family of attacks. In future work, we will work on a mitigation mechanism that utilizes deep learning techniques to classify attacks based on their duty cycle, which would allow us to create a smart mitigation mechanism that minimizes the random delay needed.

As shown in Figure 10, the attack was launched (step 1) and detected (step 2) after 5 seconds using our detection mechanism, and consequently the different charging stations independently initiate their mitigation mechanism and induce a random delay between 0 and 4 seconds to attenuate the impact on the power grid and the generators and prevent attacker synchronization. The main aim here is to stabilize the system by ensuring the disturbance does not impact the system's performance. We demonstrate in Figure 10 the attack load and its variation over time after implementing our mitigation technique on our test bed (behavior of the grid after step 2). In the first few seconds, we observe two peaks showing the switching on and off due to an attack before detection, followed by a gradual distribution of the attack load over a period of 50 seconds as a result of the random delay of consequent start and stop requests.
It is clear in Figure 10 how the added random delay causes the discrete behavior of the attack load, especially beyond 40 s. The frequency variation after a mitigated attack is shown in Figure 11, which demonstrates the effectiveness of the proposed approach in eliminating the oscillations caused by the same sustained attack shown in Figure 9. To demonstrate the worst-case scenario, in which all attacks are detected by the end of the first 5 s of the attack and not before, we delayed the initiation of the mitigation technique until t = 10 s, which is 5 seconds after the attack started.

Figure 9: The variation of the generator's speed as a result of an oscillatory load attack.

This lightweight mitigation scheme prevents the attacker from synchronizing an attack, thus eliminating the attacker's ability to impact the power grid. The randomization of the attack load over time results in a gradual increase in the load rather than instantaneous spikes and drops, which allows the grid's generators to cope with the change in demand (behavior after step 2). Along with that, as shown in Figure 11, the generator speed starts to be damped at t = 11 s, within 1 s of detecting the attack. It is worth mentioning that stopping the attack does not immediately bring the system back to its nominal speed. The system requires 2 s to return from the abnormal state to the normal frequency range. Our mitigation strategy was successful in eliminating the impact of the attack on the power grid without the need for adding a physical control layer to the power grid. Furthermore, our proposed cyber-layer mitigation scheme is comparable to physical-layer mitigation schemes. In [21], the control scheme was able to limit the forced oscillation to a safe threshold after 15 s of the attack initiation, as compared to our 2 s. Furthermore, our mitigation mechanism is able to completely eliminate the attack impact, reducing the need for continuous acceleration and deceleration of the generators, unlike physical-layer mitigation mechanisms that can dampen the dangerous oscillations but can never eliminate them. Figure 10 demonstrates that after detection, the attack load would increase gradually to reach around 48 MW (half the total attack load). This would result in the grid's ability to return to stability since the attack load oscillations have been eliminated. It is worth mentioning that if the attack utilizes the V2G ability of the EVCSs, our mitigation would result in an EV attack load centered around zero, which is even better for the stability of the grid and allows the generators to regulate their speeds easily.

Figure 10: Load profile after mitigation.

## 5 Evaluation, Comparison, and Discussion

To detect oscillatory load attacks, we proposed an approach that leverages the behavioral characteristics of the charging station and the power grid. This approach is used to detect oscillatory load attacks initiated from the EV charging ecosystem regardless of the specific exploits and vulnerabilities used to compromise the different components that constitute it. The events at the charging station are directly related to the behavior of the power grid, making them the most important data on the cyber layer of the ecosystem. Furthermore, the results demonstrated that such coordinated attacks have a unique signature consisting of the charging events and the frequency on the power grid. Moreover, some attacks disguised as normal behavior (stealthy attacks) might go undetected. Based on our previous experiments, our choice of features
significantly impacted our accuracy and allowed us to detect oscillatory load attacks with high recall and precision values. We evaluate our detection system on the 5-Attack dataset (5-second attack window) and the 10-Attack dataset (10-second attack window). The precision and recall for the LSTM and ConvLSTM improved as we increased the attack window from 5 seconds to 10 seconds, as summarized in Table 4. The LSTM was less effective on the 5-Attack dataset and achieved an F-measure score of 97.493%. However, the LSTM performance improved on the 10-Attack dataset. We observe improvement for the other metrics studied (accuracy, precision, recall, and the number of false negative samples). The number of misclassified samples decreased from 40 to 10 when the LSTM model viewed a 10-second attack window, as shown in Table 3. Moreover, the 5-Attack LSTM model misclassified both stealthy and non-stealthy attacks, whereas the ConvLSTM only misclassified a few stealthy attacks, achieving a low false negative rate on stealthy attacks of less than 1.7% (0.88% of total attacks). The same stealthy attacks were later detected by the 10-Attack ConvLSTM model, decreasing the misclassified stealthy attacks to 0.19% (0.099% of total attacks). Moreover, the other types of non-stealthy attacks were all detected using the 5-Attack ConvLSTM. This shows that detecting stealthy oscillatory load attacks is not as trivial as other types of attacks, and a two-step detection mechanism is best suited to provide redundancy and effectiveness to the detection mechanism to help secure the vital services provided by the power grid.

It is worth noting that our detection mechanism works on a rolling basis as long as the charging station is receiving requests, thus improving the detection ability of our proposed approach. We highlight that the rolling-window 5-Attack detector will also detect attacks in smaller time frames, but it was optimized for detection after 5 seconds to be able to detect stealthy attacks. We test our 5-Attack detector (without retraining) on windows containing only the first second of the attack behavior, and we were able to identify a third of the attacks within 1 s of their initiation, achieving a recall of 33.3%. In this test, only 3 normal data samples were misclassified as attacks, which is consistent with the results of the original 5-Attack model.

Figure 11: The variation of the generator's speed as a result of an oscillatory load attack followed by mitigation.

In the stealthy version of the attacks, the attacker launches slow oscillatory attacks as well as distributes the switching among multiple charging stations. This makes the behavior of individual EVCSs in short windows of time look normal, since only one event or possibly no events at all occur. Indeed, this is aligned with our findings, where the 5-Attack model misclassified the stealthy attacks within the first second and labeled them as normal. This shows the need for a rolling-window model. Consequently, we chose the 5-Attack model so that we do not have to compromise between the impact on the grid and the accuracy of our model. In a real-life deployment, the distributed mitigation mechanism in Algorithm 1 will start mitigating the attacks as soon as the detector classifies a window as an anomaly and does not need to wait for 5 seconds after the attack is initiated. Moreover, the deep learning model ConvLSTM, an LSTM variant, achieved a better performance than the LSTM on the two datasets.
The ConvLSTM showed improvement over the LSTM on the 5-Attack dataset, achieving a 99.405% F-measure score. Moreover, the ConvLSTM also achieved a 99.803% F-measure score on the 10-Attack dataset. It is important to note that the performance of the model on the 5-Attack dataset is crucial for our evaluation, since early detection of the attack is needed. The LSTM did not perform as well on the 5-Attack dataset compared to the 10-Attack dataset, whereas the ConvLSTM, using convolutional filters on the input, was able to learn intricate patterns of the attack data, which allowed it to classify samples effectively. The ConvLSTM substitutes the matrix multiplication of the LSTM at each gate with convolutional operations, which allowed it to extract the spatiotemporal relationships between the multiple timesteps of the recorded variables (events and frequency). This relationship is the most crucial aspect in detecting oscillatory load attacks, where the ConvLSTM showed improved performance over the LSTM [60]. Moreover, the fully connected LSTM has to unfold the inputs into 1D vectors before processing them, thus losing all the spatial information in the process. Thus, to preserve the spatial features, the ConvLSTM uses 3D tensors that retain the spatial information and determines the future state of a cell by taking into consideration its local neighbors [50]. The ConvLSTM reaps the benefits of the LSTM with temporal data and the benefits of a convolutional neural network with spatial data, which was important in our study of the two features over multiple timesteps. The ConvLSTM was able to learn the patterns of the attack with only 5-second windows. Indeed, the performance of both classifiers on the 10-Attack dataset is expected to be better, since the impact of the switching increases tremendously, showing a significant change in the behavior.

Moreover, the number of misclassified samples, most importantly the false negatives, is crucial to evaluate the efficiency of our detection model and how much we can trust the classifiers to identify attacks. Our analysis presented in Table 3 shows that the LSTM misclassified 40 and 10 attack samples as normal for the two datasets, which allows the adversary to execute a wider range of adversarial attacks, as compared to the ConvLSTM, which only misclassified 9 and 1 data samples for the 5-Attack and the 10-Attack datasets, respectively. The ConvLSTM outperformed the LSTM in various aspects. However, training the ConvLSTM model took around two and a half hours, compared to about 4 minutes for the LSTM model. The training time is tolerable in our system model because we assume that training is performed by a central authority with enough resources and access to data from various operators, and because it does not impact the prediction of attacks. Moreover, the prediction time of the ConvLSTM is still in the order of milliseconds, which means that although its complex structure requires extra training time, its performance once deployed is not hindered by this complexity. The complexity of the ConvLSTM model arises from the structure of the ConvLSTM layer, whose gate computations, together with the kernels and filters used, drastically increase the number of trainable parameters in the model, leading to a high training time.
Moreover, the increase in the training time of the ConvLSTM is also due to the batch size and the learning rate: increasing the batch size leads to poorer generalization over the data samples, while a small learning rate requires more training epochs given the smaller changes made to the weights at each update. In our approach, a compromise was made between the training time and the accuracy to provide a reliable deep learning model that is able to effectively detect attacks. Moreover, the ConvLSTM and its convolutional mechanisms applied to the features helped detect oscillatory switching attacks with as little as a 5-second attack window. Further, to compare the computational performances of the devised classifiers, we measure their speed in terms of the time required to complete the classification experiments. As illustrated in Table 5, the LSTM performed significantly faster than the convolutional LSTM, taking 0.003 seconds per classification, whereas the ConvLSTM was relatively slower, with a computational time of 0.011 seconds. However, the time required by the ConvLSTM is tolerable, since the output of the deep learning model is almost instantaneous.

Our deep learning model depends on the behavioral characteristics logged by the charging station during operation. These characteristics allow us to distribute the decision-making, where each charging station acts independently. To the best of our knowledge, we are the first to enable a distributed detection mechanism for oscillatory load attacks, where models do not need to be deployed on a central management system to perform accurate detection. In [21], Kabir et al. devised a detection algorithm that depends on two charging events and the number of vehicles connecting within \(\Delta\) time. The number of vehicles is an artifact that is only known to the CMS of a specific operator and is not shared. However, in our approach, we utilize the frequency reading of the power grid, which is a shared variable (artifact) among the charging stations of different operators connected to the same bus. These features enable us to detect multi-operator and stealthy sophisticated attacks and to distribute the detection mechanism. The deep learning model can be deployed on every charging station (public or private), where each EVCS can make its decision solely based on the artifacts (events and frequency) that can be collected by the EVCS independently. Furthermore, since our approach is deployed on the charging station itself, it protects against MitM attacks that can be launched by an adversary on the OCPP traffic exchanged between the CMS and the EVCS [9] to control charging stations and perform oscillatory load attacks. Our detection approach mitigates various attack vectors by deploying the deep learning model on the component that is used to create an impact (the EVCS). It is worth mentioning that our detection mechanism can be deployed on private charging stations, which mitigates the limitation of central detection mechanisms [21]. Furthermore, our detection approach requires viewing only 5 s of an oscillatory switching attack, compared to [21], which was tested on 20-, 30-, and 40-second attack periods and resulted in 30%, 10%, and 5% false negative rates, respectively.

Robustness and limitations: In our approach, we address oscillatory load attacks and adversarial oscillatory attacks that other detection mechanisms fail to detect (e.g., multi-operator, stealthy, and MitM oscillatory load attacks).
This improves the robustness of our model and broadens the spectrum of oscillatory load attacks that can be detected by this approach. Considering that our approach's scope is only to detect coordinated oscillatory load attacks based on the combination of cyber and physical behavioral characteristics, we performed our experiments on the New England 39-Bus System. We did not include data samples from different power grids. Attacks on different grids have various impacts; for example, an attack on a 9-Bus system might not have the same consequences as on a 39-Bus system. However, our approach is easily reproducible to make it operational on other power grids. Although this work contributes to understanding and detecting oscillatory load attacks, it faces a few current limitations. For instance, the work relies on a supervised learning approach, which cannot classify new, previously unseen attacks. To overcome this limitation, unsupervised approaches can be considered as complementary means to face the emergence of new adversarial attacks. Additionally, our approach can be leveraged as a stepping stone to develop new cyber-layer defense mechanisms that prevent and detect oscillatory load attacks. In our work, we assume that the charging station is honest; thus, an adversary could evade our detection mechanism by compromising the charging station itself. However, the adversary would need to compromise all the charging stations needed to mount the attack. The distributed nature of our detection mechanism makes it hard for the adversary to mount attacks easily and ensures fault tolerance in the system, unlike centralized detection mechanisms that provide a single point of failure. Moreover, adversaries would need to hack and exploit charging stations with different firmware versions, which requires finding vulnerabilities in the different types of charging stations. Finally, we plan in our future work to use federated learning to assist in preserving the privacy of the records during the training period, without the power grid operator needing to obtain charging behavior data to create an AI-enabled detection model.

Moreover, it is important to note that to evade the mitigation technique, the attacker needs to guess the random number generator (if the operator/manufacturer used a weak random number generator). However, each charging station creates a random delay independently of the others, hindering the adversary from discovering the random delays of all the charging stations that are being exploited to mount an oscillatory load attack. Through this work, we show that a simple and lightweight random delay mechanism provides an efficient countermeasure to adversaries trying to launch oscillatory load attacks. This mechanism is compatible with the nature of the ecosystem, as it does not require coordination with other charging stations, which would create an overhead for charging stations equipped with limited computing power. However, we plan in future work to create a framework to support the grid using V2G and mitigate the impact of EVCS-launched cyber attacks.

## 6 Conclusion

The increase in the adoption of EVs and their IoT-enabled infrastructure is creating a new attack surface to target the power grid and cause instability to the infrastructure.
The different interconnected components can be exploited by adversaries to initiate various types of oscillatory load attacks (e.g., stealthy or multi-operator attacks) that exploit a botnet of EVCSs (public or private) to induce and sustain grid instability. Consequently, we devised a distributed deep learning detection mechanism that can accurately detect oscillatory load attacks by viewing as little as 5 seconds of an attack, achieving an F-measure score of 99.4% with a false negative rate of less than 1%. Our approach and the unique features we chose allowed us to deploy a reliable deep-learning model to detect attacks on public and private charging stations. Consequently, after detection, we evaluate the use of a lightweight distributed mitigation mechanism that is also deployed on the charging station and randomly delays requests with a maximum delay of 4 seconds, preserving the quality of service in case of a false positive detection that would otherwise affect customers. We tested our distributed mitigation mechanism on a real-time EV co-simulation testbed. The detection/mitigation mechanisms proposed are robust, lightweight, and easily reproducible, and provide a defense layer to secure the power grid. Moreover, this is a novel solution that can be deployed on existing infrastructure to build security into the ecosystem.

## 7 Acknowledgement

This research was conducted and funded as part of the Concordia University / Hydro-Quebec / NSERC research collaboration project "Large-Scale Integration of EVCSs into the Smart Grid: A comprehensive cyber-physical study and security assessment." Grant reference: ALLRP 567144-21.
2308.02428
Applications of Laguerre transform to solve Schrödinger-type equations and Differential Equations of order four
The finite Laguerre transform is applied to solve Differential Equations Problems of order higher than two and a one-dimensional steady-state Schr\"{o}dinger equation, by using elementary Linear Algebra methods.
Gabriel López Garza
2023-08-04T16:08:29Z
http://arxiv.org/abs/2308.02428v1
Applications of Laguerre transform to solve Schrödinger-type equations and Differential Equations of order four ## Abstract The finite Laguerre transform is applied to solve Differential Equations Problems of order higher than two and a one-dimensional steady-state Schrödinger equation, by using elementary Linear Algebra methods. ## 1 Introduction The theory of the finite Sturm-Liouville transform, as studied in [6], has been applied to solve differential and partial differential equations since the fifties of the last century. McCully, in particular, developed the theory of the Laguerre transform in [11], where the author, after calculating some transform formulas, employs the finite Laguerre transform to solve the one-dimensional nonhomogeneous heat equation as an instance of an application. Since then, the Laguerre transform has become a useful tool for solving differential equations and for other applications. Taking into account the large body of literature related to Laguerre transforms, it is fair to suggest that wherever Laguerre polynomials appear in relation to ordinary and partial differential equations, it is possible to use the finite Laguerre transform. The present article can be read in support of such a hypothesis. In fact, related to this, one can mention the article [1], where Alhaidari recently utilizes Laguerre polynomials to solve, among others, some examples of the Schrödinger equation. To this aim, he applies the basic properties of Laguerre polynomials, as well as the theory of three-term recursion sequences for difference equations of order two, as presented in [10]. Approximations to the solutions of the Schrödinger equation for the Morse potential case, in terms of Laguerre polynomials, are known [7], [14], but the technique introduced in [1] seems to be new. Nevertheless, Alhaidari never applies explicitly the finite Laguerre transform which, as the present article shows, simplifies and clarifies the pertinence of such techniques. In particular, the relevance of the most important Sturm-Liouville transform property \(\mathcal{T}[L[y_{n}(x)]]=\lambda_{n}\mathcal{T}[y_{n}(x)]\), where \(L\) is a second order differential operator and \(\lambda_{n},y_{n}(x)\) are the eigenvalues and eigenfunctions related to the operator respectively, is completely hidden in Alhaidari's work. By adding and subtracting appropriate terms, the author avoids the use of transforms, which is valid, but such procedures do not permit an efficient implementation of algorithms. By employing the finite Laguerre transform method, such difficulties can be overcome. Another application treated in this article deals with the iteration of a Sturm-Liouville operator, whose transform is simply \(\mathcal{T}[L^{2}[y_{n}(x)]]=(\lambda_{n})^{2}\mathcal{T}[y_{n}(x)]\); when used in physical applications it gives rise to equations that look complicated and difficult to solve, but with finite transform methods this is not so. In this class of problems one may include the Laguerre-type orthogonal polynomials discovered by Krall [8], [9], which are instances of eigen-solutions of fourth-order differential equations that do not satisfy second order differential equations. The finite transform method associates with the equations studied in this article systems of linear equations which, in the case of the Schrödinger equation, can be reduced to a three-term recursion formula; such recursions are completely classified and known [10].
In the case of equations of order greater than two it is not possible to reduce the equations to three-term systems, but it is possible to reduce them to a triangular system which is easy to solve. In both cases, the ordinary differential equations are solved by elementary linear algebra methods. In section 2, the mathematical frame of the Sturm-Liouville finite transform is briefly summarized to set the context in which the Laguerre finite transform is inscribed. The basic properties of the Laguerre transform are also presented in this section. In section 3, the Coulomb and Morse instances of the Schrödinger equation are solved, as well as a Laguerre-type equation of fourth order. The literature on Laguerre transforms is immense and, hence, it is difficult to know if some transform is already known. The criterion followed in this article is that, if a formula is not in the classical book of Erdelyi [5] or in [2], or if a Laguerre transform is not in [13], the proof must be exhibited. In the Appendix, section 5, the reader can find the proofs of some of such Laguerre transforms. That the solution of the fourth-order equation studied in this article is a linear combination of Laguerre polynomials is also proved in the Appendix. ## 2 Mathematical setting ### Sturm-Liouville Finite Transform Consider a Sturm-Liouville boundary value problem \[\mathcal{L}[y](x)=\frac{d}{dx}\left(p(x)\frac{d}{dx}y(x)\right)-q(x)y(x)=-\lambda r(x)y(x) \tag{1}\] \[a_{1}y(\alpha)+b_{1}y^{\prime}(\alpha)=0,\qquad a_{2}y(\beta)+b_{2}y^{\prime}(\beta)=0, \tag{2}\] when the eigenfunctions are polynomials. Specifically, in this article we consider the case of Laguerre polynomials which, as the reader may recall, are solutions of \[\mathcal{L}[y](x) = \frac{d}{dx}\left(x^{\nu+1}e^{-x}\frac{d}{dx}y(x)\right)=-nx^{\nu}e^{-x}y(x),\quad 0<x<\infty. \tag{3}\] Given that \(\beta=+\infty\), the boundary condition (2) becomes \[\lim_{\beta\to+\infty}r(\beta)(a_{2}y(\beta)+b_{2}y^{\prime}(\beta))=0, \tag{4}\] where \(r(x)=x^{\nu}e^{-x}\), accordingly. Solutions of (3) are denoted \(L_{n}^{\nu}(x)\) for \(\nu\neq 0\) and, as is usually done in the literature, \(L_{n}^{0}(x)\) is denoted simply \(L_{n}(x)\). The number \(\nu\) is called the order of the corresponding Laguerre polynomials. Solutions of equation (1) which satisfy the boundary conditions (2) are denoted by \(P_{n}(x)\), \(n\geq 0\). It is well known that suitably differentiable functions \(f(x)\) (i.e. functions satisfying the respective boundary value conditions and sufficiently differentiable to satisfy the corresponding differential equation) have an expansion, called the Sturm-Liouville expansion, of the form \[f(x) = c_{0}P_{0}(x)+c_{1}P_{1}(x)+\cdots=\sum_{m=0}^{\infty}c_{m}P_{m}(x), \tag{5}\] \[\mbox{where }c_{m} = \frac{\int_{\alpha}^{\beta}r(x)f(x)P_{m}(x)dx}{\int_{\alpha}^{\beta}r(x)P_{m}^{2}(x)dx}.\] Of course, assuming \(r(x)>0\), \(\alpha\leq x\leq\beta\), to be continuous in the interval of definition, the integral \(\int_{\alpha}^{\beta}r(x)h(x)g(x)dx\) defines an inner product \(\langle h,g\rangle_{r}\) in a corresponding space of integrable functions.
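For readers who wish to experiment with the expansion (5) in the Laguerre setting \(r(x)=x^{\nu}e^{-x}\), the coefficients can be evaluated by generalized Gauss-Laguerre quadrature. The sketch below is our own illustration and is not part of the paper; the function name and the use of scipy are assumptions of this example.

```python
# A minimal numerical sketch of the expansion (5) with the Laguerre weight
# r(x) = x^nu * exp(-x).  The coefficients c_m are computed by generalized
# Gauss-Laguerre quadrature (scipy assumed; names are our own).
import numpy as np
from scipy.special import roots_genlaguerre, eval_genlaguerre, gamma

def laguerre_coefficients(f, nu=0.0, n_max=8, n_quad=200):
    """Return c_0,...,c_{n_max} with f(x) ~ sum_m c_m L_m^nu(x)."""
    # nodes/weights for integral_0^inf x^nu e^{-x} g(x) dx ~ sum_i w_i g(x_i)
    x, w = roots_genlaguerre(n_quad, nu)
    coeffs = []
    for m in range(n_max + 1):
        num = np.sum(w * f(x) * eval_genlaguerre(m, nu, x))
        norm = gamma(m + nu + 1) / gamma(m + 1)   # squared norm of L_m^nu under this weight
        coeffs.append(num / norm)
    return np.array(coeffs)

# Example: f(x) = L_2^nu(x) should give c_2 = 1 and all other c_m ~ 0.
nu = 1.0
c = laguerre_coefficients(lambda x: eval_genlaguerre(2, nu, x), nu=nu)
print(np.round(c, 6))
```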
From formula (5) it is possible to establish a correspondence \(f(x)\leftrightarrow\{c_{m}\},m\geq 0\), so that the sequence \(\{c_{n}\}\) is known as the Sturm-Liouville finite transform of \(f(x)\), which is usually denoted by \[\mathcal{T}[f](s)=\{c_{m}\}\stackrel{{ def}}{{=}}c_{0}+c_{1}s+c_{2}s^{2}+\cdots \tag{6}\] With the purely formal notation defined in (6) we mean the sum and Cauchy product of sequences, where \(s^{n}\) is the sequence \[s^{n}\stackrel{{ def}}{{=}}\{\underbrace{0,0,\ldots,0}_{n},1,0,0,\ldots\} \tag{7}\] The sum of sequences \(\{a_{n}\},\{b_{n}\}\) is defined as usual by \(\{a_{n}\}+\{b_{n}\}\stackrel{{ def}}{{=}}\{a_{n}+b_{n}\}\) and the Cauchy product of sequences is defined by the sequence \[\{a_{n}\}\{b_{n}\}=\{a_{0}b_{0},a_{0}b_{1}+a_{1}b_{0},a_{0}b_{2}+a_{1}b_{1}+a_{2}b_{0},\ldots,\sum_{\tau=0}^{n}a_{\tau}b_{n-\tau},\ldots\}. \tag{8}\] The sequence \(\{k,0,0,\ldots\}\) is simply written \[k\stackrel{{ def}}{{=}}\{k,0,0,\ldots\}=k\{1,0,0,\ldots\},\] as a simplification of the notation currently used, so by (6) it is understood that \[\mathcal{T}[f](s)=\{c_{m}\} = c_{0}+c_{1}s+c_{2}s^{2}+\cdots+c_{n}s^{n}+\cdots\] \[= \{c_{0},0,\ldots\}\{1,0,\ldots\}+\{c_{1},0,\ldots\}\{0,1,0,\ldots\}+\cdots\] \[+\{c_{n},0,\ldots\}\{\underbrace{0,0,\ldots,0}_{n},1,0,0,\ldots\}+\cdots.\] Observe that convergence is not germane to this notation, and (6) merely states that the term \(c_{m}\) of the sequence occupies the same place as the \(1\) in the sequence \(s^{m}\). Finally, the last step in solving differential equations via transforms is to find the inverse transform. In the case of finite transforms, the inverse transform is found simply by writing down the corresponding series in terms of eigenfunctions [6], i.e. series (5) directly gives the inverse transform of a sequence \[\mathcal{T}^{-1}[\{c_{n}\}]=c_{0}P_{0}(x)+c_{1}P_{1}(x)+c_{2}P_{2}(x)+\cdots \tag{9}\] Of course, convergence matters in this case, but this aspect is well known for Laguerre polynomials, or at least boundedness of solutions is guaranteed for the problems studied in this article (see [1] for boundedness of solutions of the Schrödinger equation). Nevertheless, such expressions depend on the characteristics of each particular example, which have to be specified. ### Laguerre transforms of order \(\nu\) The Laguerre polynomials are defined by \[L_{n}^{\nu}(x)=\sum_{m=0}^{n}\binom{n+\nu}{n-m}\frac{(-x)^{m}}{m!},\] where \(\binom{\alpha}{\beta}=\frac{\Gamma(\alpha+1)}{\Gamma(\beta+1)\Gamma(\alpha-\beta+1)}.\) The basic orthogonality relation between the Laguerre polynomials is given by \[\int_{0}^{\infty}e^{-x}x^{\nu}L_{n}^{\nu}(x)L_{m}^{\nu}(x)dx=\frac{\Gamma(n+\nu+1)}{n!}\delta_{nm}, \tag{10}\] where \(\delta_{nm}\) is the Kronecker delta symbol. This relation leads to the definition of the Laguerre transform of order \(\nu\): \[\mathcal{T}^{\nu}[f(x)]=\left\{\int_{0}^{\infty}e^{-x}x^{\nu}L_{n}^{\nu}(x)f(x)dx\right\}=\{c_{n}^{\nu}\}, \tag{11}\] it must be emphasized that the Laguerre transform is a sequence of numbers in \(\mathbb{C}\). The inverse Laguerre transform is defined by \[f(x)=\left(\mathcal{T}^{\nu}\right)^{-1}[\{c_{n}^{\nu}\}]=\sum_{n=0}^{\infty}c_{n}^{\nu}L_{n}^{\nu}(x).
\tag{12}\] ## 3 Examples of applications ### Applications to the Schrodinger equation In this section we consider the equation \[-\frac{1}{2}\frac{d^{2}}{dr^{2}}\psi(r)+(V(r)-E)\psi(r)=0 \tag{13}\] which in appropriate units (\(\hbar\)=M=1) is the steady-state Schrödinger equation in one dimension, where \(V(r)\) is a potential function and \(E\) is the energy. Under the change of coordinates (see [1]), given by \(\lambda^{-1}\xi(x)=dx/dr\), where \(\lambda\geq 0\) has inverse length units, equation (13) becomes \[\lambda^{2}\xi^{2}\left[\frac{d^{2}}{dx^{2}}\psi(x)+\frac{1}{\xi}\frac{d\xi}{dx}\frac{d}{dx}\psi(x)-\frac{2}{\lambda^{2}\xi^{2}}W(x)\psi(x)\right]=0, \tag{14}\] where \(W(x)=V(r)-E\). To obtain a Laguerre-type equation the change of coordinates must satisfy \(x(r)\geq 0\), and setting \(\frac{1}{\xi}\frac{d\xi}{dx}=\frac{a}{x}+b\) leads to \(\xi(x)=x^{a}e^{bx}\). In this way, equation (14) becomes \[\lambda^{2}\xi^{2}\left[\frac{d^{2}}{dx^{2}}\psi(x)+\left(\frac{a}{x}+b\right)\frac{d}{dx}\psi(x)+\left(A_{+}+\frac{A_{-}}{x^{2}}-\frac{A_{0}}{x}\right)\psi(x)\right]=0, \tag{15}\] where \(A_{\pm},A_{0},a,b\) are real parameters determined in terms of \(V(r)\) and \(E\). To solve equation (15), a solution of the form \[\psi(x)=x^{\alpha}e^{-\beta x}y(x), \tag{16}\] is proposed, where \(y=\sum_{k=0}^{\infty}c_{k}L_{k}^{\nu}(x)\), \(L_{n}^{\nu}(x)\) are the Laguerre polynomials of order \(\nu\), and \(\alpha,\beta,\nu\) are dimensionless parameters, free for the moment, but to be determined according to the concrete examples to be solved below. To solve (15), the finite Laguerre transform is used. Many of the following transforms are known [13] or are obtained by direct calculation using Laguerre polynomial properties found in [2] or in [5]. The following formulas (19), (20) and (21) are proved in section 5, since they are not found in any of the items in the bibliography: [2], [13] or [5]. **Theorem 1**.: Let \(\mathcal{T}^{\nu}[f(x)]=\{c_{n}^{\nu}\}\), then \[\mathcal{T}^{\nu}\left[\frac{d}{dx}f(x)\right] = \left\{\sum_{k=0}^{n}c_{k}^{\nu}-\nu\sum_{k=0}^{n}c_{k}^{\nu-1}\right\} \tag{17}\] \[\mathcal{T}^{\nu}\left[\frac{f(x)}{x}\right] = \left\{\sum_{k=0}^{n}c_{k}^{\nu-1}\right\}, \tag{18}\] \[\mathcal{T}^{\nu}\left[xf(x)\right] = \left\{(2n+\nu+1)c_{n}^{\nu}-(n+1)c_{n+1}^{\nu}-(n+\nu)c_{n-1}^{\nu}\right\}, \tag{19}\] \[\mathcal{T}^{\nu}\left[x\frac{d}{dx}f(x)\right] = \left\{nc_{n}^{\nu}-(n+1)c_{n+1}^{\nu}\right\}, \tag{20}\] \[\mathcal{T}^{\nu}\left[x\frac{d^{2}}{dx^{2}}f(x)\right] = \left\{-(\nu+1)\left(\sum_{k=0}^{n}c_{k}^{\nu}-\nu\sum_{k=0}^{n}c_{k}^{\nu-1}\right)-(n+1)c_{n+1}^{\nu}\right\}. \tag{21}\] To begin with, the Coulomb problem is solved next. #### 3.1.1 Coulomb problem Setting the parameters \(a=b=0\), it follows that \(\xi(x)=1\), so that \(x=\lambda r\). The corresponding form of equation (15) for this problem is \[\frac{\lambda^{2}\xi^{2}}{x}\left[x\frac{d^{2}}{dx^{2}}\psi(x)+\left(A_{+}x+\frac{A_{-}}{x}-A_{0}\right)\psi(x)\right]=0, \tag{22}\] where \(A_{0}=\frac{2Z}{\lambda},A_{-}=-\ell(\ell+1),A_{+}=\frac{2E}{\lambda^{2}}\). To solve equation (22), a solution of the form \[\psi(x)=x^{\alpha}e^{-\beta x}y(x), \tag{23}\] is proposed, where \(y=\sum_{k=0}^{\infty}c_{k}L_{k}^{\nu}(x)\), \(L_{n}^{\nu}(x)\) are the Laguerre polynomials of order \(\nu\), and \(\alpha,\beta,\nu\) are dimensionless parameters, free for the moment, but to be determined according to the concrete examples to be solved below.
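As an aside, before proceeding with the Coulomb problem, formula (19) of Theorem 1 can be checked numerically; since (19) is a direct consequence of the pointwise three-term recurrence for \(xL_{n}^{\nu}(x)\) used in its proof in section 5, any quadrature rule reproduces it to machine precision. The short sketch below is our own check (assuming scipy) and is not part of the paper.

```python
# Numerical sanity check of formula (19), using the transform definition (11)
# (no normalization).  Our own code; scipy assumed.
import numpy as np
from scipy.special import roots_genlaguerre, eval_genlaguerre

nu, n_quad = 1.5, 300
x, w = roots_genlaguerre(n_quad, nu)      # quadrature for weight x^nu e^{-x}

def T(f, n):
    """n-th entry of the Laguerre transform (11) of f."""
    return np.sum(w * eval_genlaguerre(n, nu, x) * f(x))

f = lambda t: np.exp(-t) * (1.0 + t)      # a smooth, rapidly decaying test function
for n in range(1, 6):
    lhs = T(lambda t: t * f(t), n)
    rhs = (2*n + nu + 1) * T(f, n) - (n + 1) * T(f, n + 1) - (n + nu) * T(f, n - 1)
    print(n, np.isclose(lhs, rhs))
```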
By substituting \(\psi\) of (23) in (22) the following equation is obtained, after canceling factors: \[x\frac{d^{2}}{dx^{2}}y(x)+(2\alpha-2\beta x)\frac{d}{dx}y+(\beta^{2}+A_{+})xy(x)+\\ +[\alpha^{2}-\alpha+A_{-}]\frac{y(x)}{x}-(A_{0}+2\alpha\beta)y(x)=0. \tag{24}\] It is possible to find a solution of equation (24) of the form \(\psi(x)=x^{\alpha}e^{-\beta x}y(x)\), where \(y(x)=\sum_{k=0}^{\infty}c_{k}L_{k}^{\nu}(x)\), since the \(c_{n}^{\nu}\) can be calculated, and hence equation (24) solved, by means of the finite transform method. Applying formulas (17) to (21) to equation (24) it is possible to solve the Coulomb problem. The complete transform of equation (24) is \[[-\nu-1+2\alpha-2\beta n+(\beta^{2}+A_{+})(2n+\nu+1)-\\ -(A_{0}-2\alpha)c_{n}^{\nu}-(\beta^{2}+A_{+})(n+\nu)c_{n-1}^{\nu}-\\ -[(n+1)(1-2\beta+\beta^{2}+A_{+})]c_{n+1}^{\nu}+\\ +(\nu+1-2\alpha)\left(-\sum_{k=0}^{n-1}c_{k}^{\nu}+\nu\sum_{k=0}^{n}c_{k}^{\nu-1}\right)+\\ +(\alpha^{2}-\alpha+A_{-})\sum_{k=0}^{n}c_{k}^{\nu-1}=0. \tag{25}\] **Remark.** Note that equation (25) is not a three-term recursion formula, since it includes terms of order \(\nu-1\) as well as terms of order \(\nu\). Nevertheless it is possible to solve some equations by restricting some of the coefficients of formula (25) in view of the physical parameters. The parameters \(\alpha\), \(\beta\), \(\nu\) are chosen in (25) in such a way that the equation does not contain terms of order \(c_{k}^{\nu-1}\) nor terms that include a summation sign. In this procedure, it is valid to appeal to Theorem 6.1 [10][p. 139 case II], since, as already mentioned, \(\alpha,\beta,\nu\) are free parameters. Then it is valid to choose \[0 = \alpha^{2}-\alpha+A_{-} \tag{26}\] \[\beta = \frac{1}{2} \tag{27}\] \[\nu = 2\alpha-1. \tag{28}\] From (26) one obtains \(\alpha=\frac{1\pm\sqrt{1-4A_{-}}}{2}\), hence by (28) \(\nu^{2}=1-4A_{-}\), so that \(\alpha=(1\pm\nu)/2\). Therefore, for the Coulomb problem \(\alpha=(1\pm\sqrt{1-4A_{-}})/2\), \(\nu=\pm\sqrt{1-4A_{-}}\), and \(1/4\geq A_{-}\). Finally, the choice of parameters \[A_{0}=\frac{2Z}{\lambda},\qquad A_{-}=-\ell(\ell+1),\qquad A_{+}=\frac{2E}{\lambda^{2}}, \tag{29}\] leads to \[\alpha=\ell+1,\qquad\nu=2\ell+1, \tag{30}\] where \(Z\) is the electric charge and \(\ell\) is the angular momentum quantum number. Equation (25) leads to a three-term recurrence relation as follows. By substituting (26), (27), and (28) in the transformed equation (25), one obtains after simplification \[(n+1)c_{n+1}^{\nu}+\left(\frac{n+A_{0}+\alpha}{\left(A_{+}+\frac{1}{4}\right)}-(2n+\nu+1)\right)c_{n}^{\nu}+(n+\nu)c_{n-1}^{\nu}\ =\ 0. \tag{31}\] Next, substitute the values of (29) in (31). After collecting terms and simplifying, it follows that \[(n+1)c_{n+1}^{\nu}-2\left(\frac{-Z}{4E}\sin\phi+((n+\alpha)\cos\phi)\right)c_{n}^{\nu}+(n+2\alpha-1)c_{n-1}^{\nu}=0, \tag{32}\] where \(\cos\phi=\frac{4(2E)-\lambda^{2}}{4(2E)+\lambda^{2}}\). As noticed by Alhaidari [1], equation (32) is related to the Meixner-Pollaczek three-term recurrence relation in [10][eq. (9.7.3), p. 213]. Note that (32) is not identical with the Meixner-Pollaczek formula since the order \(\nu=2\alpha-1\) differs from the order \((\lambda)\) in equation (9.7.3) in [10].
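For illustration only, the recurrence (31) can be iterated directly once \(c_{0}^{\nu}\) is fixed (with \(c_{-1}^{\nu}=0\)). The sketch below is our own and is not part of the paper; the numerical parameter values are placeholders, whereas in the Coulomb problem they are fixed by (29) and (30).

```python
# Minimal sketch (ours) of how the three-term recurrence (31) determines all
# coefficients c_n^nu once c_0 is fixed (c_{-1} = 0).
import numpy as np

def coulomb_coefficients(A0, Ap, alpha, nu, n_max, c0=1.0):
    c = np.zeros(n_max + 1)
    c[0] = c0
    for n in range(n_max):
        Bn = (n + A0 + alpha) / (Ap + 0.25) - (2*n + nu + 1)
        c_prev = c[n - 1] if n >= 1 else 0.0
        c[n + 1] = -(Bn * c[n] + (n + nu) * c_prev) / (n + 1)
    return c

# Placeholder parameters (ell = 0, so alpha = 1, nu = 1); Z, E, lam are illustrative.
ell, Z, E, lam = 0, 1.0, -0.3, 1.0
print(coulomb_coefficients(A0=2*Z/lam, Ap=2*E/lam**2, alpha=ell + 1, nu=2*ell + 1, n_max=6))
```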
Consequently, by setting \(\lambda=\frac{\nu+1}{2}=\ell+1\) a solution of equation (22) is found by choosing the coefficients \(c_{n}^{\nu}\) as \[c_{n}^{\nu}=P_{n}^{\left(\frac{\nu+1}{2}\right)}(z;\phi)=\frac{\left(\frac{\nu+1}{2}\right)_{n}}{n!}e^{in\phi}{}_{2}F_{1}\bigg{[}\genfrac{}{}{0.0pt}{}{-n\,\,\lambda+iz}{2\lambda};1-e^{-2i\phi}\bigg{]}, \tag{33}\] where \({}_{2}F_{1}[\cdot]\) is a hypergeometric function, \(z=-Z/2E\), and \((k)_{n}\) is the Pochhammer symbol (for definitions and notation see [10][Ch. 1 sec. 1.4], for instance). Therefore, taking into account the relations in (30) and substituting in (33), the solution of equation (22) is \[\psi(x)=x^{\ell+1}e^{-x/2}\sum_{k=0}^{\infty}P_{k}^{\ell+1}(z;\phi)\,L_{k}^{2\ell+1}(x), \tag{34}\] which is equivalent to equation (60) in [1]. **Remark.** Recall that in this paper the Laguerre polynomials \(L_{n}^{\nu}(x)\) are not normalized and that \(z,\phi\) are fixed parameters in (34). #### 3.1.2 One-dimensional Morse oscillator For this case the values of \(a,b\) are \(a=1,\ b=0\), so that \(\xi(x)=x\). Hence, by substituting \(\xi,\xi^{\prime}\) in equation (15) and simplifying, one obtains \[x\frac{d^{2}}{dx^{2}}\psi(x)+\frac{d}{dx}\psi(x)+\left(A_{+}x-A_{0}+\frac{A_{-}}{x}\right)\psi(x)=0. \tag{35}\] Now, again, a solution \(\psi(x)=x^{\alpha}e^{-\beta x}y(x)\) of (35) is proposed, which after substitution gives \[x\frac{d^{2}}{dx^{2}}y(x)+(1+2\alpha-2\beta x)\frac{d}{dx}y(x)+\\ +\left((\beta^{2}+A_{+})x-(\beta+2\alpha+A_{0})+\frac{\alpha^{2}+A_{-}}{x}\right)y(x)=0. \tag{36}\] After taking transforms in (36) with formulas (17) to (21), the following equation for the Laguerre coefficients of \(y(x)=\sum_{n=0}^{\infty}c_{n}^{\nu}L_{n}^{\nu}(x)\) follows: \[c_{n}^{\nu}[n(A_{+}-2\beta+2\beta^{2}) +\beta(-1-2\alpha+\beta+\beta\nu)+2\alpha-\nu(1-A_{+})-A_{0}+A_{+}]+\] \[-c_{n+1}^{\nu}[(n+1)((1-\beta)^{2}+A_{+})]-\] \[-c_{n-1}^{\nu}(\beta^{2}+A_{+})(\nu+n)+\] \[+(-\nu+\alpha(2+\alpha)+A_{-})\sum_{k=0}^{n-1}c_{k}^{\nu}+\] \[+(\nu^{2}-2\nu\alpha+\alpha^{2}+A_{-})\sum_{k=0}^{n}c_{k}^{\nu-1}=0. \tag{37}\] With the same simplification criterion as in the Coulomb example, by appealing to Theorem 6.1 [10][p. 139 case II], the following system is obtained by equating the coefficients of the summation terms to zero \[-\nu+2\alpha = 0\] \[\alpha^{2}+A_{-} = 0, \tag{38}\] From (38) it follows that \(\nu=2\alpha,\alpha^{2}=-A_{-}\); moreover, it is possible to set \(\beta=1/2\) too, as in the Coulomb example. On the other hand, the values of \(A_{0},A_{-},A_{+}\) given in [1] are \[A_{0}=\frac{-2V_{1}}{\lambda^{2}},\quad A_{-}=\frac{2E}{\lambda^{2}},\quad A_{+}=\frac{-2V_{2}}{\lambda^{2}}. \tag{39}\] By substituting the values of \(\alpha,\nu,\beta,A_{0},A_{-},A_{+}\) in (37) a three-term recursion system is obtained: \[(n+1)c_{n+1}^{\nu}-c_{n}^{\nu}\left[2n\frac{A_{+}-\frac{1}{4}}{A_{+}+\frac{1}{4}}-\frac{\frac{1}{2}(1+\nu)+A_{0}}{A_{+}-\frac{1}{4}}+\nu+1\right]+(n+\nu)c_{n-1}^{\nu}=0. \tag{40}\] Setting \[\cos\phi=\frac{A_{+}-\frac{1}{4}}{A_{+}+\frac{1}{4}}\] equation (40) becomes \[(n+1)c_{n+1}^{\nu}-2c_{n}^{\nu}\left[\left(n+\frac{\nu+1}{2}\right)\cos\phi+\frac{A_{0}}{2\sqrt{A_{+}}}\sin\phi\right]+(n+\nu)c_{n-1}^{\nu}=0. \tag{41}\] Now, formula (41) can be compared with the Meixner-Pollaczek recursion formula in [10][eq. (9.7.3), p.
213]: \[(n+1)P_{n+1}^{(\omega)}\left(\frac{A_{0}}{2\sqrt{A_{+}}};\phi\right)-\\ -2\left[\frac{A_{0}}{2\sqrt{A_{+}}}\sin\phi+(n+\omega)\cos\phi\right]P_{n}^{(\omega)}\left(\frac{A_{0}}{2\sqrt{A_{+}}};\phi\right)\\ +(n+\nu)P_{n-1}^{(\omega)}\left(\frac{A_{0}}{2\sqrt{A_{+}}};\phi\right)=0. \tag{42}\] Setting \(\omega=(\nu+1)/2\), the solution of the one-dimensional Morse oscillator is \[\psi(x)=x^{\nu/2}e^{-x/2}\sum_{k=0}^{\infty}\frac{(\nu+1)_{k}}{k!}e^{-ik\phi}{}_{2}F_{1}\!\left[\begin{matrix}-k&\frac{\nu+1}{2}+i\frac{A_{0}}{2\sqrt{A_{+}}}\\ &\nu+1\end{matrix};1-e^{-2i\phi}\right]\!L_{k}^{\nu}(x), \tag{43}\] where \(\nu=\frac{2\sqrt{-2E}}{\lambda}\) and \((\nu+1)_{k}=(\nu+1)(\nu+2)\cdots(\nu+k)\). \(\Box\) **Remark.** Given the first argument \(-k\) in the hypergeometric function \({}_{2}F_{1}[\cdot]\) in (43), it is worth mentioning that this function is a polynomial for each \(k\), since \((-k)_{k+1}=0\). ### Second example, Laguerre-type equations of order four The problem (44) shown below has been of interest in the literature [8], [9]. The following Theorem 2 shows that it is possible to solve fourth-order Laguerre-type equations with finite transform methods. Note that \(L_{n}^{0}(x)\) is simply denoted by \(L_{n}(x)\) and similarly \(\mathcal{T}^{0}=\mathcal{T}\). **Theorem 2.**_The problem,_ \[\begin{cases}\frac{d^{2}}{dx^{2}}\left(x^{2}e^{-x}\frac{d^{2}y}{dx^{2}}\right)-\frac{d}{dx}\left(([2R+2]x+2)e^{-x}\frac{dy}{dx}\right)=e^{-x}\lambda_{m}y,\\ y(0)=R>0,\quad y^{\prime}(0)=\frac{\lambda_{m}y(0)}{-2R}=-\frac{\lambda_{m}}{2}=-\frac{m(m+2R+1)}{2},\,m\in\mathbb{N},\end{cases} \tag{44}\] _has solutions of the form_ \[y_{m}(x)=-L_{0}(x)-L_{1}(x)-\cdots-L_{m-1}(x)+(R+m)L_{m}(x),\quad 0\leq x<\infty \tag{45}\] _where_ \[L_{n}(x)=\sum_{k=0}^{n}(-1)^{k}\frac{n!}{(n-k)!(k!)^{2}}x^{k},\;n=0,1,\ldots\] _are the order zero Laguerre polynomials of degree \(n\)._ The proof of Theorem 2 is in section 5. **Remark.** It is worth mentioning that the solutions obtained by Krall in [8], [9] for problem (44) are of the form \[R_{n}(x)=\sum_{k=0}^{n}\frac{(-1)^{k}}{(k+1)!}\binom{n}{k}[k(R+n+1)+R]x^{k},\ n=0,1,\dots. \tag{46}\] Such solutions (46) are obtained by the Frobenius method. That the solutions obtained by Krall are equal to the solutions obtained in this article, i.e., that \(y_{m}(x)=R_{m}(x),\,m=0,1,\dots\) for all \(x\in[0,\infty)\), is shown by induction in the appendix, section 5. The novelty of the solutions \(y_{m}(x)\) of (44) lies in the method used to obtain them, which is described below. **Method to obtain solutions of (44).** Using (3) the differential equation in (44) may be written as \[\mathcal{L}^{2}[y]-(2R+1)\mathcal{L}[y]+2\frac{dy}{dx}-2\frac{d^{2}y}{dx^{2}}=\lambda_{m}y, \tag{47}\] where \(\mathcal{L}\) is defined in identity (3). The Laguerre transforms \(\mathcal{T}[\mathcal{L}[y]]\), \(\mathcal{T}[\mathcal{L}^{2}[y]]\), \(\mathcal{T}[\frac{dy}{dx}]\) are known [11], and are included below for the convenience of the reader: \[\mathcal{T}[\mathcal{L}[y]] = \{-nc_{n}\},\ \mbox{where}\ \mathcal{T}[y]=\{c_{n}\},\ n=0,1,2,\dots \tag{48}\] \[\mathcal{T}[\mathcal{L}^{2}[y]] = \{n^{2}c_{n}\}, \tag{49}\] \[\mathcal{T}\left[\frac{dy}{dx}\right] = \left\{\sum_{k=0}^{n}c_{k}-y(0)\right\}. \tag{50}\] By integration by parts as in [11], or by iterating formula (50), it is possible to obtain the formula \[\mathcal{T}\left[\frac{d^{2}y}{dx^{2}}\right] = \left\{\sum_{k=0}^{n}(k+1)c_{n-k}-(n+1)y(0)-y^{\prime}(0)\right\} \tag{51}\] where \(y^{\prime}(0)=\frac{dy}{dx}\big{|}_{x=0}\).
Using formulas (48), (49), (50), and (51) and taking transforms in equation (47), one obtains \[\{n(n+2R+1)c_{n}\}+2\left\{\sum_{k=0}^{n}c_{k}-y(0)\right\}-\\ -2\left\{\sum_{k=0}^{n}(k+1)c_{n-k}-(n+1)y(0)-y^{\prime}(0)\right\} =m(m+2R+1)\{c_{n}\} \tag{52}\] where the formula is valid for \(\lambda_{m}=m(m+2R+1)\), and \(m\in\mathbb{N}\), as in [9][p. 262]. After simplification in (52), one obtains \[\left\{2ny(0)+2y^{\prime}(0)-2\sum_{k=1}^{n}kc_{n-k}\right\}\ =\ \{(m-n)(2R+n+m+1)c_{n}\}. \tag{53}\] Set \(y(x)=\sum_{k=0}^{\infty}c_{k}L_{k}(x)\); given that the Laguerre polynomials satisfy \(L_{n}(0)=1\), \(L_{n}^{\prime}(0)=-n\) for all \(n\geq 0\) [2, thm. 6.3], then \[y(0)=\sum_{k=0}^{\infty}c_{k},\quad y^{\prime}(0)=-\sum_{k=0}^{\infty}kc_{k} \tag{54}\] It is possible to show that solutions of (44) are polynomials of degree \(m\). After simplification, and taking into account that the sums in (54) are consequently finite, equation (53) becomes simply \[\left\{(m-n)(2R+n+m+1)c_{n}\right\}=-2\left\{\sum_{k=n+1}^{m}(k-n)c_{k}\right\},\quad 0\leq n\leq m. \tag{55}\] Formula (55) and the conditions \(y(0)=\sum_{k=0}^{m}c_{k},\quad y^{\prime}(0)=-\sum_{k=0}^{m}kc_{k}\) provide a system of upper triangular equations, \[\begin{array}{l}c_{0}+c_{1}+c_{2}+\cdots+c_{m}=R\\ (m-1)(2R+m+2)c_{1}+2(c_{2}+2c_{3}+\cdots+(m-1)c_{m})=0\\ (m-2)(2R+m+3)c_{2}+2(c_{3}+2c_{4}+\cdots+(m-2)c_{m})=0\\ \qquad\vdots\\ 2(R+m)c_{m-1}+2c_{m}=0,\end{array}\] which is easily solved: \(c_{0}=c_{1}=\cdots=c_{m-1}=-1\), \(c_{m}=R+m\). Therefore, for \(m=0,1,2,\dots\), one obtains a family of eigenfunctions \(y_{m}(x)\), which are the solutions (45) of the eigenvalue problem (44). ## 4 Conclusions The reader may notice that the solutions obtained for the Schrödinger equation in this paper are slightly different from the approximations already known [7], [14] in terms of Laguerre polynomials. Although the solutions obtained in the present study include a complete series and not just a few-term approximation, the discrepancy depends on the convergence of the Sturm-Liouville expansion of the solutions found. The solutions in this article are also different from the solutions in [1]; however, the reader may note that the polynomials used in [1] are normalized, which is not the case for the polynomials used here. In addition, the author in [1] obtained, by solving the Morse oscillator, a second-order difference equation containing terms in \(n^{2}\). Terms of order \(n^{2}\) can also be obtained by taking Laguerre transforms of equation (35) multiplied by \(x\). But in doing so, the author of this article was unable to reduce the obtained system to a system involving recursive series of only three terms, which is essentially needed in the method of [1]. Perhaps it can be achieved using equivalent formulas of Laguerre transforms, but this goal was not pursued further since a solution in terms of Meixner-Pollaczek and Laguerre polynomials was obtained, and therefore the goals of this article were achieved. ## 5 Appendix This section includes the proofs of Theorem 1 and Theorem 2. **Proof of theorem 1.** Formula (17) is shown in [13][p. 11-16, 11-17]. Formula (18) follows from the identity in [5][formula (39) p. 192] \[L_{n}^{\nu}(x)=\sum_{k=0}^{n}\frac{(\nu-\beta)_{k}}{k!}L_{n-k}^{\beta}(x),\] where \((r)_{k}=r(r+1)\cdots(r+k-1)\).
Note that formula (18) is obtained from the last formula since for \(\nu-\beta=1\), it follows that \(\mathcal{T}^{\nu}\left[\frac{f(x)}{x}\right]=\left\{\sum_{k=0}^{n}\frac{(1)_{k}}{k!}c_{n-k}^{\nu-1}\right\}=\left\{\sum_{k=0}^{n}c_{k}^{\nu-1}\right\}\), given that \((1)_{k}=k!\). The proof of formula (19) follows from formula (ii) of theorem 6.11 in [2]: \[xL_{n}^{\nu}(x)=(2n+\nu+1)L_{n}^{\nu}-(n+\nu)L_{n-1}^{\nu}(x)-(n+1)L_{n+1}^{\nu}(x). \tag{56}\] Indeed, \[\mathcal{T}^{\nu}[xf(x)]=\int_{0}^{\infty}e^{-x}x^{\nu}L_{n}^{\nu}(x)xf(x)dx=\\ =\int_{0}^{\infty}e^{-x}x^{\nu}\left[(2n+\nu+1)L_{n}^{\nu}-(n+\nu)L_{n-1}^{\nu}(x)-(n+1)L_{n+1}^{\nu}(x)\right]f(x)dx=\\ =(2n+\nu+1)c_{n}^{\nu}-(n+\nu)c_{n-1}^{\nu}-(n+1)c_{n+1}^{\nu}.\] Formula (20) is obtained as follows \[\mathcal{T}^{\nu}\left[x\frac{d}{dx}f(x)\right]=\left\{\int_{0}^{\infty}e^{-x}x^{\nu}L_{n}^{\nu}(x)x\frac{d}{dx}f(x)dx\right\}\\ =\left\{\int_{0}^{\infty}e^{-x}x^{\nu+1}\left(L_{n}^{\nu+1}(x)-L_{n-1}^{\nu+1}(x)\right)\frac{d}{dx}f(x)dx\right\},\] where the last integral follows from the identity \(L_{n}^{\nu}(x)=L_{n}^{\nu+1}(x)-L_{n-1}^{\nu+1}(x)\), [2][Theorem 6.11 (i)]. Now, from formula (17) \[\int_{0}^{\infty}e^{-x}x^{\nu+1}L_{n}^{\nu+1}(x)\frac{d}{dx}f(x)dx=c_{n}^{\nu+1}-(\nu+1)\sum_{k=0}^{n}c_{k}^{\nu}+\sum_{k=0}^{n-1}c_{k}^{\nu+1}\] \[\int_{0}^{\infty}e^{-x}x^{\nu+1}L_{n-1}^{\nu+1}(x)\frac{d}{dx}f(x)dx=c_{n-1}^{\nu+1}-(\nu+1)\sum_{k=0}^{n-1}c_{k}^{\nu}+\sum_{k=0}^{n-2}c_{k}^{\nu+1}.\] In this way, by subtracting the last two transforms, \[{\cal T}^{\nu}\left[x\frac{d}{dx}f(x)\right]=c_{n}^{\nu+1}-(\nu+1)c_{n}^{\nu}.\] By using the recurrence relation \(c_{n}^{\nu+1}=(n+\nu+1)c_{n}^{\nu}-(n+1)c_{n+1}^{\nu}\), [13][p **11**-18, 11.195 (a)], one obtains \[{\cal T}^{\nu}\left[x\frac{d}{dx}f(x)\right]=nc_{n}^{\nu}-(n+1)c_{n+1}^{\nu}\] as claimed. Finally, formula (21) follows from equation (3), which is equivalent to \[x\frac{d^{2}}{dx^{2}}f(x)+(\nu+1-x)\frac{d}{dx}f(x)=-nf(x), \tag{57}\] and from (57) the following transform follows \[{\cal T}^{\nu}\left[x\frac{d^{2}}{dx^{2}}f(x)+(\nu+1-x)\frac{d}{dx}f(x)\right]=\{-nc_{n}^{\nu}\}. \tag{58}\] Since \({\cal T}^{\nu}\) is linear the following relation holds \[{\cal T}^{\nu}\left[x\frac{d^{2}}{dx^{2}}f(x)\right]=\{-nc_{n}^{\nu}\}-{\cal T}^{\nu}\left[(\nu+1)\frac{d}{dx}f(x)\right]+{\cal T}^{\nu}\left[x\frac{d}{dx}f(x)\right]. \tag{59}\] So, formula (21) follows directly from (59) and formulas (17) and (20). \(\square\) **Remark.** Observe that if the operator \({\cal L}^{\nu}[f(x)]\) is defined as \[{\cal L}^{\nu}[f(x)]=e^{x}x^{-\nu}\frac{d}{dx}\left(x^{\nu+1}e^{-x}\frac{d}{dx}f(x)\right),\] then formula (58) can be obtained from the Sturm-Liouville eigenvalue relation \[{\cal T}^{\nu}[{\cal L}^{\nu}[L_{n}^{\nu}(x)]]=-n{\cal T}^{\nu}[L_{n}^{\nu}(x)],\] which is probably the most important Laguerre-type transform for the purposes of this article. **Proof of Theorem 2.** The proof is by induction on \(m\). For \(m=0\), \(y_{0}(x)=R=R_{0}(x)\) is obtained directly by substituting \(m=0\) in (45), taking into account that \(L_{0}(x)=1\) for \(x\in[0,\infty)\), as well as by direct substitution in (46). Assume that for \(m=1,2,\ldots,n\) the formula \(y_{m}(x)=R_{m}(x)\) holds; it is necessary to show that \(y_{n+1}(x)=R_{n+1}(x)\). It follows from formula (45) that \[y_{n+1}=-L_{0}(x)-\cdots-L_{n-1}(x)-L_{n}(x)+(R+n+1)L_{n+1}(x)\\ =-L_{0}(x)-\cdots-L_{n-1}(x)-L_{n}(x)+(R+n+1)L_{n+1}(x)+\\ +(R+n)L_{n}(x)-(R+n)L_{n}(x).
\tag{60}\] By substituting the inductive hypothesis in (60) it follows that \[y_{n+1}(x)=R_{n}(x)+(R+n+1)(L_{n+1}(x)-L_{n}(x)) \tag{61}\] But \[L_{n+1}(x)-L_{n}(x)=\sum_{k=0}^{n+1}\frac{(-1)^{k}(n+1)!}{(k!)^{2 }(n+1-k)!}x^{k} -\sum_{k=0}^{n}\frac{(-1)^{k}n!}{(k!)^{2}(n-k)!}x^{k}\] \[=\sum_{k=0}^{n+1}\frac{(-1)^{k}n!}{k!(k-1)!(n-k+1)!}x^{k}\] So that, \[y_{n+1}(x) =R_{n}(x)+(R+n+1)\sum_{k=0}^{n+1}\frac{(-1)^{k}n!}{k!(k-1)!(n-k+1)!}x^{k}\] \[=R_{n}(x)+(R+n+1)\sum_{k=0}^{n+1}\frac{(-1)^{k}n!(k+1)}{(k+1)!(k- 1)!(n-k+1)!}x^{k}\] \[=R_{n}(x)+\sum_{k=0}^{n+1}\frac{(-1)^{k}}{(k+1)!}\frac{n![k(R+n+ 1)+R+n+1]}{(k-1)!(n-k+1)!}x^{k}\] \[=R_{n}(x)+\sum_{k=0}^{n+1}\left[\frac{n!(k(R+n+1)+R)}{(k-1)!(n-k+ 1)!}+\frac{(n+1)!k}{(n-k+1)!k!}\right]x^{k}\] \[=R_{n}(x)+\sum_{k=0}^{n+1}\frac{(-1)^{k}}{(k+1)!}\left[\binom{n}{ k-1}(k(R+n+1)+R)+\binom{n+1}{k}k\right]x^{k}.\] By using the formula for \(R_{n}(x)\) and the Pascal rule \(\binom{n+1}{k}=\binom{n}{k}+\binom{n}{k-1}\) it follows that \[y_{n+1}(x)=\sum_{k=0}^{n}\frac{(-1)^{k}}{(k+1)!}\binom{n}{k}(k(R +n+1)+R)x^{k}+\\ +\sum_{k=0}^{n}\frac{(-1)^{k}}{(k+1)!}\binom{n}{k-1}(k(R+n+1)+R)x ^{k}+\\ +\sum_{k=0}^{n+1}\frac{(-1)^{k}}{(k+1)!}\binom{n+1}{k}kx^{k},\] consequently, \[y_{n+1}(x)=\sum_{k=0}^{n}\frac{(-1)^{k}}{(k+1)!}\binom{n+1}{k} (k(R+n+1)+R)x^{k}+\sum_{k=0}^{n+1}\frac{(-1)^{k}}{(k+1)!}\binom{n+1}{k}kx^{k}\] \[=R_{n+1}(x).\] Therefore \(y_{m}(x)=R_{m}(x)\) for all \(m\in\mathbb{N}\) and \(x\in[0,\infty)\). \(\Box\)
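The identity \(y_{m}(x)=R_{m}(x)\) proved above can also be confirmed numerically. The following sketch is our own cross-check (assuming numpy and scipy are available; it is not part of the paper): it simply evaluates the finite-transform solution (45) and Krall's solution (46) and compares them for a few values of \(m\).

```python
# Numerical cross-check (ours) of Theorem 2: the finite-transform solution
# y_m(x) of (45) agrees with Krall's Frobenius-method solution R_m(x) of (46).
import numpy as np
from scipy.special import eval_laguerre, comb, factorial

def y_m(x, m, R):
    """Solution (45): -L_0 - ... - L_{m-1} + (R+m) L_m."""
    out = (R + m) * eval_laguerre(m, x)
    for k in range(m):
        out -= eval_laguerre(k, x)
    return out

def R_m(x, m, R):
    """Krall's solution (46)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for k in range(m + 1):
        out += (-1)**k / factorial(k + 1) * comb(m, k) * (k * (R + m + 1) + R) * x**k
    return out

x = np.linspace(0.0, 10.0, 7)
for m in range(5):
    print(m, np.allclose(y_m(x, m, R=2.0), R_m(x, m, R=2.0)))
```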
2310.17509
Experimental investigations of quasi-coherent micro-instabilities in Ohmic plasmas
The ITG and TEM instabilities with quasi-coherent spectra have been identified experimentally by the newly developed far-forward collective scattering measurements in J-TEXT tokamak Ohmic plasmas. The ITG mode has characteristic frequencies in the range of 30-100kHz and wavenumbers of k_\theta\rho_s<0.3. After the plasma density exceeds a critical value, the ITG mode shows a bifurcation behavior, featured by a frequency decrease and amplitude enhancement. Meanwhile, enhanced ion energy loss and confinement degradation are also observed. This gives direct experimental evidence for ion thermal transport caused by the ITG instability.
Peng Shi, J. C. Li, G. Zhuang, Zhifeng Cheng, Li Gao, Yinan Zhou
2023-10-26T16:04:59Z
http://arxiv.org/abs/2310.17509v1
# Experimental investigations of quasi-coherent micro-instabilities in J-TEXT Ohmic plasmas ###### Abstract The ITG and TEM instabilities with quasi-coherent spectra have been identified experimentally by the newly developed far-forward collective scattering measurements in J-TEXT tokamak Ohmic plasmas. The ITG mode has characteristic frequencies in the range of \(30-100kHz\) and wavenumbers of \(k_{\theta}\rho_{s}\) < 0.3. After the plasma density exceeds a critical value, the ITG mode shows a bifurcation behavior, featured by a frequency decrease and amplitude enhancement. Meanwhile, enhanced ion energy loss and confinement degradation are also observed. This gives direct experimental evidence for ion thermal transport caused by the ITG instability. It is widely accepted that the anomalous transport arising from micro-instabilities or turbulence is the main mechanism for cross-field particle and heat transport in tokamaks [1,2]. Therefore, understanding the micro-instabilities in tokamaks is crucial for future fusion devices. For Ohmically heated tokamak plasmas, one of the most important micro-instability modes is the ion-temperature-gradient driven drift wave instability (ITG mode) [3,4]. Theory has long predicted that the ITG mode is the dominant microscopic turbulence and the dominant source of anomalous ion transport in tokamak plasmas [5,6]. However, only sparse direct or indirect experimental evidence exists implicating the ITG mode as a particular instability in a tokamak [7,8]. So the dominant role of the ITG mode predicted by theory has not been firmly established experimentally in tokamaks. The challenge in directly distinguishing the ITG mode in tokamak plasmas is identifying the propagation direction of a specific turbulence. This is because the ITG mode usually coexists with the trapped electron mode (TEM), and they have a similar wavelength scale such that \(k_{\theta}\rho_{s}\) < 1, where \(k_{\theta}\) is the poloidal wave number and \(\rho_{s}\) = \(\sqrt{m_{i}T_{e}}/(Z_{i}B)\) is the main ion Larmor radius evaluated at the main ion sound speed. In early years, by use of far-infrared (FIR) collective scattering measurements, a turbulence with an ion feature (propagating in the ion diamagnetic drift direction) was observed in saturated Ohmic confinement (SOC) plasmas, which was attributed to ITG mode turbulence [9,10]. In recent decades, benefiting from the development of reflectometer diagnostics, a particular kind of density fluctuation called quasi-coherent (QC) modes, related to the ITG and TEM modes, has been widely observed in tokamaks [11-13]. The first studies concerning QC modes were performed in the T-10 tokamak and reported in Ref. 11. They found two different QC fluctuations: low frequency (LF) QC and high frequency (HF) QC modes. By comparison with simulations, the LF QC mode was inferred to be the ITG instability, while the HF QC mode was linked with the TEM instability. Subsequent studies related to QC modes in TEXTOR and Tore Supra mainly focused on the LF QC modes because the HF QC modes were not detected. More recently, a QC mode similar to the LF QC mode in T-10 was also observed on the HL-2A and J-TEXT tokamaks by reflectometry [14]. But in contrast to Ref. 11, the authors of Refs. 12 & 14 inferred the LF QC modes to be TEM instabilities, although they have quite similar characteristic frequencies of \(50-120kHz\) and wave-numbers of \(k_{\theta}\rho_{s}\cong 0.1-0.4\).
The difference mainly arises from the lack of a direct measurement of the propagation direction, which is the key point for judging whether the QC modes are ion or electron modes. In this sense, the FIR collective scattering measurements [9] have an advantage over reflectometry. But on the other hand, collective scattering cannot identify the QC turbulence by itself because it only measures the fluctuation with a particular wave-number \(k_{\perp}\). In short, there is still no direct evidence confirming the ITG or TEM modes in tokamak experiments. Most recently, by using the newly developed far-forward collective scattering (FCS) measurements [15], the QC density fluctuation reported in Ref. 14 has also been detected and studied on the Joint-TEXT tokamak (formerly TEXT-U), which is a conventional medium-sized tokamak with a major radius of \(R_{0}=1.05\ m\) and minor radius of \(a=0.25m\)\(\sim\)\(0.29m\) (set by the silicon-carbide coated graphite limiter) [16]. The FCS measurement is based on the 17-channel three-wave FIR polarimeter-interferometer system (POLARIS) [17], which has vertical impact parameters \(r=-24\): \(3\): \(24cm\), where \(r=R-R_{0}\). Here, \(r<0\) and \(r>0\) correspond to the high field side (HFS) and low field side (LFS) respectively. Additionally, the FCS measures the line-integral electron density fluctuations with wave number in the range of \(\ k_{\perp}<1.5cm^{-1}\), where the index \(\perp\) means the direction perpendicular to the incident beam [15]. Thus, the maximum detectable poloidal wave-number varies with the radial position of the measuring chord: it decreases from the center to the edge. It should be noted that all discharges presented here are Ohmically heated hydrogen plasmas fueled by gas puffing, and the minor radius \(a\) is set at \(0.255m\). Typical density fluctuation spectra measured by FCS are shown in figure 1, which are from a J-TEXT Ohmic discharge with parameters \(I_{p}=180kA,B_{t}=2.0T\), \(\bar{n}_{e0}=3\times 10^{19}m^{-3}\). The center frequency ( \(885-900kHz\) ) with large amplitude is the intermediate frequency (IF) signal used for measuring the Faraday rotation angle. The two wing-like parts beside the IF are the collective scattering signals, which are contributed by density fluctuations. There is no scattering signal before the discharge, as the dashed line shows. The frequency difference between the IF (\(f_{0}\)) and the scattering signal (\(f_{\mathrm{s}}\)) is just the frequency of the electron density fluctuation. The scattering spectrum should be symmetric relative to \(f_{0}\), because the two probe beams are collinearly combined. As Fig. 1 shows, the FCS spectra on the \(R-R_{0}=18cm\) and \(R-R_{0}=0cm\) chords display two peaks at frequencies of \(\ |f_{S}-f_{0}|\cong 70\ \&\ 80kHz\) respectively. Those frequency peaks have a bandwidth (\(\Delta f\)) of about \(17kHz\). The characteristic relative width \(\Delta f/f\cong 0.25\) indicates that the density fluctuations have QC features. Additionally, the mid-frequencies of these QC modes tend to decrease with plasma minor radius. That is consistent with the reflectometer observations [14]. Furthermore, the QC mode is absent on the \(R-R_{0}=-9cm\) chord while it is noticeable at \(R-R_{0}=9cm\). This shows a clear LFS/HFS asymmetry of the QC modes, which is the same as the QC mode observations on TEXTOR [12] and T-10 [11]. In other words, the QC modes are ballooned on the LFS. Actually, for the discharge in Fig.
1, the QC modes can be observed on all the measuring chords from \(R-R_{0}=18cm\) to \(R-R_{0}=-6cm\). As mentioned above, the wave number of density fluctuations measured by FCS is limited to \(k_{\perp}<1.5cm^{-1}\). Supposing that the electron temperature ( \(T_{e}\) ) varies from \(800eV\) to \(200eV\) as the radial position increases from \(r=0cm\) to \(r=18cm\), the normalized wave-number (\(\rho_{s}k_{\theta}\)) for the QC mode is estimated as \(\rho_{s}k_{\theta}<0.3\) in the core and \(\rho_{s}k_{\theta}<0.1\) at the edge (\(r=18cm\)). According to the collective scattering principle, heterodyne detection using a twin-frequency source (one beam acting as the local beam) makes it possible to measure the propagation direction of a density fluctuation wave in the laboratory frame [18]. Of course, it requires the detector to receive the scattered wave from a particular direction. Thus, the direction of the frequency shift relative to the incident beam corresponds to the propagation direction. For normal far-forward scattering, it is almost impossible to discriminate the propagation direction of density fluctuations, because the scattered waves with positive and negative frequency shifts are symmetric relative to the collection optical path of the detector. But if the probe beam deviates from the optical axis of the detector collection path, it becomes possible to identify the propagation direction. For FCS based on POLARIS, thanks to the refraction of the probe beam by the plasma, that deviation exists naturally. So if the refraction is significant enough, we can discriminate the propagation direction of the QC mode by analyzing the heterodyne signal between the local and scattered beams. The heterodyne FCS spectra mixed from the local and scattered beams are plotted in figure 2. The IF \((2175-2210kHz)\) is set for measuring the electron density. As Fig. 2 (b) shows, the IF amplitude decreases by 25% and 50% at \(R-R_{0}=12cm\) and \(R-R_{0}=-9cm\) respectively, which results from the refraction of the probe beams. As predicted above, the FCS spectra show an obvious asymmetry relative to the IF (Fig. 2 (a)). In this discharge, the frequency of the local beam is set to be larger than that of the probe beam, and the toroidal field is anticlockwise viewed from the top. Therefore, for the channel at the LFS (\(R-R_{0}=12cm\)), a negative frequency shift means the density fluctuations propagate in the ion diamagnetic direction in the lab frame, and the opposite holds for the channel at the HFS (\(R-R_{0}=-9cm\)). In figure 2, both FCS spectra at the LFS and HFS indicate that the QC mode propagates in the ion direction in the lab frame. Calculating with \(k_{\theta}<1cm^{-1}\) and \(f=70kHz\), the phase velocity of the QC mode in the lab frame is \(v_{QC}>4.4km/s\). Considering that the plasma \(E\times B\) equilibrium flow usually rotates in the electron direction, the QC velocity is underestimated in the plasma frame. Therefore, it can be affirmed that the QC mode propagates in the ion direction in the plasma frame. In other words, the QC mode is an ion mode. Figure 1: FCS spectra in the J-TEXT Ohmic discharge (\(I_{p}=180kA,B_{t}=2.0T\), \(\bar{n}_{e0}=3\times 10^{19}m^{-3}\) ). The QC density fluctuations are seen on the chords of \(R-R_{0}=18cm\) and \(R-R_{0}=0cm\), but not seen on \(R-R_{0}=-9cm\). Figure 2: (a) Heterodyne FCS spectra at \(R-R_{0}=12cm\) and \(R-R_{0}=-9cm\) for a J-TEXT Ohmic discharge ( \(I_{p}=180kA,B_{t}=2.0T\), \(\bar{n}_{e0}=2\times 10^{19}m^{-3}\)). The spectrum at \(R-R_{0}=-9cm\) is multiplied by three. (b) Time traces of the amplitudes of the IF in the top panel.
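The normalized wave-number and phase-velocity estimates quoted above can be reproduced with a short back-of-the-envelope calculation. The sketch below is our own (it is not part of the paper); the physical constants are standard, and the chord-dependent wave-number limit is only approximated by the single value \(k_{\perp}<1.5\,cm^{-1}\), so the edge value is an upper bound.

```python
# Back-of-the-envelope reproduction (ours) of the estimates quoted above:
# rho_s = sqrt(m_i T_e) / (e B) for hydrogen, the normalized wave-number
# rho_s * k_theta, and the lab-frame phase velocity 2*pi*f / k_theta.
import numpy as np

e, m_p = 1.602e-19, 1.673e-27           # elementary charge [C], proton mass [kg]
B = 2.0                                 # toroidal field [T]
k_max = 1.5e2                           # k_perp < 1.5 cm^-1 = 150 m^-1 (central chords)

for Te_eV in (800.0, 200.0):            # core and edge electron temperatures
    rho_s = np.sqrt(m_p * Te_eV * e) / (e * B)
    print(f"T_e = {Te_eV:4.0f} eV -> rho_s = {rho_s*1e3:.2f} mm, "
          f"rho_s * k_max = {rho_s*k_max:.2f}")
# the edge chord has a smaller maximum detectable k_theta, hence the quoted < 0.1

f, k_theta = 70e3, 1.0e2                # 70 kHz, k_theta < 1 cm^-1 = 100 m^-1
print(f"phase velocity > {2*np.pi*f/k_theta/1e3:.1f} km/s")   # ~4.4 km/s
```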
In addition, the FCS spectrum at \(R-R_{0}=-9cm\) in Fig. 2 is multiplied by three. This means that the density fluctuations at the HFS are much smaller than those at the LFS. As mentioned above, except for the LF QC mode (\(70\)\(\sim\)\(120kHz\)), experiments in the T-10 tokamak found another HF QC mode (\(150\)\(\sim\)\(250kHz\)) [11]. Actually, in the J-TEXT tokamak, the FCS also measured another QC mode whose characteristic frequency is higher than that of the ion QC mode. The heterodyne FCS spectra containing two different QC modes are shown in figure 3. As in the spectra of Fig. 2, the ion QC mode (\(\sim\)\(75kHz\)) is clearly observed on both chords of \(R-R_{0}=12cm\) and \(R-R_{0}=-3cm\). But the different and important point is that another QC mode with a characteristic frequency near \(170kHz\) simultaneously appears in the \(R-R_{0}=-3cm\) spectrum, and the frequency shift direction for the HF QC mode is opposite to that of the LF QC mode. Thus, it can be easily deduced that this HF QC mode propagates in the electron direction in the lab frame. Additionally, the HF QC mode is distinct at the \(r=3cm\) chord but almost disappears at \(r=6cm\) (not shown in Fig. 3). This is most likely because the wave-number of the HF QC mode falls at some value between the measurement limits of the \(r=3cm\) and \(r=6cm\) chords. Then its poloidal wave-number is estimated in the range of \(k_{\theta}\cong 1.4-1.5cm^{-1}\). Supposing \(k_{\theta}=1.45cm^{-1}\) and \(f=170kHz\), the HF QC mode phase velocity in the lab frame is \(v_{HFQC}\cong 7.3km/s\) in the electron diamagnetic direction. The plasma equilibrium flow poloidal velocity is estimated as \(1-2km/s\), inferred from the carbon ion poloidal velocity measured by the high-resolution spectrometer system [19]. Thus, the HF QC mode is inferred to be an electron mode. In conclusion, the FCS measurement on J-TEXT has observed two different QC density waves, which propagate in the ion and electron directions respectively. The ion and electron QC modes have characteristic frequencies of \(50\)\(\sim\)\(100kHz\) and \(150\)\(\sim\)\(200kHz\) respectively. Also, the typical wave-number for the ion mode is limited by \(\rho_{s}k_{\theta}<0.3\) in the core and \(\rho_{s}k_{\theta}<0.1\) in the edge region, while that for the electron mode is estimated as \(0.15<\rho_{s}k_{\theta}<0.3\). For Ohmic L-mode tokamak plasmas, the most unstable micro-instabilities with long wave-length (\(\rho_{s}k_{\theta}<1\)) are predicted to be the ITG and TEM modes. Also, the normalized wave-numbers of the two QC modes are close to the theoretical predictions [20]. Taking into account the propagation direction, it is reasonable to conclude that the ion QC mode is the ITG instability and the electron QC mode is the TEM instability. In addition, we should note that the TEM mode is difficult to measure, because its wave-number is close to the limit of the FCS measurement on J-TEXT. Thus, we mainly study the ITG mode in the following. As it has been suggested that the stability of the ITG mode is strongly related to plasma density and confinement saturation in tokamaks, we have studied the behavior of the ITG mode during a density ramp-up. For the discharge using continuous gas puffing to raise the density in one shot, the evolution of the FCS spectra (\(r=12cm\)) with central averaged electron density (\(\bar{n}_{e0}\)) is plotted in Fig. 4. The discharge parameters are \(I_{p}=180kA,B_{t}=2.0T\), and the density climbs from \(1.5\times 10^{19}m^{-3}\) to \(4.5\times 10^{19}m^{-3}\). It should be noted that the FCS spectra in Fig. 4 are normalized by density (\(P_{FCS}/\bar{n}_{e0}\)).
Figure 3: Heterodyne FCS spectra at \(r=12cm\) and \(r=-3cm\) for a J-TEXT Ohmic discharge. Both the QC ion mode and another QC electron mode are observed. According to the behavior of the ITG mode, this discharge can be divided into three density regimes. In the low density (LD) range (\(\bar{n}_{e0}<2\times 10^{19}m^{-3}\)), the ITG mode is too weak to be observed, as the spectrum for \(\bar{n}_{e0}=1.8\times 10^{19}m^{-3}\) shows. In the medium density (MD) range ( \(\bar{n}_{e0}=2-3.5\times 10^{19}m^{-3}\)), the ITG mode is noticeable and is enhanced slowly with increasing density, while its characteristic frequency remains almost constant. In the high density (HD) range (\(\bar{n}_{e0}>3.5\times 10^{19}m^{-3}\)), the amplitude of the ITG mode increases substantially with density, and its characteristic frequency decreases simultaneously. As Fig. 4 shows, during the HD regime, the normalized fluctuation power (\(P_{g}/\bar{n}_{e0}\)) of the ITG mode has trebled, and its central frequency decreases from \(80kHz\) to \(40kHz\). In conclusion, the ITG mode behavior has two bifurcation points. The first point is the appearance of the ITG mode, and the corresponding critical density is \(\sim\!\!2\times 10^{19}m^{-3}\). The second point is the abrupt amplitude increase and frequency decrease of the ITG mode, and the corresponding critical density is \(\sim\!\!3.5\times 10^{19}m^{-3}\). It is believed that the SOC regime is related to the bifurcation behavior of the ITG mode. According to the empirical scaling by _Shimomura_[21], the critical density for SOC is predicted as \(n_{e}^{c}=I_{p}\mu_{0}\sqrt{A_{i}}/(2\sqrt{2}\pi a^{2})\cong 3.9\times 10^{19}m^{-3}\), which is very close to the second bifurcation point. Furthermore, we have studied the ion temperature profile and the ITG critical parameters \(\eta_{i}=L_{n}/L_{Ti}\) for the slab branch and \(R_{0}/L_{Ti}\) for the toroidal branch. The ion temperature profile at the edge (\(r>0.6a\)) is given by the high-resolution spectrometer system, and the plasma density profile is obtained by POLARIS [22]. As Fig. 5(c) shows, both \(\eta_{i}\) and \(R_{0}/L_{Ti}\) (at \(r\!=\!0.7a\!\!\sim\!\!18cm\) ) increase as the density climbs. That is why the ITG mode is enhanced with density. The correlation between the ITG mode amplitude and \(\eta_{i}\) has been confirmed by experiments in CLM [23]. Also, the critical value of \(\eta_{i}\) for the occurrence of the ITG mode (where \(\bar{n}_{e0}\approx 2\times 10^{19}m^{-3}\)) is about 1.5, which is consistent with the theoretical prediction. In addition, the ion temperature (Fig. 5(a)) in the core region (\(0.6a<r<0.8a\) ) shows an abrupt decline after the density exceeds \(3.5\times 10^{19}m^{-3}\). Meanwhile, the density profile peaking factor (\(n_{e0}/\bar{n}_{e0}\)) and the ion energy in the region (\(r>0.6a\)) both reach a maximum, as Fig. 5(b) indicates. That implies enhanced ion energy losses and global confinement degradation. The critical density \(\bar{n}_{e0}\cong 3.5\times 10^{19}m^{-3}\) is highly consistent with the threshold of the abrupt enhancement of the ITG mode. In summary, the ITG and TEM instabilities with quasi-coherent spectra have been confirmed experimentally for the first time, by the far-forward collective scattering measurement in the J-TEXT tokamak. The ITG mode shows a bifurcation behavior after the plasma density exceeds a critical value, where it is enhanced substantially. At the same point, an increase of the ion energy loss and global confinement degradation are also observed.
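(As a quick arithmetic check of the Shimomura estimate quoted above, the scaling can be evaluated directly with the discharge parameters. The sketch below is our own; reading the result in units of \(10^{20}m^{-3}\) is the convention that reproduces the quoted value.)

```python
# Quick check (ours) of the Shimomura scaling quoted above for the SOC
# critical density, n_c = I_p * mu_0 * sqrt(A_i) / (2*sqrt(2)*pi*a^2).
import numpy as np

mu_0 = 4e-7 * np.pi        # vacuum permeability [H/m]
I_p  = 180e3               # plasma current [A]
A_i  = 1.0                 # hydrogen mass number
a    = 0.255               # minor radius [m]

n_c = I_p * mu_0 * np.sqrt(A_i) / (2 * np.sqrt(2) * np.pi * a**2)
# result ~0.39, read in units of 1e20 m^-3, i.e. ~3.9e19 m^-3
print(f"n_c ~ {n_c*1e20:.1e} m^-3   (close to the second bifurcation point)")
```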
It gives the direct experimental evidence for the ion thermal transport driven by ITG mode. **Acknowledgement**: This work was supported by the National Natural Science Foundation of China under Grant Nos. 0204131240 and 11575067.
2309.02701
Magic angle (in)stability and mobility edges in disordered Chern insulators
Why do experiments only exhibit one magic angle if the chiral limit of the Bistritzer-MacDonald Hamiltonian suggest a plethora of them? - In this article, we investigate the remarkable stability of the first magic angle in contrast to higher (smaller) magic angles. More precisely, we examine the influence of disorder on magic angles and the Bistritzer-MacDonald Hamiltonian. We establish the existence of a mobility edge near the energy of the flat band for small disorder. We also show that the mobility edges persist even when all global Chern numbers become zero, leveraging the $C_{2z}T$ symmetry of the system to demonstrate non-trivial sublattice transport. This effect is robust even beyond the chiral limit and in the vicinity of perfect magic angles, as is expected from experiments.
Simon Becker, Izak Oltman, Martin Vogel
2023-09-06T04:29:04Z
http://arxiv.org/abs/2309.02701v1
# Magic Angle (in)stability and Mobility Edges in Disordered Chern Insulators ###### Abstract. Why do experiments only exhibit one magic angle if the chiral limit of the Bistritzer-MacDonald Hamiltonian suggests a plethora of them? - In this article, we investigate the remarkable stability of the first magic angle in contrast to higher (smaller) magic angles. More precisely, we examine the influence of disorder on magic angles and the Bistritzer-MacDonald Hamiltonian. We establish the existence of a mobility edge near the energy of the flat band for small disorder. We also show that the mobility edges persist even when all global Chern numbers become zero, leveraging the \(C_{2z}T\) symmetry of the system to demonstrate non-trivial sublattice transport. This effect is robust even beyond the chiral limit and in the vicinity of perfect magic angles, as is expected from experiments. ## 1. Introduction Twisted bilayer graphene is a highly tunable material that exhibits approximately flat bands at special twisting angles, the so-called _magic angles_[11, 12]. In this article, we study the question of why the largest magic angle, as predicted by the chiral model, is more robust than smaller magic angles within the chiral limit. This potentially elucidates why, up until now, only the first magic angle has been experimentally observed. We also study the impact of disorder in the chiral limit and its interplay with the flat bands at magic angles. In quantum systems, disorder-induced dynamical localization is a well-known phenomenon, wherein spatially localized wavepackets do not significantly diffuse under time evolution. While the underlying mechanisms behind this phenomenon are relatively well understood, the opposite behavior, namely diffusive behavior in disordered systems, is only understood in specific cases [1, 1, 13, 14, 15]. It is widely believed that large classes of two-dimensional quantum systems, even under minor disorder, exclusively exhibit localization, as conjectured by Problem 2 on Simon's list of open problems for Schrödinger operators [16]. As we will show below, the Hamiltonian describing twisted bilayer graphene at a magic angle, or other related materials, is an exception. These new classes of materials, so-called _Chern insulators_, exhibit non-zero Chern numbers in the absence of external magnetic fields [11]. In particular, in twisted bilayer graphene, for wavepackets localized sufficiently close to the perturbed flat band at zero energy, the time-evolution is, in a suitable sense, ballistic. Our argument here is an adaptation of an argument by Germinet, Klein, and Schenker showing a form of delocalization for the Landau Hamiltonian [10]. The physical intuition behind this delocalization argument is straightforward: the Landau Hamiltonian exhibits non-zero Hall conductivity at each Landau level. Moreover, as the Hall conductivity, a topological quantity, remains invariant under minor disorder, the existence of substantial spectral gaps between the Landau levels prevents strong localization across the spectrum. The properties of the flat bands for twisted bilayer graphene are analogous to Landau levels, with the crucial difference that no magnetic field is required. At the first magic angle, the two flat bands together carry only a total Chern number of zero.
However, the two flat bands individually carry non-zero Chern numbers \(\pm 1\), allowing for an anomalous quantum Hall effect when the TBG substrate is e.g. aligned with hexagonal boron nitride [14]. Mathematically, the effect of aligning the substrate with hBN is modeled by adding an effective mass term to the Hamiltonian thereby splitting the two flat bands each carrying a non-zero Chern number. In addition, the flat bands are gapped from the rest of the spectrum. We also establish a localized regime that rests on the multi-scale analysis framework of Germinet-Klein [10, 11]. Here, the only difficulty is to allow for a sufficiently large class of random perturbations which requires us to extend the estimate on the number of eigenvalues (NE) and thus the Wegner estimate (W). The chiral limit of the massive continuum model for twisted bilayer graphene, which can also be thought of as a model Hamiltonian for twisted transition metal dichalcogenides (TMDs) [12], is the Hamiltonian \(H(m,\alpha)\) acting on \(L^{2}(\mathbb{C};\mathbb{C}^{4})\) with domain given by the Sobolev space \(H^{1}(\mathbb{C};\mathbb{C}^{4})\) \[H(m,\alpha)=\begin{pmatrix}m&D(\alpha)^{*}\\ D(\alpha)&-m\end{pmatrix}\text{ with }D(\alpha)=\begin{pmatrix}2D_{\bar{z}}& \alpha U(z)\\ \alpha U(-z)&2D_{\bar{z}}\end{pmatrix}, \tag{1.1}\] where \(D_{\bar{z}}=-i\partial_{\bar{z}}\,,\,\alpha\in\mathbb{C}\backslash\{0\}\) is an effective parameter that is inversely proportional to the twisting angle and \(m\geq 0\) a mass parameter. Let \(\Gamma:=4\pi i\omega(\mathbb{Z}\oplus\omega\mathbb{Z})\) be a triangular lattice with \(\omega=e^{2\pi i/3}\). The tunnelling potentials \(U\) are \(\Gamma\)-periodic functions satisfying for \(\mathbf{a}=4\pi ia_{1}\omega/3+4\pi ia_{2}\omega^{2}/3\) with \(a_{i}\in\mathbb{Z}\), i.e. \(\mathbf{a}\in\Gamma_{3}:=\Gamma/3\) \[U(z+\mathbf{a})=\bar{\omega}^{a_{1}+a_{2}}U(z),\quad U(\omega z)=\omega U(z), \quad\overline{U(z)}=U(\bar{z}). \tag{1.2}\] The central object in the one-particle picture of twisted bilayer graphene are the so-called _magic angles_. We say that \(\alpha\in\mathbb{C}\setminus\{0\}\) is magic if and only if the Bloch-Floquet transformed Hamiltonian, see [1, (2.11)], with mass parameter \(m\geq 0\) exhibits a flat band at energy \(\pm m\), i.e. \[\pm m\in\bigcap_{k\in\mathbb{C}}\operatorname{Spec}_{L^{2}(\mathbb{C}/\Gamma)}(H_ {k}(m,\alpha))\text{ with }H_{k}(m,\alpha)=\begin{pmatrix}m&D(\alpha)^{*}+\bar{k}\\ D(\alpha)+k&-m\end{pmatrix}, \tag{1.3}\] with \(\operatorname{Spec}_{X}(S)\) denoting the spectrum of the linear operator \(S\) on the Hilbert space \(X\) on a suitable dense domain, where \(H_{k}(m,\alpha):H^{1}(\mathbb{C}/\Gamma;\mathbb{C}^{2})\to L^{2}(\mathbb{C}/ \Gamma;\mathbb{C}^{2}).\) The set of \(\alpha\) under which there exists a flat band at energy \(\pm m\) is independent of \(m\)1. In the sequel, we shall suppress the mass parameter \(m\geq 0\) in the notation. Footnote 1: Observe that \(\operatorname{Spec}H_{k}(m,\alpha)=\pm\sqrt{\operatorname{Spec}H_{k}(0,\alpha )^{2}+m^{2}}\)[192, (5.66)]. 
For the study of magic angles we also introduce a translation operator \[\mathscr{L}_{\mathbf{a}}w(z):=\begin{pmatrix}\omega^{a_{1}+a_{2}}&0\\ 0&1\end{pmatrix}w(z+\mathbf{a}),\quad\mathbf{a}\in\Gamma_{3}, \tag{1.4}\] and a rotation operator \(\mathscr{C}u(z)=u(\omega z).\) We can then define subspaces \[L^{2}_{k,p}:=\{u\in L^{2}(\mathbb{C}/\Gamma;\mathbb{C}^{2});\mathscr{L}_{ \mathbf{a}}u(z)=\omega^{k}u(z)\text{ and }\mathscr{C}u(z)=\bar{\omega}^{p}u(z)\} \tag{1.5}\] and similarly \(L^{2}_{k}:=\bigoplus_{p\in\mathbb{Z}_{3}}L^{2}_{k,p}.\) The set of such magic parameters \(\alpha\), that we shall denote by \(\mathcal{A}\), is characterized by [1, 10] \[\alpha^{-1}\in\operatorname{Spec}_{L^{2}_{0}}(T_{k})\text{ with }T_{k}=(2D_{\bar{z}}-k)^{-1} \begin{pmatrix}0&U(z)\\ U(-z)&0\end{pmatrix}\text{ for some }k\notin\Gamma^{*} \tag{1.6}\] with \(\Gamma^{*}\) the dual lattice. We then define the set of generic magic angles. The terminology _generic_ is motivated by [1, 10], which shows that for a generic choice of tunnelling potentials all magic angles are of the following form.

**Definition 1.1** (Generic magic angles).: _We say that \(\alpha\in\mathcal{A}\) is a simple or two-fold degenerate magic angle if \(1/\alpha\in\operatorname{Spec}_{L^{2}_{0}}(T_{k})\) and \(\dim\ker_{L^{2}_{0}}(T_{k}-1/\alpha)=\nu\) with \(\nu=1,2\), respectively. In the following, we refer to simple and two-fold degenerate magic angles collectively as generic magic angles._

### Magic angle (in)stability

The first aim of this article is to study perturbations of the operator \(T_{k}\), i.e. for \(\delta>0\) \[T_{k,\delta}=(2D_{\bar{z}}-k)^{-1}\begin{pmatrix}0&U(z)+\delta V_{+}\\ U(-z)+\delta V_{-}&0\end{pmatrix}\] with bounded linear perturbations \(V_{\pm}\). We then obtain in Theorem 6 a bound on the spread of the magic angles under such perturbations. This is a non-trivial result, as the operator \(T_{k}\), whose eigenvalues characterize the magic angles, is a non-normal operator. In particular, small perturbations in norm can lead to substantial perturbations of the spectrum, see [10]. On the other hand, we show that even simple rank \(1\)-perturbations of exponentially small size in \(1/|\mu|\) suffice to generate eigenvalues \(\mu\) in the spectrum of \(T_{k}\).

**Theorem 1** (Instability).: _Let \(\mu\in\mathbb{C}\) and \(k\notin\Gamma^{*}\), then there exists a rank-\(1\) operator \(R\) with \(\|R\|=\mathcal{O}(e^{-c/|\mu|})\) and \(c(k)>0\) independent of \(\mu\) such that \(\mu\in\operatorname{Spec}(T_{k}+R)\). Here, \(T_{k}\) characterizes the set of magic parameters as explained in (1.6)._

### Anderson model and IDS

One consequence of having a flat band is the occurrence of jump discontinuities in the _integrated density of states_ (IDS). The integrated density of states is defined as follows, see [11] and others:

**Definition 1.2**.: _The integrated density of states (IDS) for energies \(E_{2}>E_{1}\) and \(I=[E_{1},E_{2}]\) is defined by_ \[N(I):=\lim_{L\to\infty}\frac{\operatorname{tr}(1\!\!1_{I}(H_{\Lambda_{L}}( \alpha)))}{L^{2}}\] _with \(\Lambda_{L}=\mathbb{C}/(L\Gamma)\), where \(H_{\Lambda_{L}}\) has periodic boundary conditions, i.e. \(H_{\Lambda_{L}}:H^{1}(\Lambda_{L})\subset L^{2}(\Lambda_{L})\to L^{2}(\Lambda_{L})\)._

For ergodic random operators, the almost sure existence of this limit is shown using the subadditive ergodic theorem, see for instance [13, Sec. 7.3].
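As a heuristic illustration of Definition 1.2, the IDS can be approximated by counting finite-volume eigenvalues per unit area. The sketch below does this for a discrete nearest-neighbour model on an \(L\times L\) torus, which is only a toy stand-in for \(H_{\Lambda_{L}}\) (it is not the continuum Dirac-type operator (1.1)); the point is the normalization by the volume.

```python
import numpy as np

def torus_hamiltonian(L, onsite=0.0):
    """Nearest-neighbour hopping on an L x L torus (toy stand-in for H_Lambda_L)."""
    N = L * L
    H = np.zeros((N, N))
    idx = lambda x, y: (x % L) * L + (y % L)
    for x in range(L):
        for y in range(L):
            i = idx(x, y)
            H[i, i] = onsite
            for dx, dy in [(1, 0), (0, 1)]:      # periodic boundary conditions
                j = idx(x + dx, y + dy)
                H[i, j] = H[j, i] = -1.0
    return H

def ids_estimate(L, interval):
    """N(I) ~ (# eigenvalues of H_Lambda_L in I) / L^2, cf. Definition 1.2."""
    E = np.linalg.eigvalsh(torus_hamiltonian(L))
    E1, E2 = interval
    return np.count_nonzero((E >= E1) & (E <= E2)) / L**2

for L in (8, 16, 24):
    print(L, ids_estimate(L, (-1.0, 1.0)))       # the counting measure per area stabilizes as L grows
```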
Alternatively, one may define for \(f\in C_{c}^{\infty}(\mathbb{R})\) the regularized trace \[\widetilde{\operatorname{tr}}(f(H(\alpha)))=\lim_{L\to\infty}\frac{ \operatorname{tr}(1\!\!1_{\Lambda_{L}}f(H(\alpha)))}{|\Lambda_{L}|}. \tag{1.7}\] By Riesz's theorem on the representation of positive functionals, one has that \[\widetilde{\operatorname{tr}}(f(H(\alpha)))=\int_{\mathbb{R}}f(\lambda)\ d \rho(\lambda),\] where \(\rho\) is the _density of states (DOS) measure_ of \(H(\alpha)\). This way, \(N(I)=\int_{I}\ d\rho(\lambda)\).

**Remark 1**.: _For Schrodinger operators it is common to consider Dirichlet approximations of the finite-size truncation. It is known that Dirac operators in general do not have any self-adjoint Dirichlet realizations. However, self-adjoint Neumann-type boundary conditions are possible, see [1] and for instance the introduction of [13] for a mathematical discussion. The independence of the IDS of the choice of boundary conditions can then be shown using spectral shift function techniques if the operator exhibits a spectral gap, see for instance the work by Nakamura [11] on Schrodinger operators._

A periodic Hamiltonian that exhibits a flat band at energy \(E\) possesses a jump discontinuity in the IDS at \(E\). In particular, the Lebesgue decomposition of \(\rho\) has a pure point contribution at \(E\). As a consequence, if we define the associated cumulative distribution function \(N_{E_{0}}:(E_{0},\infty)\to\mathbb{R}\) by \(N_{E_{0}}(E):=N([E_{0},E])\), then this function is monotonically increasing and right-continuous (cadlag). At a magic angle, the function \(N_{E_{0}}\) for \(E_{0}<\pm m\) exhibits a jump discontinuity at \(E=\pm m.\) Indeed, this can easily be seen from the following formula, which shows that for a periodic Hamiltonian one just has [10, (1.29)] \[N(I)=\int_{\mathbb{C}/\Gamma^{*}}\left(k\mapsto\sum_{\lambda\in\operatorname{ Spec}_{L^{2}(\mathbb{C}/\Gamma)}(H_{k}(\alpha))}\mathds{1}_{I}(\lambda)\right)\, \frac{dk}{4\pi^{2}}.\] Let \(\alpha\in\mathcal{A}\) be a generic magic angle, as in Def. 1.1, then we define the energy gap between the flat bands and the rest of the spectrum \[E_{\text{gap}}:=\inf_{\lambda\in\operatorname{Spec}(H(\alpha)^{2})\setminus \{0\}}\sqrt{\lambda}>0. \tag{1.8}\] This quantity is non-zero, by [1, 17] for simple and by [1, 17] for two-fold degenerate magic angles, and is illustrated in Figure 1. In particular, the following union of intervals is in the resolvent set of the Hamiltonian \[\Big{(}-\sqrt{E_{\text{gap}}^{2}+m^{2}},-m\Big{)}\cup\big{(}-m,m\big{)}\cup \Big{(}m,\sqrt{E_{\text{gap}}^{2}+m^{2}}\Big{)}\subset\mathbb{R}\setminus \operatorname{Spec}(H). \tag{1.9}\] Let \(P_{X}\) be the orthogonal projection onto a closed subspace \(X\). For \(\alpha\in\mathcal{A}\) generic it has been shown in [1, 17] and [1, 16] that the Chern number of the flat band at energy \(\pm m\) is \(\mp 1\) or, more generally (including \(m=0\)), for the Hamiltonian in (1.1) \[\operatorname{Cher}(P_{\ker(D(\alpha))})=-1\text{ and }\operatorname{Cher}(P_{ \ker(D(\alpha)^{*})})=1. \tag{1.10}\]
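Chern numbers of spectral projections such as those in (1.10) can be evaluated numerically from Bloch eigenvectors. The sketch below does this for a standard two-band lattice toy model (a Qi-Wu-Zhang-type Hamiltonian, used purely for illustration; it is not the flat-band projection \(P_{\ker(D(\alpha))}\)) via the Fukui-Hatsugai-Suzuki discretization of the Berry curvature; the printed value is the integer Chern number of the lower band.

```python
import numpy as np

def bloch_h(kx, ky, u):
    """Two-band toy model h(k) = sin(kx) sx + sin(ky) sy + (u + cos kx + cos ky) sz."""
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    return np.sin(kx) * sx + np.sin(ky) * sy + (u + np.cos(kx) + np.cos(ky)) * sz

def chern_number(u, N=40):
    """Fukui-Hatsugai-Suzuki lattice computation for the lower band."""
    ks = 2 * np.pi * np.arange(N) / N
    vec = np.empty((N, N, 2), complex)          # lower-band eigenvector on a k-grid
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            w, v = np.linalg.eigh(bloch_h(kx, ky, u))
            vec[i, j] = v[:, 0]
    def link(a, b):                             # normalized overlap (link variable)
        z = np.vdot(a, b)
        return z / abs(z)
    F = 0.0
    for i in range(N):
        for j in range(N):
            u1 = link(vec[i, j], vec[(i + 1) % N, j])
            u2 = link(vec[(i + 1) % N, j], vec[(i + 1) % N, (j + 1) % N])
            u3 = link(vec[(i + 1) % N, (j + 1) % N], vec[i, (j + 1) % N])
            u4 = link(vec[i, (j + 1) % N], vec[i, j])
            F += np.angle(u1 * u2 * u3 * u4)    # field strength on the plaquette
    return int(round(F / (2 * np.pi)))

print(chern_number(u=1.0), chern_number(u=3.0))  # +-1 in the topological phase, 0 otherwise
```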
The Chern number can be computed from the Hall conductivity \(\Omega(P)\), see (4.5), by using that \[\operatorname{Cher}(P)=-2\pi i\Omega(P).\] In particular, the net Chern number of the flat bands for \(m=0\) is zero \[\operatorname{Cher}(P_{\ker(D(\alpha))}\oplus P_{\ker(D(\alpha)^{*})})= \operatorname{Cher}(P_{\ker(H(\alpha))})=0.\]

Figure 1. Band structure of non-disordered twisted bilayer graphene (1.1) at the first real positive magic angle \(\alpha\approx 0.58566\) with zero effective mass (top) and non-zero effective mass (bottom).

**Assumption 1** (Anderson model).: _We introduce the Anderson-type Hamiltonian with alloy-type potentials and (possible) lattice relaxation effects for \(\lambda>0\) and \(u\in C_{c}^{\infty}(\mathbb{C};\mathbb{C}^{4})\)._ \[H_{\lambda}=H+\lambda V_{\omega}\text{ where }V_{\omega}=\sum_{\gamma\in \Gamma}\omega_{\gamma}u(\bullet-\gamma-\xi_{\gamma}), \tag{1.11}\] _where \((\omega_{\gamma})_{\gamma}\) and \((\xi_{\gamma})_{\gamma}\) are families of i.i.d. random variables with absolutely continuous bounded densities. For \((\omega_{\gamma})\) the density is a function \(g\) with \(\operatorname{supp}(g)\subset[-1,1]\) and in case of \((\xi_{\gamma})_{\gamma}\) a density \(h\) supported within a compact domain \(D\subset\mathbb{C}\) where we allow \(\xi\equiv\operatorname{const}\). Random variables \(\xi_{\gamma}\) model small inhomogeneities of the moire lattice due to relaxation effects. We assume that either_

1. Case 1: _The disorder_ \(u\) _in (_1.11_) is of the form_ \[u(z)=\begin{pmatrix}Y(z)&Z(z)^{*}\\ Z(z)&-Y(z)\end{pmatrix}\in C_{c}^{\infty}(\mathbb{C};\mathbb{C}^{4}) \tag{1.12}\] _where_ \(\inf_{\xi\in D^{\Gamma}}\inf_{z\in\mathbb{C}}\sum_{\gamma\in\Gamma}Y(z-\gamma- \xi_{\gamma})>0\).
2. Case 2: _The disorder_ \(u\in C_{c}^{\infty}(\mathbb{C};\mathbb{C}^{4})\) _with_ \(u\geq 0\) _and_ \(\inf_{z\in B_{\varepsilon}(z_{0}),\gamma\in\Gamma,\xi\in D}u(z-\gamma-\xi)>0\) _for some_ \(B_{\varepsilon}(z_{0})\)_._

_For normalization purposes, we assume that \(\sup_{\xi\in D^{\Gamma}}\|\sum_{\gamma\in\Gamma}u(\bullet-\gamma-\xi_{\gamma})\|_{ \infty}\leq 1\) and \(\operatorname{supp}u\subset\Lambda_{R}(0)\) for some fixed \(R>0\), where \(\Lambda_{L}:=\mathbb{C}/(L\Gamma)\) and \(\Lambda_{L}(z)=\Lambda_{L}+z\)._

We emphasize that in Case 1 of Assumption 1 the matrix \(u\) is neither positive nor negative definite. This usually poses an obstruction to proving Wegner estimates, as the eigenvalues are not monotone in the noise parameter. We can overcome this obstacle here by using the off-diagonal structure of the Hamiltonian. The probability space is the Polish space \(\Omega=(\operatorname{supp}(g))^{\Gamma}\times(\operatorname{supp}(h))^{\Gamma}\) with the product measure. Then \((H_{\lambda})\) is an ergodic (with respect to lattice translations) family of self-adjoint operators with continuous dependence \(\Omega\ni(\omega,\xi)\mapsto(H_{\lambda}+i)^{-1}.\) Thus, there is \(\Sigma\subset\mathbb{R}\) closed such that \[\operatorname{Spec}_{L^{2}(\mathbb{C})}(H_{\lambda})=\Sigma\text{ a.s.}, \tag{1.13}\] see [14, 15]. In addition, using ergodicity arguments, see e.g. [13], the density of states measure \(\rho^{H_{\lambda}}\) of the random operator exists almost surely and is non-random, i.e. \(\rho^{H_{\lambda}}=\rho\) a.s. for a non-random measure \(\rho\). An extension of our work to unbounded disorder is possible.
In the context of Schrodinger operators this extension has been shown for magnetic Landau Hamiltonians [11, 1]. Related proofs of localization for Dirac operators under a spectral gap assumption have also been obtained in [1]. For \(\lambda\neq 0\), the infinitely-degenerate point spectrum of \(H\) at energies \(\pm m\), i.e. the flat bands, gets non-trivially perturbed and expands in energy. To capture this, we then introduce constants \(K_{\pm}:=\sqrt{E_{\operatorname{gap}}^{2}+m^{2}}\pm|\lambda|\sup_{\omega\in \Omega}\|V_{\omega}\|_{\infty}\) and \(k_{\pm}:=m\pm|\lambda|\sup_{\omega\in\Omega}\|V_{\omega}\|_{\infty}.\) One thus finds analogously to (1.9) for the disordered Hamiltonian \[(-K_{-},-k_{+})\cup(-k_{-},k_{-})\cup(k_{+},K_{-})\subset\mathbb{R}\setminus\Sigma, \tag{1.14}\] where all three intervals are non-trivial for \(\lambda>0\) sufficiently small and \(m>0\). We then also define \[J_{-}:=[-k_{+},-k_{-}]\text{ and }J_{+}:=[k_{-},k_{+}]. \tag{1.15}\] When perturbing away from perfect magic angles, we may do so either by using a random potential or by perturbing \(\alpha\) slightly. In both cases, for sufficiently small perturbations, this leaves the spectral gap to the remaining bands open. Given a finite domain \(\Lambda_{L}:=\mathbb{C}/(L\Gamma)\subset\mathbb{C}\), we introduce the Hamiltonian \[H_{\lambda,\Lambda_{L}}=H_{\Lambda_{L}}+\lambda V_{\omega,\Lambda_{L}},\] with periodic boundary conditions, where \(V_{\omega,\Lambda_{L}}=\sum_{\gamma\in\widetilde{\Lambda}_{L}}\omega_{\gamma}(u\,\mathds{1}_{ \Lambda_{L}})(\bullet-\gamma-\xi_{\gamma})\) with \(\widetilde{\Lambda}_{L}:=\Lambda_{L}\cap\Gamma\). In general we shall denote by \(S_{\Lambda_{L}}\) the restriction of an operator \(S\) to the domain \(\Lambda_{L}\), with periodic boundary conditions in case that \(S\) is a differential operator. While the occurrence of a flat band for the unperturbed Hamiltonian (1.1) leads to a jump discontinuity in the IDS, the random Hamiltonian (1.11) has a Holder continuous IDS for all \(\lambda\neq 0\), and even a Lipschitz continuous IDS under an additional positivity assumption. Since the randomly perturbed Hamiltonian is no longer periodic, it is customary to measure the destruction of the flat band by studying the regularity of the IDS.

**Theorem 2** (Continuous IDS).: _Consider the Anderson Hamiltonian as in Assumption 1 with \(m\geq 0\) and coupling constant \(\lambda\in(-\varepsilon(m),\varepsilon(m))\setminus\{0\}\) with \(\varepsilon(m)>0\) sufficiently small. Then the integrated density of states (IDS) is a.s. Holder continuous in Hausdorff distance \(d_{H}\) for all \(\beta\in(0,1)\), i.e._ \[|N(I)-N(I^{\prime})|\lesssim_{\beta,I,I^{\prime}}d_{\text{H}}(I,I^{\prime})^{ \beta},\] * _Case 1 disorder: for intervals_ \(I,I^{\prime}\subset[-k_{+},k_{+}]\) _with_ \(m>0\) _and_ * _Case 2 disorder: arbitrary bounded intervals_ \(I,I^{\prime}\Subset\mathbb{R}\)_,_ \(\lambda\in\mathbb{R}\setminus\{0\}\) _and_ \(m\geq 0\)_. If we assume in addition that_ \(u\) _is globally positive, i.e._ \[\inf_{\xi\in D^{\Gamma}}\inf_{z\in\mathbb{C}}\sum_{\gamma\in\Gamma}u(z-\gamma -\xi_{\gamma})>0, \tag{1.16}\] _then the IDS is a.s. Lipschitz continuous_ \[|N(I)-N(I^{\prime})|\lesssim_{I,I^{\prime}}d_{\text{H}}(I,I^{\prime}).\] _In particular, the IDS is a.s. differentiable and its Radon-Nikodym derivative, the density of states (DOS), exists a.s. and is a.s. bounded._

The above results follow directly from the following estimate on the number of eigenvalues (NE), which directly leads to Wegner estimates (4.3).
**Proposition 1.3** (Ne).: _Under the assumptions of Theorem 2, we have that there is \(\beta\in(0,1)\) such that_ \[\mathbf{E}\operatorname{tr}(\mathds{1}_{I}(H_{\lambda,\Lambda_{L}}))\lesssim_ {\beta}|I|^{\beta}|\Lambda_{L}|.\] _If in Case 2 we assume in addition that (1.16) holds, then we may take \(\beta=1\)_ \[\mathbf{E}\operatorname{tr}(\mathds{1}_{I}(H_{\lambda,\Lambda_{L}}))\lesssim |I||\Lambda_{L}|.\]

### Mobility edges

In the works of Germinet-Klein [1, 1, 1] dynamical measures of transport were introduced. Dynamical localization implies a strong form of eigenfunction decay, see Def. 4.1. To measure _dynamical localization/delocalization_ one introduces the following Hilbert-Schmidt norm \[M_{\lambda}(p,\chi,t)=\left\|\langle\bullet\rangle^{p/2}e^{-itH_{\lambda}} \chi(H_{\lambda})\,\mathds{1}_{\mathbb{C}/\Gamma_{3}}\right\|_{2}^{2}, \tag{1.17}\] where \(\Gamma_{3}:=\Gamma/3\), for some non-negative \(\chi\in C_{c}^{\infty}\), with time average \[\mathcal{M}_{\lambda}(p,\chi,T)=\frac{1}{T}\int_{0}^{\infty}\mathbf{E}\Big{(} M_{\lambda}(p,\chi,t)\Big{)}e^{-t/T}\ dt.\] Recall that \(\frac{1}{T}\int_{0}^{\infty}t^{n}e^{-t/T}\ dt=T^{n}\Gamma(n+1)\), to see that \(\mathcal{M}_{\lambda}(p,\chi,T)\) indicates a time-averaged power scaling of \(M_{\lambda}(p,\chi,t).\) Here, \(M_{\lambda}(p,\chi,t)\) measures the spread of mass in a spectral energy window of the Hamiltonian, initially located in a fundamental cell around the origin, under the time evolution generated by \(H_{\lambda}\). We shall then show that the random Hamiltonian (1.11) exhibits diffusive behavior in the vicinity of magic angles.

**Theorem 3** (Dynamical delocalization).: _Let \(\alpha_{*}\) be a generic magic angle as in Definition 1.1. We consider a coupling constant \(\lambda\in(-\varepsilon(m,\alpha_{*}),\varepsilon(m,\alpha_{*}))\), \(\alpha\in(\alpha_{*}-\delta(m,\alpha_{*}),\alpha_{*}+\delta(m,\alpha_{*}))\), mass \(m\geq 0\) and sufficiently small \(\varepsilon(m,\alpha_{*}),\delta(m,\alpha_{*})>0\). The random Hamiltonian \(H_{\lambda}\) exhibits diffusive behavior for \(m>0\) at at least two energies \(E_{\pm}(\lambda)\) located close to energies \(\pm m\), respectively, and at at least one energy \(E(\lambda)\) for \(m=0\). In particular, for every \(\chi\in C_{c}^{\infty}\) that equals one on an open interval \(J\) containing at least one of \(E_{\pm}(\lambda)\) and \(p>0\) we have for all \(T>0\)_ \[\mathcal{M}_{\lambda}(p,\chi,T)\gtrsim_{p,J}T^{\frac{p}{4}-6}.\]

We do not have a very precise understanding of how close \(E_{\pm}(\lambda)\) are to \(\pm m\). By choosing suitable disorder (of fixed support but rescaled probability), one can show that \(E_{\pm}\) and \(\pm m\) can get arbitrarily close, see Theorem 7, at least when \(\alpha\in\mathcal{A}\) is a magic angle, as the bands of the unperturbed Hamiltonian are perfectly flat.

**Remark 2**.: _Transport behavior can also be characterized by the \(p\)-dependence of the estimate in the previous theorem and using a local transport exponent_ \[\beta_{\lambda}(E)=\sup_{p>0}\inf_{\begin{subarray}{c}I\ni E\\ I\text{ open }\end{subarray}}\sup_{\chi\in C^{\infty}(I;[0,\infty))}\liminf_{T \to\infty}\frac{\log_{+}\mathcal{M}_{\lambda}(p,\chi,T)}{p\log(T)}.\] _The region of dynamical localization is then defined as the open set_ \[\Sigma^{\mathrm{DL}}:=\{E\in\mathbb{R};\beta_{\lambda}(E)=0\} \tag{1.18}\] _whereas the region of dynamical delocalization \(\Sigma^{\mathrm{DD}}\) is defined as its complement. A mobility edge is an energy \(E\in\Sigma^{\mathrm{DD}}\cap\overline{\Sigma^{\mathrm{DL}}\cap\Sigma}\). It follows from [1, 2.10, 2.11] that Theorem 3 implies \(\beta_{\lambda}(E)>1/4\). Theorem 7 then proves the existence of mobility edges for the disordered Hamiltonian._
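The moments (1.17) have a transparent finite-dimensional analogue that can be explored numerically. The sketch below computes, for a one-dimensional Anderson-type tight-binding matrix (a toy stand-in only, not the continuum operator \(H_{\lambda}\)), the quantity \(\|\langle x\rangle^{p/2}e^{-itH}\chi(H)\,\mathds{1}_{\text{cell}}\|_{2}^{2}\) and its exponentially weighted time average, mirroring \(M_{\lambda}(p,\chi,t)\) and \(\mathcal{M}_{\lambda}(p,\chi,T)\).

```python
import numpy as np

rng = np.random.default_rng(1)
N, lam, p = 201, 0.5, 2                      # sites, disorder strength, moment power
x = np.arange(N) - N // 2                    # positions relative to the central cell

# Toy Anderson Hamiltonian: nearest-neighbour hopping plus random on-site potential.
H = (-np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
     + lam * np.diag(rng.uniform(-1, 1, N)))
E, V = np.linalg.eigh(H)

chi = np.diag(((E > -0.5) & (E < 0.5)).astype(float))   # sharp energy window replacing a smooth chi
spec_window = V @ chi @ V.conj().T                       # chi(H)
cell = np.diag((np.abs(x) <= 1).astype(float))           # indicator of the central cell
weight = np.diag((1.0 + x.astype(float) ** 2) ** (p / 4))  # <x>^{p/2}

def M(t):
    """Finite-dimensional analogue of M_lambda(p, chi, t) in (1.17)."""
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T    # e^{-itH}
    A = weight @ U @ spec_window @ cell
    return np.linalg.norm(A, 'fro') ** 2                  # squared Hilbert-Schmidt norm

def M_avg(T, n_steps=400):
    """Time average (1/T) int_0^infty M(t) e^{-t/T} dt, truncated at 8T."""
    ts = np.linspace(0.0, 8 * T, n_steps)
    vals = np.array([M(t) for t in ts]) * np.exp(-ts / T)
    return vals.sum() * (ts[1] - ts[0]) / T

for T in (1.0, 5.0, 25.0):
    print(T, M_avg(T))
```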
While Theorem 3 describes the dynamical features of the Hamiltonian, it is equally natural to ask for a spectral theoretic interpretation of transport and localization. The interpretation of the nature of the spectrum in the dynamically localized phase is captured by the concept of SUDEC, see Def. 4.1. The existence of dynamical delocalization, in the above sense, does not imply the existence of a.c. or s.c. spectrum. Given that at magic angles the Hamiltonian \(H_{0}(\alpha)\) only exhibits (infinitely degenerate) point spectrum at energies \(\pm m\), it is unknown if such phases can occur for our disordered Hamiltonian in a neighborhood of the flat bands. We conjecture that this is not the case. As we will explain below, see Remark 3, the point spectrum of the Hamiltonian within an energy window cannot be _too localized_. This can be made precise within the framework of generalized Wannier functions [13, 14]:

**Definition 1.4** (Wannier basis).: _Let \(P\) be an orthogonal projection on \(L^{2}(\mathbb{C})\). We say an orthonormal basis \((\psi_{\beta})_{\beta\in I}\subset L^{2}(\mathbb{C})\) for an index set \(I\subset\mathbb{N}\) is an \(s\)-localized generalized Wannier basis for \(P\) for some \(s>0\) if:_ * \(\overline{\operatorname{span}}(\psi_{\beta})=\operatorname{ran}(P).\) * _There exists_ \(M<\infty\) _and a collection of localization centers_ \((\mu_{\beta})\subset\mathbb{C}\) _such that for all_ \(\beta\in I\) \[\int_{\mathbb{C}}\langle z-\mu_{\beta}\rangle^{2s}|\psi_{\beta}(z)|^{2}d \lambda(z)\leq M,\text{ with }\lambda\text{ Lebesgue measure.}\]

One then has for the random Hamiltonian \(H_{\lambda}:\)

**Theorem 4** (Slow decay; \(m>0\)).: _Under the assumptions of Theorem 3, we define the orthogonal projection \(P_{\lambda}:=\mathds{1}_{J_{\pm}}(H_{\lambda})\) on \(L^{2}(\mathbb{C})\) with \(J_{\pm}\) as in (1.15) for \(m>0\). For any \(\delta>0\) and for any \(\lambda\in(-\varepsilon(m),\varepsilon(m))\) with \(\varepsilon(m)>0\) sufficiently small and independent of \(\delta>0\), \(P_{\lambda}\) does not admit a \(1+\delta\)-localized generalized Wannier basis._

_However, the projection admits a \(1-\delta\)-localized generalized Wannier basis for small disorder._

In this article, we have not considered disorder that only perturbs the off-diagonal entries of the Hamiltonian (1.1), since no techniques are known to prove Wegner estimates, on which the multi-scale analysis rests, for such disorder. Wegner estimates are however not needed to study the decay of Wannier functions, and thus we shall consider such perturbations now, by looking at the Hamiltonian \[H_{\lambda}=\begin{pmatrix}m&(D(\alpha)+\lambda W)^{*}\\ D(\alpha)+\lambda W&-m\end{pmatrix} \tag{1.19}\] where \(W\in L^{\infty}(\mathbb{C};\mathbb{C}^{2\times 2})\) is a (possibly random) potential which we assume without loss of generality to satisfy \(\|W\|_{\infty}\leq 1.\) The result of Theorem 4 cannot be directly extended to \(m=0\), since the net Chern number of the Hamiltonian is zero. However, the square of the Hamiltonian (1.19) exhibits a diagonal form \[H_{\lambda}^{2}=\operatorname{diag}((D(\alpha)+\lambda W)^{*}(D(\alpha)+ \lambda W)+m^{2},(D(\alpha)+\lambda W)(D(\alpha)+\lambda W)^{*}+m^{2}). \tag{1.20}\]
Thus, to capture the low-lying spectrum, we may study the projections \[\begin{split}& P_{+,\lambda}:=\mathds{1}_{[0,\mu]}((D(\alpha)+\lambda W)^{*}(D(\alpha)+\lambda W))\text{ and }\\ & P_{-,\lambda}:=\mathds{1}_{[0,\mu]}((D(\alpha)+\lambda W)(D(\alpha)+\lambda W)^{*}),\end{split} \tag{1.21}\] separately, where we dropped the \(m\geq 0\) dependence as it does not affect the spectrum apart from a constant shift. We then have

**Theorem 5** (Slow decay; \(m\geq 0\)).: _Let \(\mu<E_{\text{gap}}^{2}/2\) with \(E_{\text{gap}}\) as in (1.8) and \(P_{\pm,\lambda}\) be as in (1.21). For any \(\delta>0\) and for any \(\lambda\in(-\varepsilon,\varepsilon)\) with \(\varepsilon>0\) sufficiently small and independent of \(\delta>0\), the projection \(P_{\pm,\lambda}\) does not admit a \(1+\delta\)-localized generalized Wannier basis. However, the projections admit a \(1-\delta\)-localized generalized Wannier basis for small disorder._

We make a few observations related to Theorem 4 and the notion of Wannier bases. First, these theorems imply a lower bound on the uniform decay of eigenfunctions for the random Hamiltonian. In particular, they imply that if the random Hamiltonian exhibits pure point spectrum, then the decay is not _too fast_ in a uniform sense, which should be compared with the notion of SUDEC, see Def. 4.1, which one obtains by applying the multiscale analysis. In particular, one has

**Remark 3** (Lower bound on uniform eigenfunction decay).: _If the Hamiltonian only exhibits point spectrum in the interval \(I\), for which the associated spectral projection does not admit a \(1+\delta\) generalized Wannier basis, then we can choose an orthonormal basis of eigenfunctions \((\psi_{\beta})\) such that \(\overline{\operatorname{span}}(\psi_{\beta})=\operatorname{ran}(P)\) and, for any sequence of localization centers \(\mu_{\beta}\),_ \[\sup_{\beta}\int_{\mathbb{C}}\langle z-\mu_{\beta}\rangle^{2+\delta}|\psi_{ \beta}(z)|^{2}\ dz=\infty.\] _In this sense, Theorem 4 gives a lower bound on the decay of eigenfunctions in case that the random Hamiltonian exhibits only pure point spectrum._

**Outline of article**.

* In Section 2, we study the stability of the first magic angle under small perturbations.
* In Section 3, we study the regularity of the integrated density of states by stating an estimate on the number of eigenvalues (NE) under Assumption 1.
* In Section 4, we derive the existence of a mobility edge in a neighborhood of the perturbed flat bands.
* In Section 5, we prove Theorem 4.

## 2. (In)stability of magic angles

In this section, we obtain stability bounds on magic angles with respect to perturbations. We recall the definition of the compact Birman-Schwinger operator \(T_{k}\), with \(k=(\omega^{2}k_{1}-\omega k_{2})/\sqrt{3}\), where \((k_{1},k_{2})\in\mathbb{R}^{2}\setminus(3\mathbb{Z}^{2}+\{(0,0),(-1,-1)\})\).
This operator is defined by \[T_{k}:=(2D_{\bar{z}}-k)^{-1}\begin{pmatrix}0&U(z)\\ U(-z)&0\end{pmatrix}:L_{0}^{2}(\mathbb{C}/\Gamma;\mathbb{C}^{2})\to(H^{1}\cap L_ {0}^{2})(\mathbb{C}/\Gamma;\mathbb{C}^{2}),\] where \[L_{p}^{2}(\mathbb{C}/\Gamma;\mathbb{C}^{2}):=\Big{\{}u\in L^{2}(\mathbb{C}/ \Gamma,\mathbb{C}^{2}):\mathscr{L}_{\mathbf{a}}u(z)=e^{2\pi i(a_{1}p+a_{2}p)}u (z+\mathbf{a}),\ a_{j}\in\tfrac{1}{3}\mathbb{Z}\Big{\}},\] for \(\mathbf{a}=4\pi i(\omega a_{1}+\omega^{2}a_{2}).\) For scalar functions, we also define spaces \(L_{p}^{2}(\mathbb{C}/\Gamma;\mathbb{C})\) where we replace the translation operator by its first component (1.4). As described in (1.6), \(\alpha\neq 0\) is magic if and only if \(1/\alpha\in\operatorname{Spec}_{L_{0}^{2}}(T_{k})\setminus\{0\}.\) One can then show that \(1/\alpha\in\operatorname{Spec}_{L_{0}^{2}}(T_{k})\setminus\{0\}\) if and only if \(1/\alpha\in\operatorname{Spec}_{L_{1}^{2}}(T_{k})\setminus\{0\},\) see [1]. By squaring the operator, we define new compact operators \(A_{k},B_{k}\) \[\begin{split}& T_{k}^{2}=:3\operatorname{diag}(A_{k},B_{k}) \text{ with }A_{k}:=(2D_{\bar{z}}-k)^{-1}U(z)(2D_{\bar{z}}-k)^{-1}U(-z)\text{ and }\\ & B_{k}:=(2D_{\bar{z}}-k)^{-1}U(-z)(2D_{\bar{z}}-k)^{-1}U(z). \end{split} \tag{2.1}\] Since the operator \((2D_{\bar{z}}-k)^{-1}\) is compact, it follows that \[\operatorname{Spec}_{L_{1}^{2}}(T_{k}^{2})\setminus\{0\}=3\operatorname{Spec }_{L_{1}^{2}}(A_{k})\setminus\{0\}=3\operatorname{Spec}_{L_{2}^{2}}(B_{k}) \setminus\{0\}.\] This implies that \(\operatorname{tr}((T_{k}^{2})^{n})=2\cdot 3^{n}\operatorname{tr}(A_{k}^{n})\) for \(n>1\), which is well-defined as \(A_{k}\) is a Hilbert-Schmidt operator. In particular, the spectrum of \(A_{k}\) is independent of \(k,\) see [1, Prop.3.1.], and we have \(\operatorname{Spec}_{L_{1}^{2}}(A_{0})=\operatorname{Spec}_{L_{1}^{2}}(A_{k})\) for any \(k\notin\Gamma^{*}.\) The traces of powers of \(A_{k}\) are listed in Table 1.

\begin{table} \begin{tabular}{c|c} \(p\) & \(\sigma_{p}\frac{\sqrt{3}}{\pi}\) \\ \hline \hline 1 & 2/9 \\ \hline 2 & 4/9 \\ \hline 3 & 32/63 \\ \hline 4 & 40/81 \\ \hline \end{tabular} \begin{tabular}{c|c} \(p\) & \(\sigma_{p}\frac{\sqrt{3}}{\pi}\) \\ \hline \hline 5 & 9560/20007 \\ \hline 6 & 245120/527877 \\ \hline 7 & 1957475168/4337177481 \\ \hline 8 & 13316086960/30360242367 \\ \hline \end{tabular} \end{table} Table 1. Traces of \(A_{k}\), \(\sigma_{p}=\operatorname{tr}(A_{k}^{p})\), where \(\sigma_{1}\) is not absolutely summable as \(A_{k}\) is not of trace-class.

We thus have that \[\mathbb{C}\setminus\{0\}\ni\alpha\text{ is magic }\Leftrightarrow 1/\alpha \in\operatorname{Spec}_{L_{1}^{2}}(T_{k})\Leftrightarrow 1/(3\alpha^{2})\in \operatorname{Spec}_{L_{1}^{2}}(A_{k}),\] where the right-hand side depends only on \(\alpha\) and the unperturbed operator \(A.\) We then consider a perturbation of the potentials which gives us a new operator \[T_{k,\delta}=(2D_{\bar{z}}-k)^{-1}\begin{pmatrix}0&U(z)+\delta V_{+}\\ U(-z)+\delta V_{-}&0\end{pmatrix}:L_{1}^{2}\to L_{1}^{2},\] with bounded potentials \(V_{\pm}\in C^{\infty}(\mathbb{C}/\Gamma)\) and \(\delta>0\), where \(V_{\pm}\) satisfy the same symmetries as \(U(\pm\bullet)\), respectively, cf. (1.2).
By squaring the operator, similar to (2.1), we define \[T^{2}_{k,\delta}=:3\operatorname{diag}(A_{k,\delta},B_{k,\delta}) \tag{2.2}\] such that \(\operatorname{Spec}_{L^{2}_{1}}(T^{2}_{k,\delta})\setminus\{0\}=3 \operatorname{Spec}_{L^{2}_{1}}(A_{k,\delta})\setminus\{0\}=3\operatorname{ Spec}_{L^{2}_{2}}(B_{k,\delta})\setminus\{0\}\). To describe the spectral (in)-stability of non-normal operators one resorts to the _pseudospectrum_, see also the book [1]. **Definition 2.1**.: _Let \(P\) be a bounded linear operator. We denote the \(\varepsilon\)-pseudospectrum of \(P\), for every \(\varepsilon>0\), by_ \[\operatorname{Spec}_{\varepsilon}(P):=\bigcup_{K\in L(H);\|K\|\leq \varepsilon}\operatorname{Spec}(P+K), \tag{2.3}\] _with \(L(H)\) the space of bounded linear operators. It is equivalently characterized by_ \[\operatorname{Spec}_{\varepsilon}(P):=\operatorname{Spec}(P)\cup\{z\notin \operatorname{Spec}(P);\|(z-P)^{-1}\|>1/\varepsilon\}.\] ### Stability of magic angles In order to study the stability of small magic angles, characterized by the eigenvalues of \(A_{k}\) (\(\alpha\)_is magic_ if and only if \((3\alpha^{2})^{-1}\in\operatorname{Spec}_{L^{2}_{1}}(A_{k})\)), we start with a resolvent bound and recall the definition of the regularized determinant for a Hilbert-Schmidt operator \(T\)[10] \[\det_{2}(1+T):=\prod_{\lambda\in\operatorname{Spec}(T)}(1+\lambda)e^{-\lambda}.\] The following estimate is non-trivial, as the operator \(A_{k}\) is non-normal: **Lemma 2.2**.: _Let \(A=A_{0}\) be as above, then for \(\alpha\in\mathbb{C}\) such that \(1\notin\operatorname{Spec}_{L^{2}_{1}}(3\alpha^{2}A)\)_ \[\|(1-3\alpha^{2}A)^{-1}\|\leq 1+\frac{e^{(6|\alpha|^{2}+e)}}{|\det_{2}(1-3 \alpha^{2}A)|}.\] Before stating the proof of this lemma we state a perturbation estimate that limits by how much the eigenvalues of \(A_{\delta}\) can spread. This bound is illustrated in Fig. 2. **Theorem 6**.: _Let \(A:=A_{0}\), as in (2.1), with \(\sqrt{\operatorname{Spec}(A)}\setminus\{0\}\) the magic angles, and define \(A_{\delta}:=A_{0,\delta}\) as in (2.2). The perturbed operator \(A_{\delta}\) does not have any eigenvalues \(1/(3\alpha^{2})\) with \(\alpha\in\mathbb{C}\setminus\{0\}\) as long as the size of the perturbation satisfies_ \[\|A_{\delta}-A\|\leq\frac{|\det_{2}(1-3\alpha^{2}A)|}{3|\alpha^{2}|\Big{(}| \det_{2}(1-3\alpha^{2}A)|+e^{(6|\alpha|^{2}+e)}\Big{)}}. \tag{2.4}\] Before stating the proof of this result, we shall briefly discuss the interpretation of (2.4). The right hand side of (2.4) is small for large \(|\alpha|\), i.e. small twisting angles as well as for \(1/(3\alpha^{2})\) close to \(\operatorname{Spec}_{L^{2}_{1}}(A)\), i.e. for \(\alpha\) that are _almost magic_. This means that for such \(\alpha\) even small perturbations of the potential may generate eigenvalues of the form \(1/(3\alpha^{2})\) of the perturbed operator \(A_{\delta}\). This shows that such \(\alpha\) are inherently unstable, as small perturbations can generate and destroy them. Conversely, for large twisting angles, i.e. small \(\alpha\), it is in general impossible to generate spectrum \(1/(3\alpha^{2})\) of the perturbed operator. In particular, this bound implies a spectral stability for small \(\alpha\), i.e. large magic angles, since they cannot move by much. The regularized determinant in (2.4) can be controlled (from above and below) by Lemma 2.3. 
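The sensitivity of non-normal spectra described above can already be seen in a small matrix experiment. The following sketch (a toy nilpotent Jordan block; it is neither \(T_{k}\) nor \(A_{k}\)) shows a huge resolvent norm far away from the spectrum, i.e. a large \(\varepsilon\)-pseudospectrum in the sense of Definition 2.1, and a rank-one perturbation of exponentially small norm creating eigenvalues far from the original spectrum, in the spirit of Theorem 1.

```python
import numpy as np

N, eps = 30, 1e-12
J = np.diag(np.ones(N - 1), 1)               # nilpotent Jordan block: Spec(J) = {0}

# Resolvent norm at a point z far from the spectrum: 1 / sigma_min(z - J).
z = 0.3
sigma_min = np.linalg.svd(z * np.eye(N) - J, compute_uv=False)[-1]
print("resolvent norm at z = 0.3:", 1.0 / sigma_min)   # huge, so z lies in Spec_eps(J) for tiny eps

# A rank-one corner perturbation of norm eps moves the eigenvalues onto a circle
# of radius eps**(1/N), i.e. roughly 0.4 here although eps = 1e-12.
R = np.zeros((N, N)); R[-1, 0] = eps
print("largest |eigenvalue| of J + R:", np.abs(np.linalg.eigvals(J + R)).max())
```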
Proof of Theorem 6.: By (2.3), we find \[\operatorname{Spec}_{1}(3\alpha^{2}A_{\delta})\subset\operatorname{Spec}_{1,3 \|\alpha^{2}(A_{\delta}-A)\|}(3\alpha^{2}A).\] This implies that if \(1\in\operatorname{Spec}_{1}(3\alpha^{2}A_{\delta})\), then by the characterization of the pseudo-spectrum and Lemma 2.2 \[\frac{1}{3\|\alpha^{2}(A_{\delta}-A)\|}\leq\|(1-3\alpha^{2}A)^{-1}\|\leq 1+ \frac{e^{(6|\alpha|^{2}+e)}}{|\det_{2}(1-3\alpha^{2}A)|}.\] Rearranging this estimate implies the result.

Figure 2. This figure shows the right-hand side of equation (2.4) close to the first magic angle.

We now give the proof of the auxiliary Lemma 2.2.

Proof of Lemma 2.2.: We recall from [15, Theo.5.1], using \(\|S^{*}S\|_{1}\leq\|S\|_{2}^{2}\) for \(S\) a Hilbert-Schmidt operator, that for \(K\) a trace-class operator and \(\nu=1+e^{1/2}\) \[|\det_{2}(1+S+K)|\leq e^{\frac{\|S\|_{2}^{2}}{2}+\nu\|K\|_{1}}. \tag{2.5}\] Assuming \(1+S\) is invertible and \(S\) a finite rank operator, we have for the usual determinant \[\begin{split}\det(1+S+\mu K)&=\det(1+S)\det(1+\mu(1+S)^{-1}K)\\ &=\det(1+S)(1+\mu\operatorname{tr}((1+S)^{-1}K))+\mathcal{O}(\mu^{2}).\end{split}\] This shows that \[\partial_{\mu}|_{\mu=0}\det(1+S+\mu K)=\det(1+S)\operatorname{tr}((1+S)^{-1}K),\] which shows \[\partial_{\mu}|_{\mu=0}\log\det(1+S+\mu K)=\operatorname{tr}((1+S)^{-1}K).\] Using that \[\log\det_{2}(1+S+\mu K)=\log\det(1+S+\mu K)-\operatorname{tr}(S+\mu K), \tag{2.6}\] we find the log-derivative of the regularized 2-determinant \[\partial_{\mu}|_{\mu=0}\log(\det_{2}(1+S+\mu K))=\operatorname{tr}((1+S)^{-1} K)-\operatorname{tr}(K).\] By using a density argument it follows that this formula also holds for \(S\) Hilbert-Schmidt, i.e. we can drop the assumption that \(S\) is of finite rank. Thus, one finds from (2.6) by specializing to \(K=\langle\phi,\bullet\rangle\psi\), with \(\|\phi\|=\|\psi\|=1\), and multiplying by \(\det_{2}(1+S)\) \[\det_{2}(1+S)\langle\phi,(1+S)^{-1}\psi\rangle=\partial_{\mu}\Big{|}_{\mu=0} \det_{2}(1+S+\mu K)-\det_{2}(1+S)\langle\phi,\psi\rangle.\] Hence, using a Cauchy estimate \(|\partial_{\mu}|_{\mu=0}f(\mu)|\leq\sup_{|\mu|=1}|f(\mu)|\) for \(f(\mu):=\det_{2}(1+S+\mu K)\), we find \[\|\det_{2}(1+S)(1+S)^{-1}\|\leq\sup_{|\mu|=1}|\det_{2}(1+S+\mu K)|+|\det_{2}( 1+S)|.\] It thus follows, together with (2.5), that \[\|(1+S)^{-1}\|\leq 1+\sup_{|\mu|=1}\frac{|\det_{2}(1+S+\mu K)|}{|\det_{2}(1+S)| }\leq 1+\frac{e^{\|S\|_{2}^{2}/2+\nu}}{|\det_{2}(1+S)|}.\] Specializing the estimate to \(S=-3\alpha^{2}A\), we find by using that \(\|A_{0}\|_{2}\leq 2\), see [10, Lemma 4.1], and \(\nu<e\), see [11], \[\|(1-3\alpha^{2}A)^{-1}\|\leq 1+\frac{e^{(6|\alpha|^{2}+e)}}{|\det_{2}(1-3 \alpha^{2}A)|}\] which was to be shown.

Consequently, if \(\alpha_{*}\) is a magic angle, we can estimate \(\det_{2}(1-3\alpha^{2}A_{0})\) in (2.4) by using [10, Lemma 5.1], which in a reduced version states that

**Lemma 2.3**.: _The entire function \(\mathbb{C}\ni\alpha\mapsto\det_{2}(1-3\alpha^{2}A_{k})\) satisfies for any \(n\geq 0\)_ \[\left|\det_{2}(1-3\alpha^{2}A_{k})-\sum_{k=0}^{n}\mu_{k}\frac{(-3)^{k}\alpha^{2k }}{k!}\right|\leq\sum_{j=n+1}^{\infty}\left(\frac{\sqrt{e}\|A_{0}\|_{2}3|\alpha |^{2}}{\sqrt{j}}\right)^{j}\] _with \(\|A_{0}\|_{2}\leq 2\), where_ \[\mu_{j}=\det\left(\begin{array}{ccccc}0&j-1&0&\cdots&0\\ \sigma_{2}&0&j-2&\cdots&0\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ \sigma_{j-1}&\sigma_{j-2}&\cdots&0&1\\ \sigma_{j}&\sigma_{j-1}&\sigma_{j-2}&\cdots&0\end{array}\right),\text{ with }\sigma_{j}=\operatorname{tr}A_{k}^{j}. \tag{2.7}\] The first few traces \(\sigma_{j}\) are summarized in Table 1.
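Lemma 2.3 makes the bound (2.4) effective: the coefficients \(\mu_{j}\) are polynomial expressions in the traces of Table 1, so the regularized determinant can be approximated, with a controlled tail, from finitely many traces. The sketch below is a rough numerical aid only; it assumes the tabulated values of \(\sigma_{p}\sqrt{3}/\pi\), uses \(\mu_{0}=1\), and evaluates the \(\mu_{j}\) via the determinant formula (2.7) together with the truncated series and the tail bound.

```python
import numpy as np
import math
from fractions import Fraction

# Tabulated values of sigma_p * sqrt(3)/pi from Table 1 (sigma_1 does not enter (2.7)).
table = {2: Fraction(4, 9), 3: Fraction(32, 63), 4: Fraction(40, 81),
         5: Fraction(9560, 20007), 6: Fraction(245120, 527877),
         7: Fraction(1957475168, 4337177481), 8: Fraction(13316086960, 30360242367)}
sigma = {p: float(v) * np.pi / np.sqrt(3) for p, v in table.items()}

def mu(j):
    """Coefficient mu_j of det_2(1 - 3 a^2 A) via the determinant formula (2.7)."""
    if j == 0:
        return 1.0                             # det_2 at alpha = 0 equals 1
    M = np.zeros((j, j))
    for r in range(j):                         # row r corresponds to row r+1 of (2.7)
        if r + 1 < j:
            M[r, r + 1] = j - 1 - r            # superdiagonal entries j-1, j-2, ..., 1
        for c in range(r):                     # strictly below the diagonal: sigma_2, ..., sigma_j
            M[r, c] = sigma[r - c + 1]
    return np.linalg.det(M)

def det2_truncated(alpha, n=8):
    return sum(mu(k) * (-3.0) ** k * alpha ** (2 * k) / math.factorial(k)
               for k in range(n + 1))

def tail_bound(alpha, n=8, terms=200):
    c = np.sqrt(np.e) * 2 * 3 * abs(alpha) ** 2    # uses ||A_0||_2 <= 2
    return sum((c / np.sqrt(j)) ** j for j in range(n + 1, terms))

for a in (0.2, 0.4, 0.586):
    print(a, det2_truncated(a), tail_bound(a))
```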
### Instability of magic angles

We shall now give the proof of Theorem 1. Arbitrary low-lying eigenvalues of \(T_{k}\), which correspond to large magic angles in the unperturbed case, can be produced by rank 1 perturbations of \(T_{k}\) that are exponentially small in the spectral parameter. Let \(\mu\) be one such low-lying eigenvalue of \(T_{k}\). On the Hamiltonian side, this indicates that zero modes with quasi-momentum \(k\) and \(\alpha=1/\mu\) can be generated by rank one perturbations of the Bloch-Floquet Hamiltonian \(H_{k}(\alpha)\).

Proof of Theorem 1.: We recall that by [Be*22, Theo 4] there exists for each \(k\in\mathbb{C}\) an \(L^{2}\)-normalized \(u_{\mu}\in C_{c}^{\infty}(\mathbb{C};\mathbb{C}^{2})\) such that the operator \[P(\mu)=\begin{pmatrix}2\mu D_{\bar{z}}&U(z)\\ U(-z)&2\mu D_{\bar{z}}\end{pmatrix}\] satisfies \(\|(P(\mu)-\mu k)u_{\mu}\|=\mathcal{O}(e^{-c/|\mu|})\) with \(\|u_{\mu}\|_{L^{2}}=1\) and \(c>0.\) This implies that there is a constant \(K>0\), which we allow to change throughout this proof, such that \(\|(P(\mu)-\mu k)^{-1}\|\geq Ke^{c/|\mu|}\). Hence, we define the normalized \(v_{\mu}:=\frac{(P(\mu)-\mu k)u_{\mu}}{\|(P(\mu)-\mu k)u_{\mu}\|}\), then \(\|(P(\mu)-\mu k)^{-1}v_{\mu}\|>Ke^{c/|\mu|}.\) We recall that \[(P(\mu)-\mu k)^{-1}=-(T_{k}-\mu)^{-1}(2D_{\bar{z}}-k)^{-1}.\] This implies that, since \(\|(2D_{\bar{z}}-k)^{-1}\|=1/d(k,\Gamma^{*})\), where \(d(k,\Gamma^{*})\) denotes the distance of \(k\) to the dual lattice, \[\left\|(T_{k}-\mu)^{-1}\right\|\geq Ke^{c/|\mu|}.\] Hence, for the normalized \(s_{\mu}:=\frac{(2D_{\bar{z}}-k)^{-1}v_{\mu}}{\|(2D_{\bar{z}}-k)^{-1}v_{\mu}\|}\), we have \[(T_{k}-\mu)^{-1}s_{\mu}=t_{\mu}\text{ with }\|t_{\mu}\|\geq Ke^{c/|\mu|}.\] Thus, we can define \(R\varphi:=\frac{\langle\varphi,t_{\mu}\rangle}{\|t_{\mu}\|^{2}}s_{\mu}\) with norm \(\|R\|=\mathcal{O}(e^{-c/|\mu|})\) such that \[\mu\in\operatorname{Spec}(T_{k}-R).\]

Figure 3. Upper row: Magic angles (left) and resolvent norm of operator \(T_k\) (right). Lower row: 1000 realizations of random perturbations of tunnelling potential \(U+\delta V\) with new magic angles (black dots) superimposed on resolvent norm figure. \(\delta=1/100\) (left) and \(\delta=1/10\) (right).

## 3. Integrated DOS and Wegner estimate

In this section we prove Theorem 2 by giving a proof of Prop. 1.3, i.e. we study the regularity of the integrated density of states and prove a corresponding estimate on the number of eigenvalues of the disordered Hamiltonian. This then also implies a Wegner estimate by (4.3). We start by giving the proof of Holder continuity, which uses the spectral shift function, see [12, 13, 14], and then subsequently explain the modifications to obtain Lipschitz continuity, which uses spectral averaging. In the following, we will write \(\chi_{x,L}:=\mathds{1}_{\Lambda_{L}(x)}\) with \(\chi_{x}:=\chi_{x,1}\), \(\Lambda_{L}:=\mathbb{C}/(L\Gamma)\), and \(\Lambda_{L}(z):=z+\Lambda_{L}\), and often drop subscripts to simplify the notation. Let \(A\) be a compact operator; then \(\|A\|_{k}\) denotes the \(k\)-th Schatten class norm.

### Proof of Prop. 1.3

In this subsection we shall give the proof of Prop. 1.3, up to two crucial estimates that are provided in different subsections, namely the Holder estimate (3.13) in Subsection 3.2 and the Lipschitz estimate (3.15) in Subsection 3.3.
Proof of Prop. 1.3.: In the proof, we shall focus on Case 1 disorder; Case 2 disorder follows along the same lines, while more care is needed in Case 1 since the potential \(u\) is not sign-definite there. We shall emphasize the differences between the two cases in our proof. Since the spectrum in Case 1 exhibits a spectral gap, see (1.14), we may focus without loss of generality on the spectrum around \(m\). The argument around \(-m\) is analogous. In Case 2, we do not have to restrict ourselves to those neighborhoods. Let \(E_{0}\in\Delta\subset\tilde{\Delta}\subset(k_{-},k_{+})\) for two closed bounded intervals \(\Delta,\tilde{\Delta}\), with \(\Delta\) of non-empty interior, centered at \(E_{0}\) and \(d_{0}:=d(E_{0},\mathbb{R}\setminus\tilde{\Delta})\). We decompose \[\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}}))=\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})1\!\!1_{\tilde{\Delta}}(H_{0,\Lambda_{L}}))+\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}})). \tag{3.1}\] Writing \(H_{\lambda,\Lambda_{L}}-E_{0}=(H_{0,\Lambda_{L}}-E_{0})+\lambda V_{\omega,\Lambda_{L}}\), we then obtain for the second term in (3.1) \[\begin{split}\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}}))&=\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})(H_{\lambda,\Lambda_{L}}-E_{0})(H_{0,\Lambda_{L}}-E_{0})^{-1}1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}}))\\ &\quad-\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})\lambda V_{\omega,\Lambda_{L}}(H_{0,\Lambda_{L}}-E_{0})^{-1}1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}})).\end{split} \tag{3.2}\] The first term in (3.2) satisfies, by Holder's inequality and the definition of \(d_{0}\), \[|\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})(H_{\lambda,\Lambda_{L}}-E_{0})(H_{0,\Lambda_{L}}-E_{0})^{-1}1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}}))|\leq\frac{|\Delta|}{2d_{0}}\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})).\] For the second term, we then use the inequality \[\begin{split}&\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})\lambda V_{\omega,\Lambda_{L}}(H_{0,\Lambda_{L}}-E_{0})^{-1}1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}}))\\ &\leq\frac{|\lambda|}{d_{0}}\|1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})V_{\omega,\Lambda_{L}}\|_{2}\|1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}})1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})\|_{2}\\ &\leq\frac{\zeta\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}}))}{2d_{0}}+\frac{\lambda^{2}\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})V_{\omega,\Lambda_{L}}^{2})}{2\zeta d_{0}},\end{split}\] with \(\zeta>0\).
We can then bound (3.2), in terms of \[\tilde{V}_{\omega,\Lambda_{L}}:=\sum_{\gamma\in\tilde{\Lambda}_{L}}u(\bullet- \gamma-\xi_{\gamma}),\] by choosing \(\zeta>0\) sufficiently small \[\begin{split}\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L }})1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}}))& \leq\frac{|\Delta|}{d_{0}}\operatorname{tr}(1\!\!1_{\Delta}(H_{ \lambda,\Lambda_{L}}))+\frac{\lambda^{2}\operatorname{tr}(1\!\!1_{\Delta}(H_{ \lambda,\Lambda_{L}})V_{\omega,\Lambda_{L}}^{2})}{\zeta d_{0}}\\ &\lesssim\frac{|\Delta|}{d_{0}}\operatorname{tr}(1\!\!1_{\Delta}( H_{\lambda,\Lambda_{L}}))+\frac{\lambda^{2}\operatorname{tr}(1\!\!1_{\Delta}(H_{ \lambda,\Lambda_{L}})\widetilde{V}_{\omega,\Lambda_{L}})}{\zeta d_{0}}.\end{split} \tag{3.3}\] Notice that while we do not have that \(V_{\omega,\Lambda_{L}}^{2}\lesssim\tilde{V}_{\omega,\Lambda_{L}}\), at least in case (1), since \(\tilde{V}_{\omega,\Lambda_{L}}\) is not positive, we still have that \[\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})V_{\omega,\Lambda_{L }}^{2})\lesssim\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}}) \widetilde{V}_{\omega,\Lambda_{L}}). \tag{3.4}\] To see this, observe that \[1\!\!1_{\Delta}(H_{0,\Lambda_{L}})=P_{\ker(D(\alpha)_{\Lambda_{L}})}\oplus 0. \tag{3.5}\] Indeed, since \(\Lambda_{L}=\mathbb{C}/(L\Gamma)\), we have by periodicity of the Hamiltonian \(H_{0}\) that \(\operatorname{Spec}(H_{0,\Lambda_{L}})\subset\operatorname{Spec}(H_{0}).\) Since the spectrum of \(H_{\lambda,\Lambda_{L}}\) is uniformly gapped for \(\lambda\) small, it follows that the spectral projection \(\lambda\mapsto 1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})\) is norm-continuous. We conclude from (3.5) that for \(\varphi=(\varphi_{1},\varphi_{2})\) \[\varphi=1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})\varphi\Rightarrow\|\varphi_ {2}\|\leq\varepsilon(\lambda)\|\varphi_{1}\|. \tag{3.6}\] with \(\varepsilon(0)=0\) and \(\lambda\mapsto\varepsilon(\lambda)\geq 0\) continuous. Indeed, applying norms to (3.6), we find by substituting \[1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})=1\!\!1_{\Delta}(H_{0,\Lambda_{L}})+( 1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})-1\!\!1_{\Delta}(H_{0,\Lambda_{L}}))\] in (3.6) that, since \(\|1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}})-1\!\!1_{\Delta}(H_{0,\Lambda_{L}} )\|=\mathcal{O}(|\lambda|)\), there is \(C>0\) such that \[\sqrt{\|\varphi_{1}\|^{2}+\|\varphi_{2}\|^{2}}\leq\|P_{\ker(D(\alpha)_{\Lambda _{L}})}\varphi_{1}\|+C\lambda\sqrt{\|\varphi_{1}\|^{2}+\|\varphi_{2}\|^{2}}.\] Rearranging this, we find \[(1-C\lambda)\sqrt{\|\varphi_{1}\|^{2}+\|\varphi_{2}\|^{2}}\lesssim\|P_{\ker(D (\alpha)_{\Lambda_{L}})}\varphi_{1}\|\leq\|\varphi_{1}\|\] which implies that, since \[(1-C\lambda)\sqrt{1+\|\varphi_{2}\|^{2}/\|\varphi_{1}\|^{2}}\leq 1\text{ that }\| \varphi_{2}\|\lesssim\lambda/(1-C\lambda)\|\varphi_{1}\|\] showing (3.6). 
This then directly implies (3.4), since in the notation of (1.12) \[\begin{split}\operatorname{tr}(1\!\!1\!1_{\Delta}(H_{\lambda,\Lambda _{L}})\tilde{V}_{\omega,\Lambda_{L}})&=\sum_{\begin{subarray}{c} \varphi\text{ ONB of}\\ \operatorname{ran}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}}))\end{subarray}} \left(\langle\varphi_{1},Y\varphi_{1}\rangle-\langle\varphi_{2},Y\varphi_{2} \rangle+2\operatorname{Re}(\langle\varphi_{1},Z\varphi_{2}\rangle)\right)\\ &\geq\sum_{\begin{subarray}{c}\varphi\text{ ONB of}\\ \operatorname{ran}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}}))\end{subarray}} \left(\|\varphi_{1}\|^{2}\inf Y-\|\varphi_{2}\|^{2}\sup Y-\|\varphi_{1}\|\| \varphi_{2}\|\sup Z\right)\\ &\gtrsim\sum_{\begin{subarray}{c}\varphi\text{ ONB of}\\ \operatorname{ran}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}}))\end{subarray}}\| \varphi_{1}\|^{2}(\inf Y-\lambda^{2}\sup Y-\lambda\sup Z)\\ &\gtrsim\sum_{\begin{subarray}{c}\varphi\text{ ONB of}\\ \operatorname{ran}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}}))\end{subarray}}\| \varphi_{1}\|^{2}\inf Y\text{ for }\lambda\text{ small enough.}\end{split}\] We can easily obtain, along the same lines, an upper bound on the left-hand side of (3.4) \[\operatorname{tr}(1\!\!1\!1_{\Delta}(H_{\lambda,\Lambda_{L}})V_{\omega,\Lambda _{L}}^{2})\lesssim\sum_{\begin{subarray}{c}\varphi\text{ ONB of}\\ \operatorname{ran}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}}))\end{subarray}}\| \varphi_{1}\|^{2}\sup(V_{\omega,\Lambda_{L}}^{2})_{11}\] to see that (3.4) holds. Finally, we have for the first term in (3.1) \[\begin{split}\operatorname{tr}(1\!\!1_{\Delta}&(H_{ \lambda,\Lambda_{L}})\,1\!\!1_{\tilde{\Delta}}(H_{0,\Lambda_{L}}))\lesssim \operatorname{tr}(1\!\!1\!1_{\Delta}(H_{\lambda,\Lambda_{L}})\,1\!\!1_{\tilde {\Delta}}(H_{0,\Lambda_{L}})\tilde{V}_{\omega,\Lambda_{L}}\,1\!\!1_{\tilde{ \Delta}}(H_{0,\Lambda_{L}})\,1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L}}))\\ &\lesssim\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L} })\,1\!\!1_{\tilde{\Delta}}(H_{0,\Lambda_{L}})\tilde{V}_{\omega,\Lambda_{L}} (1\!\!1-1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}})))\\ &\lesssim\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L} })(1\!\!1-1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}})) \tilde{V}_{\omega,\Lambda_{L}}-1\!\!1_{\tilde{\Delta}}(H_{\lambda,\Lambda_{L} })\,1\!\!1_{\tilde{\Delta}}(H_{0,\Lambda_{L}})\tilde{V}_{\omega,\Lambda_{L}} \,1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}}))\\ &\lesssim\operatorname{tr}(1\!\!1_{\Delta}(H_{\lambda,\Lambda_{L} })(\tilde{V}_{\omega,\Lambda_{L}}-1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}( H_{0,\Lambda_{L}})\tilde{V}_{\omega,\Lambda_{L}}\,1\!\!1_{\mathbb{R}\setminus\tilde{ \Delta}}(H_{0,\Lambda_{L}})\\ &\qquad-1\!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L }})\tilde{V}_{\omega,\Lambda_{L}}\,1\!\!1_{\tilde{\Delta}}(H_{0,\Lambda_{L}})-1 \!\!1_{\tilde{\Delta}}(H_{0,\Lambda_{L}})\tilde{V}_{\omega,\Lambda_{L}}\,1 \!\!1_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}}))).\end{split} \tag{3.7}\] Here, we used in the first line of (3.7) the following identity that we shall verify below \[1\!\!1_{\tilde{\Delta}}(H_{0,\Lambda_{L}})\lesssim 1\!\!1_{\tilde{\Delta}}(H_{0, \Lambda_{L}})\tilde{V}_{\omega,\Lambda_{L}}\,1\!\!1_{\tilde{\Delta}}(H_{0, \Lambda_{L}}). \tag{3.8}\] Since \(H_{0,\Lambda_{L}}\) is the unperturbed Hamiltonian, the eigenvectors associated to spectrum in \(\tilde{\Delta}\) are supported in the first two entries of the wavefunction, cf. (3.5). 
Let \(\pi_{1}:=\operatorname{diag}(1,1,0,0)\) be the projection onto the first two entries. We can then define another auxiliary potential \(\hat{V}_{\Lambda_{L}}(z):=\inf_{\xi\in D^{\Gamma}}\sum_{\gamma\in\tilde{\Lambda} _{L}}\pi_{1}u(z-\gamma-\xi)\pi_{1}.\) Thus, one has that \(0\leq\hat{V}_{\Lambda_{L}}\leq\pi_{1}\tilde{V}_{\omega,\Lambda_{L}}\pi_{1}\). The projection onto the first two components is redundant for case 2 disorder since \(u\geq 0\) in that case. Finally to show (3.8) it suffices to show that \[\operatorname{\rm 1\mskip-4.5mu l}_{\tilde{\Delta}}(H_{0,\Lambda_{L}})\lesssim \operatorname{\rm 1\mskip-4.5mu l}_{\tilde{\Delta}}(H_{0,\Lambda_{L}})\hat{V}_{ \Lambda_{L}}\operatorname{\rm 1\mskip-4.5mu l}_{\tilde{\Delta}}(H_{0,\Lambda_{L}}).\] Since \(H_{0}\) is a periodic Hamiltonian with respect to any lattice \(L\Gamma\) it suffices by Bloch-Floquet theory to prove the estimate for the Bloch functions of the full Hamiltonian \(H_{0}\). Indeed, let \((v_{i}(k))_{i\in I(k)}\) be the Bloch functions associated with the spectral projection \(\operatorname{\rm 1\mskip-4.5mu l}_{\tilde{\Delta}}(H_{0})\), where \(I(k)\) is the set of Bloch eigenvalues inside \(\tilde{\Delta}\) with quasimomentum \(k\), where \(H_{0,\Lambda_{L}}\) has a finite subset (in \(k\)) of those as eigenvectors. It then suffices to show that \(M(k):=(\langle v_{i}(k),\hat{V}_{\Lambda_{L}}v_{j}(k)\rangle_{L^{2}(\mathbb{C} )})_{i,j}\) is strictly positive definite for all \(k\). If not, then there is \(k_{0}\in\mathbb{C}\) and \(w(k_{0}):=\sum_{j}\beta_{j}v_{j}\) with \(\beta_{j}\) not all zero, such that \(M(k_{0})w(k_{0})=0\) and by strict positivity of \(\hat{V}_{\Lambda_{L}}\), see (1.12), we find \(w(k_{0})|_{B_{\varepsilon}(z_{0})}\equiv 0\), but this implies that \(w\equiv 0\) by real-analyticity of \(w(k_{0})\), since \(H_{0}\) is elliptic with real-analytic coefficients, which is a contradiction. Thus \(M_{k}\) is a strictly positive matrix and using continuity in \(k\) and compactness of \(\mathbb{C}/\Gamma^{*}\), we also see that \(M_{k}>c_{0}>0\) for all \(k\). For the second term in (3.7) observe that by the boundedness of the potential \[|\operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{\Delta}(H_{\lambda, \Lambda_{L}})\operatorname{\rm 1\mskip-4.5mu l}_{\mathbb{R}\setminus\tilde{ \Delta}}(H_{0,\Lambda_{L}})\tilde{V}_{\omega,\Lambda_{L}}\operatorname{\rm 1 \mskip-4.5mu l}_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}}))| \lesssim\operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{\Delta}(H_{ \lambda,\Lambda_{L}})\operatorname{\rm 1\mskip-4.5mu l}_{\mathbb{R}\setminus \tilde{\Delta}}(H_{0,\Lambda_{L}}))\] where the last term can be estimated using (3.3). 
We shall now estimate the third and fourth term at the end of (3.7) for \(\delta>0\), using Young's, the Cauchy-Schwarz inequality, and that \(\|A\|_{2}=\|A^{*}\|_{2}\) \[|\operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{\Delta}(H_{ \lambda,\Lambda_{L}})\operatorname{\rm 1\mskip-4.5mu l}_{\mathbb{R}\setminus \tilde{\Delta}}(H_{0,\Lambda_{L}})\tilde{V}_{\omega,\Lambda_{L}} \operatorname{\rm 1\mskip-4.5mu l}_{\tilde{\Delta}}(H_{0,\Lambda_{L}}))|\] \[\lesssim\frac{\operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{ \Delta}(H_{\lambda,\Lambda_{L}})\operatorname{\rm 1\mskip-4.5mu l}_{\mathbb{R}\setminus \tilde{\Delta}}(H_{0,\Lambda_{L}}))}{2\delta}+\frac{\delta}{2}\|\operatorname{ \rm 1\mskip-4.5mu l}_{\Delta}(H_{\lambda,\Lambda_{L}})\operatorname{\rm 1\mskip-4.5mu l}_{ \tilde{\Delta}}(H_{0,\Lambda_{L}})\|_{2}^{2}\] and similarly \[|\operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{\Delta}(H_{ \lambda,\Lambda_{L}})\operatorname{\rm 1\mskip-4.5mu l}_{\tilde{\Delta}}(H_{0, \Lambda_{L}})\tilde{V}_{\omega,\Lambda_{L}}\operatorname{\rm 1\mskip-4.5mu l}_{ \mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}}))|\] \[\lesssim\frac{\operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{ \Delta}(H_{\lambda,\Lambda_{L}})\operatorname{\rm 1\mskip-4.5mu l}_{\mathbb{R}\setminus \tilde{\Delta}}(H_{0,\Lambda_{L}}))}{2\delta}+\frac{\delta}{2}\|\operatorname{ \rm 1\mskip-4.5mu l}_{\Delta}(H_{\lambda,\Lambda_{L}})\operatorname{\rm 1\mskip-4.5mu l}_{ \tilde{\Delta}}(H_{0,\Lambda_{L}})\|_{2}^{2}.\] Inserting the last two estimates into (3.7) and choosing \(\delta>0\) small enough \[\operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{\Delta}(H_{\lambda, \Lambda_{L}})\operatorname{\rm 1\mskip-4.5mu l}_{\tilde{\Delta}}(H_{0,\Lambda_{L}}))\lesssim \operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{\Delta}(H_{\lambda,\Lambda_{L}})\tilde{V}_{ \omega,\Lambda_{L}})+\frac{\operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{\Delta}(H_{ \lambda,\Lambda_{L}})\operatorname{\rm 1\mskip-4.5mu l}_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0, \Lambda_{L}}))}{\delta}.\] Inserting this estimate into (3.1) yields \[\operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{\Delta}(H_{\lambda, \Lambda_{L}}))\lesssim\operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{\Delta}(H_{ \lambda,\Lambda_{L}})\operatorname{\rm 1\mskip-4.5mu l}_{\mathbb{R}\setminus \tilde{\Delta}}(H_{0,\Lambda_{L}}))+\operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{\Delta}(H_{ \lambda,\Lambda_{L}})\tilde{V}_{\omega,\Lambda_{L}})+\frac{\operatorname{tr}( \operatorname{\rm 1\mskip-4.5mu l}_{\Delta}(H_{\lambda,\Lambda_{L}})\operatorname{\rm 1 \mskip-4.5mu l}_{\mathbb{R}\setminus\tilde{\Delta}}(H_{0,\Lambda_{L}}))}{\delta}.\] Thus, by choosing \(|\Delta|\) sufficiently small in (3.3) \[\operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{\Delta}(H_{\lambda, \Lambda_{L}}))\lesssim\operatorname{tr}(\operatorname{\rm 1\mskip-4.5mu l}_{\Delta}(H_{ \lambda,\Lambda_{L}})\tilde{V}_{\omega,\Lambda_{L}}).\] Applying expectation values and using (3.13), which we show below, we find for \(q\in(0,1)\) \[\mathbf{E}\operatorname{tr}(\mathds{1}_{\Delta}(H_{\lambda,\Lambda_{L}})) \lesssim\mathbf{E}\operatorname{tr}(\mathds{1}_{\Delta}(H_{\lambda,\Lambda_{L}} )\tilde{V}_{\omega,\Lambda_{L}})\lesssim_{q}|\Delta|^{q}|\Lambda_{L}| \tag{3.9}\] which shows the result by using a partition of small intervals \(\Delta\) covering \(I\). 
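The scaling in (3.9), linear in the volume and in \(|\Delta|\) up to the exponent \(q\), can be illustrated in a toy model. The following Monte Carlo sketch (a one-dimensional discrete Anderson model, not the continuum Hamiltonian \(H_{\lambda,\Lambda_{L}}\)) estimates \(\mathbf{E}\operatorname{tr}(\mathds{1}_{I}(H))\) and exhibits approximately linear growth in the interval length and in the system size.

```python
import numpy as np

rng = np.random.default_rng(2)

def anderson(L, lam=0.75):
    """1D Anderson model on L sites with periodic boundary conditions (toy stand-in)."""
    H = -np.diag(np.ones(L - 1), 1) - np.diag(np.ones(L - 1), -1)
    H[0, -1] = H[-1, 0] = -1.0
    return H + lam * np.diag(rng.uniform(-1, 1, L))

def expected_count(L, width, E0=0.0, samples=200):
    """Monte Carlo estimate of E tr(1_I(H)) for I = [E0 - width/2, E0 + width/2]."""
    counts = []
    for _ in range(samples):
        E = np.linalg.eigvalsh(anderson(L))
        counts.append(np.count_nonzero(np.abs(E - E0) <= width / 2))
    return np.mean(counts)

for L in (100, 200):
    for width in (0.05, 0.1, 0.2):
        print(L, width, expected_count(L, width))   # roughly proportional to width * L
```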
### Spectral shift function and Holder continuity To obtain the Holder estimate, used to show (3.9), we recall the definition of the _spectral shift function_, first. Let \(H_{0}\) and \(H_{1}\) be two self-adjoint operator such that \(H_{1}-H_{0}\) is trace-class, then the spectral shift function is defined as, see [22, Ch. 8, Sec. 2, Theo. 1] \[\xi(\lambda,H_{1},H_{0}):=\frac{1}{\pi}\lim_{\varepsilon\downarrow 0}\arg \det(\operatorname{id}+(H_{1}-H_{0})(H_{0}-\lambda-i\varepsilon)^{-1}).\] In particular for any \(p\geq 1\) one has the \(L^{p}\) bound [16, Theorem 2.1] \[\|\xi(\bullet,H_{1},H_{0})\|_{L^{p}}\leq\|H_{1}-H_{0}\|_{1/p}^{1/p} \tag{3.10}\] where the right-hand side is defined as the generalized Schatten norm \[\|T\|_{q}=\Big{(}\sum_{\lambda\in\operatorname{Spec}(T^{*}T)}\lambda^{q/2} \Big{)}^{1/q}.\] We then start by setting \(\varphi(x):=\arctan(x^{n})\), with \(n\in 2\mathbb{N}_{0}+1\) sufficiently large, such that \(h_{1}-h_{0}\) is trace-class, with \(h_{0}:=\varphi(H_{0})\) and \(h_{1}:=\varphi(H_{1})\). Then, we have the _Birman-Krein formula_, see [22, Ch.8, Sec. 11, Lemma 3] stating that for absolutely continuous \(f\) \[\operatorname{tr}(f(H_{1})-f(H_{0}))=\int_{\mathbb{R}}\xi(\varphi(\lambda),h_ {1},h_{0})\ df(\lambda).\] Let \(\Delta=[a,b]\) then we start by defining \[s(x):=\begin{cases}0&x\leq 0\\ 3x^{2}-2x^{3}&0\leq x\leq 1\\ 1&1\leq x\end{cases}\] and \[f_{\Delta}(t):=1-s\Bigg{(}\tfrac{t-a+\tfrac{1}{2}|\Delta|}{2|\Delta|}\Bigg{)}. \tag{3.11}\] We observe that this function satisfies \(\inf_{t\in\Delta}(-f^{\prime}_{\Delta}(t))=9/8\). Thus, we have for \(C>0\) \[\mathds{1}_{\Delta}(H_{\lambda,\Lambda_{L}})\leq-C|\Delta|f^{\prime}_{\Delta} (H_{\lambda,\Lambda_{L}})\] which implies \[\operatorname{tr}(\lambda\tilde{V}_{\omega,\Lambda_{L}}\, \operatorname{1\!\!1}_{\Delta}(H_{\lambda,\Lambda_{L}})) \leq-C|\Delta|\operatorname{tr}(\lambda\tilde{V}_{\omega,\Lambda_{L }}f^{\prime}_{\Delta}(H_{\lambda,\Lambda_{L}}))\] \[=-C|\Delta|\sum_{\gamma\in\tilde{\Lambda}_{L}}\partial_{\omega_{ \gamma}}\operatorname{tr}(f_{\Delta}(H_{\lambda,\Lambda_{L}})).\] Applying the expectation value to this inequality, we find by positivity of \(g\), the density of \(\omega_{\gamma}\), that for \(\mathbf{E}_{\gamma}\) the expectation value with respect to all random variables (\(\xi_{\gamma^{\prime}}\)) and all \(\omega_{\gamma^{\prime}}\) apart from \(\gamma^{\prime}=\gamma\) \[\mathbf{E}\operatorname{tr}(\lambda\tilde{V}_{\omega,\Lambda_{L} }\,\operatorname{1\!\!1}_{\Delta}(H_{\lambda,\Lambda_{L}}))\leq-\sum_{\gamma \in\tilde{\Lambda}_{L}}\mathbf{E}_{\gamma}C|\Delta|\int_{0}^{1}g(\omega_{ \gamma})\partial_{\omega_{\gamma}}\operatorname{tr}(f_{\Delta}(H_{\lambda, \Lambda_{L}}))\ d\omega_{\gamma}\] \[\leq-\sum_{\gamma\in\tilde{\Lambda}_{L}}\mathbf{E}_{\gamma}C| \Delta|\|g\|_{\infty}\int_{0}^{1}\partial_{\omega_{\gamma}}\operatorname{tr}( f_{\Delta}(H_{\lambda,\Lambda_{L}}))\ d\omega_{\gamma}\] \[\leq-C|\Delta|\|g\|_{\infty}\sum_{\gamma\in\tilde{\Lambda}_{L}} \mathbf{E}_{\gamma}\operatorname{tr}(f_{\Delta}(H_{\lambda,\Lambda_{L}}( \omega_{\gamma}=1))-f_{\Delta}(H_{\lambda,\Lambda_{L}}(\omega_{\gamma}=0)))\] \[=C|\Delta|\|g\|_{\infty}\sum_{\gamma\in\tilde{\Lambda}_{L}}\int_{ \operatorname{supp}(f_{\Delta})}f^{\prime}_{\Delta}(t)\mathbf{E}_{\gamma} \xi(\varphi(t),\varphi(H_{\lambda,\Lambda_{L}}(\omega_{\gamma}=1)),\varphi(H_ {\lambda,\Lambda_{L}}(\omega_{\gamma}=0)))\ dt, \tag{3.12}\] where \(H_{\lambda,\Lambda_{L}}(\omega_{\gamma}=\zeta)\) is the Hamiltonian \(H_{\lambda,\Lambda_{L}}\) with 
\(\omega_{\gamma}\) replaced by the constant \(\zeta\) and \(|\operatorname{supp}(f_{\Delta})|=\mathcal{O}(|\Delta|).\) Thus, using Holder's inequality, we find for any \(\beta\in(0,1)\) with (3.10) and \(n\) in the arctan regularization \(\varphi\) sufficiently large2 Footnote 2: using \(\varphi(t)-\varphi(t_{0})=\int_{t_{0}}^{t}\frac{ns^{n-1}}{1+s^{2n}}\ ds\) we can create, by choosing \(n\) sufficiently large, arbitrarily large powers of the resolvent. This yields the desired trace-class condition. \[\mathbf{E}\operatorname{tr}(\lambda\tilde{V}_{\omega,\Lambda_{L}}\, \operatorname{1\!\!1}_{\Delta}(H_{\lambda,\Lambda_{L}}))\lesssim|\Delta|^{ \beta}|\Lambda_{L}| \tag{3.13}\] which is the identity used to obtain (3.9). ### Spectral averaging and Lipschitz continuity We now complete the proof of Lipschitz continuity and follow an argument developed initially by Combes and Hislop [1, Corr. 4.2] for Schrodinger operators. Proof of Theorem 2 (Lipschitz continuity).: Let \(E=\max\{|E_{1}|,|E_{2}|\}\) where \(\Delta=[E_{1},E_{2}]\), then \[\begin{split}\mathbf{E}(\operatorname{tr}(\operatorname{1\!1}_{ \Delta}(H_{\lambda,\Lambda_{L}})))&\leq e^{E^{2}}\mathbf{E}( \operatorname{tr}(\operatorname{1\!1}_{\Delta}(H_{\lambda,\Lambda_{L}})e^{-H _{\lambda,\Lambda_{L}}^{2}}))\\ &\leq e^{E^{2}}\sum_{j\in\bar{\Lambda}_{L}}\left(\|\mathbf{E}( \chi_{j}\operatorname{1\!1}_{\Delta}(H_{\lambda,\Lambda_{L}})\chi_{j})\|\sup_ {\omega\in\Omega}\operatorname{tr}\left(\chi_{j}e^{-H_{\lambda,\Lambda_{L}}^{2 }}\right)\right)\\ &\lesssim e^{E^{2}}\sum_{j\in\bar{\Lambda}_{L}}\|\mathbf{E}( \chi_{j}\operatorname{1\!1}_{\Delta}(H_{\lambda,\Lambda_{L}})\chi_{j})\|, \end{split} \tag{3.14}\] where we used that \(\sup_{\omega\in\Omega}\operatorname{tr}\left(\chi_{j}e^{-H_{\lambda,\Lambda_{ L}}^{2}}\right)\) is uniformly bounded in all parameters. Under the assumptions of Case 2, we know that \(u_{j}\) are strictly positive on \(\operatorname{supp}(\chi_{j})\) thus also \(0\leq\chi_{j}^{2}\lesssim u_{j}\) which is the necessary condition [1, (4.2)] to apply spectral averaging which readily implies together with (3.14) that \[\mathbf{E}(\operatorname{tr}(\operatorname{1\!1}_{\Delta}(H_{\lambda,\Lambda_ {L}})))\lesssim|\Delta||\Lambda_{L}| \tag{3.15}\] which is the identity (3.9) with \(\beta=1\) for Case 2 disorder. ## 4. Mobility edge To prove Theorem 3 we recall Germinet and Klein's notion of summable uniform decay of correlations (SUDEC), see [10]. **Definition 4.1** (Sudec).: _The Hamiltonian \(H_{\lambda}\) is said to exhibit a.e. SUDEC in an interval \(J\) if its spectrum is pure point and for every closed \(I\subset J\), for \(\{\varphi_{n}\}\) an orthonormal set of eigenfunctions of \(H_{\lambda}\) with eigenvalues \(E_{n}\in I\), we define \(\beta_{n}:=\|\langle z\rangle^{-2}\varphi_{n}\|^{2}.\) Then for \(\zeta\in(0,1)\) there is \(C_{I,\zeta}<\infty\) such that_ \[\|\chi_{z}(\varphi_{n}\otimes\varphi_{n})\chi_{w}\|\leq C_{I,\zeta}\beta_{n} \langle z\rangle^{2}\langle w\rangle^{2}e^{-|z-w|^{\zeta}}\text{ for }w,z\in\mathbb{C}\] _and in addition one has \(\mathbf{P}\)-almost surely_ \[\sum_{n\in\mathbb{N}}\beta_{n}<\infty. \tag{4.1}\] The strategy to establish delocalization is to show that if the Hamiltonian would exhibit only SUDEC-type localization (SUDEC), then this would contradict the non-vanishing Chern numbers of the flat bands. 
### The ingredients to the multi-scale analysis For the applicability of the multi-scale analysis a la Germinet-Klein we require six ingredients of our Hamiltonian often referred to in their works by the acronyms, as introduced in [10], * Strong generalized eigenfunction expansion **SGEE** (Lemma 4.2), * Simon-Lieb inequality **SLI** and exponential decay inequality **EDI** (both Lemma 4.3), * Number of eigenvalues estimate **NE** and Wegner estimate **W** (both (4.3) and Prop. 1.3), and * Independence at a distance **IAD**. Here, independence at a distance (IAD) just follows from the choice of Anderson-type randomness and means that the disorder of the potentials at a certain distance are independent of each other. We then start with the strong generalized eigenfunction expansion (SGEE). Therefore, we introduce Hilbert spaces \[\mathcal{H}_{\pm}:=L^{2}(\mathbb{C},\mathbb{C}^{4};\langle z\rangle^{\pm 4\nu} \ dz).\] **Lemma 4.2** (Sgee).: _Let \(\nu\geq 1/2\). The set \(D_{+}^{\omega}:=\{\phi\in D(H_{\lambda})\cap\mathcal{H}_{+};H_{\lambda}\phi \in\mathcal{H}_{+}\}\) is dense in \(\mathcal{H}_{+}\) and a core of \(H_{\lambda}.\) Moreover, for \(\mu\in\mathbb{R}\setminus\{0\}\) we have_ \[\mathbf{E}\Bigg{(}\operatorname{tr}\Big{(}\langle z\rangle^{-2\nu}(H_{ \lambda}-i\mu)^{-4}\,\mathrm{1\hskip-2.845276ptl}_{I}(H_{\lambda})\langle z \rangle^{-2\nu}\Big{)}^{2}\Bigg{)}<\infty.\] (SGEE) Proof.: The statement about the core is immediate, as \(C_{c}^{\infty}(\mathbb{C};\mathbb{C}^{4})\) is a core, see for instance Theorem 8. The second statement follows as \(\langle z\rangle^{-2\nu}(H_{\lambda}-i\mu)^{-2}\) is a uniformly bounded (in \(\omega\)) Hilbert-Schmidt operator. This is easily seen by The Simon-Lieb inequality (SLI), relating resolvents at different scales, and the eigenfunction decay inequality (EDI), relating decay of finite-volume resolvents to the decay of generalized eigenfunctions and thus Anderson localization, are discussed in the next Lemma. We thus define \(\Xi_{\Lambda_{L}(z)}\) to be the characteristic function of the belt \[\Upsilon_{L}(z):=\Lambda_{L-1}(z)\setminus\Lambda_{L-3}(z).\] For \(z\in\Gamma\) and \(l>4\), we define smooth cut-off functions \(\tilde{\chi}_{\Lambda_{l}(z)}\in C_{c}^{\infty}(\mathbb{C};[0,1])\) that are equal to one on \(\Lambda_{l-3}(z)\) and \(0\) on \(\mathbb{C}\setminus\Lambda_{l-5/2}(z)\). **Lemma 4.3** (Sli & Edi).: _Let \(J\) be a compact interval. For \(L,l^{\prime},l^{\prime\prime}\in 2\mathbb{N}\) and \(x,y^{\prime},y^{\prime\prime}\in\Gamma\) with \(\Lambda_{l^{\prime\prime}}(y)\subsetneq\Lambda_{l^{\prime}}(y^{\prime}) \subsetneq\Lambda_{L}(x)\), then \(\mathbf{P}\)-a.s.: If \(E\in J\cap(\operatorname{Spec}(H_{\lambda,\Lambda_{l}(x)})\cap\operatorname{ Spec}(H_{\lambda,\Lambda_{l^{\prime}}(y^{\prime})}))^{c}\) then the Simon-Lieb inequality holds_ \[\begin{split}\|\Xi_{\Lambda_{L}(x)}(H_{\lambda,\Lambda_{L}(x)}-E )^{-1}\chi_{\Lambda_{l^{\prime\prime}}(y)}\|&\lesssim_{J}\|\Xi_{ \Lambda_{l^{\prime}}(y^{\prime})}(H_{\lambda,\Lambda_{l^{\prime}}(y^{\prime} )}-E)^{-1}\chi_{\Lambda_{l^{\prime\prime}}(y)}\|\\ &\times\|\Xi_{\Lambda_{L}(x)}(H_{\lambda,\Lambda_{L}(x)}-E)^{-1} \Xi_{\Lambda_{l^{\prime}}(y^{\prime})}\|.\end{split}\] (SLI) _In addition, we have \(\mathbf{P}\)-a.s. that any \(z\in\Gamma\), and any generalized eigenfunction \(\psi\), i.e. 
\(\psi\) solving \((H_{\lambda}-E)\psi=0\) and growing at most polynomially, with \(E\in J\cap\operatorname{Spec}(H_{\lambda,\Lambda_{L}(x)})^{c}\) one has the eigenfunction decay inequality_ \[\|\chi_{z}\psi\|\lesssim_{J}\|\Xi_{\Lambda_{L}(x)}(H_{\lambda,\Lambda_{L}(x)}- E)^{-1}\chi_{z}\|\|\Xi_{\Lambda_{L}(x)}\psi\|.\] (EDI) Proof.: 1. The proof of the SLI can be streamlined for linear differential operators with disorder of Anderson-type. We start from the following resolvent identity \[(H_{\lambda}-E)\tilde{\chi}_{\Lambda_{l^{\prime}}(y^{\prime})}(H_{ \lambda,\Lambda_{L}(x)}-E)^{-1} =[(H_{\lambda}-E),\tilde{\chi}_{\Lambda_{l^{\prime}}(y^{\prime})}] (H_{\lambda,\Lambda_{L}(x)}-E)^{-1}\] \[+\tilde{\chi}_{\Lambda_{l^{\prime}}(y^{\prime})}(H_{\lambda}-E)(H _{\lambda,\Lambda_{L}(x)}-E)^{-1}.\] Using that by assumption \(\Lambda_{l^{\prime}}(y^{\prime})\subset\Lambda_{L}(x)\) we have \(\chi_{\Lambda_{l^{\prime}}(y^{\prime})}H_{\lambda}=\chi_{\Lambda_{l^{\prime}}( y^{\prime})}H_{\lambda,\Lambda_{L}(x)}\) and find by substituting \(\chi_{\Lambda_{l^{\prime}}(y^{\prime})}H_{\lambda}\) in the last line above \[(H_{\lambda}-E)\tilde{\chi}_{\Lambda_{l^{\prime}}(y^{\prime})}(H_{\lambda, \Lambda_{L}(x)}-E)^{-1}=[H_{\lambda},\tilde{\chi}_{\Lambda_{l^{\prime}}(y^{ \prime})}](H_{\lambda,\Lambda_{L}(x)}-E)^{-1}+\tilde{\chi}_{\Lambda_{l^{ \prime}}(y^{\prime})}.\] Since \(H_{\lambda}\tilde{\chi}_{\Lambda_{l^{\prime}}(y^{\prime})}=H_{\lambda, \Lambda_{l^{\prime}}(y^{\prime})}\tilde{\chi}_{\Lambda_{l^{\prime}}(y^{\prime})}\) we find by multiplying the previous line by \((H_{\lambda,\Lambda_{l^{\prime}}(y^{\prime})}-E)^{-1}\) that \[\tilde{\chi}_{\Lambda_{l^{\prime}}(y^{\prime})}(H_{\lambda, \Lambda_{L}(x)}-E)^{-1} =(H_{\lambda,\Lambda_{l^{\prime}}(y^{\prime})}-E)^{-1}[H_{\lambda,\Lambda_{l^{\prime}}(y^{\prime})},\tilde{\chi}_{\Lambda_{l^{\prime}}(y^{ \prime})}](H_{\lambda,\Lambda_{L}(x)}-E)^{-1}\] \[+(H_{\lambda,\Lambda_{l^{\prime}}(y^{\prime})}-E)^{-1}\tilde{\chi} _{\Lambda_{l^{\prime}}(y^{\prime})}.\] Multiplying this equation from the left by \(\chi_{\Lambda_{l^{\prime\prime}}(y)}\) and from the right by \(\Xi_{\Lambda_{L}(x)}\), the SLI ready follows from the boundedness of \([H_{\lambda,\Lambda_{l^{\prime}}(y^{\prime}),},\tilde{\chi}_{\Lambda_{l^{ \prime}}(y^{\prime})}]\) and submultiplicativity of the operator norm, as \(\tilde{\chi}_{\Lambda_{l^{\prime}}(y^{\prime})}\Xi_{\Lambda_{L}(x)}=0\) implies that the second term on the right vanishes and \[[H_{\lambda,\Lambda_{l^{\prime}}(y^{\prime}),},\tilde{\chi}_{\Lambda_{l^{ \prime}}(y^{\prime})}]=\Xi_{\Lambda_{l^{\prime}}(y^{\prime})}[H_{\lambda, \Lambda_{l^{\prime}}(y^{\prime}),},\tilde{\chi}_{\Lambda_{l^{\prime}}(y^{ \prime})}]\Xi_{\Lambda_{l^{\prime}}(y^{\prime})}.\] (4.2) 2. For the proof of the EDI, it suffices to choose \(\psi\) as in the Lemma and observe the resolvent identity \((H_{\lambda,x,L}-E)^{-1}[H_{\lambda},\tilde{\chi}_{\Lambda_{L}(x)}]\psi= \tilde{\chi}_{\Lambda_{L}(x)}\psi\) which is easily verified by using that \((V_{\omega}-V_{\omega,\Lambda_{L}(x)})\tilde{\chi}_{\Lambda_{L}(x)}=0\) as well as \(H_{\lambda}\psi=E\psi.\) Using then an analogue of (4.2), \([H_{\lambda},\tilde{\chi}_{\Lambda_{L}(x)}]=\Xi_{\Lambda_{L}(x)}[H_{\lambda}, \tilde{\chi}_{\Lambda_{L}(x)}]\Xi_{\Lambda_{L}(x)}\), together with the boundedness of the commutator shows the claim. We complete our preparations by discussing the estimate on the number of eigenvalues (NE) and the Wegner estimate (W). 
The estimate on the number of eigenvalues (NE) is the estimate stated in Proposition 1.3. The Wegner estimate is then obtained by applying the estimate in Proposition 1.3 to the last expression in this set of inequalities \[\begin{split}\mathbf{P}(d(\operatorname{Spec}(H_{\lambda,\Lambda_{ L}}),E)<\eta)&=\mathbf{P}(\operatorname{rank}\,\mathrm{1}\!\!1_{(E- \eta,E+\eta)}(H_{\lambda,\Lambda_{L}})\geq 1)\\ &\leq\mathbf{E}(\operatorname{tr}(\mathrm{1}\!\!1_{(E-\eta,E+\eta) }(H_{\lambda,\Lambda_{L}}))).\end{split} \tag{4.3}\] ### Dynamical delocalization In this subsection we prove Theorem 3. To imitate the proof of delocalization in [6], we shall study the third power of the random Hamiltonian (1.11), since \(H^{3}(M)\hookrightarrow L^{2}(M)\), for \(M\) a two-dimensional compact manifold, is a trace-class embedding3 and \(x\mapsto x^{3}\) is bijective, by defining Footnote 3: Recall that \(\lambda_{k}\sim_{M}k\) is the Weyl asymptotics of the negative Laplacian in dimension \(2\); thus \(\sum_{k}k^{-3/2}<\infty\) \[S_{\lambda}:=H_{\lambda}^{3}.\] Let \(\mathcal{C}_{\pm}:=\partial B_{|\lambda|\sup_{\omega\in\Omega}\|V_{\omega}\|_{ \infty}}(\pm m)\) such that \(\mathcal{C}_{\pm}\) encircles the spectrum of the random perturbation of a single flat band, but nothing else (if \(m=0\), then \(\mathcal{C}_{\pm}\) both coincide, we shall explain the modifications of this case at the end of this section). This is possible for sufficiently small noise \(\lambda>0\) as the flat band at energies \(\pm m\) are strictly gapped (1.8) from all higher bands, in the absence of disorder. We then define the \(L^{2}(\mathbb{C};\mathbb{C}^{4})\)-valued spectral projection \[P_{\lambda,\pm}=-\frac{1}{2\pi i}\int_{\mathcal{C}_{\pm}^{3}}(S_{\lambda}-z)^{ -1}\ dz, \tag{4.4}\] where by \(\mathcal{C}_{\pm}^{3}\) we just mean the set of elements in \(\mathcal{C}_{\pm}\) raised to the third power. The delocalization argument rests on the following two pillars: * If the random Hamiltonian exhibits only dynamical localization close to \(\pm m\), then this implies that the partial Chern numbers of \(P_{\lambda,\pm}\), defined in section B, have to vanish, see Prop. B.2. * The partial Chern numbers of \(P_{\lambda,\pm}\) are invariant under disorder as well as small perturbations in \(\alpha\) away from perfect magic angles. As a consequence, the Hamiltonian exhibits dynamical delocalization at energies close to \(\pm m\). To simplify the notation, we drop the \(\pm\) and just focus on \(+m\), since \(-m\) can be treated analogously. The central object in this discussion is the Hall conductance. Assuming \(\|P[[P,\Theta_{1}],[P,\Theta_{2}]]\|_{1}<\infty\) for a spectral projection \(P\) and multiplication operators \(\Theta_{1}(z):=\mathchoice{\rm 1\mskip-4.0mu l}{\rm 1\mskip-4.0mu l}{\rm 1 \mskip-4.5mu l}{\rm 1\mskip-5.0mu l}_{[1/2,\infty)}({\rm Re}\,z)\) and \(\Theta_{2}(z):=\mathchoice{\rm 1\mskip-4.0mu l}{\rm 1\mskip-4.0mu l}{\rm 1 \mskip-4.5mu l}{\rm 1\mskip-5.0mu l}_{[1/2,\infty)}({\rm Im}\,z)\) the Hall conductance is defined by \[\Omega(P):={\rm tr}(P[[P,\Theta_{1}],[P,\Theta_{2}]])={\rm tr}([P\Theta_{1}P,P\Theta_{2}P]). 
\tag{4.5}\] Here, \(\kappa=-i[P\Theta_{1}P,P\Theta_{2}P]\) is also called the _adiabatic curvature_ with Hall charge transport \(Q=-2\pi\,{\rm tr}(\kappa).\) That \(Q\) is an integer is shown for example in [1, Theorem 8.2] or [1, (49),(58)] where it is related to Chern characters and Fredholm indices, respectively, and then [1, Theorem 1] where this quantity is discussed for periodic and quasi-periodic operators. Proof of Theo. 3.: Since \(H^{3}(M)\hookrightarrow L^{2}(M)\xrightarrow{\text{extension}}L^{2}(\mathbb{C})\) is a trace-class embedding, for bounded open sets \(M\), it follows that there is a universal constant \(K_{1}>0\) such that for sufficiently small disorder \(\lambda\) and \(\mu\in\mathcal{C}^{3}\) with \(\mathcal{C}^{3}\) as above in trace norm \[\|(S_{\lambda}-\mu)^{-1}\chi_{z}\|_{1}\leq K_{1}\text{ for all }z\in\Gamma. \tag{4.6}\] Next, we are going to construct an analogue of the Combes-Thomas estimate (CTE) for the operator \(S_{\lambda}\): By conjugating the operator \(S_{\lambda}\) with \(e^{f}\) where \(f\) is some smooth function, we find \[e^{f}S_{\lambda}e^{-f}=S_{\lambda}+R_{f},\] where \[\|R_{f}\|_{L(H^{3},L^{2})}\lesssim\varepsilon\text{ if }\|\partial^{\beta}f\|_{ \infty}\leq\varepsilon\ll 1\text{ for all }1\leq|\beta|\leq 3.\] This implies that for \(z\notin\operatorname{Spec}(S_{\lambda})\) \[e^{f}(S_{\lambda}-z)e^{-f}=(\operatorname{id}+R_{f}(S_{\lambda}-z)^{-1})(S_{ \lambda}-z).\] Thus, for \(z\notin\operatorname{Spec}(S_{\lambda})\) and \(\varepsilon>0\) sufficiently small such that \(\|R_{f}(S_{\lambda}-z)^{-1}\|<1\), \[\|e^{-f}(S_{\lambda}-z)^{-1}e^{f}\|_{L(L^{2},H^{3})}=\mathcal{O}(\langle d( \operatorname{Spec}(S_{\lambda}),z)^{-1}\rangle).\] We conclude that for \(f(z):=\varepsilon\langle z-w_{0}\rangle\) with \(w_{0}\in\mathbb{C}\) fixed, we have for all \(w\in\mathbb{C}\) \[\|\chi_{w_{0}}(S_{\lambda}-z)^{-1}\chi_{w}\|=\|\chi_{w_{0}}e^{f}(e^{-f}(S_{ \lambda}-z)^{-1}e^{f})e^{-f}\chi_{w}\|=\mathcal{O}\Big{(}\tfrac{e^{-\varepsilon \langle w-w_{0}\rangle}}{d(\operatorname{Spec}(S_{\lambda}),z)}\Big{)},\] (CTE) as well as \[\begin{split}\|\chi_{w_{0}}(S_{\lambda}-S_{0})(S_{\lambda}-z)^{- 1}\chi_{w}\|&=\|\chi_{w_{0}}e^{f}\|\|e^{-f}(S_{\lambda}-S_{0})e^{ f}\|_{L(H^{3},L^{2})}\\ &\quad\times\|e^{-f}(S_{\lambda}-z)^{-1}e^{f}\|_{L(L^{2},H^{3})} \|e^{-f}\chi_{w}\|\\ &=\mathcal{O}(\tfrac{\|e^{-f}\chi_{w}\|}{d(\operatorname{Spec}(S_ {\lambda}),z)})=\mathcal{O}\Big{(}\tfrac{e^{-\varepsilon\langle w-w_{0}\rangle }}{d(\operatorname{Spec}(S_{\lambda}),z)}\Big{)}.\end{split} \tag{4.7}\] From the Combes-Thomas estimate (CTE) and (4.4) we find the exponential estimate \[\|\chi_{w_{0}}P_{\lambda}\chi_{w}\|\lesssim e^{-\varepsilon|w-w_{0}|}. \tag{4.8}\] By [1, Lemma 3.1], this implies that \[\|P_{\lambda}[[P_{\lambda},\Theta_{1}],[P_{\lambda},\Theta_{2}]]\|_{1}<\infty,\] which implies that the Hall conductance is well-defined. In fact, using (4.6) we have \[\|\chi_{w}P_{\lambda}\chi_{w_{0}}\|_{1}=\mathcal{O}(1)\text{ and }\|\chi_{w}P_{ \lambda}\chi_{w_{0}}\|_{2}^{2}\leq\|\chi_{w}P_{\lambda}\chi_{w_{0}}\|_{1}\| \chi_{w}P_{\lambda}\chi_{w_{0}}\|=\mathcal{O}(e^{-\varepsilon|w-w_{0}|}). \tag{4.9}\] To obtain the invariance of the Chern number under small disorder, we now define \[Q_{\lambda,\zeta}:=P_{\zeta}-P_{\lambda}=\frac{\zeta-\lambda}{2\pi i}\int_{ \mathcal{C}^{3}}(S_{\lambda}-z)^{-1}\frac{(S_{\zeta}-S_{\lambda})}{(\zeta- \lambda)}(S_{\zeta}-z)^{-1}\ dz. 
\tag{4.10}\] then by (4.9) we find \[\|\chi_{w}Q_{\lambda,\zeta}\chi_{w_{0}}\|_{2}^{2}=\mathcal{O}(e^{-\varepsilon| w-w_{0}|}). \tag{4.11}\] If the random potential has compact support, i.e. \(H_{\lambda}\) in (1.11) is replaced by \[H_{\lambda}(L)=H+\lambda V_{\omega}\text{ where }V_{\omega}=\sum_{\gamma\in \tilde{\Lambda}_{L}}\omega_{\gamma}u(\bullet-\gamma-\xi_{\gamma}), \tag{4.12}\] for some \(L>0\), then by using a partition of unity and (4.6), we find \(\|Q_{\lambda,\zeta}\|_{1}<\infty\) and consequently the traces of all commutators vanish \[\begin{split}\Omega(P_{\zeta})-\Omega(P_{\lambda})=& \operatorname{tr}([Q_{\lambda,\zeta}\Theta_{1}P_{\zeta},P_{\zeta} \Theta_{2}P_{\zeta}]+[P_{\lambda}\Theta_{1}Q_{\lambda,\zeta},P_{\zeta}\Theta_{ 2}P_{\zeta}]\\ &+[P_{\lambda}\Theta_{1}P_{\lambda},Q_{\lambda,\zeta}\Theta_{2}P_ {\zeta}]+[P_{\lambda}\Theta_{1}P_{\lambda},P_{\lambda}\Theta_{2}Q_{\lambda, \zeta}])=0.\end{split} \tag{4.13}\] So the integer-valued map \(\lambda\mapsto\Omega(P_{\lambda})\) is constant for \(\lambda\) small around zero, under the assumption of a compactly supported random potential in (4.12). It remains now to drop the compact support constraint on the random potential in (4.12). Let \(S_{\lambda}(L)=H_{\lambda}(L)^{3}\), then we define \[Q_{\lambda,>L}:=P_{\lambda}-P_{\lambda}(L)=\frac{\lambda}{2\pi i}\int_{ \mathcal{C}^{3}}(S_{\lambda}-z)^{-1}(S_{\lambda}-S_{\lambda}(L))(S_{\lambda}( L)-z)^{-1}\ dz, \tag{4.14}\] where \(P_{\lambda}(L)\) is the corresponding spectral projection associated with \(S_{\lambda}(L).\) By the Combes-Thomas estimates (CTE) and the resolvent identity (4.14), we find \[\|\chi_{w}Q_{\lambda,>L}\chi_{w_{0}}\|\lesssim e^{-c\varepsilon((L-R-|w|)_{+} +(L-R-|w_{0}|)_{+}+|w-w_{0}|)}\] for some \(c>0\), where we used that \(S_{\lambda}-S_{\lambda}(L)\) is zero on \(\Lambda_{L-R}(0).\) Thus, writing the difference of Hall conductivities yields the desired limit \[\begin{split}\Omega(P_{\lambda})-\Omega(P_{\lambda}(L))=& \operatorname{tr}(Q_{\lambda,>L}[[P_{\lambda},\Theta_{1}],[P_{ \zeta},\Theta_{2}]]+P_{\lambda,L}[[Q_{\lambda,>L},\Theta_{1}],[P_{\lambda}, \Theta_{2}]]\\ &+P_{\lambda,L}[[P_{\lambda},\Theta_{1}],[Q_{\lambda,>L},\Theta_{ 2}]])\to 0\text{ as }L\to\infty.\end{split} \tag{4.15}\] Here, one uses the strong limit \(s-\lim_{L\to\infty}Q_{\lambda,>L}=0\) to show the non-vanishing of the first term on the right-hand side in (4.15) and that \[|\operatorname{tr}(P_{\lambda,L}[[Q_{\lambda,>L},\Theta_{1}],[P_{\lambda}, \Theta_{2}]])|\leq 2\sum_{\gamma,\gamma^{\prime}\in\Gamma}\|\chi_{\gamma}[Q_{ \lambda,>L},\Theta_{1}]\chi_{\gamma^{\prime}}\|_{2}\|\chi_{\gamma^{\prime}}[P_ {\lambda},\Theta_{2}]\chi_{\gamma}\|_{2}\] with a similar estimate for the last term in (4.15). The last bound converges to zero for \(L\to\infty\) by using (4.9) and (4.11), see [10, Lemma 3.1 (i)] for details. Thus, the conductivity derived from \(P_{\lambda}\) is locally constant in \(\lambda\) and \(\alpha\), see (4.13), which shows using (1.10) that Chern numbers stay \(\pm 1\), for \(m>0\), respectively. For \(m=0\) we repeat the previous computation with our modified \(\Omega_{i}\) (B.3) to arrive at the same conclusion. Thus, if, in the notation of (1.14), \(\Sigma\cap(-K_{-},K_{-})\subset\Sigma^{\mathrm{DL}}\) then this would contradict the non-vanishing of the (partial) Chern number, see (B.6), in regions of full localization as shown in Prop. B.2. The bound in the statement of Theorem 3 follows then from [10, Theo 2.10]. 
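For the reader's convenience, we record the standard observation behind the vanishing of the traces in (4.13): if \(A\) is trace-class and \(B\) is bounded, then \(AB\) and \(BA\) are trace-class and cyclicity of the trace gives
\[\operatorname{tr}([A,B])=\operatorname{tr}(AB)-\operatorname{tr}(BA)=0.\]
Each commutator in (4.13) is of this form, since one of its two factors contains \(Q_{\lambda,\zeta}\) and is therefore trace-class once \(\|Q_{\lambda,\zeta}\|_{1}<\infty\), while the other factor is a product of projections and switch functions and hence bounded.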
### Dynamical localization Working under assumptions (1), we shall now study the localized phase of the Anderson model of the form \[H_{\lambda}=H+V_{\omega}\text{ where }V_{\omega}=\sum_{\gamma\in\Gamma}\omega_{ \gamma}u(\bullet-\gamma-\xi_{\gamma}). \tag{4.16}\] Here we got rid of the \(\lambda\) parameter but instead consider random variables \(\omega_{\gamma}\) that are distributed according to a bounded density \(g_{\lambda}\) of compact support in \([-\delta,\delta]\) with \(\delta<\min(m,E_{\text{gap}})\) for \(m>0\) and \(\delta<E_{\text{gap}}\) for \(m=0\). Here \(g_{\lambda}\) is a rescaled distribution \(g_{\lambda}(u)=c_{\lambda}g(u/\lambda)/\lambda\operatorname{\text{\rm 1 \kern-3.8ptI}}_{[-\delta,\delta]}\), with \(g>0\), such that as \(\lambda\downarrow 0\) the mass gets concentrated near zero and \(c_{\lambda}\leq C\), uniformly in \(\lambda\), is the normalizing constant. By (1.13), with probability \(1\), the spectrum \(\Sigma\) is independent of \(\lambda.\) Our next theorem shows that the mobility edges can be shown to be located arbitrarily close to the original flat bands, by choosing \(\lambda\) small, while keeping the support of the disorder fixed, within the interval \([-\delta,\delta]\) which is the motivation for our modifications of the Hamiltonian. **Theorem 7** (Mobility edge).: _Let \(\langle\bullet\rangle^{\gamma}g\) be bounded for some \(\gamma>3\) and let \(\tau\in(0,\frac{\gamma-3}{\gamma+1})\). Let \(H_{\lambda}\) be as in Assumption 1 with the modification that \(\lambda\) is incorporated in the rescaled density, as described in (4.16) and \(D\subset\mathbb{C}\) small enough. Then for any \(m>0\) there exist at least two distinct dynamical mobility edges, denoted by \(\mathscr{E}_{+}(\lambda)>\mathscr{E}_{-}(\lambda)\) such that_ \[|\mathscr{E}_{+}(\lambda)-m|+|\mathscr{E}_{-}(\lambda)+m|\lesssim\lambda^{1- \frac{4}{\gamma+1}-\tau}\underset{\lambda\downarrow 0}{\longrightarrow}0.\] _In particular,_ \[\Big{\{}E\in\Big{(}-\sqrt{E_{\text{gap}}^{2}/2+m^{2}},\sqrt{E_{\text{gap}}^{2} /2+m^{2}}\Big{)};|E\pm m|\gtrsim\lambda^{1-\frac{4}{\gamma+1}-\tau}\Big{\}} \subset\Sigma^{\text{DL}},\] _where the region of dynamical localization \(\Sigma^{\text{DL}}\) has been defined in (1.18). In the case of \(m=0\) the same result holds but with only at least one guaranteed mobility edge._ Proof.: We start by observing that using the \(L^{\infty}\) bound on \(\langle\bullet\rangle^{\gamma}g\), we have for any \(\varepsilon>0\) and \(X\sim g_{\lambda}\) \[\mathbf{P}(|X|\geq\varepsilon)=\int_{\delta\geq|x|\geq\varepsilon}g_{\lambda} (x)\ dx\lesssim\int_{\delta/\lambda\geq|x|\geq\varepsilon/\lambda}g(x)\ dx \lesssim\langle\lambda/\varepsilon\rangle^{\gamma-1}. \tag{4.17}\] Thus, for the probability of the low-lying spectrum to be contained in a small interval \([-\varepsilon,\varepsilon]\), we find for \(L_{0}\gg 1\) fixed \[\begin{split}\mathbf{P}\Big{(}\operatorname{Spec}(H_{\lambda, \Lambda_{L_{0}}})\cap\Big{(}-\sqrt{E_{\text{gap}}^{2}/2+m^{2}},\sqrt{E_{ \text{gap}}^{2}/2+m^{2}}\Big{)}\subset\pm m+[-\varepsilon,\varepsilon]\Big{)} \\ \overset{\text{union bound}}{\geq}\mathbf{P}(|\omega_{\gamma}| \leq\varepsilon/2;\gamma\in\tilde{\Lambda}_{L_{0}+R}(x))\\ \overset{\eqref{eq:condcondcondcondcondcondcondcond}}{\geq}(1-C( \lambda/\varepsilon)^{\gamma-1})^{(L_{0}+R)^{2}}\\ \overset{\text{Bernoulli}}{\geq}1-C(\lambda/\varepsilon)^{\gamma-1 }L_{0}^{2}\text{ for small }\lambda/\varepsilon,\end{split}\] for small enough \(\lambda/\varepsilon\). 
We recall that \(R>0\) is such that \(\operatorname{supp}u\subset\Lambda_{R}(0).\) This probability is large, if we choose \[\varepsilon=C\lambda L_{0}^{\frac{2}{\gamma-1}} \tag{4.18}\] for \(C\gg 1.\) To prove localization, one chooses \(L_{0}\gg 1\) large enough, as specified in [10, (2.16)] and \(\lambda\ll 1\). We now fix an energy \(\sqrt{E_{\mathrm{gap}}^{2}/2+m^{2}}\geq|E|\) such that \(|E\pm m|\geq 2\varepsilon\) with \(E\in\Sigma\). Then \(E\) is, with high probability, a distance \(\varepsilon>0\) away from the spectrum of the finite-size Hamiltonian \(H_{\lambda,\Lambda_{L_{0}}}\). In order to show localization, we shall satisfy the finite-size criterion of [10, Theorem 2.4]. This will give us another condition aside from (4.18). Indeed, in our setting the finite-size criterion stated in [10, Theorem 2.4] takes the following form \[\frac{C_{1}L_{0}^{25/3}}{\lambda\varepsilon}e^{-C_{2}\varepsilon L_{0}}<1 \tag{4.19}\] for two constants \(C_{1},C_{2}>0.\) The term \(L_{0}^{25/3}\) is obtained from [10, Theorem 2.4] by choosing (in the notation of [10]) \(b=1,d=2,\) and performing a union bound over a partition of \(\Gamma_{0}\) and \(\chi_{0,L_{0}/3}\) which accounts for another \(L_{0}^{3}.\) The \(\lambda\) in the denominator is due to the scaling of the constant in the Wegner estimate which for us is proportional to the supremum norm of the density, which for us is \(\|g_{\lambda}\|_{\infty}=\mathcal{O}(1/\lambda).\) By [10, Theorem 2.4] one concludes localization if both (4.18) and (4.19) hold. Setting then \(\varepsilon:=C_{3}\lambda L_{0}^{\frac{2}{\gamma-1}}\) with \(C_{3}>0\) sufficiently large to satisfy (4.18), we find that (4.19) becomes \[\frac{C_{1}L_{0}^{25/3}}{C_{3}\lambda^{2}L_{0}^{\frac{2}{\gamma-1}}}e^{-C_{2} C_{3}\lambda^{2}L_{0}^{\frac{\gamma\pm 1}{\gamma-1}}}<1.\] We now also set \(L_{0}^{\frac{2}{\gamma-1}}=\lambda^{-\frac{4}{\gamma+1}-\tau}\) with \(\tau(\gamma)>0\) small such that \(-\frac{4}{\gamma+1}-\tau>-1.\) This means that \(L_{0}^{\frac{\gamma+1}{\gamma-1}}=\lambda^{-2-\tau\frac{\gamma+1}{2}}\) which implies that for \(\lambda\) small enough, (4.19) holds as well. The characterization of the localized regime then follows from [10, Theorem 2.4], the existence of a mobility edge follows together with Theorem 3. ## 5. Decay of point spectrum and Wannier bases We now give the proof of Theo. 4 and 5. We shall mainly focus on the first case and only explain the modifications for the second result at the very end. Proof of Theo.4 & 5.: We first reduce the analysis to \(\lambda=0.\) By \(\lambda\)-continuity of the random perturbation, the spectral projections \(P_{0}=\mbox{1l}_{J_{\pm}}(H_{0})\) and \(P_{\lambda}=\mbox{1l}_{J_{\pm}}(H_{\lambda})\) with \(J_{\pm}\) as in (1.15) \[\|P_{0}-P_{\lambda}\|=\mathcal{O}(|\lambda|)\] by using e.g. the resolvent identity and holomorphic functional calculus and the spectral gap of the Hamiltonian. Thus, for \(\lambda\) small enough there is an isometry [1, Lemma 10]\(U\) such that \(UU^{*}=P_{0}\) and \(U^{*}U=P_{\lambda}.\) In particular \(P_{0}U=UP_{\lambda}.\) It then follows that \(U\) has a Schwartz kernel \(K\) that is exponentially close to the identity, cf. [1, Lemma 8.5]. By this we mean that there is \(\gamma>0\) such that \[|K(z,z^{\prime})-1|=\mathcal{O}(e^{-\gamma|z-z^{\prime}|}).\] The Schur test for integral operators implies that \(\tilde{U}:=\langle\bullet-z_{0}\rangle U\langle\bullet-z_{0}\rangle^{-1}\) is a family of operators uniformly bounded in \(z_{0}\in\mathbb{C}\). 
This implies that for any \(\varphi\in L^{2}(\mathbb{C};\mathbb{C}^{4})\) \[\langle\bullet-z_{0}\rangle^{1+\delta}P_{0}U\varphi=\langle\bullet-z_{0} \rangle^{1+\delta}U\langle\bullet-z_{0}\rangle^{-1-\delta}\langle\bullet-z_{0 }\rangle^{1+\delta}P_{\lambda}\varphi.\] Taking norms, we find, using that \(\|\langle\bullet-z_{0}\rangle^{1+\delta}P_{\lambda}\varphi\|<\infty\) by assumption, that \[\|\langle\bullet-z_{0}\rangle^{1+\delta}P_{0}U\varphi\|<\infty.\] This implies, by choosing for \(U\varphi\) an orthonormal basis of \(\overline{\operatorname{ran}}(P_{0})\), i.e. \((\psi_{n})\) is an orthonormal basis of \(\overline{\operatorname{ran}}(P_{0})\), then \(\varphi_{n}:=U^{*}\psi_{n}\), that \(P_{0}\) exhibits a \((1+\delta)\)-localized generalized Wannier basis. Since \(P_{0}\) is precisely the projection onto \(\ker(D(\alpha))\), we deduce that \(P_{0}\) exhibits a non-zero Chern number, see (1.10), and therefore do not possess a \((1+\delta)\)-localized Wannier basis, see [11] which gives a contradiction. Conversely, let \(P_{k}(\alpha)=(2\pi i)^{-1}\oint_{\gamma}(z-H_{k}(0,\alpha))^{-1}\ dz\), where \(\gamma\) is a sufficiently small circle around zero encircling only the flat band eigenvalue but nothing else in the spectrum of \(H_{k}(0,\alpha)\). Then \(P_{k}(\alpha)\) is the spectral projection onto the flat band eigenfunction of \(H_{k}\). Since \(k\mapsto H_{k}\) is real-analytic, this implies that \(k\mapsto P_{k}\) is real-analytic. Moreover, since \(H_{k-\gamma^{*}}(\alpha)=\tau(\gamma^{*})H_{k}(\alpha)\tau(\gamma^{*})^{-1}\) with \(\tau_{\gamma}(z):=e^{i\operatorname{Re}(z\gamma^{*})}\) with \(\gamma^{*}\in\Gamma_{3}^{*}\), the spectral projection satisfies the covariance relation \[P_{k-\gamma}(\alpha)=\tau(\gamma^{*})P_{k}(\alpha)\tau(\gamma^{*})^{-1}.\] It then follows from [14, Theo. 2.4] that there exists an associated Wannier basis satisfying \(\|\langle\bullet\rangle^{p/2}w_{\gamma}\|_{L^{2}(\mathbb{C})}^{2}\leq C<\infty\) for \(p<1\) and all \(\gamma\in\Gamma\) for the unperturbed periodic problem. Reversing the argument provided in the first part of the proof, it follows that the randomly perturbed problem also exhibits such a Wannier basis. To show Theorem 5 one proceeds analogously and notices that \(P_{\pm,\lambda=0}\) correspond to the projections onto \(\ker(D(\alpha))\) and \(\ker(D(\alpha)^{*})\), each one exhibiting a non-zero Chern number. With this result at hand, we are able to evaluate the quantity (1.17) for the unperturbed Hamiltonian providing a link between the dynamical and spectral theoretic notion of (de)-localization. **Proposition 5.1**.: _Let \(\alpha\) be a simple magic angle, as in Def. 
1.1, then for all \(p\geq 1\)_ \[\left\|\langle\bullet\rangle^{p/2}e^{-itH(\alpha)}P_{\ker(H(\alpha))}\, \mathbbm{1}_{\mathbb{C}/\Gamma_{3}}\right\|_{2}^{2}=\infty,\] _while the left-hand side is finite for \(p<1\)._ Proof.: We start by observing that for an orthonormal basis \((f_{n})\) of \(L^{2}(\mathbb{C}/\Gamma_{3})\) and \((e_{i})\) the standard basis of \(\mathbb{C}^{4}\) \[\left\|\langle\bullet\rangle^{p/2}e^{-itH(\alpha)}P_{\ker(H(\alpha) )}\,\mathrm{1\hskip-2.845276ptl}_{\mathbb{C}/\Gamma_{3}}\right\|_{2}^{2} =\left\|\langle\bullet\rangle^{p/2}P_{\ker(H(\alpha))}\,\mathrm{1 \hskip-2.845276ptl}_{\mathbb{C}/\Gamma_{3}}\right\|_{2}^{2}\] \[=\left\|\langle\bullet\rangle^{p/2}P_{\ker(D(\alpha))\oplus\ker(D (\alpha)^{*})}\,\mathrm{1\hskip-2.845276ptl}_{\mathbb{C}/\Gamma_{3}}\right\|_{2 }^{2}\] \[=\sum_{i=1}^{4}\sum_{n\in\mathbb{N}}\left\|\langle\bullet\rangle^{ p/2}P_{\ker(D(\alpha))\oplus\ker(D(\alpha)^{*})}f_{n}\otimes e_{i}\right\|^{2}\] \[=\left\|\langle\bullet\rangle^{p/2}P_{\ker(D(\alpha))}\,\mathrm{1 \hskip-2.845276ptl}_{\mathbb{C}/\Gamma_{3}}\right\|_{2}^{2}+\left\|\langle \bullet\rangle^{p/2}P_{\ker(D(\alpha)^{*})}\,\mathrm{1\hskip-2.845276ptl}_{ \mathbb{C}/\Gamma_{3}}\right\|_{2}^{2}.\] Without loss of generality, we shall focus on the first summand. Consider the unitary Bloch-Floquet transform \(\mathcal{B}u(z,k):=\sum_{\gamma\in\Gamma_{3}}e^{i(z+\gamma,k)}\mathscr{L}_{ \gamma}u(z)\), where \(L_{\gamma}\) has been defined in (1.4), with the convention that \(\langle z,z_{0}\rangle:=\mathrm{Re}(z\bar{z}_{0})\), and its inverse/adjoint \(\mathcal{C}v(z):=\int_{\mathbb{C}/\Gamma_{3}^{*}}v(z,k)e^{-i\langle z,k\rangle }\ \frac{dm(k)}{|\mathbb{C}/\Gamma_{3}^{*}|}\). We then find that \[\mathscr{L}_{\gamma}\mathcal{C}v(z):=\int_{\mathbb{C}/\Gamma_{3}^{*}}v(z,k)e^ {-i\langle z+\gamma,k\rangle}\ \frac{dm(k)}{|\mathbb{C}/\Gamma_{3}^{*}|}=\mathcal{C}(e^{-i \langle\gamma,k\rangle}v(z,k)). \tag{5.1}\] Since by assumption \(\ker(D(\alpha)+k)=\mathrm{span}\{\varphi(\bullet,k)\}\), we see that \[(e^{-i\langle\gamma,k\rangle}\varphi(z,k))_{\gamma\in\Gamma},\,\text{for $ \varphi(\bullet,k)\in L^{2}(\mathbb{C}/\Gamma_{3})$ normalized}, \tag{5.2}\] forms a basis of the space \(\int_{\mathbb{C}/\Gamma_{3}^{*}}^{\oplus}\ker(D(\alpha)+k)dk\). Indeed, orthonormality just follows from \[\begin{split}\langle e^{-i\langle\gamma,k\rangle}\varphi(z,k),e^ {i\langle\gamma^{\prime},k\rangle}\varphi(z,k)\rangle&=\int_{ \mathbb{C}/\Gamma_{3}^{*}}\int_{\mathbb{C}/\Gamma_{3}}|\varphi(z,k)|^{2}e^{-i \langle\gamma-\gamma^{\prime},k\rangle}\frac{dz\ dk}{|\mathbb{C}/\Gamma_{3}^{ *}|}\\ &=\int_{\mathbb{C}/\Gamma_{3}^{*}}e^{-i\langle\gamma-\gamma^{ \prime},k\rangle}\frac{dk}{|\mathbb{C}/\Gamma_{3}^{*}|}=\delta_{\gamma,\gamma^ {\prime}}\end{split} \tag{5.3}\] and completeness from the completeness of the regular Fourier expansion, i.e. a general element in this subspace is of the form \[\sum_{\gamma\in\Gamma_{3}^{*}}f(\gamma)e^{-i\langle\gamma,k\rangle}v(z,k)\text { for $f\in\ell^{2}(\Gamma_{3}^{*})$}.\] We then have that \(\mathcal{B}D(\alpha)\mathcal{C}\varphi(x,k)=(D(\alpha)+k)\varphi(x,k).\) Recall the trivial decomposition of \(L^{2}\) given by \(L^{2}(\mathbb{C})=L^{2}(\mathbb{C}/\Gamma_{3})\oplus L^{2}(\mathbb{C}\setminus( \mathbb{C}/\Gamma_{3}))\). 
We then find for the Hilbert-Schmidt norm using an orthonormal basis \((e_{n})\) of \(L^{2}(\mathbb{C}/\Gamma_{3})\) \[\left\|\langle\bullet\rangle^{p/2}P_{\ker(D(\alpha))}\,\mathrm{1 \hskip-2.845276ptl}_{\mathbb{C}/\Gamma_{3}}\right\|_{2}^{2}=\left\|\langle \bullet\rangle^{p/2}P_{\ker(D(\alpha))}\,\mathrm{1\hskip-2.845276ptl}_{\mathbb{C }/\Gamma_{3}}\right\|_{2}^{2}\] \[=\sum_{n\in\mathbb{Z}}\|\langle\bullet\rangle^{p/2}P_{\ker(D( \alpha))}e_{n}\|_{L^{2}(\mathbb{C})}^{2}=\sum_{n\in\mathbb{Z}}\|\langle\bullet \rangle^{p/2}\mathcal{C}P_{\ker(D(\alpha)+k)}\mathcal{B}\,\mathrm{1\hskip-2.845276pt l}_{\mathbb{C}/\Gamma_{3}}\,e_{n}\|_{L^{2}(\mathbb{C})}^{2}.\] Since by assumption \(P_{\ker(D(\alpha)+k)}=\varphi(\bullet,k)\otimes\varphi(\bullet,k)\) is a rank \(1\) projection, we have \[\|\langle\bullet\rangle^{p/2}\mathcal{C}P_{\ker(D(\alpha)+k)} \mathcal{B}\,\mathrm{1\hskip-2.845276pt1}_{\mathcal{C}/\Gamma_{3}}\,e_{n}\|_{ L^{2}(\mathbb{C})}^{2} =\|\langle\bullet\rangle^{p/2}\mathcal{C}\varphi\|_{L^{2}(\mathbb{C})}^{2}| \langle\varphi,\mathcal{B}(\mathrm{1\hskip-2.845276pt1}_{\mathcal{C}/\Gamma_{3} }\,e_{n})\rangle_{L^{2}(\mathbb{C}/\Gamma_{3}\times\mathbb{C}/\Gamma_{3}^{*})} |^{2}\] \[=\|\langle\bullet\rangle^{p/2}\mathcal{C}\varphi\|_{L^{2}(\mathbb{C })}^{2}|\langle\mathcal{C}\varphi,\mathrm{1\hskip-2.845276pt1}_{\mathcal{C}/ \Gamma_{3}}\,e_{n}\rangle_{L^{2}(\mathbb{C})}|^{2}.\] This implies that \[\left\|\langle\bullet\rangle^{p/2}P_{\ker(D(\alpha))}\,\mathrm{1 \hskip-2.845276pt1}_{\mathcal{C}/\Gamma_{3}}\right\|_{2}^{2} =\|\langle\bullet\rangle^{p/2}\mathcal{C}\varphi\|_{L^{2}(\mathbb{ C})}^{2}\sum_{n\in\mathbb{Z}}|\langle\mathcal{C}\varphi,e_{n}\rangle_{L^{2}( \mathbb{C}/\Gamma_{3})}|^{2}\] \[=\|\langle\bullet\rangle^{p/2}\mathcal{C}\varphi\|_{L^{2}( \mathbb{C})}^{2}\|\mathcal{C}\varphi\|_{L^{2}(\mathbb{C}/\Gamma_{3})}.\] However, a Wannier basis is obtained from \(\mathcal{C}\varphi\) by defining \(w_{\gamma}:=\mathscr{L}_{\gamma}\mathcal{C}\varphi\). Indeed, using (5.3) functions \(w_{\gamma}\) are an orthonormal basis of \(\ker(D(\alpha))\) as \[\langle w_{\gamma},w_{\gamma^{\prime}}\rangle_{L^{2}(\mathbb{C})}=\langle \mathscr{L}_{\gamma}\mathcal{C}\varphi,\mathscr{L}_{\gamma^{\prime}}\mathcal{C }\varphi\rangle_{L^{2}(\mathbb{C})}=\delta_{\gamma,\gamma^{\prime}}\] and span \(\ker(D(\alpha))\) due to (5.1) and (5.2). Thus, we obtain since \(\mathscr{L}_{\gamma}\) is an isometry that \[\|\langle\bullet\rangle^{p/2}w_{0}\|_{L^{2}(\mathbb{C})}^{2}=\|\mathscr{L}_{ \gamma}\langle\bullet\rangle^{p/2}w_{0}\|_{L^{2}(\mathbb{C})}^{2}=\|\langle \bullet+\gamma\rangle^{p/2}w_{\gamma}\|_{L^{2}(\mathbb{C})}.\] From the non-existence of a \(1\)-localized Wannier basis and the existence of a \((1-\delta)\) Wannier basis, for any \(\delta>0\), see for instance [13], we find that \(\|\langle\bullet\rangle^{p/2}w_{0}\|_{L^{2}(\mathbb{C})}^{2}=\infty\) for \(p\geq 1\) and is finite for \(p<1\). ## Appendix A Essential self-adjointness In this appendix, we recall the essential self-adjointness of our Hamiltonian with even possibly unbounded disorder on \(C_{c}^{\infty}(\mathbb{C})\). 
**Theorem 8**.: _The Hamiltonian \(H_{\lambda}(\alpha)\) (1.11) is, under the more general assumptions, with \(L^{\infty}(\mathbb{R})\)-bounded density \(g\) for random variables \((\omega_{\gamma})\) and arbitrary density \(h\) is almost surely essentially self-adjoint on \(C_{c}^{\infty}(\mathbb{C}).\)_ Proof.: To see that \(H_{\lambda}(\alpha)\) is essentially self-adjoint, we first observe that it is symmetric on \(C_{c}^{\infty}(\mathbb{C}).\) It thus suffices to show that for any \(L^{2}\)-normalized \(\psi\) \[(H_{\lambda}(\alpha)\pm i)\psi=0\text{ implies }\psi\equiv 0,\] i.e. the deficiency indices are zero. Elliptic regularity and the assumption that \(u\in L^{\infty}\) implies that \(\psi\in C^{\infty}(\mathbb{C})\). We then pick a cut-off function \(\eta_{n}(z):=\eta(z/n)\) with \(\eta\in C_{c}^{\infty}(\mathbb{C})\) and \(\eta|_{B_{1}(0)}\equiv 1\) and find \[(H_{\lambda}(\alpha)\pm i)\eta_{n}\psi=\begin{pmatrix}0&2D_{z}\eta_{n}\cdot \mathrm{id}_{\mathbb{C}^{2}}\\ 2D_{\bar{z}}\eta_{n}\cdot\mathrm{id}_{\mathbb{C}^{2}}&0\end{pmatrix}\psi.\] We conclude that \[\|\eta_{n}\psi\|_{2}^{2}+\|H_{\lambda}(\alpha)\eta_{n}\psi\|_{2}^{2}=\|(H_{ \lambda}(\alpha)\pm i)\eta_{n}\psi\|_{2}^{2}\lesssim\|\nabla\eta_{n}\|_{\infty} ^{2}=\mathcal{O}(1/n^{2})\xrightarrow[n\to\infty]{}0.\] Since \(\eta_{n}\psi\to\psi\) by dominated convergence, we conclude that \(\psi\equiv 0\). ## Appendix B Partial Chern numbers Let \(P\) be an orthogonal projection on \(L^{2}(\mathbb{C};\mathbb{C}^{2n})\) such that for some \(\xi\in(0,1)\), \(\kappa>0\), and \(K_{P}<\infty\) we have \[\|\chi_{z_{0}}P\chi_{z_{1}}\|_{2}\leq K_{P}\langle z_{0}\rangle^{\kappa} \langle z_{1}\rangle^{\kappa}e^{-|z_{0}-z_{1}|^{\xi}}\text{ for all }z_{0},z_{1}\in\Gamma.\] (B.1) This condition is satisfied for spectral projections of Hamiltonians under the assumption of (SUDEC), cf. Def. 4.1. Let \(\pi_{1}:=\operatorname{diag}(\operatorname{id}_{\mathbb{C}^{n}},0)\) and \(\pi_{2}:=\operatorname{diag}(0,\operatorname{id}_{\mathbb{C}^{n}})\). By the definition of the Hilbert-Schmidt norm one finds for all \(i,j\) \[\|\chi_{z_{0}}\pi_{i}P\pi_{j}\chi_{z_{1}}\|_{2}\leq\|\chi_{z_{0}}P\chi_{z_{1}} \|_{2}\leq K_{P}\langle z_{0}\rangle^{\kappa}\langle z_{1}\rangle^{\kappa}e^{ -|z_{0}-z_{1}|^{\xi}}\text{ for all }z_{0},z_{1}\in\Gamma.\] (B.2) We define the new \(\hat{\Theta}_{j,i}:=\pi_{i}\Theta_{j}=\Theta_{j}\pi_{i}\) and replace (4.5) by \[\Omega_{i}(P):=\operatorname{tr}(P[[P,\hat{\Theta}_{1}(i)],[P,\hat{\Theta}_{2 }(i)]])\] (B.3) which is well-defined for \[|\Omega_{i}(P)|:=\|P[[P,\hat{\Theta}_{1}(i)],[P,\hat{\Theta}_{2}(i)]]\|_{1}<\infty.\] (B.4) **Remark 4**.: _It is convenient to modify \(\hat{\Theta}_{i}\) rather than \(P\) in the definition of \(\Omega\), since \(\pi_{i}P\pi_{j}\) is in general no longer a projection, even for \(i=j.\)_ Since we still have that \([\hat{\Theta}_{i},\hat{\Theta}_{j}]=0\) we find the equivalent formulation of (B.3) \[\Omega_{i}(P)=\operatorname{tr}[P\hat{\Theta}_{1}(i)P,P\hat{\Theta}_{2}(i)P].\] (B.5) In particular, if \(P\) is a finite-rank projection then, we always find \(\Omega_{i}(P)=0,\) as (B.5) is a commutator of trace-class operators. 
To provide further motivation for the above definition (B.3), we shall consider the unperturbed Hamiltonian \(H_{0}=\begin{pmatrix}m&D^{*}\\ D&-m\end{pmatrix}\) then \(H_{0}^{2}=\operatorname{diag}(D^{*}D+m^{2},DD^{*}+m^{2})\) and consequently any spectral projection of \(H_{0}^{2}\) is also diagonal and thus of the form \(P_{0}=\operatorname{diag}(P_{0}(1),P_{0}(2)).\) Thus, we have \[\Omega_{i}(P_{0})=\operatorname{tr}([P_{0}\hat{\Theta}_{1}(i)P_{0},P_{0}\hat{ \Theta}_{2}(i)P_{0}])=\Omega(P_{0}(i)),\] where we recall from (1.10) that for a generic magic angle and \(P_{0}=\operatorname{1\!\!1}_{[0,\mu]}(H_{0}^{2})\) with \(\mu\in(0,E_{\operatorname{gap}}^{2})\) \[\Omega_{1}(P_{0})=\frac{i}{2\pi}\text{ and }\Omega_{2}(P_{0})=-\frac{i}{2\pi}.\] (B.6) Thus, while \(\Omega(P_{0})=0\) for \(m=0\), we have \(\Omega_{1}(P_{0}),\Omega_{2}(P_{0})\neq 0.\) The definition of \(\Omega_{i}\) captures the non-trivial sublattice Chern numbers of twisted bilayer graphene while the total Chern number vanishes. One also readily verifies the usual properties of Chern characters, see for instance [11, Lemma 3.1], [10]: **Proposition B.1**.: _Let \(P\) be an orthogonal projection satisfying (B.1), then_ 1. \(|\Omega_{i}(P)|\lesssim_{\kappa,\xi}K_{P}^{2}.\)__ 2. _Let_ \(s\in\mathbb{R}\) _and define_ \(\hat{\Theta}_{j,i}^{(s)}(t):=\pi_{i}\Theta_{j}(t-s)\)_, then_ \[\Omega_{i}^{r,s}(P):=\operatorname{tr}(P[[P,\hat{\Theta}_{1,i}^{(s)}],[P,\hat {\Theta}_{2,i}^{(r)}]])\text{ for }r,s\in\mathbb{R}.\] _In particular,_ \[\Omega_{i}^{r,s}=\Omega_{i}.\] (B.7) 3. _Let_ \(P,Q\) _be two orthogonal projections, each satisfying (_B.1_), such that_ \(PQ=QP=0\)_, then_ \[\Omega_{i}(P+Q)=\Omega_{i}(P)+\Omega_{i}(Q).\] Proof.: The first property follows readily from combining (B.2) with the proof [11, Lemma 3.1 (i)]. The second property follows from a direct computation, see [11, Lemma 3.1 (ii)]. The last property follows from \(P[Q,\hat{\Theta}_{i}]=-P\hat{\Theta}_{i}Q\) and evaluating (B.3) since one finds for the cross-terms \[\operatorname{tr}\Big{(}-P\hat{\Theta}_{1}Q\hat{\Theta}_{2}+P\hat{\Theta}_{1} Q\hat{\Theta}_{2}P-Q\hat{\Theta}_{1}P\hat{\Theta}_{2}+Q\hat{\Theta}_{1}P\hat{ \Theta}_{2}Q\Big{)}=0.\] We also want to mention reference [1, Sec.6] showing full details on how to obtain the second point and [10, Lemma 8] for the third point. The independence of switch functions \(\hat{\Theta}_{j,i}^{(s)}\) in Prop. B.1 implies that \(\Omega_{i}\) is an almost surely constant quantity \[\Omega_{i}(P)=\mathbf{E}\Omega_{i}(P)\text{ for }\mathbf{P}-a.s.\] The purpose of the first and last point in Prop. B.1 is to conclude that in regions of SUDEC, cf. Definition 4.1, all \(\Omega_{i}\) vanish. **Proposition B.2**.: _Let \(H_{\lambda}\) exhibit_ SUDEC _in an interval \(J\), then for all closed \(I\subset J\) we have_ \[\Omega_{i}(\text{$1\hskip-2.845276pt\mathrm{l}$}_{I}(H_{\lambda}))=0\text{ for }\mathbf{P}-a.s.\] Proof.: Let \(M\subset\mathbb{N}\) be a (finite or infinite) enumeration (counting multiplicities) of all point spectrum of \(H_{\lambda}\). We can then write the spectral projection as \[\operatorname{\mathds{1}\kern-2.0pt{\rm l}}_{I}(H_{\lambda})=\sum_{m\in M}P_{m}\] where \(P_{m}\) are rank one projections. In addition, we have \(K_{P}:=\sum_{m\in M}\alpha_{m}\) where \(\alpha_{m}\) are defined in (4.1). Using the third item in Prop. 
B.1 we then have for any \(\{1,...,N\}\subset M\) \[\Omega_{i}(\operatorname{\mathds{1}\kern-2.0pt{\rm l}}_{I}(H_{\lambda}))= \sum_{m=1}^{N}\underbrace{\Omega_{i}(P_{m})}_{=0}+\Omega_{i}\Bigg{(}\sum_{m\in M \setminus\{1,..,N\}}P_{m}\Bigg{)}=\Omega_{i}\Bigg{(}\sum_{m\in M\setminus\{1,..,N\}}P_{m}\Bigg{)}.\] Invoking then the first item in Prop. B.1, we find that as we make \(N\) arbitrarily large or equal to \(|M|\), if \(M\) is finite, we obtain \(\Omega_{i}(\sum_{m\in M\setminus\{1,..,N\}}P_{m})\to 0\). **Acknowledgements:**. We thank Jie Wang for suggesting the relevance of different disorder types for TBG and Maciej Zworski for initial discussions on the project. M. Vogel was partially funded by the Agence Nationale de la Recherche, through the project ADYCT (ANR-20-CE40-0017). I. Oltman was jointly funded by the National Science Foundation Graduate Research Fellowship under grant DGE-1650114 and by grant DMS-1901462.
2308.15154
The Anatomy of Conspirators: Unveiling Traits using a Comprehensive Twitter Dataset
The discourse around conspiracy theories is currently thriving amidst the rampant misinformation in online environments. Research in this field has been focused on detecting conspiracy theories on social media, often relying on limited datasets. In this study, we present a novel methodology for constructing a Twitter dataset that encompasses accounts engaged in conspiracy-related activities throughout the year 2022. Our approach centers on data collection that is independent of specific conspiracy theories and information operations. Additionally, our dataset includes a control group comprising randomly selected users who can be fairly compared to the individuals involved in conspiracy activities. This comprehensive collection effort yielded a total of 15K accounts and 37M tweets extracted from their timelines. We conduct a comparative analysis of the two groups across three dimensions: topics, profiles, and behavioral characteristics. The results indicate that conspiracy and control users exhibit similarity in terms of their profile metadata characteristics. However, they diverge significantly in terms of behavior and activity, particularly regarding the discussed topics, the terminology used, and their stance on trending subjects. In addition, we find no significant disparity in the presence of bot users between the two groups. Finally, we develop a classifier to identify conspiracy users using features borrowed from bot, troll and linguistic literature. The results demonstrate a high accuracy level (with an F1 score of 0.94), enabling us to uncover the most discriminating features associated with conspiracy-related accounts.
Margherita Gambini, Serena Tardelli, Maurizio Tesconi
2023-08-29T09:35:23Z
http://arxiv.org/abs/2308.15154v2
# The Anatomy of Conspirators: Unveiling Traits using a Comprehensive Twitter Dataset

###### Abstract

The discourse around conspiracy theories is currently thriving amidst the rampant misinformation prevalent in online environments. Research in this field has been focused on detecting conspiracy theories on social media, often relying on limited datasets. In this study, we present a novel methodology for constructing a Twitter dataset that encompasses accounts engaged in conspiracy-related activities throughout the year 2022. Our approach centers on data collection that is independent of specific conspiracy theories and information operations. Additionally, our dataset includes a control group comprising randomly selected users who can be fairly compared to the individuals involved in conspiracy activities. This comprehensive collection effort yielded a total of 15K accounts and 37M tweets extracted from their timelines. We conduct a comparative analysis of the two groups across three dimensions: topics, profiles, and behavioral characteristics. The results indicate that conspiracy and control users exhibit similarity in terms of their profile metadata characteristics. However, they diverge significantly in terms of behavior and activity, particularly regarding the discussed topics, the terminology used, and their stance on trending subjects. Interestingly, there is no significant disparity in the presence of bot users between the two groups, suggesting that conspiracy and automation are orthogonal concepts. Finally, we develop a classifier to identify conspiracy users using 93 features, some of which are commonly employed in the literature for troll identification. The results demonstrate a high accuracy level (with an average F1 score of 0.98), enabling us to uncover the most discriminative features associated with conspiracy-related accounts.

keywords: conspiracy, machine learning, social media dataset, comparative analysis

## 1 Introduction

Conspiracy culture has been brewing on social media for over a decade, fueled by the misinformation, polarization, and science denial typical of the online ecosystem [1; 2; 3; 4]. Conspiracy theories provide alternative explanations to significant historical or current events with claims of secret plots by people or groups having ambiguous intentions (e.g., usurpation of power, violation of rights, alteration of the bedrock institutions, societal disruption, etc.) [5; 6; 7]. Social media platforms have enabled faster communication and dissemination of conspiratorial narratives. As such, recent times have seen a plethora of online conspiracy beliefs concerning a broad range of topics. Notable examples encompass unconventional interpretations of climate change [8], the 9/11 attacks [9], political movements like QAnon [10], and, more recently, theories related to the COVID-19 pandemic [11; 12]. As a result, the spread of such information can have far-reaching implications for both individual users and society at large [13; 14; 15; 16; 17]. For these reasons, research into online conspiracy has grown in recent years, aimed at comprehending the dynamics of online conspiracy culture across various academic disciplines by using models and analytical approaches primarily based on linguistic and rhetorical theory [7]. Understanding users' inclinations towards conspiracy theories is of significant interest, as it can offer valuable insights into the propagation of ideologies, without limiting the analysis to a specific conspiracy theory.
This understanding is crucial for assessing the roles played by the involved individuals and taking appropriate measures to mitigate the impact of this phenomenon. Nonetheless, despite research advances in the detection of emerging or predefined conspiracy theories [18; 19; 20], the study of conspiracy users' characteristics remains limited. In fact, most of the existing datasets and collection techniques are either focused on specific conspiracy topics or domains, or rely on manual annotation or subjective criteria to label users as conspiratorial or not. This limitation poses challenges for developing and evaluating automated methods for user-level conspiracy detection and analysis. Therefore, there is a need for a large-scale, diverse, and reliable dataset that can capture the general characteristics and behaviours of conspiracy users across different topics. Such a dataset would also enable researchers to explore various aspects of online conspiracy, such as network analysis, content analysis, sentiment analysis, misinformation and stance detection.

### Contributions

In this study, we collect and share a Twitter dataset of 15K users, categorized into two groups: a set of _conspiracy users_ engaging with diverse conspiracy theories in May 2022, and a control group of _random users_ collected from those posting on the same topics during the same period. Additionally, we collect their timelines, totaling 37M tweets. Numerous social platforms, ranging from fringe to mainstream, have been exposed to various types of conspiracy narratives [21; 22; 10; 23; 24; 25; 26; 27]. Here, we focus on Twitter data due to its reported extent of conspiracy engagement, wide audience reach, rapid dissemination, and ease of accessibility. With a robust dataset, we analyze the distinctions between the two user groups across three dimensions: topic preferences, profile metadata, and behavioral patterns. In particular, our approach allows us to explore new research directions and address the following research questions:

**RQ1** - _How can we construct a robust and comprehensive dataset of online conspiracy users?_ Existing datasets are often limited by simplistic gathering methods. In this study, we use a rigorous methodology to collect users endorsing conspiracy beliefs posted by known conspiracy sources on Twitter. We also collect a control group of random users with similar metadata properties who discuss the same topics but do not show signs of conspiracy involvement, ensuring a fair comparison between the two groups.

**RQ2** - _What are the differences in attitude and approach when discussing topics between conspiracy and random users?_ Different user groups can interpret and engage with topics in diverse ways. We delve into and compare the manners in which these two groups relate to and discuss specific subjects.

**RQ3** - _Which features predominantly differentiate the conspiracy users?_ Our analysis of feature importance involves training various classifiers using three classes of features. The results reveal that conspiracy users tend to engage more in conversations, reply frequently, and exhibit less lexical variability. This linguistic distinctiveness holds even when compared to a control group of randomly selected users with similar activity, nature, and language characteristics. Furthermore, we compare our findings with state-of-the-art techniques that leverage deeper linguistic properties, resulting in an 11% increase in F-score.
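As a rough illustration of this feature-based setup (the concrete features and models are detailed later in the paper), the sketch below combines profile, behavioral, and linguistic features in a standard classifier. The feature names, toy values, and model choice are placeholders for exposition only and do not reflect the exact configuration used in our experiments.

```python
# Minimal sketch of a feature-based conspiracy-user classifier.
# Feature names, toy data, and model choice are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-user feature table: one row per account.
users = pd.DataFrame({
    # profile metadata features
    "followers_count": [120, 4500, 80, 950],
    "account_age_days": [400, 2900, 150, 1800],
    # behavioral features
    "reply_ratio": [0.61, 0.12, 0.55, 0.20],
    "retweet_ratio": [0.25, 0.40, 0.30, 0.45],
    # linguistic features
    "lexical_diversity": [0.42, 0.71, 0.39, 0.68],
    "avg_tweet_length": [118, 84, 131, 92],
    # 1 = conspiracy group, 0 = control group
    "label": [1, 0, 1, 0],
})

X = users.drop(columns=["label"])
y = users["label"]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("F1 per fold:", cross_val_score(clf, X, y, cv=2, scoring="f1"))

# Feature importances indicate which traits discriminate the two groups.
clf.fit(X, y)
print(dict(zip(X.columns, clf.feature_importances_.round(3))))
```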
Our main contributions stemming from the aforementioned RQs can be summarized as follows:

* We create and publicly share a large, robust, and balanced dataset of 15K conspiracy and random control users, along with 37M tweets from their timelines.
* We show that these two user groups display divergent attitudes and perspectives on specific topics, thereby reinforcing their distinctions.
* Our analysis indicates that both groups have an automation rate below 1%, suggesting the involvement of genuine, real individuals.
* We provide an analysis of discriminative features by employing a classifier that leverages profile metadata, behavioral characteristics, and linguistic features borrowed from the literature on bot, troll, and conspiracy detection, obtaining a high accuracy (average F1 score of 0.98).
* We compare our detection capabilities with state-of-the-art methods, showing a better performance in terms of F1 score.

**Reproducibility.** We release an anonymized, privacy-preserving version of the dataset1.

Footnote 1: [https://zenodo.org/record/8239530](https://zenodo.org/record/8239530)

### Roadmap

The remainder of this paper is organized as follows. In Section 2, we review recent literature concerning online conspiracies, with a focus on works that offer accessible datasets and profiles of conspiracy users compared to random users. In Section 3.1, we present and motivate our data collection strategy, and provide the first descriptive statistics about the resulting dataset. In Section 3, we provide an overview of the methods leveraged for the analysis and characterization of conspiracy users compared to a control group of random users. In Section 4, we present the results, showing the differences between the two groups in terms of topics and profile metadata, and we present the classification results along with the most discriminative features. Finally, in Section 5, we summarize the main findings, contextualize them within the broader research effort against online conspiracy, address the limitations, and outline potential future research directions.

## 2 Related work

In recent years, there has been an increasing focus on identifying and understanding conspiracy theories. Unlike other forms of misleading content, such as misinformation, disinformation, fake news, or rumors, conspiracy theories present a unique challenge. They raise the crucial and ongoing question of whether they are fundamentally false, alternative explanations, or situated somewhere between reality and fiction [5; 7]. Interestingly, individuals on social media who embrace conspiracy theories are not solely automated accounts, trolls, or spreaders of fake news and rumors. Many genuinely hold beliefs in hidden agendas or plots, some of which have been verified as true2, while others remain unverified. Furthermore, not all conspiracy theorists actively scheme or propagate them. Among them are regular people who simply hold these beliefs without actively disseminating them. To discriminate between these different types of users, the availability of a high-quality, robust social media dataset is essential. Such a dataset must encompass the wide array of dynamic behaviors associated with conspiracy beliefs, but its creation presents significant challenges that necessitate meticulous planning and assessment. Previous studies have attempted to create social datasets for similar purposes, not without some limitations and trade-offs.
Footnote 2: [https://www.rd.com/list/conspiracy-theories-that-turned-out-to-be-true/](https://www.rd.com/list/conspiracy-theories-that-turned-out-to-be-true/)

The following two subsections delve into the current leading methods for constructing datasets comprising both conspiracy-affiliated users and those in a control group, as well as describe how these two groups are compared and characterized.

### Datasets of conspiracy and control group users

In the literature, the comparison between users who engage in conspiracy theories and a control group is commonly performed by analyzing the presence of specific keywords and/or URLs related to these theories within social media posts [28; 29; 30; 20; 31; 21; 32; 33]. In this context, the control group typically consists of users who consume content that directly opposes specific conspiracy theories. Some studies employed such an approach and collected datasets of conspirative and non-conspirative users. For instance, the authors in [28] collected and annotated tweets related to COVID-19 conspiracy theories, sampled 109 conspirative users annotated as posting conspiracy theory content, and retrieved their timelines. As a control group, they identified 109 non-conspirative users exhibiting a tweet behaviour focusing on coronavirus-related content in general, using generic related keywords (e.g., corona, covid, pandemic). However, this study faces certain limitations. Notably, the dataset size is relatively small due to the challenging task of manual annotation, and the focus is narrowed to a specific conspiracy theory. Similarly, prior investigations have predominantly centered around specific conspiracy theories [34; 35; 36; 33; 37]. In [31], the authors focused on the conspiracy narrative surrounding COVID-19, leveraging hashtags in support of or opposed to specific conspiracies. Similarly, the authors in [20] collected and analyzed online discussions related to four distinct conspiracy theories. Nevertheless, findings tied exclusively to a single conspiracy theory may not be easily extrapolated to other societal events that might trigger beliefs in conspiracies. Therefore, our primary objective is to explore users involved in conspiracy-related discussions without being confined to a particular information operation or theory. The authors in [29] advanced the state-of-the-art by considering six conspiracy theories. They curated and annotated tweets containing hashtags likely used to either endorse or refute these theories. This resulted in retrieving 977 users who engage with conspiracy content and 950 users who counteract such narratives. Meanwhile, the authors in [30] managed to expand the conspiracy user base by adopting a different method. They exploited five known conspiracy-affiliated and five science-oriented Twitter influencers, retrieving a sample of their followers to create a dataset of pro- and anti-conspiracy groups. However, these approaches have some drawbacks. Specifically, hashtags or relationships do not always accurately reflect the content of a tweet, as they can indicate both support for or rejection of conspiracy theories. Our approach tackles this issue by considering _likes_, which better capture individual user preferences. This eliminates the need for manual annotation and allows for the examination of a larger user base. Close to our approach, previous works [32; 21] labelled conspiracy enthusiasts and science-minded users based on the number of likes they gave to conspiracy and science-related posts on Facebook.
However, their analysis was confined to the comments these users left in conspiracy and science groups, disregarding their broader Facebook posting history. Finally, it is worth highlighting that our focus is on comparing conspiracy theorists with the broader population of Twitter users, rather than exclusively contrasting them with those who oppose conspiracies. We focus on Twitter for its (former) ease of data retrieval and its role in the dissemination of conspiracy theories [38; 39]. However, the approach we employ holds the potential for application across various social media platforms.

### Characterization of users engaging with conspiracies

The characterization and detection of malicious users has received considerable attention in recent years. Researchers have primarily concentrated on analyzing the traits of various types of problematic users, such as social bots, state-sponsored trolls, and, more recently, conspirators. These investigations have predominantly focused on elements like profile metadata, demographic details (e.g., gender, age) [40; 41; 42], social activity [43], interactions [44; 42], and relationships [12; 16; 42; 22]. A more recent trend in conspiracy detection involves examining linguistic patterns present in textual content [30; 29; 22; 32]. This approach aims to identify specific language cues associated with conspiratorial discourse. For instance, the authors in [43] explored linguistic features like average text length, word redundancy, emotional responses, and psycholinguistic aspects to distinguish conspiracy-related content. Likewise, other researchers explored linguistic cues to detect trolls [45] and conspiracy [46; 12; 29]. Similarly, our work investigates features commonly used to spot malicious users (such as fake accounts, bots, trolls, and spreaders of misinformation), which have not been extensively explored for identifying conspirators [47], while also incorporating linguistic features into our analysis. To obtain the features that better discriminate conspirative users from random ones, we leverage the outcomes of several standard machine learning classifiers. Similar efforts were made in previous works, such as [30], in which the authors employed a logistic regression model to confirm their qualitative findings on psycholinguistic traits associated with conspiracy believers and science enthusiasts. However, they did not present detection results for direct comparison with our work. In another relevant study, [33], the authors analyzed COVID-19 conspiracy discussions and classified users into misinformation and non-misinformation groups based on their profile metadata and tweet embeddings. However, their approach relied on graph-based methods and did not provide sufficient details for reproducibility (e.g., dataset, graph data), which limits the comparison with our work. In contrast, our study proposes a model that encompasses diverse features to identify online users engaged in conspiracy discussions. We also benchmark our classification outcomes against a study, [29], that used a CNN-based model incorporating linguistic traits to differentiate between users who share posts supporting or refuting conspiracy theories. In summary, we adopt a computational approach to study the conspiracy phenomenon and compare online users who engage in conspirative discussions with random users. In particular, we examine the profile, activity, and psycholinguistic characteristics of conspiracy and random users based on the tweets that they post.
Furthermore, we introduce a model that utilizes different features to identify online users participating in conspiracy-related conversations. Therefore, our study offers an orthogonal view and contribution building on prior work by employing distinct methodologies for constructing a reliable and robust conspiracy dataset and applying extended methodologies for characterization and detection.

## 3 Methods and data

In this section, we delve into the methods employed to gather and analyze data for our study. We first delineate our data collection strategies for collecting users engaged in conspiracy theories and random users. Next, we describe feature extraction methodologies.

### Data Collection Strategy

To address our first research question (RQ1) on creating a robust dataset of users engaged in conspiracy content, we propose a strategy based on users' _liking_ behavior towards posts from various conspiracy accounts. We argue that liking a post indicates stronger approval and endorsement of the message compared to a mere re-share [48]. To ensure a fair comparison, we introduce a control group, in line with prior research [22]. This control group comprises randomly selected users with similar metadata who engage in discussions around the same main topics. In this way, we can compare the two groups fairly. In summary, our dataset consists of a conspiracy group comprising 7,394 Twitter users and a control group with an equal number of randomly selected users. We will refer to the control group users as _random_ users throughout this work. The subsequent sections describe in detail the strategy and criteria used to select both conspiracy and random users.

#### 3.1.1 Strategy for collecting conspirative users

The idea behind the collection is to identify the users who are most likely to believe in various conspiracy theories, by focusing on those appreciating, by means of a like, posts from various conspiracy sources (e.g., websites). The adopted strategy took place in June 2022 and led to the collection of the latest 3,200 tweets for 7,394 users known for liking conspiratorial content. Our approach comprises three key steps, which correspond to Figure 1:

1. **Step 1 - Initial Set of Conspiracy Sources**: We identify an initial set of websites rated as conspiracy by Media Bias/Fact Check (MBFC)3, a non-profit organization assessing online source credibility and bias. We extract the associated Twitter accounts and leverage them as _seed accounts_ for the following steps. Footnote 3: [https://mediabiasfactcheck.com/](https://mediabiasfactcheck.com/)
2. **Step 2 - Collecting User Likes**: We gather Twitter users who have liked posts from the seed accounts, indicating potential conspiratorial engagement.
3. **Step 3 - Applying Filters**: We retain users who both follow at least one seed account and have shown uniform interest across multiple seed accounts. The goal is to balance diversity (the number of distinct seed accounts liked by a user) against intensity (the total number of likes a user gives across seed accounts). We retain users based on this trade-off, constituting our conspiracy group.

In summary, our goal is to identify users likely to be conspiratorial based on interactions with seed accounts. We apply the method as follows.

Figure 1: Overview of the proposed strategy for collecting conspirator users.

_Step 1_.
We select an initial set of \(26\) conspiracy sources from MBFC as seeds, as shown in Table 1. As mentioned, MBFC is an indipendent website that aims to provide an objective and transparent assessment of the credibility and bias of online news media sources. By rating more than \(4\)K media sources and employing a team of trained experts and journalists across the political spectrum, MBFC has become the most comprehensive media bias resource on the internet. The website's ratings are based on rigorous criteria and methodology, and have been utilized by researchers for various academic purposes [49, 50]. In particular, we leverage a list of \(300\) news source websites rated as engaging with various topics of dubious veracity and scientific validity, conspiracy theories and pseudo-science. We then look for the twitter accounts associated with these websites and find \(100\) matches. For computational time purposes, we apply manual annotation and filter the matches to keep only the top \(26\) accounts that have more than \(90\) tweets endorsing any conspiracy theory in their most recent \(100\) tweets. Finally, we leverage them as the seed set of conspirative Twitter seed accounts. _Step 2_. In this step, we leverage the like interaction on Twitter, which allows users to express their appreciation or interest in a tweet. Unlike the sharing, which amplifies the message to a wider audience and allow for fact-checking or criticism, or the replying, which initiates a conversation or provides feedback that may or may not agree with the message, the like interactions convey approval or endorsement with posts. By employing Twitter API V2 endpoints, we retrieve likes from seed accounts' posts between July \(19\)th, \(2021\), and February \(28\)th, \(2022\). This method captures user engagement with seed accounts over time, providing a comprehensive view. We obtain \(8,935,961\) likes from \(968,824\) users for \(54,559\) tweets by seed accounts. _Step 3_. In this step, we perform a series of filtering steps to refine our user selection process. Initially, out of a pool of \(968,824\) potential conspirators, we retained only those \((378,144\) users) who also follow at least one of the seed accounts. This demonstrated not only an initial attraction to the tweets of these seed accounts but also an ongoing interest for their overall content. Subsequently, we filter users \((345,936\) users) who display a well-balanced engagement with multiple seed accounts. In other words, if a user likes content from several seed accounts, their level of interest across these accounts is relatively consistent. We measured this by applying a coefficient of variation (_Cov_) of the number of likes per seed account, keeping it at or below 1. Finally, as mentioned, we establish the third filter based on two key factors: the total number of likes given to the set of seed accounts and the count of liked seed accounts. These factors provide insight into the intensity of activity related to conspiracy sources and the range of interest in different conspiracy theories. The combined analysis of these factors is summarized in Table 2. The table cross-references the absolute number of likes (up to 35) along the X-axis with the number of distinct seed accounts liked (up to 7) along the Y-axis. Each cell in the table represents the count of users who have distributed a minimum of Y likes across a minimum of X distinct seed accounts. As we move towards the lower right corner of the table, the number of users naturally decreases. 
Ideally, we would like to select conspirators who extensively liked a large number of seed accounts. However, this approach would yield a very small pool of users, potentially less than 100. Instead, our strategy is to strike a balance by focusing on a diverse range of sources while maintaining a reasonable number of likes. The aim was to have around \(10,000\) users for meaningful analysis. This process culminated in selecting 7,394 conspiracy users for in-depth analysis. These users liked at least 4 different conspiracy sources and exhibited a consistent distribution of at least 25 likes. By leveraging the Twitter API, we gathered the timeline (the most recent 3,200 tweets) from these 7,394 selected conspirators, accumulating a total of \(18,273,565\) tweets. The final dataset comprises tweets covering the time span from February 27th, 2008 to June 13th, 2022.

\begin{table} \begin{tabular}{l l l} \hline \hline **Conspiracy source provided by MBFC** & **Source website** & **Twitter account** \\ \hline Disclose TV & disclose.tv & @disclosetv \\ amtv & amtvmedia.com & @amtvmedia \\ Daily Grail & dailygrail.com & @DailyGrail \\ Catholic & catholic.org & @CatholicOnline \\ Dark Journalist & darkjournalist.com & @darkjournalist \\ End Time Headlines & endtimeheadlines.org & @EndTimeHeadline \\ CLG News & legitgov.org & @legitgov \\ Friends Of Science & friendsofscience.org & @FriendsOScience \\ Coast To Coast Am & coasttocoastam.com & @coasttocoastam \\ GeoEngineering Watch & geoengineeringwatch.org & @RealGeoEngWatch \\ Blacklisted News & blacklistednews.com & @BlacklistedNews \\ CharismaNews & charismanews.com & @charismanews \\ Children’s Health Defense & childrenshealthdefense.org & @ChildrensHD \\ GMWatch & gmwatch.org & @GMWatch \\ Creation & creation.com & @creationnews \\ Food Babe & foodbabe.com & @thefoodbabe \\ Architects and Engineers for 9/11 Truth & ae911truth.org & @AE911Truth \\ Electroverse & electroverse.net & @Electroversenet \\ Environmental Working Group & ewg.org & @ewg \\ Eluxe Magazine & eluxemagazine.com & @eluxemagazine \\ Gaia & gaia.com & @YourMotherGaia \\ Behold Israel & beholdisrael.org & @beholdisrael \\ Global Healing & globalhealingcenter.com & @GHChealth \\ Australian National Review & anrnews.com & @anr\_news \\ Alliance for Natural Health & anhusa.org & @anhusa \\ AltHealthWorks & AltHealthWorks.com & @AltHealthWORKS \\ \hline \hline \end{tabular} \end{table} Table 1: The 26 selected conspiracy websites.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{7}{c}{**total sources (\(s\))**} \\ \cline{2-8} **total likes (\(l\))** & \(s\geq 1\) & \(s\geq 2\) & \(s\geq 3\) & \(s\geq 4\) & \(s\geq 5\) & \(s\geq 6\) & \(s\geq 7\) \\ \hline \(l\geq 1\) & \(345,936\) & \(132,828\) & \(55,064\) & \(19,966\) & \(7,618\) & \(2,462\) & 790 \\ \(l\geq 5\) & \(147,108\) & \(95,526\) & \(49,535\) & \(19,630\) & \(7,618\) & \(2,462\) & 790 \\ \(l\geq 10\) & \(88,628\) & \(61,635\) & \(34,542\) & \(15,817\) & \(6,932\) & \(2,366\) & 779 \\ \(l\geq 15\) & \(61,778\) & \(44,206\) & \(25,228\) & \(12,118\) & \(5,809\) & \(2,095\) & 726 \\ \(l\geq 20\) & \(46,220\) & \(33,507\) & \(19,265\) & \(9,413\) & \(4,807\) & \(1,812\) & 648 \\ \(l\geq 25\) & \(36,055\) & \(26,342\) & \(15,169\) & \(7,394\) & \(3,928\) & \(1,517\) & 557 \\ \(l\geq 30\) & \(29,577\) & \(21,870\) & \(12,746\) & \(6,370\) & \(3,438\) & \(1,354\) & 504 \\ \(l\geq 35\) & \(24,597\) & \(18,308\) & \(10,810\) & \(5,474\) & \(3,000\) & \(1,187\) & 447 \\ \hline \hline \end{tabular} \end{table} Table 2: Number of users who distributed at least Y likes (up to 35) on at least X distinct seed accounts (up to 7).
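To make Step 3 concrete, the sketch below shows how the three filters described above (following at least one seed account, a coefficient of variation of likes per seed account of at most 1, and at least 25 likes spread over at least 4 distinct seed accounts) could be applied. It is a minimal illustration under assumed data structures (a pandas DataFrame `likes` with one row per like and a `follows_any_seed` lookup); it is not the authors' implementation.

```python
# Illustrative sketch of the Step-3 filters (assumed data structures, not the
# authors' code). `likes` has one row per like with columns `user_id` and
# `seed_account`; `follows_any_seed` maps a user id to a boolean.
MIN_LIKES, MIN_SOURCES, MAX_COV = 25, 4, 1.0

# Number of likes given by each user to each seed account.
per_seed = (likes.groupby(["user_id", "seed_account"]).size()
                 .rename("n_likes").reset_index())

stats = per_seed.groupby("user_id")["n_likes"].agg(
    total_likes="sum",    # intensity: likes across all seed accounts
    n_sources="count",    # diversity: distinct seed accounts liked
    mean="mean",
    std="std",
)
# Coefficient of variation of likes per seed account (0 for single-source users).
stats["cov"] = (stats["std"] / stats["mean"]).fillna(0)

follows = stats.index.to_series().map(follows_any_seed).fillna(False)
keep = (follows
        & (stats["cov"] <= MAX_COV)
        & (stats["total_likes"] >= MIN_LIKES)
        & (stats["n_sources"] >= MIN_SOURCES))
conspiracy_users = stats[keep].index.tolist()
```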
#### 3.1.2 Strategy for collecting random users

To enable a meaningful comparison with our conspiracy group, we establish a set of random users as a control group. The selection criteria ensure parity in terms of discussed topics, account creation period, and language usage. This comprehensive approach involves three steps, as shown in Figure 2:

* **Collecting Topic-Related Discussions**: We collect tweets linked to the top 10 hashtags used by conspiracy users (Table 3). We establish June 13th, 2022 as the end date for this data collection process. This specific date aligns with the termination of the collection of the conspirators' timelines.
* **Extracting Users Discussing these Topics**: This step resulted in the retrieval of \(152,588\) tweets authored by \(82,796\) distinct users.
* **Filtering Random Users**: We exclude users engaging with any of the 26 conspiracy seed sources to ensure a non-conspiracy profile. Additionally, we ensure uniformity of the predominant language in tweets. Finally, we choose random users whose creation dates match those of conspiracy users, maintaining an equal distribution among these groups.

Figure 2: Overview of the proposed strategy for collecting random users.

This rigorous approach results in a set of 7,394 random users. We gather their timelines, providing a comprehensive dataset for comparative analysis, ending up with 19,268,801 tweets.

\begin{table} \begin{tabular}{l r} \hline \hline Hashtag & no. of tweets \\ \hline \#Covid19 & \(31,275\) \\ \#Bitcoin & \(29,496\) \\ \#Ukraine & \(18,524\) \\ \#NoVaccinePassports & \(13,437\) \\ \#Pfizer & \(12,025\) \\ \#StopTheTreaty & \(11,813\) \\ \#cdnpoli & \(10,704\) \\ \#Canada & \(9,736\) \\ \#FreedomConvoy2022 & \(9,256\) \\ \#NoVaccinePassportsAnywhere & \(9,197\) \\ \hline \hline \end{tabular} \end{table} Table 3: Top 10 hashtags used by conspiracy users.

### Feature extraction

We aim to identify the features that separate conspirative users from control users. We use data and meta-data to compute 93 different features that provide insights into various aspects of our users. Each feature falls into one of three categories: continuous numeric values, binary values, or statistical measures derived from distributions (i.e., minimum, maximum, median, mean, standard deviation, skewness, and entropy).
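Since a large share of the 93 features is expressed through the same seven distribution parameters, a small helper of the kind sketched below can turn any per-tweet measurement into that fixed-length summary. This is an illustrative sketch (function and key names are ours), not the authors' code.

```python
# Illustrative helper: summarize a per-tweet measurement with the seven
# distribution parameters (min, max, mean, median, std, skewness, entropy).
import numpy as np
from scipy.stats import skew, entropy


def distribution_features(values, prefix):
    names = ("min", "max", "mean", "median", "std", "skew", "entropy")
    v = np.asarray(values, dtype=float)
    if v.size == 0:
        return {f"{prefix}_{n}": 0.0 for n in names}
    counts = np.unique(v, return_counts=True)[1]  # value counts for the entropy
    stats = (v.min(), v.max(), v.mean(), np.median(v), v.std(),
             skew(v), entropy(counts))
    return {f"{prefix}_{n}": float(s) for n, s in zip(names, stats)}


# Example: distribution of the number of words per tweet for one user.
features = distribution_features([12, 7, 22, 9, 9, 31], prefix="tweet_words")
```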
Some of these features are drawn from prior studies on bots and trolls, specifically selecting those proven effective in detecting or characterizing social bots and state-backed trolls [47]. In details, our approach involves extracting account features organized into three groups that capture different aspects of social network behavior. These groupings are inspired by previous research [47] that identified attributes related to account trustworthiness, topical focus [51], behavioral dynamics [51; 52], and strategic goals [51]. These feature groups, referred to as "traits," offer a suitable framework for describing and distinguishing diverse types of social network accounts. Unlike other studies that categorized features broadly into conventional domains (such as user-based, friends, network, temporal, content, sentiment, etc.) [53; 54; 55], we adopt a more intuitive grouping that aligns with the various roles an account can assume within the context of a social network. Our features are summarized by class and presented in Table 4. We discuss them briefly in the following sections. #### 3.2.1 Credibility features This category encompasses features that evaluate the credibility and trustworthiness of social media users based on their profile characteristics. The underlying assumption is that discussions are more likely to be organic if they involve mostly credible users. These features capture profile attributes that can differentiate between low-credibility and high-credibility accounts, such as the quantity and nature of social relationships, account age, and activity level. These features primarily draw from profile metadata, easily observable and assessable when viewing a social network account. These attributes have long served as discriminators for simplistic fake accounts [56; 54; 57; 58]. #### 3.2.2 Initiative features This class measures an account's influence in initiating and guiding discussions, shaping online conversations, and producing diverse and original content. To achieve this, we employ a set of features that quantify the quality and quantity of a user's activity, building on prior works [59; 47]. These features include metrics like the ratio of original to retweeted content, indicating an account's contribution to generating fresh and unique material rather than amplification. Additionally, metrics like the ratio of tweets to replies reflect the user's engagement in dialogues and exchanges with other users, rather than just broadcasting its own messages. These features help measure the quality and diversity of online discussions. 
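The sketch below illustrates how a few of these activity ratios can be computed from a user's timeline; the `timeline` structure and field names are assumptions made for the example, not the authors' implementation.

```python
# Illustrative computation of initiative-style ratios for one user. `timeline`
# is assumed to be a list of dicts, one per tweet, with boolean flags
# `is_retweet` and `is_reply` and a (possibly empty) `urls` list.
def initiative_features(timeline):
    n = len(timeline)
    if n == 0:
        return {}
    n_retweets = sum(t["is_retweet"] for t in timeline)
    n_replies = sum(t["is_reply"] for t in timeline)
    n_with_url = sum(bool(t["urls"]) for t in timeline)
    return {
        "retweet_ratio": n_retweets / n,    # amplification vs. original content
        "reply_ratio": n_replies / n,       # engagement in conversations
        "tweet_url_ratio": n_with_url / n,  # reliance on external content
    }
```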
\begin{table} \begin{tabular}{l l l} \hline Feature & Type & Description \\ \hline \multicolumn{3}{l}{_Credibility_} \\ \hline Following Count & Numeric & Number of followings \\ Followers Count & Numeric & Number of followers \\ Followers Ratio & Numeric & Ratio between the number of following users and the number of followers squared \\ Account Age & Numeric & Account age expressed in days \\ Followers Ratio & Numeric & Ratio between followers and age \\ Following Ratio & Numeric & Ratio between following and age \\ Tweets Ratio & Numeric & Ratio between the number of tweets and age \\ Verified & Binary & If the account is verified or not \\ Has bio & Binary & If the account has a bio \\ Has Default Pic & Binary & If the account has the Twitter default pic or not \\ Has URL in Bio & Binary & If the account has any URL in the bio \\ URLs Count & Numeric & Number of URLs in the bio \\ Hashtags Count & Numeric & Number of hashtags in the bio \\ Listed Count & Numeric & Number of public lists that the account is a member of \\ Bio Sentences & Numeric & Number of sentences in the bio \\ Bio Tokens & Numeric & Number of tokens in the bio \\ Bio Chars & Numeric & Number of chars in the bio \\ \hline \multicolumn{3}{l}{_Initiative_} \\ \hline Retweet Ratio & Numeric & Ratio between the number of retweets and the number of tweets \\ Reply Ratio & Numeric & Ratio between the number of replies and the number of tweets \\ Tweet-URL Ratio & Numeric & Ratio between the number of tweets containing a URL and the number of tweets \\ Retweet-URL Ratio & Numeric & Ratio between the number of retweets containing a URL and the number of tweets \\ Reply-URL Ratio & Numeric & Ratio between the number of replies containing a URL and the number of tweets \\ Words in Tweets & Distribution parameters & Distribution of the number of unique words in tweets \\ Words entropy in tweets & Distribution parameters & Distribution of the entropy of unique words in tweets \\ \hline \multicolumn{3}{l}{_Adaptability_} \\ \hline Language Novelty & Distribution parameters & Percentage of new tokens in a tweet compared to those previously used \\ Time Between Tweets & Distribution parameters & Distribution of time differences between consecutive tweets \\ Time Between Retweets & Distribution parameters & Distribution of time differences between consecutive retweets \\ Time Between Mentions & Distribution parameters & Distribution of time differences between consecutive tweets containing mentions \\ Retweeted Accounts & Distribution parameters & Distribution of the number of retweeted accounts \\ URL Domains & Distribution parameters & Distribution of the number of domains contained in tweets \\ Tweets Words & Distribution parameters & Distribution of the number of words contained in tweets \\ Tweets Characters & Distribution parameters & Distribution of the number of characters contained in tweets \\ \hline \end{tabular} \end{table} Table 4: The extracted features for each trait [47]. The distribution parameters are the \(min\), \(max\), \(mean\), \(median\), \(std\), \(skewness\), and \(entropy\).

#### 3.2.3 Adaptability features

Adaptability refers to the ability or willingness to change in order to suit different conditions. In our study, we measure account adaptability based on how an account alters and adapts its behavior and profile over time in response to the topics it encounters or contributes to.
For instance, we examine linguistic aspects such as language novelty, entropy, and diversity, alongside other linguistic characteristics reflecting temporal changes. Thus, adaptability ties into the account's temporal and topic-related dynamics and its language usage [45; 51; 59]. ## 4 Results ### Dataset Creation (RQ1) In literature, users engaged in conspiracy activities are often identified as accounts who employ specific conspiracy-related keywords or share URLs from conspiracy websites [28; 29; 30; 20; 31; 21; 32; 33]. However, some of these users may be bots or trolls attempting to spread panic and skepticism in authorities by pushing alternative explanations for events [60; 61; 62; 63]. Misinformed users may inadvertently spreading conspiracy theories [35; 36; 34]. Our data collection accounts for this subtle difference between malicious users, misinformed users and actual conspirators by adopting a strategy that does not rely on the usage of either keywords or URLs. Our approach is grounded in the idea that "liking" a post expresses approval or support for its content, which does not necessarily apply to sharing [48]. This implies that users who frequently like posts from a particular account are likely endorsing the themes promoted by that account, and this endorsement is even stronger if the user follows the account. Consequently, a user who consistently likes posts from various conspiracy accounts, while also being a follower of at least one such account, is more likely to believe in conspiracy theories. Furthermore, relying solely on conspiracy keywords or URLs to identify conspiracy users results in capturing only those who actively spread and support specific plots. Our strategy, on the other hand, enables us to identify conspiracy users who may not necessarily propagate the theories they believe in, and who might endorse a range of conspiracy theories if the source accounts are of a generic nature. Here, "generic" refers to Twitter accounts that discuss multiple conspiracy theories concurrently. A similar liking-based strategy was used by authors in [32] and [21], who identified conspiracy users based on their significant liking activity on conspiracy-related posts. In our approach, we also consider the "follow" relationship from users to conspiracy accounts, providing a stronger validation of their affiliation with the conspiracy realm. For the control group, our aim is to include users that represent the broader social media population while minimizing superficial differences between an average random user and a conspirator. Utilizing anti-conspiracy keywords or URLs (e.g., science-based) [29; 21; 30] or general keywords related to broad topics [28; 20; 31] helps reduce these disparities. However, an additional mechanism is required to ensure similarity between the control and conspiracy groups while maintaining the integrity of both. In our work, instead, we ensure that conspiracy users and random users discuss similar topics and are created around the same time period. It is important to note that these common topics may not be conspiracy-related. To filter potential conspirators from the control group, we exclude random users who have liked posts from any of our seed accounts. A similar concept of a control group was used in [22], where users with similar initial activity to conspirators were identified and tracked as they diverged over time. We briefly provide an overview of the characteristics of our dataset. 
Figure 3 shows descriptive statistics of the collected users. In terms of content, random users exhibit a higher proportion of original tweets (19%) compared to conspirative users. Conversely, the latter group displays a greater inclination towards engaging in replies (35% as opposed to 21%). Regarding retweets and quotes, no substantial differences are evident. Finally, we verify the presence of automated accounts by employing _Botometer v4_ to compute the _bot_ scores for both conspiracy and random users [64]. The analysis revealed no bot presence within the conspiracy group. However, Figure 3: Dataset statistics approximately 1.5% of random users show a likelihood of being bots with a confidence level greater than 90%. We consider the 1% noise acceptable for our study's purposes. ### Topic characterization (RQ2) In this section, we answer to RQ2, by focusing on the key subjects discussed by the two distinct groups. First, we extract the main topics through social network analysis based on co-occurring hashtags. Then, we employ topic modeling to highlight the primary themes of conversation within each user group. In this way, we highlight highly correlated words that may give a hint on the attitude towards a specific topic. Notably, conspiracy users were collected by leveraging likes, while random users were collected by leveraging hashtags over the same time period. Taking that into consideration, we perform the following analysis on timelines and properly handle hashtag seeds, as detailed as follows. #### 4.2.1 Visualizing co-occurring hashtags per group We begin by computing and visualizing the graph of co-occurring hashtags for conspiracy users. We compare it with the graph generated from hashtags that co-occur in tweets posted by random users. These co-occurrence graphs depict the interconnections between hashtags based on their simultaneous appearance within tweets. For clarity, figures 4 and 5 show only the top 50 hashtags based on weighted degree. Figure 4: Co-occurrence graph of hashtags mentioned by conspiracy users. Figure 4 shows the co-occurrence graph of hashtags mentioned in all tweets posted by conspiracy users. As shown, the core of this graph is predominantly composed of two clusters. One cluster centers around topics related to covid-19 and vaccination discourse. The other cluster involves hashtags commonly used for describing images on Instagram [65; 66], possibly due to cross-platform social media sharing. In Figure 5 we reconstruct the co-occurrence graph of hashtags used by the random users. Given that we collected data for these users by focusing on the top 10 hashtags used by conspiracy users, we omit those hashtags from our analysis. In this scenario, the core of the graph is mainly composed of hashtags associated with cryptocurrency. Notably, covid-related hashtags appear on the periphery of the graph. This suggests that during the data collection from random users, cryptocurrency held a stronger influence than the other topics supplied as input. Nevertheless, the popularity of certain topics (e.g., cryptoworld) might surpass others like (e.g., covid), based on factors such as current trends and individual user preferences. In the next section, we provide a more extensive exploration of the topics and analyze the different user groups' attitude and stances on these subjects. Figure 5: Co-occurrence graph of hashtags mentioned by random users. 
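Such a co-occurrence graph can be built directly from the collected timelines. The sketch below is an illustration using networkx with an assumed `tweets_hashtags` input (one list of hashtags per tweet); it counts how often two hashtags appear in the same tweet and keeps the 50 nodes with the highest weighted degree, as in Figures 4 and 5.

```python
# Illustrative construction of a hashtag co-occurrence graph (not the authors'
# code). `tweets_hashtags` is an iterable of hashtag lists, one per tweet,
# e.g. [["covid19", "pfizer"], ["bitcoin", "nft"], ...].
from itertools import combinations
import networkx as nx

G = nx.Graph()
for tags in tweets_hashtags:
    for a, b in combinations(sorted({t.lower() for t in tags}), 2):
        weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)  # edge weight = co-occurrence count

# Keep only the 50 hashtags with the highest weighted degree for visualization.
top_nodes = sorted(G.degree(weight="weight"), key=lambda x: x[1], reverse=True)[:50]
core = G.subgraph(node for node, _ in top_nodes)
```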
#### 4.2.2 Characterizing topic discussions and attitudes

To gain a deeper understanding of the different attitudes towards the online discourse between conspiracy and random users, we employ topic modeling using a recent algorithm known as Anchored Correlation Explanation (CorEx) [67]. The CorEx algorithm learns latent topics from documents without assuming an underlying generative model. It maximizes the correlation between groups of words and latent topics, leveraging the dependencies between words in documents. This approach ensures enhanced flexibility, enabling hierarchical and semi-supervised variants [67]. An essential feature of CorEx is also the ability to anchor words, which facilitates semi-supervised topic modeling and enhances topic separability with minimal intervention. Anchoring involves injecting prior knowledge (anchor words) into the topic model to identify and differentiate underrepresented or significant topics. This process enables us to extract pertinent topics and the associated terminology. Given our focus on studying the attitude towards shared main topics by both user groups, we capitalize on the word anchoring capability of CorEx to enhance topic separability. We build two distinct models for conspiracy users and random users to account for potential variations in topics and forms of speech. We select the top 10 hashtags from Table 3 as anchor words. After experimenting with various configurations, we set the expected number of topics to 10, as additional topics yielded negligible correlation improvement. Finally, we rank the resulting topics based on the correlation fraction they explain. The outcomes of this analysis are summarized in Table 5, with topics ordered by the amount of total correlation explained. Within each topic, words are arranged according to mutual information with the topic, and anchor words are highlighted in bold. Anchoring substantively augmented the contribution of topics of interest to the model's correlation. High topic quality is confirmed by the presence of non-anchored words with strong coherence within each topic.

\begin{table} \begin{tabular}{c l l} \hline \hline **Topic** & **Highly correlated words by conspirators** & **Highly correlated words by randoms** \\ \hline \multirow{3}{*}{**covid19**} & billgates, vaccinesideeffects, wakeup, & coronavirus, covid19vaccine, \\ & greatreset, vaccinemandate, & fakenews, humanrights, \\ & vaccinatedeaths, bigpharma, freespeech & breakingnews, donaldtrump \\ \multirow{2}{*}{**bitcoin**} & fiat, coins, transactions, & ethereum, cryptocurrency, \\ & decentralized, satoshi & binance, wallet, nft \\ \multirow{2}{*}{**ukraine**} & ukrainerussiawar, musk, & russians, biden, \\ & sanctions, inflation & invasion, war \\ \multirow{2}{*}{**pfizer**} & adverse, myocarditis, clinical, & astrazeneca, pcr, schwab, \\ & covid, vaers, pcr & biontech, wef, klaus \\ \multirow{4}{*}{**stopthetreaty**} & stopthewho, billgatesbioterrorist, & force, national, \\ & trudeaufortreason, crimesagainsthumanity, & service, air, \\ & wefpuppets, reinstatenickhudson, & anti, cause, \\ & petitions, henchmen, experimental & court, population \\ \hline \hline \end{tabular} \end{table} Table 5: Topic modeling results, obtained by applying Anchored Correlation Explanation (CorEx) to conspirative and random users. Conspirative users are characterized by the use of more extreme and intense words when discussing a relevant topic.
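For readers wishing to reproduce a comparable setup, the sketch below fits an anchored CorEx model with the ten hashtag anchors. It assumes the open-source `corextopic` package and a standard bag-of-words matrix; the parameter values (vocabulary size, anchor strength) are illustrative and not necessarily the configuration used here.

```python
# Illustrative anchored CorEx topic model (assumes the `corextopic` package).
from sklearn.feature_extraction.text import CountVectorizer
from corextopic import corextopic as ct

anchors = [["covid19"], ["bitcoin"], ["ukraine"], ["novaccinepassports"],
           ["pfizer"], ["stopthetreaty"], ["cdnpoli"], ["canada"],
           ["freedomconvoy2022"], ["novaccinepassportsanywhere"]]

vectorizer = CountVectorizer(max_features=20000, binary=True)
X = vectorizer.fit_transform(tweets)            # `tweets`: list of tweet texts
words = list(vectorizer.get_feature_names_out())

model = ct.Corex(n_hidden=10)                   # 10 expected topics
model.fit(X, words=words, anchors=anchors, anchor_strength=3)

for i, topic in enumerate(model.get_topics(n_words=10)):
    print(i, [entry[0] for entry in topic])     # top words per topic
```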
We report the most informative topics in Table 5. We uncover some notable differences in the discussion of the same topics. For instance, conspiracy users discussing the covid-19 topic deploy other highly correlated non-anchored words tied to conspiracy terminology [68; 69]. Notably, phrases like _wake up_ and appeals for _free speech_ stand out. Other terms encompass _bigpharma_ and _vaccinatedeaths_. In contrast, random users use milder language in relation to this topic, such as _fakenews_, _human rights_, and _breakingnews_, highlighting the moderation of this group. Similarly, words strongly correlated with the "pfizer" topic among conspiracy users revolve around adverse symptoms (e.g., _adverse_, _myocarditis_, _vaers_). Random users, on the other hand, use more general terms (e.g., _biontech_, _wef_). Another example pertains to the discussion about the international treaty for pandemics prevention and preparedness established by the World Health Organization (WHO), aiming to ensure equitable sharing of vaccines, drugs, and diagnostics during future pandemics [70]. Conspiracy users' correlated words include _billgatesbioterrorist_, _trudeaufortreason_, _crimesagainsthumanity_, and other words with a nuance of aversion against the act4. In contrast, words used by random users are more generic and neutral (e.g., _population_). Finally, regarding discussions on Ukraine and cryptocurrency, no substantial variations emerge between the two user groups.

Footnote 4: [https://www.reuters.com/article/factcheck-who-treaty-idUSL2N2XHOKA](https://www.reuters.com/article/factcheck-who-treaty-idUSL2N2XHOKA)

### Leveraging classification for extracting conspiracy discriminating features (RQ3)

In order to identify conspiracy-related users and determine the key features that differentiate them from regular users, we leverage a set of 13 off-the-shelf machine learning algorithms (i.e., Light Gradient Boosting Machine (LIGHTGBM), Random Forest (RF), Gradient Boosting Classifier (GBM), Ada Boost Classifier (ADA), Extra Trees Classifier (ET), Decision Tree Classifier (DT), Logistic Regression (LR), Linear Discriminant Analysis (LDA), Ridge Classifier (RIDGE), K Neighbors Classifier (KNN), Support Vector Machine (SVM), Naive Bayes (NB), Quadratic Discriminant Analysis (QDA)). These classifiers are trained using a stratified 10-fold cross-validation approach. We assess several models, beginning with a baseline model, and progressively adding more features to each subsequent model. As a preprocessing step, we initially divide our dataset into training and testing sets using an \(80/20\) split. To provide a rigorous evaluation, we randomly select users for the training and test splits, preserving the balance of the two types of users. As shown in Table 6, the training set includes \(12,708\) users, \(6,354\) of which are labelled as conspirators and \(6,354\) as random. The test set consists of \(3,178\) users, of which \(1,589\) are conspirators and \(1,589\) control users. We address missing categorical values by replacing them with the most frequent value within the respective column. Missing numerical values are substituted with the mean value of their respective columns. Table 7 shows the outcomes of the classification process. The table presents the performance of two baseline models: _Majority Class_, which always predicts the majority class, and a random predictor. We show alongside the outcomes of the optimal classifier (LIGHTGBM), evaluated in terms of the F1 score.
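A minimal version of this pipeline is sketched below with scikit-learn and LightGBM, covering the main ingredients (stratified 80/20 split, imputation of missing values, stratified 10-fold cross-validation, and the final fit used for the feature-importance analysis in the next subsection). Variable names and hyper-parameters are illustrative assumptions, not the exact configuration used in this study.

```python
# Illustrative training/evaluation pipeline (not the authors' exact code).
# `X` is the user-by-feature DataFrame and `y` the conspiracy/random labels.
from lightgbm import LGBMClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)  # balanced 80/20 split

# Missing numerical values -> column mean (categorical columns would instead
# use strategy="most_frequent", as described above).
imputer = SimpleImputer(strategy="mean")
X_train_imp = imputer.fit_transform(X_train)
X_test_imp = imputer.transform(X_test)

clf = LGBMClassifier(random_state=42)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
cv_f1 = cross_val_score(clf, X_train_imp, y_train, cv=cv, scoring="f1_macro")
print("10-fold macro F1:", cv_f1.mean())

clf.fit(X_train_imp, y_train)
print("test macro F1:", f1_score(y_test, clf.predict(X_test_imp), average="macro"))

# Ranked importances (LightGBM reports split-based importance by default;
# gain-based importance can be requested with importance_type="gain").
top_features = sorted(zip(X.columns, clf.feature_importances_),
                      key=lambda pair: pair[1], reverse=True)[:20]
```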
Within the context of the LIGHTGBM classifier, we incorporate varying sets of features to assess their effectiveness.

#### 4.3.1 Feature importance evaluation

Here, we explore the feature importance of the best-performing classifier, specifically the LIGHTGBM algorithm, in a comprehensive model that incorporates all features related to credibility, initiative, and adaptability. The goal is to gain a better understanding of which features contribute significantly to the accurate identification of users engaging in conspiracy activities.

\begin{table} \begin{tabular}{l r r r} \hline \hline & & \multicolumn{2}{c}{**split**} \\ \cline{3-4} **class** & **users** & _training set_ & _test set_ \\ \hline conspiracy & 7,394 & 5,915 & 1,479 \\ random & 7,394 & 5,915 & 1,479 \\ \hline **total** & 14,788 & 11,830 (80\%) & 2,958 (20\%) \\ \hline \hline \end{tabular} \end{table} Table 6: Dataset composition (ground-truth and train/test split) for the classification task.

\begin{table} \begin{tabular}{l r r r} \hline \hline & Precision & Recall & F1 \\ \hline Majority Class & 0.53 & 1.0 & 0.69 \\ Random & 0.49 & 0.49 & 0.49 \\ LIGHTGBM (credibility) & 0.79 & 0.78 & 0.78 \\ LIGHTGBM (initiative) & 0.96 & 0.97 & 0.96 \\ LIGHTGBM (adaptability) & 0.97 & 0.97 & 0.97 \\ LIGHTGBM (credibility + initiative + adaptability) & 0.98 & 0.98 & 0.98 \\ \hline \hline \end{tabular} \end{table} Table 7: Performance of the conspiracy and random users detection on different groups of features.

Figure 6: Feature importance considering credibility, initiative and adaptability traits.

Figure 6 exhibits the features in descending order of their impact on the Gini criterion, providing insights into their predictive importance within the model. The figure highlights the top 20 features, offering a ranked view of their significance in predicting conspiracy-related users. Feature importance plays a crucial role in comprehending the dynamics of various phenomena where several key factors emerge. Among the discriminating features, the character entropy in tweets emerges as the most influential. A closer examination, as demonstrated in Figure 7a, reveals that random users exhibit greater diversity and richness in their character usage. In contrast, conspirators tend to employ a narrower array of characters and words, suggesting a focus on specific topics and discussions. As a second discriminating feature, the mean number of tweets per language provides insights into the tweet's global reach and potential for cross-cultural engagement. Diverse language usage suggests broader appeal and a variety of topics. Conspirators tend to use a single language for their tweets, while random users employ a wider spectrum of languages in their content. A further discriminating feature, the reply rate, provides insights into the level of engagement a tweet generates and its consequential relevance and impact on the audience. A high reply rate implies a tweet's ability to initiate discussions and encourage interactions. Figure 7b reveals that conspirators exhibit a higher reply rate compared to random users. When replying, conspiracy users connect with a wider audience and engage in prolonged conversations compared to random users. In addition to the aforementioned features, additional variables contribute to understanding the significance of certain characteristics in the analysis.
For instance, the number of shared URLs, as prevalent among conspirators, offers insights into the extent of their engagement with external content supporting their beliefs, potentially influencing the reception of their tweets. In summary, adaptability-related features are the most influential factors in user categorization, followed by initiative and credibility-related features. These adaptability features, particularly tied to linguistic aspects, stand out as pivotal even when user groups share similar activity, nature, and language traits. This highlights the significant role of linguistic properties in distinguishing between these categories. In the subsequent section, we examine them more deeply by comparing our findings and feature importance with a cutting-edge technique from the state-of-the-art that primarily focuses on analyzing the psycholinguistic properties of users. This exploration aims to provide deeper insights and understanding of conspiracy activity.

Figure 7: Most discriminating features of the model.

#### 4.3.2 Comparison with the state-of-the-art

As mentioned, we conduct a comparison of our results with those in [29], which explored the psycholinguistic characteristics of 977 conspiracy users and 950 anti-conspiracy users. Similarly, we leverage:

* _Emotions_: the amount of emotions expressed by the users in their tweets, which includes eight emotional categories (i.e., anger, anticipation, disgust, fear, joy, sadness, surprise, and trust) as defined in [71], computed by leveraging the National Research Council (NRC) emotions lexicon [72].
* _Sentiment_: the amount of sentiment polarity (i.e., positive, negative) expressed by the users in their tweets, computed by leveraging the National Research Council (NRC) sentiment lexicon [72].
* _Personality traits_: we infer the personality traits of users from their tweets by utilizing the IBM Personality Insights API5. These traits consist of the renowned _Big Five traits_[73] (agreeableness, conscientiousness, emotional range (or neuroticism), extroversion and openness), five _Values_ (conservation, hedonism, openness to change, self-enhancement and self-transcendence) and 12 _Needs_ (challenge, closeness, curiosity, excitement, harmony, ideal, liberty, love, practicality, self-expression, stability and structure).
* _Linguistic patterns_: we assess the variety of linguistic patterns exhibited in a user's tweets using the LIWC tool [74]. In particular, we extract pronouns (I, we, you, she or he, they), time focus (past, present, future), personal concerns (work, leisure, home, money, religion, death), informal language (swear, assent, non-fluencies, fillers), cognitive processes (causation, discrepancy, tentative, certainty) and affective processes (anxiety).

Following the methodology called _ConspiDetector_ in [29], we incorporate these user-specific characteristics and GloVe embeddings of user tweets into a dual-branch Neural Network. We exclude 772 random users from the analysis due to insufficient text content for computing IBM Personality traits. Table 8 shows the details of the training, validation, and test sets. In addition to running _ConspiDetector_ on our conspiracy and random users, we again evaluate the best-performing machine-learning classifier on the unbalanced dataset leveraging the credibility, initiative, adaptability and psycholinguistic features.
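As an example of how the lexicon-based features can be computed, the sketch below derives normalized NRC emotion and sentiment scores from a user's tweets. The `nrc_lexicon` mapping is assumed to be loaded beforehand from the publicly available NRC Emotion Lexicon file, and the tokenization is a deliberate simplification; this is an illustration rather than the pipeline used in [29] or here.

```python
# Illustrative NRC emotion/sentiment features for one user (assumed lexicon).
# `nrc_lexicon` maps a word to the set of NRC categories it belongs to.
import re
from collections import Counter

CATEGORIES = ["anger", "anticipation", "disgust", "fear", "joy", "sadness",
              "surprise", "trust", "positive", "negative"]


def nrc_features(tweets, nrc_lexicon):
    counts, n_tokens = Counter(), 0
    for text in tweets:
        tokens = re.findall(r"[a-z']+", text.lower())
        n_tokens += len(tokens)
        for token in tokens:
            counts.update(nrc_lexicon.get(token, ()))
    # Normalize by token count so users with long timelines remain comparable.
    return {f"nrc_{c}": counts[c] / max(n_tokens, 1) for c in CATEGORIES}
```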
Table 9 indicates that training a standard machine learning algorithm on our dataset using psycholinguistic traits obtains results similar to _ConspiDetector_, leading to an F1 score of 0.90. Furthermore, the performances obtained on the unbalanced dataset are in close alignment with those achieved on the balanced dataset. Finally, when looking at the most discriminative features, Figure 8 illustrates that psycholinguistic traits hold less prominence compared to behavioral characteristics.

\begin{table} \begin{tabular}{l r r r r} \hline \hline & & \multicolumn{3}{c}{**split**} \\ \cline{3-5} **class** & **users** & _training set_ & _validation set_ & _test set_ \\ \hline conspiracy & 7,394 & 4,462 & 1,823 & 1,109 \\ random & 7,376 & 4,451 & 1,818 & 1,107 \\ \hline **total** & 14,770 & 8,913 (60\%) & 3,641 (25\%) & 2,216 (15\%) \\ \hline \hline \end{tabular} \end{table} Table 8: Dataset composition (ground-truth and train/test split) for _ConspiDetector_ [29].

Figure 8: Feature importance considering credibility, initiative, adaptability, and psycholinguistics traits.

\begin{table} \begin{tabular}{l r r r} \hline \hline & Precision\_macro & Recall\_macro & F1\_macro \\ \hline Majority Class & 0.53 & 1.0 & 0.69 \\ Random & 0.52 & 0.50 & 0.51 \\ ConspiDetector & 0.88 & 0.91 & 0.89 \\ LIGHTGBM (credibility) & 0.81 & 0.76 & 0.78 \\ LIGHTGBM (initiative) & 0.96 & 0.97 & 0.96 \\ LIGHTGBM (adaptability) & 0.97 & 0.97 & 0.97 \\ LIGHTGBM (psycholinguistics) & 0.89 & 0.89 & 0.89 \\ LIGHTGBM (all) & 0.99 & 0.99 & 0.99 \\ \hline \hline \end{tabular} \end{table} Table 9: Performance of baselines, _ConspiDetector_ and LIGHTGBM on our (unbalanced) dataset. _all_ includes _credibility_, _initiative_, _adaptability_ and _psycholinguistic_ features.

Nevertheless, within the top 20 features, the _emotion_disgust_ trait (i.e., _disgust_, as the opposite of _trust_) interestingly emerges, which conveys the conspirators' tendency to exhibit less assertiveness and sociability, as well as suspiciousness and longing for building knowledge [71].

## 5 Conclusions

Online conspiracy detection is a challenging task that requires a combination of robust data and tools. In this paper, we proposed a comprehensive methodology for collecting a rigorous Twitter dataset to study conspiracy theorists' characteristics and compare them to randomly selected accounts that exhibit similar characteristics. In particular, we leveraged the "like" behavior on social media platforms as it can reveal affiliation with conspiracy theories better than other behaviors (e.g., retweets, relying on the use of URLs, etc.). In fact, users who frequently like posts from a specific account are likely to support the themes promoted by that account, especially if they also follow the account. This endorsement of themes is stronger when users both like posts and follow conspiracy-related accounts, making them more prone to believing in conspiracy theories. For the control group, we collected users representing the broader social media population whose activity matches the topics discussed and account creation time of conspiracy users. In this way, we created a more balanced comparison between conspiracy users and regular users while maintaining the integrity of both groups. In addition, we presented a robust approach to detect online conspirative users based on their behavioral characteristics, linguistic features, temporal
patterns, and other features proposed in the literature for identifying bots and trolls. The goal of this classification task is twofold. On one hand, we showed that using a standard machine learning classifier on linguistic features and temporal patterns outperforms several baselines and a model proposed in the state-of-the-art as measured by accuracy and F1 score. On the other hand, we employ these findings to profile the two user groups and highlight features that differentially characterize conspiracy-oriented users on social media. Results show that the most discriminating features are the linguistic characteristics. The development of methods to detect conspiracy users based on linguistic traits and patterns, rather than the content of their claims, can be pivotal in identifying and monitoring the proliferation of conspiracy beliefs across diverse platforms and domains. ### Limitations and feature work Our approach presents some limitations that need to be addressed in future work. One primary limitation pertains to the usage of Media Bias Fact Check (MBFC). While widely employed to assess the conspiratorial inclination of news sources, MBFC's categorization process is subjective and potentially influenced by evaluators' personal biases. The criteria used by MBFC to evaluate conspiracy may not be universally agreed upon and can vary from person to person. Additionally, the methodology and transparency of MBFC's fact-checking process may not be fully disclosed, making it difficult to assess the accuracy and reliability of their assessments. Moreover, MBFC's database might not cover all news sources, especially smaller or less-known outlets, resulting in potential gaps in general coverage. It is essential to approach MBFC's ratings with a critical mindset and to consider multiple sources and perspectives when dealing with conspiracy. Another limitation is our singular focus on Twitter, potentially overlooking the multifaceted nature of online conspiracy discourse across various media. Recent shifts in the Twitter policies further challenge the replicability of results. To better understand and tackle online conspiracy activities, future studies should encompass data from multiple platforms. A further constraint stems from our data collection process, relying on features derived from users' timelines. In fact, this approach can be computationally intensive and susceptible to data availability issues. For real-time detection, an efficient and robust alternative could involve simpler features based on users' current activities and interactions, that do not depend on the users' history. Future research might also harness broader social network information, such as followers and followees, to gain additional insights about user credibility and influence. Additional avenues for future research may encompass investigating the political orientations of users engaging with conspiracies and identifying the propaganda strategies and rhetoric used by conspiracy theorists to persuade their audience. In conclusion, our work contributes to the growing field of online misinformation and disinformation research, presenting a valuable dataset and methodology for understanding and combating the propagation of harmful and false beliefs. 
## Acknowledgment We thank for the support by project SoBigData.it, which receives funding from European Union - NextGenerationEU - National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) - Project: "SoBigData.it - Strengthening the Italian RI for Social Mining and Big Data Analytics" - Prot. IR0000013 - Avviso n. 3264 del 28/12/2021; This work is also supported by the European Union - Horizon 2020 Program under the scheme "INFRAIA-01-2018-2019 - Integrating Activities for Advanced Communities", Grant Agreement n. 871042, "SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics"; by the European Union under the scheme HORIZON-INFRA-2021-DEV-02-01 - Preparatory phase of new ESFRI research infrastructure projects, Grant Agreement n.101079043, "SoBigData RI PPP: SoBigData RI Preparatory Phase Project"; by project SERICS (PE00000014) under the NRRP MUR program funded by the EU - NGEU. ## Author contributions MG and ST: Conceptualization, Visualization, Methodology, Investigation, Writing. MT: Conceptualization and Supervision.
2303.08511
Mapping Urban Population Growth from Sentinel-2 MSI and Census Data Using Deep Learning: A Case Study in Kigali, Rwanda
To better understand current trends of urban population growth in Sub-Saharan Africa, high-quality spatiotemporal population estimates are necessary. While the joint use of remote sensing and deep learning has achieved promising results for population distribution estimation, most of the current work focuses on fine-scale spatial predictions derived from single date census, thereby neglecting temporal analyses. In this work, we focus on evaluating how deep learning change detection techniques can unravel temporal population dynamics at short intervals. Since Post-Classification Comparison (PCC) methods for change detection are known to propagate the error of the individual maps, we propose an end-to-end population growth mapping method. Specifically, a ResNet encoder, pretrained on a population mapping task with Sentinel-2 MSI data, was incorporated into a Siamese network. The Siamese network was trained at the census level to accurately predict population change. The effectiveness of the proposed method is demonstrated in Kigali, Rwanda, for the time period 2016-2020, using bi-temporal Sentinel-2 data. Compared to PCC, the Siamese network greatly reduced errors in population change predictions at the census level. These results show promise for future remote sensing-based population growth mapping endeavors.
Sebastian Hafner, Stefanos Georganos, Theodomir Mugiraneza, Yifang Ban
2023-03-15T10:39:31Z
http://arxiv.org/abs/2303.08511v1
Mapping Urban Population Growth from Sentinel-2 MSI and Census Data Using Deep Learning: A Case Study in Kigali, Rwanda ###### Abstract To better understand current trends of urban population growth in Sub-Saharan Africa, high-quality spatiotemporal population estimates are necessary. While the joint use of remote sensing and deep learning has achieved promising results for population distribution estimation, most of the current work focuses on fine-scale spatial predictions derived from single date census, thereby neglecting temporal analyses. In this work, we focus on evaluating how deep learning change detection techniques can unravel temporal population dynamics at short intervals. Since Post-Classification Comparison (PCC) methods for change detection are known to propagate the error of the individual maps, we propose an end-to-end population growth mapping method. Specifically, a ResNet encoder, pretrained on a population mapping task with Sentinel-2 MSI data, was incorporated into a Siamese network. The Siamese network was trained at the census level to accurately predict population change. The effectiveness of the proposed method is demonstrated in Kigali, Rwanda, for the time period 2016-2020, using bi-temporal Sentinel-2 data. Compared to PCC, the Siamese network greatly reduced errors in population change predictions at the census level. These results show promise for future remote sensing-based population growth mapping endeavors. Code is available on GitHub1. Footnote 1: [https://github.com/SebastianHafner/PopulationGrowthMapping,Kigali.git](https://github.com/SebastianHafner/PopulationGrowthMapping,Kigali.git) Population mapping, Sub-Saharan Africa, Siamese network ## I Introduction The projections in the World Population Prospects 2022 report suggest that the global population could reach 9.7 billion in 2050 [1]. At the forefront of the anticipated population growth are countries of Sub-Saharan Africa. In light of this, frequent updates of existing population data in that region are crucial, particularly considering that knowledge of population distribution is a necessary requisite for a wide range of applications. For example, population distribution maps provide vital information for vaccination campaigns, disaster response deployment, and urban mobility and transport planning. In recent years, census-independent (i.e., bottom-up) population mapping using deep learning and satellite imagery has shown promise in providing accurate population estimates. For example, Doupe _et al._[2] mapped population density at 8 km spatial resolution in Tanzania and Kenya using a Convolutional Neural Network (CNN) based on the VGG architecture and Landsat 7 imagery. Landsat 7 imagery and the VGG-net were also used by Robinson _et al._[3] to predict population counts in the United States at 1 km spatial resolution. Authors in [4] proposed to fuse Landsat 8 optical data with Sentinel-1 radar data to predict population density at 4.5 km spatial resolution for rural villages in India and demonstrated that dual-branch fusion networks outperform uni-modal networks. Sentinel-2 (S2) MultiSpectral Instrument (MSI) imagery was used by Huang _et al._[5] to map population distribution at 1 km spatial resolution for the Atlanta, Georgia, and Dallas, Texas metropolitan areas in the United States of America. Recently, Neal _et al._[6] used WorldView-2 imagery for estimating population in two districts of Mozambique using representation learning. 
A ResNet was also used in [7] to map population in Sub-Saharan African cities with multisource satellite imagery from Pleiades and S2. Building footprints were further used to improve the geographical transferability of models. While deep learning-based population mapping from satellite imagery has gained traction in recent years [2, 3, 4, 5, 6, 7], little attention has been paid to population growth mapping with the exception of [8]. Using a ResNet and Landsat 5 imagery, Zhuang _et al._[8] performed population growth analysis in China for the 1985-2010 period by mapping population distribution at 1 km spatial resolution with a 5-year interval. However, analyzing population growth by comparative analysis of independently produced population maps, i.e., change detection by Post-Classification Comparison (PCC), is well-known to suffer from the error propagation of the individual population maps. To that end, we propose an end-to-end population growth mapping method to overcome the error propagation of PCC in uni-temporal population maps. This study is, up to the best of our knowledge, the first to map population growth in an end-to-end fashion from satellite imagery. ## II Study Area and Data Kigali, the capital city and economic hub of Rwanda, was selected as the study area. Kigali encompasses an area of approximately 730 km\({}^{2}\). In 2012, Kigali had a population of approximately 1.1 million and placed among the fastest-growing cities in Africa [9]. In recent years, rapid urbanization resulted in the conversion of major cropland areas into built-up areas in the urban fringe zones of Kigali, which increased ecosystem service demands and negatively affected the habitat for biodiversity service function [10]. S2 MSI imagery of Kigali for 2016 and 2020 was retrieved from Google Earth Engine [11]. Specifically, cloud-free composites were generated by collecting all S2 Level-1C (top-of-atmosphere) scenes acquired during the wet season of the respective year. Thereafter, cloudy pixels (i.e., cloud probability \(>\) 50 %) were masked for each scene, before the scenes were combined using median compositing. The resulting cloud-free composites for 2016 and 2020 are visualized in Figure 0(a) and Figure 0(b), respectively. Population census data at the level of designated census enumeration areas were acquired for Kigali for the years 2016 and 2020 (161 administrative polygons). These areas are corresponding to the smallest administrative entities in Rwanda called villages. The data consist of number of population (head counts) and were acquired from two institutions including Kigali city One Stop center and the Local Administrative Entities Development Agency. Using iterative merging, we aggregated the dataset into a smaller number of units, to reflect a more realistic scenario regarding data availability, but also to adapt to the needs of the experiment (i.e., 100 meter predictive spatial resolution). Finally, the census units were randomly split into a training, validation, and test set (60/20/20 split) (Figure 0(d)). ## III Methodology ### _Problem Setup_ We consider two S2 MSI images that cover the same geographical area (Kigali) but were acquired at two different times, \(t_{1}\) and \(t_{2}\). Furthermore, we consider the census units constituting the City of Kigali, where each census unit, \(U\), contains an accurate count of the population, \(Y\), for \(t_{1}\) and \(t_{2}\). 
The goal is to train a network that accurately predicts the population growth \(D\) (\(=Y^{t_{2}}-Y^{t_{1}}\)) for a census unit from the part of the S2 images \(I^{t_{1}}\) and \(I^{t_{2}}\) covering \(U\). However, each census unit has a unique non-rectangular shape and, therefore, cannot be used directly as network input. A common way to deal with this is to operate on a grid level by dividing the entire study area into patches that constitute the census areas [7]. Consequently, census units are composed of a varying number of patches (100 x 100 m). Therefore, the network input to predict the population growth for a census unit is, in practice, the collection of S2 patches, \(x^{t1}\) and \(x^{t2}\), constituting the census unit.

### _Proposed Method_

The proposed population growth mapping method consists of two stages: 1) an encoder model is pretrained by mapping population at the grid level, and 2) a Siamese network, incorporating the pretrained encoder, is trained at the census level to map population growth.

_Population Mapping at Grid Level:_ Our previous work demonstrated that an encoder based on the ResNet-18 architecture suffices to learn salient features from S2 MSI imagery for population mapping [7]. The same architecture is employed in this work (Figure 2). Specifically, the first layer of the ResNet-18 encoder is replaced with a 3 x 3 conv layer with 4 input channels to accommodate the 10 m S2 bands (Band 2, 3, 4, and 8) as input, while the remaining conv blocks constituting the encoder remain unchanged. The features extracted with the encoder are converted to a population prediction, \(p\), using a fully connected layer. Finally, the ReLU activation function is used to constrain \(p\) values to positive numbers. Hyper-parameters for training are tuned on the validation set using grid search with 3 learning rates (\(10^{-5}\), \(10^{-4}\), and \(10^{-3}\)) and 2 batch sizes (8, 16). AdamW is used as optimizer, and the training duration is set to 100 epochs with early stopping (patience 5) to prevent models from overfitting to the training set. As in [7], flips (horizontal and vertical) and rotations (\(k\cdot 90^{\circ}\), where \(k\in\{0,1,2,3\}\)) are applied as training data augmentations, and the Mean Square Error (MSE) loss (commonly known as L2 loss) is used as loss function. The L2 loss is defined as \(L2=(y-p)^{2}\), where the true and predicted population values are denoted by \(y\) and \(p\), respectively. An NVIDIA GeForce RTX 3090 graphics card is used for training.

Fig. 1: S2 MSI composites for (a) 2016 and (b) 2020, and (c) population labels at grid level. (d) shows the data set splits.

_Population Growth Mapping at Census Level:_ For population growth mapping, we incorporate the pretrained ResNet-18 encoder into a Siamese network (Figure 3). Siamese networks consist of two encoders with shared weights that are used to separately extract features from the inputs, before deriving the change information from the combined features. Due to their inherent suitability to detect differences, Siamese networks have also become a popular architecture for change detection in bi-temporal pairs of satellite images. In this work, the pretrained encoder is employed to extract features on population count from both images separately. The pair of bi-temporal features is then converted to a population growth prediction using a fully connected layer. No activation function is applied to the output of that layer to allow for negative growth predictions.
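The sketch below gives a minimal PyTorch rendering of the two-stage design described above: a modified ResNet-18 encoder with a non-negative population head for Stage 1, and a Siamese growth head on top of the frozen encoder for Stage 2. It is an illustration based on this description only; details such as the stride of the replaced first layer and the census-level loss are assumptions, and the released implementation may differ.

```python
# Illustrative PyTorch sketch of the two-stage design (assumptions noted in
# comments; not necessarily identical to the released implementation).
import torch
import torch.nn as nn
from torchvision.models import resnet18


class PopulationEncoder(nn.Module):
    """ResNet-18 encoder with a 3x3, 4-channel first conv for the 10 m S2 bands."""
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        # Stride and padding of the replaced layer are assumptions.
        backbone.conv1 = nn.Conv2d(4, 64, kernel_size=3, stride=1, padding=1, bias=False)
        backbone.fc = nn.Identity()  # expose the pooled 512-d feature vector
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x)


class GridPopulationModel(nn.Module):
    """Stage 1: per-patch population count, constrained to be non-negative."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        self.fc = nn.Linear(512, 1)

    def forward(self, x):
        return torch.relu(self.fc(self.encoder(x))).squeeze(-1)


class SiameseGrowthModel(nn.Module):
    """Stage 2: shared, frozen encoder on both dates; growth head without ReLU."""
    def __init__(self, pretrained_encoder):
        super().__init__()
        self.encoder = pretrained_encoder
        for p in self.encoder.parameters():  # only the fc layer is trained
            p.requires_grad = False
        self.fc = nn.Linear(2 * 512, 1)

    def forward(self, x_t1, x_t2):
        features = torch.cat([self.encoder(x_t1), self.encoder(x_t2)], dim=1)
        return self.fc(features).squeeze(-1)  # can be negative (population decline)


def census_loss(model, patches_t1, patches_t2, census_growth):
    """Weakly supervised census-level loss: compare the summed patch-wise growth
    predictions of one census unit to its census growth label (L2 loss assumed)."""
    predicted_growth = model(patches_t1, patches_t2).sum()
    return nn.functional.mse_loss(predicted_growth, census_growth)
```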
An important challenge of supervised population growth mapping is that bi-temporal population counts are required for the derivation of growth labels. While it is possible to accurately disaggregate a census to a grid, this requires auxiliary data such as land cover maps or building footprints. However, such data are often not available for both timestamps. Therefore, the Siamese network is trained at the census level by adapting the weakly supervised learning strategy proposed in [12]. Specifically, Metzger _et al._ [12] trained a population mapping model using census-level population counts as labels by comparing them to the aggregated patch-level model predictions for the corresponding census units. Likewise, we use the Siamese network to predict population growth separately for all patches of a census unit, before applying the loss to the sum of predicted growth, \(D\), using \(\Delta Y\) as label. The training setup (i.e., hyper-parameter tuning, early stopping, and data augmentations) is identical to that for population mapping. It should be noted, however, that the pretrained encoder is frozen during training, meaning that only the fully connected layer (\(f_{\text{fc}}\) in Figure 3) is trained.

### _Accuracy Metrics_

We make use of three metrics commonly employed in population studies [13], namely the Root Mean Squared Error (RMSE), the Mean Absolute Error (MAE), and the coefficient of determination (R\({}^{2}\)). RMSE and MAE are defined as follows:

\[\text{RMSE}=\sqrt{\frac{\sum_{i=1}^{n}(y_{i}-p_{i})^{2}}{n}},\ \text{MAE}=\frac{\sum_{i=1}^{n}|y_{i}-p_{i}|}{n}, \tag{1}\]

where \(y\) and \(p\) are true and predicted values, respectively, and \(n\) is the sample size. R\({}^{2}\) is defined as one minus the ratio of the residual sum of squares to the total sum of squares of the data.

## IV Results

Table I lists the quantitative population mapping results at the grid level for 2020 and at the census level for 2016 and 2020. All three accuracy metrics indicate that accurate population predictions were achieved at the grid level. However, the aggregated results at the census level provide a stronger validation since the census population counts are official data. While RMSE and MAE values are not comparable between the grid and census level, the R\({}^{2}\) values at the census level indicate good performance (above 0.70), although worse than the performance achieved at the grid level (0.84). It is also apparent that the obtained accuracy values for 2016 and 2020 are relatively similar. Consequently, applying the model to new data from a different year had little impact on model performance. Figure 4 quantitatively compares the population growth predictions of (a) PCC with (b) the proposed end-to-end method. PCC performed poorly, resulting in very high errors (RMSE = 1,471 and MAE = 1,082). In contrast, the proposed method achieved satisfactory results with an RMSE of 202 and an MAE of 165. In terms of R\({}^{2}\) values, the results are more similar, but better performance was also achieved by the proposed method (0.55 vs. 0.67). However, it is also apparent that the proposed method generally underestimates population growth. The qualitative population growth mapping predictions of the proposed method are visualized in Figure 5b, next to the ground truth in Figure 5a.
Although the population growth was generally underestimated, the proposed method picked up on the growth that occurred on the outskirts of Kigali (e.g., in the northeast and in the central south). However, the model failed to detect population growth in the small census units of central Kigali, for which it predicted slightly negative growth values.

\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline \multirow{2}{*}{Level} & \multicolumn{2}{c}{RMSE \(\downarrow\)} & \multicolumn{2}{c}{MAE \(\downarrow\)} & \multicolumn{2}{c}{R\({}^{2}\) \(\uparrow\)} \\ & 2016 & 2020 & 2016 & 2020 & 2016 & 2020 \\ \cline{2-7} Grid & - & 19 & - & 10 & - & 0.84 \\ Census & 3,199 & 3,253 & 2,368 & 2,196 & 0.72 & 0.73 \\ \hline \hline \end{tabular}
\end{table}
TABLE I: Quantitative population mapping results at the grid and census level for the test set.

Fig. 2: Diagram of the ResNet-18 model used for grid-level population mapping.

Fig. 3: Diagram of the proposed population growth mapping method consisting of two pretrained ResNet-18 encoders, \(f_{\text{en}}\), with shared weights and a fully connected layer, \(f_{\text{fc}}\). The network is trained at the census level with frozen encoders.

## V Discussion and Limitations

We find the proposed method to be effective for population growth mapping from S2 MSI imagery, especially compared to PCC. Our findings also emphasize that salient features about population count can be learned from S2 imagery using a ResNet model. These results are in line with [7]. Our work is also subject to several limitations. First of all, bi-temporal census data are required to train the Siamese network. However, census data, let alone bi-temporal census data, are difficult to obtain in Sub-Saharan Africa, or are often not available at all [13]. Moreover, the S2 mission was launched less than 8 years ago, while censuses are typically conducted every 10 years. Consequently, bi-temporal census data for time periods starting after 2015 are largely unavailable. Another limitation of this work is that population predictions are based on the presence of built-up areas, but the land use of these areas may not be residential [3]. To overcome this, Neal _et al._ [6] suggest including additional data modalities such as night-time light data. Our qualitative results in central Kigali (Figure 5b) also suggest that densification of urban areas, and the subsequent increase in population, may be challenging to accurately predict. Finally, further work is needed to assess if the proposed method can accurately detect negative population growth as a result of, for example, slum evictions.

## VI Conclusion

In this paper, a population growth mapping method based on a Siamese network is proposed and evaluated in Kigali, Rwanda, for the time period 2016-2020. Using S2 MSI data as input, the proposed method achieved satisfactory population growth mapping results at the census level (RMSE = 202, MAE = 165, R\({}^{2}\) = 0.67), and greatly outperformed PCC in terms of RMSE (-1,269) and MAE (-917). Our future work will extend the study area to other Sub-Saharan African cities. Furthermore, we will investigate semi-supervised learning for Siamese network training (e.g., [14]) to reduce the dependence on bi-temporal census data.
2301.09076
The Demailly systems with the Vortex ansatz
For an arbitrary-rank vector bundle over a projective manifold, J.-P. Demailly proposed several systems of equations of Hermitian-Yang-Mills type for the curvature tensor to settle a conjecture of Griffiths on the equivalence of Hartshorne ampleness and Griffiths positivity. In this article, we have studied two proposed systems and proved that these equations have smooth solutions for the Vortex bundle using the continuity method.
Arindam Mandal
2023-01-22T08:22:57Z
http://arxiv.org/abs/2301.09076v1
# The Demailly systems with the vortex ansatz ###### Abstract. For an arbitrary-rank vector bundle over a projective manifold, J.-P. Demailly proposed several systems of equations of Hermitian-Yang-Mills type for the curvature tensor to settle a conjecture of Griffiths on the equivalence of Hartshorne ampleness and Griffiths positivity. In this article, we have studied two proposed systems and proved that these equations have smooth solutions for the Vortex bundle using the continuity method. Keywords: Holomorphic vector bundle, Ample vector bundle, Hermitian metric, curvature tensor, Griffiths positivity, Nakano positivity, dual Nakano positivity, elliptic operator ## 1. Introduction Let \(X\) be an \(n\)-dimensional projective manifold. A rank-\(r\) holomorphic vector bundle \(E\) over \(X\) is said to be ample in the Hartshorne sense [1] if and only if the line bundle \(\mathcal{O}_{\mathbb{P}(E)}(1)\) is ample over \(\mathbb{P}(E)\). The Chern curvature tensor \(\Theta_{E,h}\) of a Hermitian metric \(h\) is said to be Griffiths positive if \(\langle\sqrt{-1}\Theta_{E,h}(\zeta,\bar{\zeta}).v,v\rangle_{h}\) is positive for all decomposable nonzero elements \(\zeta\otimes v\in T_{X}\otimes E\), and Nakano positive if the bilinear form on \(T_{X}\otimes E\) defined by \(\sqrt{-1}\Theta_{E,h}\) is positive. Nakano positivity and dual Nakano positivity (the bundle \((E^{*},h^{*})\) is Nakano negative) imply Griffiths positivity, which is equivalent to dual Griffiths positivity (the bundle \((E^{*},h^{*})\) is Griffiths negative). Griffiths positivity implies ampleness. B. Berndtsson [1] has proved that for every positive integer \(m\), \(S^{m}E\otimes detE\) is Nakano positive if \(E\) is ample. The tangent bundle \(T\mathbb{P}^{n}\) of the complex projective space \(\mathbb{P}^{n}\) is ample but not Nakano positive, and ampleness does not imply Nakano positivity (see [1] for details). A conjecture of Griffiths [1] asks if Hartshorne ampleness implies Griffiths positivity. This conjecture is still open in its full generality. However, the conjecture holds for vector bundles on smooth curves, i.e., for \(n=1\), see [1] and [21] for more details. Much work has been done in this direction (see [10, 11, 12] and the references therein). J. P. Demailly [1] introduced systems of PDE of Hermitian-Yang-Mills type for the curvature tensor to prove the equivalence between ampleness and Griffiths positivity. Let \((E,H_{0})\) be smooth Hermitian holomorphic vector bundle of rank \(r\) over \(X\) such that \(E\) is ample and \(\omega_{0}=\sqrt{-1}\Theta_{detE,detH_{0}}>0\). Then one of Demailly's systems for time-dependent metrics \(h_{t},t\in[0,1]\) is as follows \[\omega_{0}^{-n}det_{T_{X}\otimes E^{*}}\Big{(}\sqrt{-1}\Theta_{E,h _{t}}+(1-t)\alpha\omega_{0}\otimes Id_{E^{*}}\Big{)}^{\frac{1}{r}}=\Big{(} \frac{detH_{0}}{deth_{t}}\Big{)}^{\lambda}a_{0}, \tag{1}\] \[\omega_{0}^{-n}\Big{(}\omega_{0}^{n-1}\wedge\sqrt{-1}(\Theta_{E,h _{t}}-\frac{1}{r}\Theta_{detE,deth_{t}}\otimes Id_{E})\Big{)}\] \[\qquad\qquad=-\varepsilon\Big{(}\frac{detH_{0}}{deth_{t}}\Big{)} ^{\mu}ln\Bigg{(}\frac{h_{t}H_{0}^{-1}}{det(h_{t}H_{0}^{-1})^{\frac{1}{r}}} \Bigg{)},\] where \(\lambda\geq 0,\mu\in\mathbb{R}\), \(\varepsilon>0\) and \(a_{0}=\omega_{0}^{-n}det_{T_{X}\otimes E^{*}}\Big{(}\sqrt{-1}\Theta_{E,h_{0} }+\alpha\omega_{0}\otimes Id_{E^{*}}\Big{)}^{\frac{1}{r}}>0\). 
The metric \(h_{0}\) is a solution of the second equation (cushioned Hermite-Einstein equation) of system (1) at \(t=0\) with the condition \(deth_{0}=detH_{0}\). With the same notations as above, one more variant of the above system is \[\omega_{0}^{-n}det_{T_{X}\otimes E^{*}}\Big{(}\sqrt{-1}\Theta_{E,h_{t}}+(1-t)\alpha\omega_{0}\otimes Id_{E^{*}}\Big{)}^{\frac{1}{r}}=\Big{(}\frac{detH_{0}}{deth_{t}}\Big{)}^{\lambda}a_{0}, \tag{2}\] \[\omega_{0}^{-n}\Big{(}\omega_{t}^{n-1}\wedge\sqrt{-1}(\Theta_{E,h_{t}}-\frac{1}{r}\Theta_{detE,deth_{t}}\otimes Id_{E})\Big{)}\] \[\qquad\qquad=-\varepsilon\Big{(}\frac{detH_{0}}{deth_{t}}\Big{)}^{\mu}ln\Bigg{(}\frac{h_{t}H_{0}^{-1}}{det(h_{t}H_{0}^{-1})^{\frac{1}{r}}}\Bigg{)},\] where \(\omega_{t}=\frac{1}{1+r\alpha}(\sqrt{-1}\Theta_{detE,deth_{t}}+r(1-t)\alpha\omega_{0})\) and the metric \(h_{0}\) is a solution of the second equation of the system (2) satisfying \(deth_{0}=detH_{0}\). Even though for both of the above systems the existence of the metric \(h_{0}\) is clear from [10], for our purpose, in the case of the Vortex bundle, we shall discuss the existence separately (subsections 2.1 and 3.1). If one can prove the existence of \(h_{t}\) for all \(t\in[0,1]\) for one of the above systems, then \(h_{t}\) will be dual Nakano positive for \(t=1\). Thus a result stronger than the Griffiths conjecture would have been proven, which would certainly settle the Griffiths conjecture. It turns out that there exist ample bundles which are not dual Nakano positive; see [1] for a detailed example. Therefore one should not expect the existence of solutions of Demailly's systems for all \(t\in[0,1]\) in general. V. P. Pingali [19] has studied the system (1) for the direct sum of ample line bundles on Riemann surfaces using Leray-Schauder degree theory. In this article, using the continuity method, we have studied systems (1) and (2) on the Vortex bundle. We follow the construction of Vortex bundles as in [1] and [10]. Let \(\Sigma\) be a compact Riemann surface with a background Hermitian metric \(k\) on an ample holomorphic line bundle \(L\) such that \(\omega_{\Sigma}=\sqrt{-1}\Theta_{k}\) is the Kähler metric, where \(\Theta_{k}\) is the curvature of the metric \(k\). Consider \(\mathbb{CP}^{1}\) with the metric \(h_{FS}\) on the line bundle \(\mathcal{O}(1)\), whose curvature is the Fubini-Study metric \(\omega_{FS}=\frac{\sqrt{-1}dz\wedge d\bar{z}}{(1+|z|^{2})^{2}}\). Define the rank-2 vector bundle \(E\) on the projective manifold \(X=\Sigma\times\mathbb{CP}^{1}\) by \(E=\pi_{1}^{*}((r_{1}+1)L)\otimes\pi_{2}^{*}(r_{2}\mathcal{O}(2))\oplus\pi_{1}^{*}(r_{1}L)\otimes\pi_{2}^{*}((r_{2}+1)\mathcal{O}(2))\), where \(\pi_{1}:X\rightarrow\Sigma\) and \(\pi_{2}:X\rightarrow\mathbb{CP}^{1}\) are the projection maps and \(r_{1},r_{2}\) are positive integers. Let \(\phi\in H^{0}(\Sigma,L)\) be a global holomorphic section. We see that multiplication of the metric \(k\) by a positive constant does not change the curvature \(\Theta_{k}\). So we assume \(k\) has been rescaled so that \(|\phi|_{k}^{2}\leq\frac{1}{2}\). Define a holomorphic structure on \(E\) by the second fundamental form \(\beta=\pi_{1}^{*}\phi\otimes\pi_{2}^{*}\Big{(}\frac{\sqrt{8\pi}dz}{(1+|z|^{2})^{2}}\otimes d\bar{z}\Big{)}\). Let \(E\) be equipped with the metric \(h_{t}=\pi_{1}^{*}(e^{-(f_{t}+\psi_{t})}k^{r_{1}+1})\otimes\pi_{2}^{*}(h_{FS}^{2r_{2}})\oplus\pi_{1}^{*}(e^{-f_{t}}k^{r_{1}})\otimes\pi_{2}^{*}(h_{FS}^{2r_{2}+2})\), where \(f_{t}\) and \(\psi_{t}\) are smooth functions on \(\Sigma\).
Suppose \(\tilde{h}_{t}=\pi_{1}^{*}(e^{-(f_{t}+\psi_{t})}k^{r_{1}+1})\otimes\pi_{2}^{*}(h _{FS}^{2r_{2}})\), \(\tilde{g}_{t}=\pi_{1}^{*}(e^{-f_{t}}k^{r_{1}})\otimes\pi_{2}^{*}(h_{FS}^{2r_{ 2}+2})\) and \(g_{t}=e^{-\psi_{t}}k\). Then the Chern connection of \((E,h_{t})\) for the holomorphic structure given by \(\beta\) is given by the following connection matrix \[A_{h_{t}}=\begin{bmatrix}A_{\tilde{h}_{t}}&\beta\\ -\beta^{\dagger g_{t}}&A_{\tilde{g}_{t}}\end{bmatrix}.\] Its curvature matrix is \[\Theta_{h_{t}}=\begin{bmatrix}\Theta_{\tilde{h}_{t}}-\beta\wedge\beta^{ \dagger g_{t}}&\nabla^{1,0}\beta\\ -\nabla^{0,1}\beta^{\dagger g_{t}}&\Theta_{\tilde{g}_{t}}-\beta^{\dagger g_{t} }\wedge\beta\end{bmatrix}.\] Let \(H_{0}=\pi_{1}^{*}(k^{r_{1}+1})\otimes\pi_{2}^{*}(h_{FS}^{2r_{2}})\oplus\pi_{1} ^{*}(k^{r_{1}})\otimes\pi_{2}^{*}(h_{FS}^{2r_{2}+2})\) be the background metric on the Vortex bundle \(E\). Then \(\omega_{0}=(2r_{1}+1)\omega_{\Sigma}+(4r_{2}+2)\omega_{FS}\). Choosing \(\lambda=0,\mu=0\), and \(\varepsilon=1\), the system (1) for the Vortex bundle will be the following decoupled system of equations. \[\begin{split}&\Big{\{}\Big{(}\Delta f_{t}+\Delta\psi_{t}+(r_{1}+1)+ \alpha(1-t)(2r_{1}+1)\Big{)}\Big{(}2r_{2}+|\phi|^{2}_{g_{t}}+\\ &\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}\Delta f_{t}+r_{1}+\alpha(1- t)(2r_{1}+1)\Big{)}\\ &\Big{(}(2r_{2}+2)-|\phi|^{2}_{g_{t}}+\alpha(1-t)(4r_{2}+2) \Big{)}\Big{\}}+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger _{g_{t}}}}{\omega_{\Sigma}}\\ &\Big{(}2r_{2}+|\phi|^{2}_{g_{t}}+\alpha(1-t)(4r_{2}+2)\Big{)} \Big{(}\Delta f_{t}+r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)}\\ =&\Big{\{}\Big{(}\Delta f_{0}+\Delta\psi_{0}+(r_{1}+ 1)+\alpha(2r_{1}+1)\Big{)}\Big{(}2r_{2}+|\phi|^{2}_{g_{0}}+\alpha(4r_{2}+2) \Big{)}\\ &\Big{(}(2r_{2}+2)-|\phi|^{2}_{g_{0}}+\alpha(4r_{2}+2)\Big{)} \Big{(}\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)\Big{)}\Big{\}}\\ &+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger _{g_{0}}}}{\omega_{\Sigma}}\Big{(}2r_{2}+|\phi|^{2}_{g_{0}}+\alpha(4r_{2}+2) \Big{)}\\ &\Big{(}\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)\Big{)},\end{split} \tag{3}\] \[\begin{split}&(2r_{1}+1)\big{(}|\phi|^{2}_{g_{t}}-1\big{)}+ \big{(}\Delta\psi_{t}+1\big{)}(2r_{2}+1)=(2r_{1}+1)(4r_{2}+2)\psi_{t},\end{split} \tag{4}\] where \(h_{0}=\pi_{1}^{*}(e^{-(f_{0}+\psi_{0})}k^{r_{1}+1})\otimes\pi_{2}^{*}(h_{FS}^{ 2r_{2}})\oplus\pi_{1}^{*}(e^{-f_{0}}k^{r_{1}})\otimes\pi_{2}^{*}(h_{FS}^{2r_{2 }+2})\) is the solution of the equation (4) at \(t=0\), satisfying \(deth_{0}=detH_{0}\). The existence of such \(h_{0}\) is discussed in the subsection 2.1 and \(\alpha>0\) is a large enough constant so that \(\sqrt{-1}\Theta_{h_{0}}+\alpha\omega_{0}\otimes Id_{E^{*}}>0\) in the sense of Nakano and \(\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)>0\). We now state one of our results. **Theorem 1.1**.: _The system defined by equations (3) and (4) has smooth solution \((f_{t},\psi_{t})\) such that \(\Delta f_{t}+r_{1}+\alpha(1-t)(2r_{1}+1)>0\) for all \(t\in[0,1]\)._ If we choose \(\lambda=0\) and \(\mu=0\), then the system (2) for the Vortex bundle will be the following coupled system of equations. 
\[\begin{split}&\Big{\{}\Big{(}\Delta f_{t}+\Delta\psi_{t}+(r_{1}+1)+ \alpha(1-t)(2r_{1}+1)\Big{)}\Big{(}2r_{2}+|\phi|_{g_{t}}^{2}+\\ &\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}\Delta f_{t}+r_{1}+\alpha(1- t)(2r_{1}+1)\Big{)}\\ &\Big{(}(2r_{2}+2)-|\phi|_{g_{t}}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)} \Big{\}}\\ &+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger g _{t}}}{\omega_{\Sigma}}\Big{(}2r_{2}+|\phi|_{g_{t}}^{2}+\alpha(1-t)(4r_{2}+2) \Big{)}\\ &\Big{(}\Delta f_{t}+r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)}\\ =&\Big{\{}\Big{(}\Delta f_{0}+\Delta\psi_{0}+(r_{1}+ 1)+\alpha(2r_{1}+1)\Big{)}\Big{(}2r_{2}+|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2) \Big{)}\\ &\Big{(}(2r_{2}+2)-|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2)\Big{)} \Big{(}\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)\Big{)}\Big{\}}\\ &+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger g _{0}}}{\omega_{\Sigma}}\Big{(}2r_{2}+|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2)\Big{)} \\ &\Big{(}\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)\Big{)},\end{split} \tag{5}\] \[\begin{split}&\Big{\{}2\Big{(}2\Delta f_{t}+\Delta\psi_{t}+(1+2 \alpha(1-t))(2r_{1}+1)\Big{)}\Big{(}|\phi|_{g_{t}}^{2}-1\Big{)}\\ &+\big{(}\Delta\psi_{t}+1\big{)}\big{(}1+2\alpha(1-t)\big{)}(4r_{ 2}+2)\Big{\}}\\ &\qquad\qquad=2(2r_{1}+1)(4r_{2}+2)(2\alpha+1)\varepsilon\psi_{t },\end{split} \tag{6}\] where \(h_{0}=\pi_{1}^{*}(e^{-(f_{0}+\psi_{0})}k^{r_{1}+1})\otimes\pi_{2}^{*}(h_{FS}^{ 2r_{2}})\oplus\pi_{1}^{*}(e^{-f_{0}}k^{r_{1}})\otimes\pi_{2}^{*}(h_{FS}^{2r_{2 }+2})\) is the solution of the equation (6) at \(t=0\), satisfying \(deth_{0}=detH_{0}\). The existence of such \(h_{0}\) is discussed in subsection 3.1 and \(\alpha>0\) is a large enough constant so that \(\sqrt{-1}\Theta_{h_{0}}+\alpha\omega_{0}\otimes Id_{E^{*}}>0\) in the sense of Nakano and \(\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)>0\). Finally, we have the following result. **Theorem 1.2**.: _For large enough \(\varepsilon>0\), the system defined by equations (5) and (6) has smooth solution \((f_{t},\psi_{t})\) such that \(\Delta f_{t}+r_{1}+\alpha(1-t)(2r_{1}+1)>0\) for all \(t\in[0,1]\)._ Demailly's original approach involved the method of continuity. However, proving openness for Demailly's systems is the most challenging part because the required positivity properties for openness may not be preserved along the continuity path. Even in the case of a direct sum of ample line bundles on a Riemann surface, it appears hard to prove, and therefore the Leray-Schauder degree method was used in [10]. The main point of this article is to provide a proof-of-concept for Demailly's approaches. We hope that the techniques used for the vortex bundle generalize to more complicated situations. We briefly describe the strategy of the proofs. System (1) is decoupled and hence is relatively easier to handle (Section 2). On the other hand, unlike System (1), System (2) is truly a coupled system. To demonstrate openness, the key point is to prove the lower bound for \(\Delta\psi_{t}\), independent of \(t\) and \(\varepsilon\) so that \(\Delta\psi_{t}+1+2(2r_{1}+1)(4r_{2}+2)(2\alpha+1)\varepsilon\) can be made positive for large \(\varepsilon\). However, as one will see, to get such estimates, it is crucial to observe that the lower bound of \(\Delta\psi_{0}\) itself is independent of \(\varepsilon\). These calculations are rather delicate and carried out in Section 3. ## 2. Proof of Theorem 1.1 For the remainder of the paper, we drop the parameter \(t\) for notational convenience. 
We denote constants by \(C\) that may vary from line to line and are independent of \(t\) unless specified.

### Existence of solution at \(t=0\) for the first system

Recall that \(h_{0}=\pi_{1}^{*}(e^{-(f_{0}+\psi_{0})}k^{r_{1}+1})\otimes\pi_{2}^{*}(h_{FS}^{2r_{2}})\oplus\pi_{1}^{*}(e^{-f_{0}}k^{r_{1}})\otimes\pi_{2}^{*}(h_{FS}^{2r_{2}+2})\) is the solution of the equation (4) at \(t=0\), and \(deth_{0}=detH_{0}\). Therefore, \(2f_{0}+\psi_{0}=0\) and \(\psi_{0}\) satisfies \((2r_{1}+1)\big{(}|\phi|_{g_{0}}^{2}-1\big{)}+\big{(}\Delta\psi_{0}+1\big{)}(2r_{2}+1)=(2r_{1}+1)(4r_{2}+2)\psi_{0}\). We shall show that \(\psi_{0}\) exists by the method of continuity. Now define \(L_{s}(\psi_{0})=\Delta\psi_{0}+1-s(1-|\phi|_{g_{0}}^{2})\frac{2r_{1}+1}{2r_{2}+1}-2(2r_{1}+1)\psi_{0}\), where \(s\in[0,1]\). Let \(S:=\{s\in[0,1]|L_{s}=0\) has a smooth solution at \(s\}\). Clearly \(\psi_{0}=\frac{1}{2(2r_{1}+1)}\) is a solution of \(L_{0}=0\). Thus \(0\in S\). Now \(DL_{s}(\psi_{0})[\delta\psi]=\Delta\delta\psi-s|\phi|_{g_{0}}^{2}\frac{2r_{1}+1}{2r_{2}+1}\delta\psi-2(2r_{1}+1)\delta\psi\). By the maximum principle we get \(Ker(DL_{s})=\{0\}\). Since \(DL_{s}\) is formally self-adjoint, we get that \(DL_{s}\) is an isomorphism. Hence, by the implicit function theorem for Banach manifolds, we get that \(S\) is open. Now we must prove that \(S\) is closed. Let us first prove an a priori bound on the term \(|\phi|_{g_{0}}^{2}\).

**Lemma 2.1**.: _\(|\phi|_{g_{0}}^{2}<1\)._

Proof.: We know \[\partial\bar{\partial}|\phi|_{g_{0}}^{2}=-\Theta_{g_{0}}|\phi|_{g_{0}}^{2}+\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger}_{g_{0}}.\] At a point \(p\) where \(|\phi|_{g_{0}}^{2}\) attains its maximum, \(\sqrt{-1}\partial\bar{\partial}|\phi|_{g_{0}}^{2}(p)\leq 0\). Thus we see that \(\sqrt{-1}\Theta_{g_{0}}(p)\geq 0\), which implies \(\big{(}1+\Delta\psi_{0}\big{)}(p)\geq 0\). If \(\psi_{0}(p)\geq 0\), we have \(|\phi|_{g_{0}}^{2}(p)=|\phi|_{k}^{2}(p)e^{-\psi_{0}(p)}\leq|\phi|_{k}^{2}(p)\leq\frac{1}{2}\). Otherwise \(\psi_{0}(p)<0\), and equation (4) implies \((2r_{1}+1)\big{(}|\phi|_{g_{0}}^{2}-1\big{)}<0\). Hence, we get \(|\phi|_{g_{0}}^{2}<1\).

Let \(s\in S\); then \(\Delta\psi_{0}+1=s(1-|\phi|_{g_{0}}^{2})\frac{2r_{1}+1}{2r_{2}+1}+2(2r_{1}+1)\psi_{0}\). Applying the maximum principle and Lemma 2.1, we get \(||\psi_{0}||_{C^{0}}<C\) and hence \(||\Delta\psi_{0}||_{C^{0}}<C\). Now by the Arzela-Ascoli theorem, we see that \(S\) is closed. This completes the proof of the existence of \(h_{0}\).

Proof of Theorem 1.1.: Since the system is decoupled, we shall first solve equation (4) for \(\psi\) using the continuity method. Then we shall solve equation (3) by the same method.
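Before setting up the continuity argument, it is convenient to record the elementary curvature identity behind the maximum-principle steps used in the proof of Lemma 2.1 above and again in Section 3: with the sign convention \(\sqrt{-1}\partial\bar{\partial}\psi=(\Delta\psi)\,\omega_{\Sigma}\) for functions on \(\Sigma\), and \(g=e^{-\psi}k\) with \(\sqrt{-1}\Theta_{k}=\omega_{\Sigma}\), one has

\[\sqrt{-1}\Theta_{g}=\sqrt{-1}\Theta_{k}+\sqrt{-1}\partial\bar{\partial}\psi=(1+\Delta\psi)\,\omega_{\Sigma},\]

so that \(\sqrt{-1}\Theta_{g}\geq 0\) at a point exactly when \(1+\Delta\psi\geq 0\) there.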
Let \[I_{1}^{\prime}:=\Big{\{}t\in[0,1]\ \Big{|}\ \text{equation (4) has a smooth solution }\psi_{t}\text{ at }t\Big{\}}.\] By subsection 2.1, a solution \(\psi_{0}\) exists at \(t=0\), so \(0\in I_{1}^{\prime}\) and \(I_{1}^{\prime}\) is non-empty. We must show that \(I_{1}^{\prime}\) is both open and closed.
### Openness of \(I^{\prime}_{1}\)

Let us define the map \(T_{2}:C^{2,\beta}\times[0,1]\to C^{0,\beta}\) by \(T_{2}\Big{(}\psi,t\Big{)}=(2r_{1}+1)\big{(}|\phi|_{g}^{2}-1\big{)}+\big{(}\Delta\psi+1\big{)}(2r_{2}+1)-(2r_{1}+1)(4r_{2}+2)\psi\). Its linearization is \[\begin{split}& DT_{2}(\psi,t)[\delta\psi]\\ &=(2r_{2}+1)\Delta\delta\psi-(2r_{1}+1)|\phi|_{g}^{2}\delta\psi-(2r_{1}+1)(4r_{2}+2)\delta\psi.\end{split} \tag{9}\] If \(\delta\psi\in Ker(DT_{2})\), then \(DT_{2}(\psi,t)[\delta\psi]=0\). Now the maximum principle implies that \(\delta\psi=0\). Since \(DT_{2}\) is formally self-adjoint and \(Ker(DT_{2})=0\), it is an isomorphism. Hence, by the implicit function theorem for Banach manifolds, \(I^{\prime}_{1}\) is open. From Lemma 2.3, Lemma 2.4, and by bootstrapping, we conclude that the solutions are smooth. Therefore, we have uniform estimates on the solutions of equation (4) and their derivatives of all orders. Now we can solve equation (3) for the variable \(f\). As we mentioned earlier, we shall do so by the continuity method. We let \[\begin{split} I^{\prime\prime}_{1}:=\Big{\{}t\in[0,1]\ \Big{|}\ \text{equation (3) has a smooth solution $f_{t}$ at $t$ and}\\ \Delta f_{t}+r_{1}+\alpha(1-t)(2r_{1}+1)>0\ \Big{\}}.\end{split}\] At \(t=0\), \(f_{0}\) is a solution of equation (3) and \(\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)>0\), so \(0\in I^{\prime\prime}_{1}\). Hence \(I^{\prime\prime}_{1}\) is non-empty. We must show that \(I^{\prime\prime}_{1}\) is both open and closed.

### Closedness of \(I^{\prime\prime}_{1}\)

Let us prove the closedness of \(I^{\prime\prime}_{1}\) by proving estimates for \(f\) and its derivatives.

**Lemma 2.5**.: _There exists a constant \(C\) such that whenever \(t\in I^{\prime\prime}_{1}\), we have \(||\Delta f_{t}||_{C^{0}}\leq C\)._

Proof.: Since \(t\in I_{1}^{\prime\prime}\), we have \(\Delta f>-\big{(}r_{1}+\alpha(2r_{1}+1)\big{)}\).
Now from equation (3) we compute, \[\Big{\{}\Big{(}\Delta f+\Delta\psi+(r_{1}+1)+\alpha(1-t)(2r_{1}+1) \Big{)}\Big{(}2r_{2}+|\phi|_{g}^{2} \tag{10}\] \[+\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}\Delta f+r_{1}+\alpha(1-t)(2 r_{1}+1)\Big{)}\] \[\Big{(}(2r_{2}+2)-|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)} \Big{\}}\] \[\leq\Big{(}\Delta f_{0}+\Delta\psi_{0}+(r_{1}+1)+\alpha(2r_{1}+1) \Big{)}\Big{(}2r_{2}+|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2)\Big{)}\] \[\Big{(}(2r_{2}+2)-|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2)\Big{)}\Big{(} \Delta f_{0}+r_{1}+\alpha(2r_{1}+1)\Big{)}\] \[+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger g_ {0}}}{\omega_{\Sigma}}\Big{(}2r_{2}+|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2)\Big{)}\] \[\Big{(}\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)\Big{)}.\] Now appealing to Lemma 2.4, we get \(\Delta f\leq C\). Let \(G\) be the Green's function of the metric \(\omega_{\Sigma}\) such that \(-C\{1+|ln(d_{\omega_{\Sigma}}(x,y))|\}\leq G(x,y)\leq 0\). Then for any continuous function \(f\), we have the following Green representation formula: \[f(x)=\frac{\int_{\Sigma}f(y)\omega_{\Sigma}(y)}{\int_{\Sigma}\omega_{\Sigma}( y)}+\int_{\Sigma}G(x,y)\Delta f(y)\omega_{\Sigma}(y) \tag{11}\] Using the formula (11) and Lemma 2.5 we have the following: **Lemma 2.6**.: _If \(t\in I_{1}^{\prime\prime}\) then \(||f_{t}||_{C^{0}}\leq C\), for some positive constant \(C\)._ Let \(t_{n}\in I_{1}^{\prime\prime}\) be such that \(t_{n}\to t\). To prove \(I_{1}^{\prime\prime}\) is closed, we must show \(t\in I_{1}^{\prime\prime}\). As \(t_{n}\in I_{1}^{\prime\prime}\), we have \(\Delta f_{t_{n}}+r_{1}+\alpha(1-t)(2r_{1}+1)>0\). Arzela-Ascoli theorem, together with the above estimates, we get a subsequence of \(t_{n}\) again call the subsequence by \(t_{n}\) such that \(f_{t_{n}}\to f\) in \(C^{2,\alpha}\). Then by usual bootstrapping argument we have \(f\) is smooth and \(,\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)\geq 0\). If \(\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)=0\) at some point, then equation (3) gives a contradiction. Hence \(\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)>0\). Therefore \(t\in I_{1}^{\prime\prime}\). We now proceed to prove the openness of \(I_{1}^{\prime\prime}\). ### Openness of \(I_{1}^{\prime\prime}\) Let \(\mathcal{B}\) be the subset in \(C^{2,\beta}\) defined by \(\mathcal{B}:=\Big{\{}f\in C^{2,\beta}\Big{|}\Delta f+r_{1}+\alpha(1-t)(2r_{1}+ 1)>0,\int_{\Sigma}f\omega_{\Sigma}=0\Big{\}}\). Now let us define the map \(T_{1}:\mathcal{B}\times[0,1]\to C^{0,\beta}\) by \[\begin{split}& T_{1}(f,t)\\ =&\bigg{\{}\Big{(}\Delta f+\Delta\psi+(r_{1}+1)+\alpha(1-t)(2r_{1}+ 1)\Big{)}\Big{(}2r_{2}+|\phi|_{g}^{2}+\\ &\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}\Delta f+r_{1}+\alpha(1-t)(2r _{1}+1)\Big{)}\\ &\Big{(}(2r_{2}+2)-|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\\ &+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger _{g}}}{\omega_{\Sigma}}\Big{(}2r_{2}+|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2) \Big{)}\\ &\Big{(}\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)}\bigg{\}}- \bigg{\{}\Big{(}\Delta f_{0}+\Delta\psi_{0}+(r_{1}+1)+\alpha(2r_{1}+1)\Big{)} \\ &\Big{(}2r_{2}+|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2)\Big{)}\Big{(}( 2r_{2}+2)-|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2)\Big{)}\\ &\Big{(}\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)\Big{)}+\sqrt{-1}\frac {\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger_{g_{0}}}}{\omega_{\Sigma}} \Big{(}\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)\Big{)}\\ &\Big{(}2r_{2}+|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2)\Big{)}\bigg{\}}. 
\end{split} \tag{12}\] Then its linearization at a point \((f,t)\in\mathcal{B}\times[0,1]\) will be \[\begin{split}& DT_{1}(f,t)[\delta f]\\ =&\Big{(}2r_{2}+|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2) \Big{)}\bigg{[}\Big{(}(2r_{2}+2)-|\phi|_{g}^{2}+\\ &\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}2\Delta f+\Delta\psi+\big{(}1 +2\alpha(1-t)\big{)}(2r_{1}+1)\Big{)}\\ &+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger _{g}}}{\omega_{\Sigma}}\bigg{]}\Delta\delta f.\end{split} \tag{13}\] If \(t\in I^{\prime\prime}_{1}\), then clearly \(DT_{1}(f,t)\) is an isomorphism. Therefore by the implicit function theorem on Banach manifolds, we get \(I^{\prime\prime}_{1}\) is open. This completes the proof of the Theorem (1.1). ## 3. Proof of theorem 1.2 Our next concern is to prove Theorem (1.2). First, let us discuss the solution of the second system at \(t=0\). ### Existence of solution at \(t=0\) for the second system Recall that \(h_{0}=\pi_{1}^{*}(e^{-(f_{0}+\psi_{0})}k^{r_{1}+1})\otimes\pi_{2}^{*}(h_{FS}^{ 2r_{2}})\oplus\pi_{1}^{*}(e^{-f_{0}}k^{r_{1}})\otimes\pi_{2}^{*}(h_{FS}^{2r_{2 }+2})\) is the solution of the equation (6) at \(t=0\), and \(deth_{0}=detH_{0}\). Therefore \((f_{0},\psi_{0})\) satisfying \(2f_{0}+\psi_{0}=0\) and \[\Big{\{}2\Big{(}2\Delta f_{0}+\Delta\psi_{0}+(1+2\alpha)(2r_{1}+1) \Big{)}\big{(}|\phi|^{2}_{g_{0}}-1\big{)}+\big{(}\Delta\psi_{0}+1\big{)}\big{(}1 +2\alpha\big{)}(4r_{2}+2)\Big{\}}\] \[=2(2r_{1}+1)(4r_{2}+2)(2\alpha+1)\varepsilon\psi_{0}.\] Equivalently, we shall solve the following equation for \(\psi_{0}\), \[\Delta\psi_{0}+1=(1-|\phi|^{2}_{g_{0}})\frac{2r_{1}+1}{2r_{2}+1}+2(2r_{1}+1) \varepsilon\psi_{0}.\] Similar arguments in subsection 2.1 prove the existence of required \(\psi_{0}\). Moreover, we get the following estimates independent of \(\varepsilon\). **Lemma 3.1**.: _There exists positive constant \(C\) independent of \(\varepsilon\) such that \(||\varepsilon\psi_{0}||_{C^{0}}+||\Delta\psi_{0}||_{C^{0}}<C\)._ Let \[I_{2}:=\Big{\{}t\in[0,1]\Big{|}\text{ the system defined by equations (\ref{eq:1}) and (\ref{eq:2}) has smooth}\\ \text{ solution }(f_{t},\psi_{t})\text{ at }t\,\Delta f_{t}+r_{1}+ \alpha(1-t)(2r_{1}+1)>0\Big{\}}.\] At \(t=0\), \((f_{0},\psi_{0})\) solves the system and \(\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)>0\). So \(I_{2}\) is non-empty. Now let us prove \(I_{2}\) is closed by proving some a priori estimates. ### Closedness of \(I_{2}\) A similar proof as Lemma 2.1 gives the following result. **Lemma 3.2**.: \(|\phi|^{2}_{g_{t}}<1\) _for \(t\in I_{2}.\)_ Proof.: A simple computation using normal coordinates gives the following identity \[\partial\bar{\partial}|\phi|^{2}_{g_{t}}=-\Theta_{g_{t}}|\phi|^{2}_{g_{t}}+ \nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger_{g_{t}}}. \tag{14}\] Let \(|\phi|^{2}_{g_{t}}\) attains its maximum at a point \(p\), then \(\sqrt{-1}\partial\bar{\partial}|\phi|^{2}_{g_{t}}(p)\leq 0\). So we must have \(\sqrt{-1}\Theta_{g_{t}}(p)\geq 0\) i.e., \((1+\Delta\psi_{t})(p)\geq 0\). If \(\psi_{t}(p)\geq 0\), then \(|\phi|^{2}_{g_{t}}(p)=|\phi|^{2}_{k}(p)e^{-\psi_{t}(p)}\leq|\phi|^{2}_{k}(p) \leq\frac{1}{2}\). Otherwise, we have \(\psi_{t}(p)<0\). Then from equation (6), we get \(\Big{(}2\Delta f_{t}+\Delta\psi_{t}+(1+2\alpha(1-t))(2r_{1}+1)\Big{)}(|\phi|^{2 }_{g_{t}}-1)(p)<0\). Since \(t\in I_{2}\), it follows that \(|\phi|^{2}_{g_{t}}<1\). Next, we will prove \(C^{0}\) estimates of \(\psi_{t}\). 
**Lemma 3.3**.: _If \(t\in I_{2}\), then \(||\varepsilon\psi_{t}||_{C^{0}}\leq C\), where \(C\) is independent of \(\varepsilon\)._ Proof.: Suppose \(\psi\) attains its maximum at a point \(p\), then \(\Delta\psi(p)\leq 0\). From equation (6) and \(\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)>0\) we get \[2(2r_{1}+1)(4r_{2}+2)(2\alpha+1)\varepsilon\psi(p)\leq(1+2\alpha)(4r_{2}+2)\] which yields \[\varepsilon\psi(p)\leq\frac{1}{2(2r_{1}+1)}. \tag{15}\] At a minimum point \(q\) for \(\psi\) we have \(\Delta\psi(q)\geq 0\). Now in view of equation (5) this gives \[\Big{\{}\Big{(}\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)}\Big{(} (2r_{2}+2)-|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)} \tag{16}\] \[\Big{(}2r_{2}+|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\Big{\}} (q)\leq\Big{\{}\Big{(}\Delta f_{0}+\Delta\psi_{0}+(r_{1}+1)+\alpha(2r_{1}+1) \Big{)}\] \[\Big{(}2r_{2}+|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2)\Big{)}\Big{(}(2 r_{2}+2)-|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2)\Big{)}\] \[\Big{(}\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)\Big{)}+\sqrt{-1}\frac {\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger}{}_{g_{0}}}{\omega_{\Sigma}}\] \[\Big{(}2r_{2}+|\phi|_{h_{0}}^{2}+\alpha(4r_{2}+2)\Big{)}\Big{(} \Delta f_{0}+r_{1}+\alpha(2r_{1}+1)\Big{)}\Big{\}}(q),\] Applying Lemma 3.1 to the right-hand side term of the equation (16), the following inequality holds for some \(C>0\), independent of \(\varepsilon\) \[\Big{\{}\Big{(}\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)}\Big{(} (2r_{2}+2)-|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)} \tag{17}\] \[\Big{(}2r_{2}+|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\Big{\}} (q)\leq C.\] Equation (6) implies \[\Big{\{}4(|\phi|_{g}^{2}-1)\Big{(}\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)} \Big{\}}(q)\leq 2(2r_{1}+1)(4r_{2}+2)(2\alpha+1)\varepsilon\psi(q) \tag{18}\] Using Lemma3.2, equations (17) and (18) we can see \[2(2r_{1}+1)(4r_{2}+2)(2\alpha+1)\varepsilon\psi(q) \tag{19}\] \[\geq\frac{2C\big{(}|\phi|_{g}^{2}-1\big{)}}{\Big{(}(2r_{2}+2)-| \phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}2r_{2}+|\phi|_{g}^{2}+\alpha (1-t)(4r_{2}+2)\Big{)}}(q)\] \[\geq-C.\] Therefore, equations (15) and (19) establish the result. Now we are in a position to prove an essential estimate, which will be used to show closedness as well as openness. **Lemma 3.4**.: _For \(t\in I_{2}\), \(-\tilde{C}\leq\Delta\psi_{t}\leq C\) for some positive constant \(\tilde{C}\) independent of \(\varepsilon\)._ Proof.: Since \(t\in I_{2}\), we have \(\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)>0\). So from equations (6) we get \[\big{(}1+\Delta\psi\big{)}\Big{(}2(|\phi|_{g}^{2}-1)+(1+2\alpha(1- t))(4r_{2}+2)\Big{)}\] \[\geq 2(2r_{1}+1)(4r_{2}+2)(2\alpha+1)\varepsilon\psi_{t}.\] Thus using Lemma3.3 we see \(\Delta\psi\geq-\tilde{C}\), for some positive \(\tilde{C}\) independent of \(\varepsilon\). Now equation (5) gives \[\big{(}1+\Delta\psi\big{)}\Big{(}\Delta f+r_{1}+\alpha(1-t)(2r_{1 }+1)\Big{)}\Big{(}(2r_{2}+2)-|\phi|_{g}^{2}\] \[+\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}2r_{2}+|\phi|_{g}^{2}+\alpha (1-t)(4r_{2}+2)\Big{)}\leq C.\] Using equation (6) we have \[\big{(}1+\Delta\psi\big{)}\bigg{\{}\big{(}1+\Delta\psi\big{)}\Big{(}2(|\phi|_{ g}^{2}-1)+\Big{(}1+2\alpha(1-t)\Big{)}(4r_{2}+2)\Big{)}\] \[-2(2r_{1}+1)(4r_{2}+2)(2\alpha+1))\varepsilon\psi\bigg{\}}\] \[\leq\frac{C}{\big{(}(2r_{2}+2)-|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\big{)} \left(2r_{2}+|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\right)},\] which implies \(\Delta\psi\leq C\). With these above estimates of \(\psi_{t}\) in hand, we now estimate \(f_{t}\) as follows. 
**Lemma 3.5**.: _If \(t\in I_{2}\), then \(||\Delta f_{t}||\leq C\)._ Proof.: Since \(t\in I_{2}\), we have \(\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)>0\). Therefore from the equation (5) we get \[\Big{(}\Delta f+\Delta\psi+(r_{1}+1)+\alpha(1-t)(2r_{1}+1)\Big{)}\Big{(} \Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)}\leq C\] Therefore \(\Delta f\leq C\), otherwise it contradicts Lemma 3.4. Using the formula (11) and Lemma 3.5, we have the following. **Lemma 3.6**.: _There exists positive constant \(C\) such that \(||f_{t}||_{C^{0}}\leq C\), whenever \(t\in I_{2}\)._ Now one can easily conclude that \(I_{2}\) is closed as follows. Let \(t_{n}\in I_{2}\) be such that \(t_{n}\to t\). To prove \(I_{2}\) is closed, we must show \(t\in I_{2}\). As \(t_{n}\in I_{2}\), we have \(\Delta f_{t_{n}}+r_{1}+\alpha(1-t)(2r_{1}+1)>0\). Arzela-Ascoli theorem, together with the above estimates, we get a subsequence of \(t_{n}\) again call the subsequence by \(t_{n}\) such that \(f_{t_{n}}\to f\) and \(\psi_{t_{n}}\to\psi\) in \(C^{2,\beta}\). Then \(\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)\geq 0\) and by usual bootstrapping argument we have \(f\) and \(\psi\) are smooth. Now if \(\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)=0\) at some point, then (5) gives a contradiction. Hence \(\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)>0\). Therefore \(t\in I_{2}\). The only thing we are left with is proving the openness of \(I_{2}\). ### Openness of \(I_{2}\) For \(0<\beta<1\), let \(\mathcal{C}\) be the subset of \(C^{2,\beta}\times C^{2,\beta}\) defined by \(\mathcal{C}:=\Big{\{}(f,\psi)\in C^{2,\beta}\times C^{2,\beta}\Big{|}\Delta f+ r_{1}+\alpha(1-t)(2r_{1}+1)>0,\int_{\Sigma}f\omega_{\Sigma}=0\Big{\}}\). Now let us define the map \(T:\mathcal{C}\times[0,1]\to C^{0,\beta}\times C^{0,\beta}\) by \(T(f,\psi,t)=\Big{(}T_{1}(f,\psi,t),T_{2}(f,\psi,t)\Big{)}\), where \[\begin{split}& T_{1}(f,\psi,t)\\ =&\bigg{\{}\Big{(}\Delta f+\Delta\psi+(r_{1}+1)+ \alpha(1-t)(2r_{1}+1)\Big{)}\Big{(}2r_{2}+|\phi|_{g}^{2}+\\ &\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}\Delta f+r_{1}+\alpha(1-t)(2r _{1}+1)\Big{)}\\ &\Big{(}(2r_{2}+2)-|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)}+ \sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger}{}_{g}}{\omega _{\Sigma}}\\ &\Big{(}2r_{2}+|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(} \Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)}\bigg{\}}\\ &-\bigg{\{}\Big{(}\Delta f_{0}+\Delta\psi_{0}+(r_{1}+1)+\alpha(2 r_{1}+1)\Big{)}\Big{(}2r_{2}+|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2)\Big{)}\\ &\Big{(}(2r_{2}+2)-|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2)\Big{)} \Big{(}\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)\Big{)}\\ &+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger }{}_{g_{0}}}{\omega_{\Sigma}}\Big{(}2r_{2}+|\phi|_{g_{0}}^{2}+\alpha(4r_{2}+2) \Big{)}\\ &\Big{(}\Delta f_{0}+r_{1}+\alpha(2r_{1}+1)\Big{)}\bigg{\}},\end{split} \tag{20}\] and \[\begin{split}& T_{2}(f,\psi,t)\\ =& 2\bigg{(}2\Delta f+\Delta\psi+\big{(}1+2\alpha(1-t) \big{)}(2r_{1}+1)\bigg{)}(|\phi|_{g}^{2}-1)\\ &+\big{(}\Delta\psi+1\big{)}\big{(}1+2\alpha(1-t)\big{)}(4r_{2}+2) \\ &-2(2r_{1}+1)(4r_{2}+2)(2\alpha+1)\varepsilon\psi.\end{split} \tag{21}\] Then the linearization of \(T_{1}\) at a point \((f,\psi)\) will be \[\begin{split}& DT_{1}(f,\psi,t)[\delta f,\delta\psi]\\ =&\Big{(}2r_{2}+|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2) \Big{)}\bigg{[}\Big{(}(2r_{2}+2)-|\phi|_{g}^{2}+\\ &\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}2\Delta f+\Delta\psi+\big{(}1 +2\alpha(1-t)\big{)}(2r_{1}+1)\Big{)}\\ &+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger 
_{g}}}{\omega_{\Sigma}}\bigg{]}\Delta\delta f+\Big{[}\Big{(}2r_{2}+|\phi|_{g}^{ 2}+\\ &\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}\Delta f+r_{1}+\alpha(1-t)(2 r_{1}+1)\Big{)}\\ &\Big{(}(2r_{2}+2)-|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)} \Big{]}\Delta\delta\psi\\ &-\Big{(}\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)}\bigg{[}| \phi|_{g}^{2}\Big{\{}2(1-|\phi|_{g}^{2})\\ &\Big{(}\Delta f+\Delta\psi+(r_{1}+1)+\alpha(1-t)(2r_{1}+1) \Big{)}+\\ &\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger _{g}}}{\omega_{\Sigma}}\Big{\}}+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{ 0,1}\phi^{\dagger_{g}}}{\omega_{\Sigma}}\\ &\Big{(}2r_{2}+|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\bigg{]} \delta\psi\\ &+\Big{(}\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)}\\ &\bigg{[}\sqrt{-1}\frac{\partial(\delta\psi)\wedge\nabla^{0,1} \phi^{\dagger_{g}}}{\omega_{\Sigma}}+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\phi ^{\dagger_{g}}\nabla^{0,1}(\delta\psi)}{\omega_{\Sigma}}\bigg{]},\end{split} \tag{22}\] and the linearization of \(T_{2}\) at \((f,\psi)\) will be \[\begin{split}& DT_{2}(f,\psi,t)[\delta f,\delta\psi]\\ =& 4(|\phi|_{g}^{2}-1)\Delta\delta f+\bigg{(}2(|\phi|_{ g}^{2}-1)+\Big{(}1+2\alpha(1-t)\Big{)}(4r_{2}+2)\bigg{)}\Delta\delta\psi\\ &-\delta\psi\bigg{[}2|\phi|_{g}^{2}\bigg{(}2\Delta f+\Delta\psi+ \Big{(}1+2\alpha(1-t)\Big{)}(2r_{1}+1)\bigg{)}+\\ & 2(2r_{1}+1)(4r_{2}+2)(2\alpha+1)\varepsilon\bigg{]}.\end{split} \tag{23}\] We shall show that \(DT=[DT_{1},DT_{2}]\) is an isomorphism at a point \((f_{t},\psi_{t})\), for \(t\in I_{2}\). Then by the implicit function theorem for Banach manifolds, we can conclude that \(I_{2}\) is open. Suppose \((\delta f,\delta\psi)\in Ker(DT(f,\psi,t))\) where \(t\in I_{2}\), then we have \(DT_{1}(f,\psi,t)[\delta f,\delta\psi]=0\) and \(DT_{2}(f,\psi,t)[\delta f,\delta\psi]=0\). 
Now solving \(DT_{2}[\delta f,\delta\psi]=0\) for \(\Delta\delta f\) and substituting the value in \(DT_{1}[\delta f,\delta\psi]=0\) we get, \[\Bigg{[}\Big{(}2r_{2}+|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)} \Bigg{\{}\Big{(}(2r_{2}+2)-|\phi|_{g}^{2}+\] \[\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}2\Delta f+\Delta\psi+(1+2 \alpha(1-t))(2r_{1}+1)\Big{)}\] \[+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger_ {g}}}{\omega_{\Sigma}}\bigg{\}}\Big{(}2(|\phi|_{g}^{2}-1)+(1+2\alpha(1-t))\left( 4r_{2}+2\right)\Big{)}\] \[+4\left(1-|\phi|_{g}^{2}\right)\Big{(}2r_{2}+|\phi|_{g}^{2}+ \alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}\Delta f+r_{1}\] \[+\alpha(1-t)(2r_{1}+1)\Big{)}\Big{(}(2r_{2}+2)-|\phi|_{g}^{2}+ \alpha(1-t)(4r_{2}+2)\Big{)}\Bigg{]}\Delta\delta\psi\] \[= \delta\psi\Bigg{[}\Big{(}2r_{2}+|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+ 2)\Big{)}\bigg{\{}\Big{(}(2r_{2}+2)-|\phi|_{g}^{2}+\] \[+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger_ {g}}}{\omega_{\Sigma}}\bigg{\}}\bigg{\{}2|\phi|_{g}^{2}\Big{(}2\Delta f+\Delta \psi+(1+2\alpha(1-t))(2r_{1}+1)\Big{)}\] \[+2(2r_{1}+1)(4r_{2}+2)(2\alpha+1)\varepsilon\bigg{\}}+4\big{(}1- |\phi|_{g}^{2}\big{)}|\phi|_{g}^{2}\Big{(}\Delta f+r_{1}+\] \[\alpha(1-t)(2r_{1}+1)\Big{)}\bigg{\{}2(1-|\phi|_{g}^{2})\Big{(} \Delta f+\Delta\psi+(r_{1}+1)+\] \[\alpha(1-t)(2r_{1}+1)\Big{)}+\sqrt{-1}\frac{\nabla^{1,0}\phi \wedge\nabla^{0,1}\phi^{\dagger_{g}}}{\omega_{\Sigma}}\bigg{\}}+\sqrt{-1} \frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger_{g}}}{\omega_{\Sigma}}\] \[\Big{(}2r_{2}+|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\Bigg{]} +\Big{(}\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)}\] \[\bigg{\{}\sqrt{-1}\frac{\partial(\delta\psi)\wedge\nabla^{0,1} \phi^{\dagger_{g}}}{\omega_{\Sigma}}+\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge \phi^{\dagger_{g}}\nabla^{0,1}(\delta\psi)}{\omega_{\Sigma}}\bigg{\}}.\] Lemma (3.4) shows that for \(t\in I_{2}\), we can choose \(\varepsilon\) large enough so that \[\Big{(}\Delta\psi+1+2(2r_{1}+1)(4r_{2}+2)(2\alpha+1)\varepsilon\Big{)}>0.\] Using the maximum principle on the equation (24), we have \(\delta\psi=0\). Now putting \(\delta\psi=0\) in \(DT_{2}[\delta f,\delta\psi]=0\) gives \(\delta f=0\). Hence for \(t\in I_{2}\) we get \(Ker(DT(f_{t},\psi_{t},t))=0\). Now we shall prove that \(Ker(DT^{*}(f_{t},\psi_{t},t))\) is also trivial. As one can imagine, computing the operator \(DT^{*}\) will be very complicated. Since we are only interested in the kernel of \(DT^{*}\) and we already know the kernel of \(DT\), it is enough to calculate the index of the operator \(DT\). To do so, let us define the following. For \(t\in I_{2}\) and \(s\in[0,1]\), let \(T^{s}\big{(}f,\psi,t\big{)}=\Big{(}T^{s}_{1}\big{(}f,\psi,t\big{)},T^{s}_{2} \big{(}f,\psi,t\big{)}\Big{)}\), where \[T^{s}_{1}(f,\psi,t)\] \[= \Big{(}2r_{2}+s|\phi|^{2}_{g}+\alpha(1-t)(4r_{2}+2)\Big{)}\bigg{[} \Big{(}(2r_{2}+2)-s|\phi|^{2}_{g}+\] \[\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}2s\Delta f+s\Delta\psi+\big{(}1 +2\alpha(1-t)\big{)}(2r_{1}+1)\Big{)}+\] \[\sqrt{-1}s\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger }{}_{g}}{\omega_{\Sigma}}\bigg{]}\Delta\delta f+\Delta\delta\psi\Big{(}2r_{2} +s|\phi|^{2}_{g}+\alpha(1-t)(4r_{2}+2)\Big{)}\] \[\Big{(}s\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)}\Big{(}(2r_{2} +2)-s|\phi|^{2}_{g}+\alpha(1-t)(4r_{2}+2)\Big{)}, \tag{25}\] and \[T^{s}_{2}(f,\psi,t)\] \[= 4s(|\phi|^{2}_{g}-1)\Delta\delta f+\bigg{(}2s(|\phi|^{2}_{g}-1)+ \Big{(}1+2\alpha(1-t)\Big{)}(4r_{2}+2)\bigg{)}\Delta\delta\psi. 
\tag{26}\] The following lemma says that we can talk about the index of operator \(T^{s}\). **Lemma 3.7**.: \(T^{s}\) _are elliptic system for \(s\in[0,1]\)._ Proof.: Let \[A=\begin{bmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{bmatrix}\] where, \[A_{11}=\Big{(}2r_{2}+s|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\] \[\quad\bigg{\{}\Big{(}(2r_{2}+2)-s|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+ 2)\Big{)}\] \[\quad\Big{(}2s\Delta f+s\Delta\psi+\big{(}1+2\alpha(1-t)\big{)}(2r _{1}+1)\Big{)}\] \[\quad+s\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{ \dagger_{g}}}{\omega_{\Sigma}}\bigg{\}},\] \[A_{12}=\Big{(}2r_{2}+s|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\] \[\quad\Big{(}s\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)},\] \[\quad\Big{(}(2r_{2}+2)-s|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2) \Big{)},\] \[A_{21}=4s(|\phi|_{g}^{2}-1),\] \[A_{22}=2s(|\phi|_{g}^{2}-1)+\Big{(}1+2\alpha(1-t)\Big{)}(4r_{2}+ 2).\] Then \[det(A) =\begin{vmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{vmatrix}\] \[=\Big{(}2r_{2}+s|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\Bigg{\{} \Big{(}1+2\alpha(1-t)\Big{)}\] \[(4r_{2}+2)\Big{[}\Big{(}(2r_{2}+2)-s|\phi|_{g}^{2}+\alpha(1-t)(4r _{2}+2)\Big{)}\] \[\quad\Big{(}2s\Delta f+s\Delta\psi+\big{(}1+2\alpha(1-t)\big{)}( 2r_{1}+1)\Big{)}+s\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{ \dagger_{g}}}{\omega_{\Sigma}}\Big{]}\] \[+2s(|\phi|_{g}^{2}-1)\Big{[}\Big{(}(2r_{2}+2)-s|\phi|_{g}^{2}+ \alpha(1-t)(4r_{2}+2)\Big{)}\] \[\quad\Big{(}s\Delta\psi+1\big{)}+s\sqrt{-1}\frac{\nabla^{1,0} \phi\wedge\nabla^{0,1}\phi^{\dagger_{g}}}{\omega_{\Sigma}}\Big{]}\Bigg{\}}\] **Case (a).** If \[\Big{[}\Big{(}(2r_{2}+2)-s|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2) \Big{)}\big{(}s\Delta\psi+1\big{)}\] \[\quad\quad\quad+s\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1 }\phi^{\dagger_{g}}}{\omega_{\Sigma}}\Big{]}\leq 0.\] As \(t\in I_{2}\) i.e., \(\Delta f+r_{1}+\alpha(1-t)(2r_{1}+1)>0\), therefore \(det(A)>0\). **Case (b).** If \[\Big{[}\Big{(}(2r_{2}+2)-s|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)} \big{(}s\Delta\psi+1\big{)}\] \[\qquad\qquad+s\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1} \phi^{\dagger_{g}}}{\omega_{\Sigma}}\Big{]}>0.\] Then, \[det(A)= \Bigg{[}\Big{(}2r_{2}+s|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)} \Big{(}1+2\alpha(1-t)\Big{)}(4r_{2}+2)\] \[\Big{(}(2r_{2}+2)-s|\phi|_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)}2 \Big{(}s\Delta f+r_{1}+\] \[\alpha(1-t)(2r_{1}+1)\Big{)}\Bigg{]}+\Bigg{[}\Big{(}2r_{2}+s|\phi |_{g}^{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\] \[s\sqrt{-1}\frac{\nabla^{1,0}\phi\wedge\nabla^{0,1}\phi^{\dagger _{g}}}{\omega_{\Sigma}}\Big{\}}\Big{\{}2s(|\phi|_{g}^{2}-1)+\Big{(}1+2\alpha( 1-t)\Big{)}(4r_{2}+2)\Big{\}}\Bigg{]}>0,\] the first term is positive as \(t\in I_{2}\), and the second is positive because of the assumption. Hence \(det(A)>0\) for \(t\in I_{2}\). We see that \(T^{s}:T^{0}\simeq T^{1}\) defines a homotopy. So the index of Fredholm operators \(T^{s}\) defined by \(Ind(T^{s})=dim(Ker(T^{s}))-dim(Coker(T^{s}))\) will be constant for \(0\leq s\leq 1\), in particular \(Ind(T^{0})=Ind(T^{1})\). 
Now \[T^{0}(f,\psi,t)\] \[= \Bigg{(}\bigg{\{}\Big{(}2r_{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(} (2r_{2}+2)+\alpha(1-t)(4r_{2}+2)\Big{)}\] \[\Big{(}1+2\alpha(1-t)\big{)}(2r_{1}+1)\Delta\delta f+\Big{(}2r_{2 }+\alpha(1-t)(4r_{2}+2)\Big{)}\] \[\Big{(}r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)}\Big{(}(2r_{2}+2)+\alpha (1-t)(4r_{2}+2)\Big{)}\Delta\delta\psi\bigg{\}},\] \[\Big{(}1+2\alpha(1-t)\big{)}(4r_{2}+2)\Delta\delta\psi\Bigg{)},\] and \[T^{0^{*}}(f,\psi,t)\] \[= \Bigg{(}\bigg{\{}\big{(}1+2\alpha(1-t)\big{)}(4r_{2}+2)\Delta\delta f -\Big{(}2r_{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\] \[\Big{(}r_{1}+\alpha(1-t)(2r_{1}+1)\Big{)}\Big{(}(2r_{2}+2)+\alpha( 1-t)(4r_{2}+2)\Big{)}\Delta\delta\psi\bigg{\}},\] \[\Big{(}2r_{2}+\alpha(1-t)(4r_{2}+2)\Big{)}\Big{(}(2r_{2}+2)+ \alpha(1-t)(4r_{2}+2)\Big{)}\] \[\big{(}1+2\alpha(1-t)\big{)}(2r_{1}+1)\Delta\delta\psi\bigg{)}.\] A simple calculation gives \(Ind(T^{0})=0\). Since \(DT\) and \(T^{1}\) have the same principal symbol, their index must be the same. Hence, \(Ind(DT)=0\) and consequently \(Ker(DT)=Ker(DT^{*})=0\). Therefore Fredholm's alternative implies that \(DT\) is an isomorphism. This completes the proof of the theorem (2). ## Acknowledgements I am grateful to my advisor, Vamsi Pritham Pingali, for suggesting this problem. I also thank him for the invaluable and fruitful discussion about the same. This work is supported by a scholarship from the Indian Institute of Science.
2307.04804
S2vNTM: Semi-supervised vMF Neural Topic Modeling
Language model based methods are powerful techniques for text classification. However, these models have several shortcomings. (1) It is difficult to integrate human knowledge such as keywords. (2) They need a lot of resources to train. (3) They rely on large text data for pretraining. In this paper, we propose Semi-Supervised vMF Neural Topic Modeling (S2vNTM) to overcome these difficulties. S2vNTM takes a few seed keywords as input for topics. S2vNTM leverages the pattern of keywords to identify potential topics, as well as to optimize the quality of the topics' keyword sets. Across a variety of datasets, S2vNTM outperforms existing semi-supervised topic modeling methods in classification accuracy with limited keywords provided. S2vNTM is at least twice as fast as baselines.
Weijie Xu, Jay Desai, Srinivasan Sengamedu, Xiaoyu Jiang, Francis Iannacci
2023-07-06T21:44:31Z
http://arxiv.org/abs/2307.04804v2
# S2vNTM: Semi-supervised vMF Neural Topic Modeling

###### Abstract

Language model based methods are powerful techniques for text classification. However, these models have several shortcomings. (1) It is difficult to integrate human knowledge such as keywords. (2) They need a lot of resources to train. (3) They rely on large text data for pretraining. In this paper, we propose Semi-Supervised vMF Neural Topic Modeling (S2vNTM) to overcome these difficulties. S2vNTM takes a few seed keywords as input for topics. S2vNTM leverages the pattern of keywords to identify potential topics, as well as to optimize the quality of the topics' keyword sets. Across a variety of datasets, S2vNTM outperforms existing semi-supervised topic modeling methods in classification accuracy with limited keywords provided. S2vNTM is at least twice as fast as baselines.

## 1 Introduction

Language Model (LM) pre-training Vaswani et al. (2017); Devlin et al. (2018) has proven to be useful in learning universal language representations. Recent language models such as Yang et al. (2019); Sun et al. (2019); Chen et al. (2022); Ding et al. (2021) have achieved impressive results in text classification. Most of these methods need enough high-quality labels to train. To make LM-based methods work well when limited labels are available, few-shot learning methods such as Bianchi et al. (2021); Meng et al. (2020a, b); Mekala and Shang (2020); Yu et al. (2021); Wang et al. (2021) have been proposed. However, these methods rely on large pre-training corpora and can be biased when applied to a different domain. Topic modeling methods generate topics based on the pattern of words. To be specific, unsupervised topic modeling methods Blei et al. (2003); Teh et al. (2006); Miao et al. (2018); Dieng et al. (2020) discover the abstract topics that occur in a collection of documents. Recently developed neural topic modeling achieves faster inference by integrating topic modeling with deep neural networks and uncovers semantic relationships Zhao et al. (2020a); Wang and Yang (2020). Compared to unsupervised topic modeling methods, semi-supervised topic modeling methods Mao et al. (2012); Jagarlamudi et al. (2012); Gallagher et al. (2018) allow the model to match patterns provided by users, such as keywords. However, these methods do not achieve high topic classification accuracy. After studying topic modeling methods in real-world applications Choi et al. (2017); Cao et al. (2019); Kim et al. (2013); Zhao et al. (2020b); Xu et al. (2022), we identified a scenario that cannot be solved by current methods. The scenario involves topic exploration: users have identified a subset of topic keywords. They want to capture topics based on these keywords, while exploring additional topics. They value the quality of the resulting topics and want to identify new topics while refining the topics' keywords iteratively Kim et al. (2013); Smith et al. (2018). In addition, users want to use the topics they created for topic classification. In this work, we propose semi-supervised vMF neural topic modeling (S2vNTM). S2vNTM takes the desired number of topics as well as keywords/key phrases for some subsets of topics as input. It incorporates this information as a guideline and leverages negative sampling to create topics that match the pattern of the selected keywords. It creates additional topics which align with the semantic structure of the documents. It can help users remove redundant topics. Figure 1 illustrates how users interact with our model.
The advantages of this method include: 1. It consistently achieves the best topic classification performance on different datasets compared to similar methods. 2. S2vNTM only requires a few seed keywords per topic, which makes it suitable for data-scarce settings. It does not require any transfer learning. 3. S2vNTM is explainable and easy to fine-tune, which makes it suitable for interfacing with subject-matter experts and for low-resource settings. In the sections below, Section 2 (Method) describes the technical details of S2vNTM, Section 3 presents the Results, and Section 4 gives the Conclusion and Future Work. Details on the modularity of S2vNTM are given in Appendix A. Related Work and Challenges are described in Appendix B, Experiments in Appendix C, and Ablation Studies in Appendix E. ## 2 Method Figure 2 shows the overall architecture of S2vNTM. The encoder is based on a Neural Topic Model leveraging the von Mises-Fisher (vMF) distribution. We use the von Mises-Fisher distribution because it captures distributions on the unit sphere and induces better clustering properties. To improve clustering, we add a temperature function to the latent distribution (see details in Appendix A.1). The decoder tries to reconstruct the input from the topics while leveraging user-provided seeds for the topics. The model is trained end-to-end with the objective of minimizing reconstruction error while conforming to user-provided seeds and minimizing topic overlap. ### vNTM We first introduce notation: the encoder network, \(\phi\), encodes the bag-of-words representation of any document \(X_{d}\) and outputs the parameters which can be used to sample the topic distribution \(t_{d}\). The decoder is represented by a vocabulary embedding matrix \(e_{W}\) and a topic embedding matrix \(e_{t}\). We use a spherical word embedding Meng et al. (2019) trained on the dataset to which we apply the model to create \(e_{W}\), and we keep it fixed during training. Spherical word embeddings perform better on word-similarity-related tasks. If the embeddings were not kept fixed, the reconstruction loss would pull the embeddings of co-occurring words closer, which is not aligned with true word similarity. Having fewer parameters to train also makes our method more stable. \(W\) represents all selected vocabularies and \(T\) contains all topics. In this notation, our algorithm can be described as follows: for every document \(d\), (1) input the bag-of-words representation \(X_{d}\) to the encoder \(\phi\); (2) using \(\phi\), output the direction parameter \(\mu\) and variation parameter \(\kappa\) of the vMF distribution Xu et al. (2023); (3) based on \(\mu\) and \(\kappa\), generate a topic distribution \(t_{d}\) using the temperature function; (4) reconstruct \(X_{d}\) by \(t_{d}\times\mathrm{softmax}(e_{t}e_{W}^{T})\). The goal of this model is to maximize the marginal likelihood of the documents: \(\sum_{d=1}^{D}\log p(X_{d}|e_{t},e_{W})\) Figure 1: An S2vNTM application scenario. Human experts first define the topic keyword sets and the number of topics. During the training procedure, S2vNTM outputs keywords for each topic by merging redundant keyword groups and identifying new topics. Human experts then confirm/remove the keywords and/or add new keywords. S2vNTM continues refining the keyword list with a fast fine-tuning procedure. After a few iterations, S2vNTM provides users with topics that have high-quality keywords and high topic classification accuracy. 
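The following is a minimal sketch of these four steps, assuming PyTorch; the hidden size, the mapping from \((\mu,\kappa)\) to the topic distribution, and the temperature handling are illustrative stand-ins for the paper's versions (Appendix A.1), and the vMF sampling step is replaced by the mean direction \(\mu\) for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VNTMSketch(nn.Module):
    """Illustrative encoder/decoder for the four steps above (not the authors' code)."""

    def __init__(self, vocab_size, n_topics, emb_dim, word_emb, temperature=1.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, 256), nn.ReLU())
        self.mu_head = nn.Linear(256, emb_dim)        # direction parameter mu
        self.kappa_head = nn.Linear(256, 1)           # variation (concentration) parameter kappa
        self.topic_emb = nn.Parameter(torch.randn(n_topics, emb_dim))  # e_t, trainable
        self.register_buffer("word_emb", word_emb)    # e_W, fixed spherical embedding (vocab, emb_dim)
        self.temperature = temperature

    def forward(self, bow):                           # bow: (batch, vocab_size)
        h = self.encoder(bow)                         # step (1)
        mu = F.normalize(self.mu_head(h), dim=-1)     # step (2): unit-norm direction
        kappa = F.softplus(self.kappa_head(h)) + 1.0  # step (2): positive concentration
        # Step (3): a full implementation samples from vMF(mu, kappa); here we use
        # mu directly and turn topic similarities into the topic distribution t_d.
        t_d = F.softmax(mu @ self.topic_emb.t() / self.temperature, dim=-1)
        # Step (4): reconstruct X_d as t_d x softmax(e_t e_W^T).
        beta = F.softmax(self.topic_emb @ self.word_emb.t(), dim=-1)    # (topics, vocab)
        recon = t_d @ beta
        return recon, t_d, mu, kappa
```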
To make it tractable, the loss function combines the reconstruction loss with a KL divergence as below: \[L_{Recon}=-E_{q_{\phi}(t_{d}|X_{d})}[\log p_{\theta}(X_{d}|t_{d})] \tag{1}\] \[L_{KL}=KL[q_{\phi}(t_{d}|X_{d})||p(t_{d})] \tag{2}\] _Our spherical word embedding is trained on the dataset without any pretraining. This helps the embeddings deal with domain-specific words. It also makes our model work for languages where little text data is available for pretraining._ We leverage the vMF distribution as our latent distribution because of its clusterability and stability Xu and Durrett (2018); Ennajari et al. (2021); Reisinger et al. (2010); Davidson et al. (2018). Because of the design of the decoder, each topic can be represented as a distribution over all words in the vocabulary (\(\mathrm{softmax}(e_{t}e_{W}^{T})\)). _When a document is provided, the user can identify the topic distribution of the document and also the related keywords that contribute to these topics. Thus, the model is explainable._ ### Loss Function Our method allows users to define an arbitrary number of topics and provide keywords for some subsets of those topics. The model takes these two parameters as inputs and generates topics that include the user's keywords as well as additional topics that align with the underlying topic distribution. To this end, we want a prior loss of the form \[L_{CE}=-\sum_{s\in S}\max_{t\in T}\log\prod_{x\in s}q(x|t) \tag{3}\] where \(S\) contains all keyword groups, \(s\) is a group of keywords, \(T\) is the set of topics, and \(q(x|t)\) stands for the probability of word \(x\) given topic \(t\) calculated by the decoder: \[q(x|t)=\frac{\exp{(e_{t_{j}}e_{x_{i}}^{T})}}{\sum_{x\in X}\exp{(e_{t_{j}}e_{x}^{T})}} \tag{4}\] This is the entry in the \(j\)-th row and \(i\)-th column of the decoder embedding matrix \(\mathrm{softmax}(e_{T}e_{W}^{T})\). Thus, it reuses the existing neural network structure for the computation, which makes it computationally efficient. Figure 2: The neural network architecture of S2vNTM. We denote the dimension of the data in brackets. \(n\) is the number of documents. \(v\) is the number of vocabularies. \(t\) is the number of topics. \(e\) is the dimension of embeddings. Word Embedding (green) is fixed during training. Pink represents user-provided data. Orange denotes all loss functions, including \(L_{KL}\), \(L_{Recon}\), \(L_{CE}\) and \(L_{NS}\) ### Topic and Keywords set Matching We want to make sure matched topics capture all documents related to the provided keywords. The problem with using \(L_{CE}\) is that different keyword sets may map to the same topic. It may merge an irrelevant topic set when that topic set is not aligned with most of the topics. To avoid this situation, we first select the topic that is most likely to align with this group of keywords but not with the words in all other groups. To be specific, we first select \[t_{s}=\operatorname*{arg\,max}_{t\in T}(E_{x\in s}(\log q(x|t))-\max_{x\in S}\log(q(x|t))) \tag{5}\] This is inspired by Gumbel-Softmax Jang et al. (2016). If one word in a keyword set is dissimilar to the topic, the log penalizes it heavily and the topic is less likely to be matched. We also want to separate keyword groups which are different. If a keyword in another group has a higher probability in a topic, then \(\max_{x\in S}\log(q(x|t))\) will be large, which makes the topic less likely to be the selected topic. If we have two similar keyword sets, they can have similar and large \(E_{x\in s}(\log q(x|t))\). These keyword sets can still map to the same topic. 
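As a concrete illustration of Eq. (5), the sketch below scores every topic for a keyword set using the decoder matrix \(\mathrm{softmax}(e_{t}e_{W}^{T})\); the function name, the clamping constant, and the use of index lists are illustrative assumptions rather than the authors' implementation.

```python
import torch

def match_topics(keyword_ids, beta):
    """Pick a topic t_s for each keyword set s following Eq. (5).

    keyword_ids: list of lists of word indices, one inner list per keyword set in S.
    beta: (n_topics, vocab) tensor holding q(x|t), i.e. softmax(e_t e_W^T).
    Returns a list with the matched topic index for each keyword set.
    """
    log_q = beta.clamp_min(1e-12).log()                 # log q(x|t), shape (topics, vocab)
    all_ids = [i for s in keyword_ids for i in s]       # every seed word in S
    matches = []
    for s in keyword_ids:
        mean_own = log_q[:, s].mean(dim=1)              # E_{x in s} log q(x|t)
        max_all = log_q[:, all_ids].max(dim=1).values   # max_{x in S} log q(x|t)
        matches.append(int(torch.argmax(mean_own - max_all)))
    return matches
```

Because the second term takes the maximum over all seed words, a topic that strongly favors a keyword from another group is penalized, which is what keeps distinct keyword sets from collapsing onto the same topic.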
The benefit of this matching method is that it is more stable compared to methods such as Gumbel-Softmax, and it can remove redundant topics by merging them with similar topics. ### Negative Sampling We also want the keywords to act as guidance for selecting other related keywords. Similar to Yang et al. (2020), when a keyword set is matched with a topic, we want the topic to be less correlated with words that are unrelated to the matched keyword set. Thus, we leverage negative sampling. We first select the top N words in the selected topic using the decoder embedding matrix and sample each of the top N words with sampling probability equal to \(\max_{x\in s}1-\cos(x,x_{N})\), where \(x_{N}\) stands for a word among the top N words of the selected topic and \(\cos\) stands for cosine similarity. Our goal is to make words that are dissimilar to the provided keywords likely to be sampled, as seen in Table 2. Negative sampling can also help the model converge faster since it pushes away unrelated words more quickly Mimno and Thompson (2017). The penalty we add for each keyword set is: \[L_{NS,s}=\gamma\sum_{x\in ns}(\log(q(x|t_{s}))) \tag{6}\] where \(ns\) contains the words drawn by negative sampling. The loss of negative sampling is \[L_{NS}=\sum_{s\in S}L_{NS,s} \tag{7}\] \(\beta\) controls the strength of the input keywords in the overall loss function and \(\gamma\) controls the strength of negative sampling. The overall loss function is: \[L=L_{Recon}+L_{KL}+\beta*L_{CE}+\gamma*L_{NS} \tag{8}\] where \(L_{NS}\) is summed over all keyword sets, \(L_{Recon}\) is the reconstruction loss, and \(L_{KL}\) is the KL divergence loss. The benefit of this negative sampling design is that \(q(x|t_{s})\) can be read directly from the decoder. Thus, it does not require additional computation, which saves computational resources. ## 3 Results We ran our experiments 10 times with different seeds and show the results in Table 1 (and Figure 5 in the Appendix). \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline Model & \multicolumn{3}{c|}{AG News} & \multicolumn{3}{c|}{R8} & \multicolumn{3}{c|}{DBLP} \\ \hline Metric & Accuracy & Macro F1 & Auroc & Accuracy & Macro F1 & Auroc & Accuracy & Macro F1 & Auroc \\ \hline GuidedLDA & 0.778 \(\pm\) 0.017 & 0.857 \(\pm\) 0.016 & 0.725 \(\pm\) 0.009 & 0.541 \(\pm\) 0.012 & 0.872 \(\pm\) 0.012 & 0.309 \(\pm\) 0.017 & 0.41 \(\pm\) 0.009 & 0.024 \(\pm\) 0.005 & 0.47 \(\pm\) 0.008 \\ CorEx & 0.778 \(\pm\) 0.003 & 0.899 \(\pm\) 0.001 & 0.705 \(\pm\) 0.002 & 0.532 \(\pm\) 0.001 & 0.762 \(\pm\) 0.005 & 0.294 \(\pm\) 0.024 & 0.53 \(\pm\) 0.009 & 0.84 \(\pm\) 0.005 & 0.492 \(\pm\) 0.011 \\ S2vNTM & 0.795 \(\pm\) 0.009 & 0.302 \(\pm\) 0.007 & 0.792 \(\pm\) 0.009 & 0.651 \(\pm\) 0.03 & 0.531 \(\pm\) 0.002 & 0.362 \(\pm\) 0.009 & 0.508 \(\pm\) 0.029 & 0.505 \(\pm\) 0.002 & 0.545 \(\pm\) 0.002 \\ \hline \end{tabular} \end{table} Table 1: Scores and standard deviations for Accuracy, Macro F1 and Auroc of the GuidedLDA, CorEx and S2vNTM models on the AG News, R8 and DBLP datasets. (1) S2vNTM achieves the best accuracy on all three datasets. In fact, the worst reported accuracy of S2vNTM is higher than the best from the other two methods. We believe there are 3 reasons contributing to its superior performance. (i) It has high clusterability from using vMF as the latent distribution. This makes our method easily clustered. (ii) Negative sampling excludes unrelated keywords from the topics. 
This makes our method perform better on documents that are related to the keywords. (iii) S2vNTM also uses word embeddings trained on the dataset. This makes our method perform well on documents containing words similar to those in the keyword sets. (2) S2vNTM keywords make more sense qualitatively, as shown in Table 2. This is due to the KL divergence loss. The flexible concentration parameter \(\kappa\) makes our method more locally concentrated. This makes topics different from each other. (3) S2vNTM also has higher Auroc and Macro F1 scores than the other methods in most cases (from Table 1). This means that our method can deal with imbalanced datasets and can easily distinguish between classes. However, it performs less well on R8, which has 8 imbalanced classes. For classes with fewer than 300 documents, keywords selected by tf-idf are less representative. Thus, it has lower performance and higher variance. Besides, our method uses the vMF distribution, which has higher reconstruction loss when the dimension is high. R8 has 8 classes, which makes our method perform worse. Qualitatively, as can be seen in Table 2, negative sampling reduces the importance of unrelated keywords such as _call, york, company_ while increasing the importance of given keywords such as _military, industry, athlete_. Also, semantically, keywords in each set are closer to each other. For example, in the first set of keywords, _government, war_ are semantically more related to _crime, rule_ compared to _call, election_. On the other hand, even though CorEx has good topic diversity, its keyword sets are not coherent. For example, the last group in Table D has _inc, corp, people, bush, million_ in one group. Determining the relationship between these keywords is not obvious. **Speed** We run each model 10 times on AG News with different seeds to evaluate how long it takes to fine-tune the model by modifying 20 percent of the keyword sets. The average fine-tuning time for our method is 51.33 seconds. In comparison, CatE Meng et al. (2018) takes 888.61 seconds to fine-tune, while CorEx takes 94.98 seconds. This shows that our method is better suited for iterative topic learning Hu et al. (2014) and resource-restrictive environments. Overall, the qualitative results show that _S2vNTM can help users find more coherent and relevant keywords compared to existing methods. Negative sampling makes the topic sets more coherent. S2vNTM is at least twice as fast as the baselines._ ## 4 Conclusion and Future Work In conclusion, we propose S2vNTM as an approach to integrate keywords as patterns into current neural topic modeling methods. It is based on the vMF distribution, negative sampling, a modified topic-keywords mapping, and spherical word embeddings. Our method achieves better classification performance compared to existing semi-supervised topic modeling methods. It is not sensitive to parameters. S2vNTM gives more coherent topics qualitatively. It also performs well when the input keyword sets are less common in the dataset. It is also fast to fine-tune. It does not require pretraining or transfer learning. It only needs a few sets of seed words as input. The ablation study shows the potential of our method to improve further. In the future, we will focus on decreasing the gap between the loss function and the classification metric, incorporating sequential information, and further improving the stability of the model. We will also work on improving its expressiveness in higher dimensions. 
\begin{table} \begin{tabular}{|c|c|} \hline S2vNTM & S2vNTM + Negative Sampling \\ \hline **government, war**, president, call, election & **government, war**, **military**, crime, rule \\ \hline **stock**, high, investor, **market**, york & **stock**, investor, **market**, share, **industry** \\ \hline **software, computer**, system, microsoft, company & **software, computer**, microsoft, system, technology \\ \hline game, sport, champion, season, team & game, sport, champion, season, **athelete** \\ \hline united, reuters, international, state, union & reuters, united, state, international, plan \\ \hline reuters, report, target, http, company & reuters, report, target, http, company \\ \hline \end{tabular} \end{table} Table 2: Comparison of top 5 keywords from each topics on AG News. The keywords that are given are [government,military,war], [stock,market,industry], [computer,telescope,software], [basketball,football,athlete].
2303.11040
Benchmarking Robustness of 3D Object Detection to Common Corruptions in Autonomous Driving
3D object detection is an important task in autonomous driving to perceive the surroundings. Despite the excellent performance, the existing 3D detectors lack the robustness to real-world corruptions caused by adverse weathers, sensor noises, etc., provoking concerns about the safety and reliability of autonomous driving systems. To comprehensively and rigorously benchmark the corruption robustness of 3D detectors, in this paper we design 27 types of common corruptions for both LiDAR and camera inputs considering real-world driving scenarios. By synthesizing these corruptions on public datasets, we establish three corruption robustness benchmarks -- KITTI-C, nuScenes-C, and Waymo-C. Then, we conduct large-scale experiments on 24 diverse 3D object detection models to evaluate their corruption robustness. Based on the evaluation results, we draw several important findings, including: 1) motion-level corruptions are the most threatening ones that lead to significant performance drop of all models; 2) LiDAR-camera fusion models demonstrate better robustness; 3) camera-only models are extremely vulnerable to image corruptions, showing the indispensability of LiDAR point clouds. We release the benchmarks and codes at https://github.com/kkkcx/3D_Corruptions_AD. We hope that our benchmarks and findings can provide insights for future research on developing robust 3D object detection models.
Yinpeng Dong, Caixin Kang, Jinlai Zhang, Zijian Zhu, Yikai Wang, Xiao Yang, Hang Su, Xingxing Wei, Jun Zhu
2023-03-20T11:45:54Z
http://arxiv.org/abs/2303.11040v1
# Benchmarking Robustness of 3D Object Detection to Common Corruptions in Autonomous Driving ###### Abstract 3D object detection is an important task in autonomous driving to perceive the surroundings. Despite the excellent performance, the existing 3D detectors lack the robustness to real-world corruptions caused by adverse weathers, sensor noises, etc., provoking concerns about the safety and reliability of autonomous driving systems. To comprehensively and rigorously benchmark the corruption robustness of 3D detectors, in this paper we design 27 types of common corruptions for both LiDAR and camera inputs considering real-world driving scenarios. By synthesizing these corruptions on public datasets, we establish three corruption robustness benchmarks--KITTI-C, nuScenes-C, and Waymo-C. Then, we conduct large-scale experiments on 24 diverse 3D object detection models to evaluate their corruption robustness. Based on the evaluation results, we draw several important findings, including: 1) motion-level corruptions are the most threatening ones that lead to significant performance drop of all models; 2) LiDAR-camera fusion models demonstrate better robustness; 3) camera-only models are extremely vulnerable to image corruptions, showing the indispensability of LiDAR point clouds. We release the benchmarks and codes at [https://github.com/kkkcx/3D_Corruptions_AD](https://github.com/kkkcx/3D_Corruptions_AD). We hope that our benchmarks and findings can provide insights for future research on developing robust 3D object detection models. ## 1 Introduction As a fundamental task in autonomous driving, 3D object detection aims to identify objects of interest (_e.g_., vehicles, pedestrians, or cyclists) in the surrounding environment by predicting their categories and the corresponding 3D bounding boxes. LiDAR and camera are two important types of sensors for 3D object detection, where the former provides the depth information of road objects as sparse point clouds, while the latter captures abundant semantic information of the scene as color images. Based on the complementary nature of the two modalities, 3D object detection models can be categorized into LiDAR-only [51, 52, 53, 54, 50], camera-only [51, 52, 53, 50, 59], and LiDAR-camera fusion [51, 52, 53, 56] models. Since autonomous driving is safety-critical, it is of paramount importance to assess the robustness of 3D object detectors under diverse circumstances before deployed. Although the recent progress of 3D object detection has led to significant improvements in typical benchmarks (_e.g_., KITTI [17], nuScenes [6], and Waymo [54]), the existing models based on data-driven deep learning approaches often generalize poorly to the corrupted data caused by, _e.g_., adverse weathers [28, 21, 22], sensor noises [47, 25, 7], and uncommon objects [32, 9], posing a formidable obstacle to safe and reliable autonomous driving [1]. To perform robustness evaluation, recent works construct new datasets of road anomalies [32, 23, 43, 9] or under extreme weather conditions [44, 4, 15]. Nevertheless, they are usually of small sizes due to the high data collection costs and the rareness of corner cases or adverse weathers. Other works synthesize common corruptions on clean datasets to benchmark robustness on image classification [25] and point cloud recognition [53, 47], but they only consider several simple corruptions, which could be insufficient and unrealistic for 3D object detection. 
Therefore, it remains challenging to comprehensively characterize different corruptions considering diverse driving scenarios and fairly evaluate corruption robustness of existing models within a unified framework. In this paper, we systematically design **27** types of common corruptions in 3D object detection for both LiDAR and camera sensors to comprehensively and rigorously evaluate the corruption robustness of current 3D object detectors. The corruptions are grouped into _weather_, _sensor_, _motion_, _object_, and _alignment_ levels, covering the majority of real-world corruption cases, as demonstrated in Fig. 1. Most of them are specifically designed for autonomous driving (_e.g_., motion-level ones), which have not been explored before. Following [25], every corruption has five severities, leading to a total number of **135** distinct corruptions. By applying them to typical autonomous driving datasets--KITTI [17], nuScenes [6], and Waymo [54], we establish three corruption robustness benchmarks--**KITTI-C**, **nuScenes-C**, and **Waymo-C**. We hope that these large-scale corrupted datasets can serve as general datasets for fairly and comprehensively benchmarking corruption robustness of 3D object detection models and facilitating future research. We conduct large-scale experiments to compare the corruption robustness of existing 3D object detection models. Specifically, we evaluate 11 models on KITTI-C, 10 models on nuScenes-C, and 3 models on Waymo-C. The models are of great variety with different input modalities, representation methods, and detection heads. Based on the evaluation results, we find that: 1) the corruption robustness of 3D object detectors is highly correlated with their clean accuracy; 2) motion-level corruptions impair the model performance most, while being rarely explored before; 3) LiDAR-camera fusion models are more resistant to corruptions, but there is a trade-off between robustness under image corruptions and point cloud corruptions of fusion models. More discussions are provided in Sec. 6. Moreover, we study data augmentation strategies [14, 69, 72] as potential solutions to improve corruption robustness, but find that they provide a little robustness gain, leaving robustness enhancement of 3D object detection an open problem for future research. ## 2 Related Work ### 3D Object Detection Based on the input modality, we categorize 3D object detection models into LiDAR-only, camera-only, and LiDAR-camera fusion models. **LiDAR-only models:** LiDAR point clouds are sparse, irregular, and unordered by nature. To learn useful representations, _voxel-based_ methods project point clouds to compact grids. Typically, VoxelNet [75] rasterizes point clouds into voxels, which are processed by PointNets [46] and 3D CNNs. To speed up, SECOND [65] introduces sparse 3D convolutions and PointPillars [30] elongates voxels into pillars. Other works exploit information of object parts [52] or shape [76] to improve the performance. On the other hand, _point-based_ methods take raw point clouds as inputs and make predictions on each point. PointRCNN [51] proposes a two-stage framework that first generates 3D proposals and then refines the proposals in the canonical coordinates. 3DSSD [66] is a lightweight one-stage detector with a fusion sampling strategy. To have the best of both worlds, _point-voxel-based_ methods are then explored. PV-RCNN [50] integrates 3D voxel CNN and PointNet-based set abstraction to efficiently create high-quality proposals. 
**Camera-only models:** 3D object detection based on images is challenging due to the lack of depth information, but attracts extensive attention considering the advantage of low cost. The most straightforward approach is to take _monocular_ detection methods [10, 37, 40, 61, 60] and apply post-processing across cameras. For example, Mono3D [10] generates 3D object proposals scored by semantic features. SMOKE [37] combines a single keypoint estimation with regressed 3D variables. To address the limitation of post-processing in monocular methods, _multi-view_ methods fuse information from all cameras in the intermediate layers. DETR3D [62] adopts a transformer-based detector [8] that fetches the image features by projecting object queries onto images. BEVFormer [34] exploits spatial-temporal information from multi-view images based on BEV queries. Figure 1: An overview of 27 corruptions for 3D object detection, which are categorized into weather, sensor, motion, object, and alignment levels. As shown, some corruptions are effective for one modality, while the others are applied to both (_e.g_., _Snow_, _Moving Object_, _Shear_). **LiDAR-camera fusion models:** To leverage the complementary information from LiDAR and camera inputs, fusion methods are also extensively studied. Following [36], we classify the newly developed methods into _point-level_, _proposal-level_, and _unified representation_ fusion methods. Point-level methods augment LiDAR point clouds with semantic image features and then apply existing LiDAR-only models for 3D detection, including PointPainting [57], EPNet [26], PointAugmenting [58], Focals Conv [13], _etc_. Proposal-level fusion methods [11, 45] generate 3D object proposals and integrate image features into these proposals. FUTR3D [12] and TransFusion [2] employ a query-based transformer decoder, which fuses image features with object queries. Moreover, BEVFusion [36] unifies the image feature and point cloud feature in a BEV representation space, which stands out as a new fusion strategy. ### Robustness Benchmarks It is well-known that deep learning models lack robustness to adversarial examples [20, 55], common corruptions [25], and other kinds of distribution shifts [18, 19, 24]. In autonomous driving, many works collect new datasets to evaluate model robustness under different conditions. For example, the Seeing Through Fog (STF) [4], Canadian Adverse Driving Conditions (CADC) [44], and Ithaca365 [15] datasets are collected in adverse weathers; and others gather road anomalies of 2D images [9, 23, 43, 32]. Despite the efforts, these datasets only cover limited scenarios due to the high collection costs of rare data. Moreover, as they are mainly used for evaluation, these datasets have a big domain gap from the large-scale training datasets since they were collected in different cities with varying vehicles and sensors, making it hard for us to examine the effects of different factors (_e.g_., weather _vs_. city) on model robustness. One promising direction is to synthesize real-world corruptions on clean datasets to benchmark model robustness. For example, ImageNet-C [25] was first introduced in image classification with 15 corruption types, ranging from noise, blur, and weather to digital corruptions. A similar methodology was further applied to 2D object detection [39] and point cloud recognition [47, 53]. However, many of these studied corruptions are hypothetical and thus unrealistic in the scenario of autonomous driving. 
It is still challenging to build a comprehensive benchmark for robustness evaluation of 3D object detection considering diverse real-world driving cases. We notice that two works concurrent to ours [33, 68] also study robustness of 3D object detection in autonomous driving. However, they mainly consider specific kinds of 3D detection models (_i.e_., LiDAR-only models in [33] and fusion models in [68]) and include limited types of corruptions with fewer evaluations, as compared in Appendix A.2. ## 3 Corruptions in 3D Object Detection Real-world corruptions arise from diverse scenarios in autonomous driving, based on which we systematically categorize the corruptions into _weather_, _sensor_, _motion_, _object_, and _alignment_ levels. We identify common corruption types for each level considering real-world driving scenarios, resulting in **27** distinct corruptions in total, as shown in Fig. 1. Among them, some corruptions are applied to both modalities simultaneously, such as weather-level ones, while the others are designed for a single modality, such as sensor-level ones. We visualize a subset of corruptions in Fig. 2. **Weather-level corruptions:** Weather change is usually encountered in autonomous driving, and it can dramatically disrupt both LiDAR and camera inputs. For example, _fog_ reduces the visibility of objects in images and causes scattered points due to attenuation and backscattering [4, 22, 70]. Consequently, 3D detectors trained on data collected in normal weather tend to perform poorly under adverse weathers [4]. To study the robustness under weather changes, we consider 4 weather-level corruptions: _Snow_, _Rain_, _Fog_, and _Strong Sunlight_, as they are more common [4, 15, 44]. For LiDAR, we adopt physically based methods [21, 22, 28] to simulate the effects of rain, snow, and fog on point clouds from normal weather. We simulate the effect of strong sunlight by applying strong Gaussian noises to points along the sun direction [7]. For camera, we apply image augmentations [25] to simulate visually realistic weathers. **Sensor-level corruptions:** The sensors, when affected by numerous internal or external factors (_e.g_., sensor vibration [49], lighting conditions [25, 34] and reflective materials), can induce various kinds of corruptions to the captured data. Based on prior discussions on sensor noises [3, 7, 25, 47], we design 10 practical sensor-level corruptions--7 for point clouds and 3 for images. The point cloud corruptions are: _Density Decrease_, _Cutout_, _LiDAR Crosstalk_, _FOV Lost_, _Gaussian Noise_, _Uniform Noise_, and _Impulse Noise_. Density decrease simulates missing points commonly observed in typical datasets [17]. Cutout occurs when laser pulses have no echo in a local region (_e.g_., a puddle) and is simulated by dropping points in a randomly selected area. LiDAR crosstalk [5] happens when multiple LiDARs operate at close range, which is simulated by applying strong Gaussian noises to a small subset of points. FOV lost simulates a limited field-of-view of the LiDAR caused by occlusion. Moreover, due to the ranging inaccuracy of LiDAR, we consider 3 noise corruptions that apply Gaussian, uniform, and impulse noises to point coordinates, respectively. The 3 image corruptions include _Gaussian Noise_, _Uniform Noise_, and _Impulse Noise_ to simulate the visual noise patterns due to low-lighting conditions or defects of the camera [25]. 
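To give a sense of how such point-cloud corruptions can be synthesized, a minimal NumPy sketch is shown below; the severity-to-noise mapping and the cutout radius are illustrative assumptions rather than the calibrated parameters used to build KITTI-C, nuScenes-C, and Waymo-C.

```python
import numpy as np

def gaussian_noise(points, severity):
    """Perturb LiDAR point coordinates with Gaussian noise (ranging inaccuracy)."""
    # Illustrative per-severity standard deviations in meters (not the paper's values).
    std = [0.02, 0.04, 0.06, 0.08, 0.10][severity - 1]
    noisy = points.copy()
    noisy[:, :3] += np.random.normal(0.0, std, size=noisy[:, :3].shape)
    return noisy

def cutout(points, severity, radius=2.0):
    """Drop all points inside randomly chosen local regions (no-echo areas)."""
    keep = np.ones(len(points), dtype=bool)
    for _ in range(severity):  # more dropped regions at higher severity (illustrative)
        center = points[np.random.randint(len(points)), :3]
        keep &= np.linalg.norm(points[:, :3] - center, axis=1) > radius
    return points[keep]
```

Image-side corruptions follow the same pattern, applying pixel-level noise of increasing strength to the camera frames.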
Although we design sensor-level corruptions for LiDAR and camera separately, they can occur for both sensors at the same time, affecting LiDAR-camera fusion models further. **Motion-level corruptions:** An autonomous vehicle will encounter several types of corruptions during driving. In this paper, we introduce 3 motion-level corruptions: _Motion Compensation_, _Moving Object_, and _Motion Blur_, which are practical in the real world and studied for the first time. Vehicle ego-motion induces distortions to point clouds since the points in a frame are not obtained in the same coordinate system [73]. To obtain accurate point clouds, motion compensation is typically used with the aid of the localization information [6, 17]. However, this process can introduce noises, which we call motion compensation corruption, simulated by adding small Gaussian noises to the rotation and translation matrices of the vehicle's ego pose. The moving object corruption denotes the case where an object is moving rapidly in the scene. It can cause shifting points within the object's 3D bounding box [63] and blur the image patch of the object. The last corruption is motion blur on camera images, which is caused by driving too fast. **Object-level corruptions:** Objects in the real world always come in a variety of shapes and materials [9, 32], making it challenging to correctly recognize them. The viewing direction can also lead to wrong recognition of objects [16]. Based on this, we introduce 8 object-level corruptions: _Local Density Decrease_, _Local Cutout_, _Local Gaussian Noise_, _Local Uniform Noise_, _Local Impulse Noise_, _Shear_, _Scale_, and _Rotation_. The first five corruptions are only applied to LiDAR point clouds to simulate the distortions caused by different object materials or occlusion. As their names indicate, these corruptions only make changes to local sets of points within the objects' 3D bounding boxes. The last three corruptions simulate shape deformation of objects, and _Rotation_ can also simulate different view directions of objects. They can affect both LiDAR and camera inputs. To make consistent distortions to the two modalities, we apply the same transformation of shear, scale, or rotation to both the points and the image patches belonging to the objects in the scene. **Alignment-level corruptions:** It is typically assumed that LiDAR and camera inputs are well aligned before being fed to the fusion models. However, this assumption can be invalid during long-time driving, _e.g_., the collection of the ONCE dataset [38] needed re-calibration almost every day to avoid misalignment between different sensors. In practice, an autonomous vehicle can encounter _Spatial Misalignment_ and _Temporal Misalignment_ [68]. Spatial misalignment can be caused by sensor vibration due to bumps of the vehicle. We simulate it by adding random noises to the calibration matrices. Temporal misalignment happens when the data is stuck or delayed for a sensor. We keep the input of one modality the same as that at the previous timestamp to simulate temporal misalignment between the two modalities. **Discussion about the gap between synthetic and real-world corruptions.** Real-world corruptions can come from multiple and diverse sources. For example, an autonomous vehicle can encounter adverse weather and uncommon objects at the same time, leading to much more complicated corruptions. 
Although it is impossible to enumerate all real-world corruptions, we systematically design 27 corruption types grouped into five levels, which can serve as a practical testbed to perform controllable robustness evaluation. In particular, for weather-level corruptions, we adopt the state-of-the-art methods for simulation, which are shown to approximate real data well [21, 22]. Although there inevitably exists a gap, we validate that the model performance on synthetic weathers is consistent with that on real data under adverse weathers. More discussions are provided in Appendix A.4. ## 4 Corruption Robustness Benchmarks To comprehensively evaluate the corruption robustness of 3D object detection models, we establish three corruption robustness benchmarks based on the most widely used datasets in autonomous driving--KITTI [17], nuScenes [6], and Waymo [54]. We apply the aforementioned corruptions to the validation sets of these datasets and obtain **KITTI-C**, **nuScenes-C**, and **Waymo-C**, respectively. Note that although several corruptions naturally appear in a few samples of the datasets, we still apply the synthetic corruptions to all data to fairly compare model robustness under different corruptions and reduce the effort of filtering data. Besides, we build a unified toolkit comprising all corruptions, which can be used for other datasets as well. Figure 2: Visualization of typical corruption types of each level in our benchmark (best viewed when zoomed in). Full visualization results of all corruptions are shown in Appendix A.3. Below we introduce the dataset details, evaluation metrics, and evaluated models of the three benchmarks, respectively. ### KITTI-C The KITTI dataset [17] contains 3712 training, 3769 validation, and 7518 test samples. As we do not have access to the test set, KITTI-C is constructed upon the validation set. Among the corruptions, we do not include _FOV Lost_, _Motion Compensation_ and _Temporal Misalignment_ since: 1) 3D object detection models usually take front-view point clouds of 90\({}^{\circ}\) FOV as inputs since the KITTI dataset only provides box annotations in front of the vehicle; 2) the localization and timestamp information of each frame is not provided in the dataset. Therefore, there are 24 corruptions in KITTI-C, with 5 severities for each following [25]. The standard evaluation is performed on the _Car_, _Pedestrian_ and _Cyclist_ categories at the _Easy_, _Moderate_ and _Hard_ levels of difficulty. The evaluation metric is the Average Precision (AP) with 40 recall positions at an IoU threshold of 0.7 for cars and 0.5 for pedestrians/cyclists. We denote model performance on the original validation set as \(\mathrm{AP}_{\mathrm{clean}}\). For each corruption type \(c\) at each severity \(s\), we adopt the same metric to measure model performance as \(\mathrm{AP}_{c,s}\). Then, the _corruption robustness_ of a model is calculated by averaging over all corruption types and severities as \[\mathrm{AP}_{\mathrm{cor}}=\frac{1}{|\mathcal{C}|}\sum_{c\in\mathcal{C}}\frac{1}{5}\sum_{s=1}^{5}\mathrm{AP}_{c,s}, \tag{1}\] where \(\mathcal{C}\) is the set of corruptions in evaluation. Note that for different kinds of 3D object detectors, the set of corruptions can be different (_e.g._, we do not evaluate camera noises for LiDAR-only models), thus the results of \(\mathrm{AP}_{\mathrm{cor}}\) are _not_ directly comparable between different kinds of models and we perform a fine-grained analysis under each corruption. 
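As a small computational illustration of this aggregation (and of the relative corruption error defined in Eq. (2) below), the helper sketched here averages per-severity APs within each corruption and then over corruptions; it is an illustrative snippet, not the released toolkit.

```python
def corruption_robustness(ap_clean, ap_per_corruption):
    """Compute AP_cor (Eq. (1)) and the relative corruption error RCE (Eq. (2)).

    ap_clean: AP on the uncorrupted validation set.
    ap_per_corruption: dict mapping corruption name -> list of APs, one per severity (1-5).
    """
    per_corruption = [sum(aps) / len(aps) for aps in ap_per_corruption.values()]
    ap_cor = sum(per_corruption) / len(per_corruption)
    rce = (ap_clean - ap_cor) / ap_clean
    return ap_cor, rce
```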
We also calculate _relative corruption error (RCE)_ by measuring the percentage of performance drop as \[\mathrm{RCE}_{c,s}=\frac{\mathrm{AP}_{\mathrm{clean}}-\mathrm{AP}_{c,s}}{ \mathrm{AP}_{\mathrm{clean}}};\ \mathrm{RCE}=\frac{\mathrm{AP}_{\mathrm{clean}}-\mathrm{AP}_{\mathrm{cor}}}{ \mathrm{AP}_{\mathrm{clean}}}. \tag{2}\] We select 11 representative 3D object detection models trained on KITTI, including 6 LiDAR-only models: _SECOND_[65], _PointPillars_[30], _PointRCNN_[51], _Part-A\({}^{2}\)[52], _PV-RCNN_[50], and _3DSSD_[66]; 3 camera-only models: _SMOKE_[37], _PGD_[59], and _ImVoxelNet_[48]; and 2 LiDAR-camera fusion models: _EPNet_[26] and _Focals Conv_[13]. The details regarding their representations and detection heads are shown in Table 1(a). ### nuScenes-C The nuScenes dataset [6] contains 1000 sequences of approximately 20s duration with a LiDAR frequency of 20 FPS. The box annotations are provided for every 0.5s. Each frame has one point cloud and six images covering \(360^{\circ}\) horizontal FOV. In total, there are 40k frames which are split into 28k, 6k, 6k for training, validation, and testing. As the dataset provides full annotations and information of vehicle pose and timestamp, we can simulate all corruptions. Thus, we apply all 27 corruptions to the nuScenes validation set with 5 severities to obtain nuScenes-C. For 3D object detection, the main evaluation metrics are mean Average Precision (mAP) and nuScenes detection score (NDS) computed on 10 object categories. The mAP is calculated using the 2D center distance on the ground plane instead of the 3D IoU. The NDS metric consolidates mAP and other aspects (_e.g._, scale, orientation) into a unified score. Similar to KITTI-C, we denote model performance on the validation set as \(\mathrm{mAP}_{\mathrm{clean}}\) and \(\mathrm{NDS}_{\mathrm{clean}}\), and measure the corruption robustness \(\mathrm{mAP}_{\mathrm{cor}}\) and \(\mathrm{NDS}_{\mathrm{cor}}\) by averaging over all corruptions and severities. We also compute the relative corruption error \(\mathrm{RCE}\) under both mAP and NDS metrics similar to Eq. (2). On nuScenes-C, we select 10 3D detectors, including 3 LiDAR-only models: _PointPillars_[30], _SSN_[76], and _CenterPoint_[67]; 4 camera-only models: _FCOS3D_[60], _PGD_[59], _DETR3D_[62], and _BEVFormer_[34]; and 3 LiDAR-camera fusion models: _FUTR3D_[12], _TransFusion_[2], and _BEVFusion_[36]. The model details are shown in Table 1(b). \begin{table} \end{table} Table 1: The 3D object detection models adopted for corruption robustness evaluation on KITTI-C and nuScenes-C. We show the input modality, representation learning method (see Sec. 2.1), and detection head of each model. ### Waymo-C The Waymo open dataset [54] consists of 798 scenes for training and 202 scenes for validation. Similar to nuScenes-C, Waymo-C is constructed by applying all 27 corruptions to the Waymo validation set with 5 severities. The official evaluation metrics are mAP and mAPH by taking the heading accuracy into consideration. We similarly calculate the corruption robustness and relative corruption error on Waymo-C. Due to the license agreement, there are no pre-train models publicly. Thus, we train the LiDAR-only _PointPillars_[30], camera-only _BEVFormer_[34], and LiDAR-camera fusion _TransFusion_[2] on a subset of training data [34] for robustness evaluation. ## 5 Benchmarking Results We present the evaluation results on KITTI-C in Sec. 5.1, nuScenes-C in Sec. 5.2, and leave the results on Waymo-C in Appendix D. 
We summarize the key findings in Sec. 6. ### Results on KITTI-C We show the corruption robustness of 11 3D object detection models on KITTI-C in Table 2, in which we only report the results on the car class at moderate difficulty, while leaving full results of other classes and difficulties in Appendix B. Overall, the corruption robustness is highly correlated with the clean accuracy, as the models (_e.g_., PV-RCNN, Focals Conv) with higher \(\mathrm{AP}_{\mathrm{clean}}\) also achieve higher \(\mathrm{AP}_{\mathrm{cor}}\). It is not surprising due to the consistent performance degradation of different models. We further show the relative corruption error \(\mathrm{RCE}\) of these models under each level of corruptions in Fig. 3. Based on the evaluation results, we provide the analyses below. \begin{table} \begin{tabular}{c|c c c c c c|c c c c|c} \hline \hline \multirow{2}{*}{**Corruption**} & \multicolumn{4}{c|}{**LiDAR-only**} & \multicolumn{2}{c|}{**Camera-only**} & \multicolumn{2}{c}{**LC Fusion**} \\ & SECOND & PointPillars & PointRCNN & Part-\(A^{2}\) & PV-RCNN & 3DSSD & SMOKE & PGD & ImVoxelNet & EPNet & Focals Conv \\ \hline \hline \multicolumn{2}{c|}{**None (\(\mathrm{AP}_{\mathrm{clean}}\))**} & 81.59 & 78.41 & 80.57 & 82.45 & 84.39 & 80.03 & 7.09 & 8.10 & **11.49** & 82.72 & **85.88** \\ \hline \hline \multirow{4}{*}{**Weather**} & Snow & **52.34** & 36.47 & 50.36 & 42.70 & **52.35** & 27.12 & 2.47 & 6.03 & 0.22 & 34.58 & 34.77 \\ & Rain & **52.55** & 36.18 & 51.27 & 41.63 & 51.58 & 26.28 & 3.94 & 3.06 & 1.24 & 36.27 & 41.30 \\ & Fog & 74.10 & 64.28 & 72.14 & 71.61 & **79.47** & 45.89 & 5.63 & 0.87 & 1.34 & 44.35 & 44.55 \\ & Sunlight & 78.32 & 62.28 & 62.78 & 76.45 & 79.91 & 26.09 & 6.00 & 7.07 & 10.08 & 69.65 & **80.97** \\ \hline \multirow{6}{*}{**Sensor**} & Density & 80.18 & 76.49 & 80.35 & 80.53 & 82.79 & 77.65 & - & - & - & 82.09 & **84.95** \\ & Cutout & 73.59 & 70.28 & 73.94 & 76.08 & 76.09 & 73.05 & - & - & - & 76.10 & **78.06** \\ & Crossstalk & 80.24 & 70.85 & 71.53 & 79.95 & 82.34 & 46.49 & - & - & - & 82.10 & **85.82** \\ & Gaussian (L) & 64.90 & 74.68 & 61.20 & 60.73 & 65.11 & 59.14 & - & - & - & 60.88 & **82.14** \\ & Uniform (L) & 79.18 & 77.31 & 76.39 & 77.77 & 81.16 & 74.91 & - & - & - & 79.24 & **85.81** \\ & Impulse (L) & 81.43 & 78.17 & 79.78 & 80.80 & 82.81 & 78.28 & - & - & - & 81.63 & **85.01** \\ & Gaussian (C) & - & - & - & - & - & - & 1.56 & 1.71 & 2.43 & 80.64 & **80.97** \\ & Uniform (C) & - & - & - & - & - & - & 2.67 & 3.29 & 4.85 & 81.61 & **83.38** \\ & Impulse (C) & - & - & - & - & - & - & - & 1.83 & 1.14 & 2.13 & **81.18** & 80.83 \\ \hline \multirow{2}{*}{**Motion**} & Moving Obj. 
& 52.69 & 50.15 & 50.54 & 54.62 & 54.60 & 52.47 & 1.67 & 2.64 & 5.93 & **55.78** & 49.14 \\ & Motion Blur & - & - & - & - & - & - & 3.51 & 3.36 & 4.19 & 74.71 & **81.08** \\ \hline \multirow{6}{*}{**Object**} & Local Density & 75.10 & 69.56 & 74.24 & 79.57 & 77.63 & 77.96 & - & - & - & 76.73 & **80.84** \\ & Local Cutout & 68.29 & 61.80 & 67.94 & 75.06 & 72.29 & 73.22 & - & - & 69.92 & **76.64** \\ & Local Gaussian & 72.31 & 76.58 & 69.82 & 77.44 & 70.44 & 75.11 & - & - & - & 75.76 & **82.02** \\ & Local Uniform & 80.17 & 78.04 & 77.67 & 80.77 & 82.09 & 78.64 & - & - & 81.71 & **84.69** \\ & Local Impulse & 81.56 & 78.43 & 80.26 & 82.25 & 84.03 & 79.53 & - & - & - & 82.21 & **85.78** \\ & Shear & 41.64 & 39.63 & 39.80 & 37.08 & **47.72** & 26.56 & 1.68 & 2.99 & 1.33 & 41.43 & 45.77 \\ & Scale & 73.11 & 70.29 & 71.50 & 75.90 & **76.81** & 75.02 & 0.13 & 0.15 & 0.33 & 69.05 & 69.48 \\ & Rotation & 76.84 & 72.70 & 75.57 & 77.50 & **79.93** & 76.98 & 1.11 & 2.14 & 2.57 & 74.62 & 77.76 \\ \hline **Alignment** & Spatial & - & - & - & - & - & - & - & - & - & 35.14 & 43.01 \\ \hline **Average (\(\mathrm{AP}_{\mathrm{cor}}\))** & **70.45** & 65.48 & 67.74 & 69.92 & **72.59** & 60.55 & 2.68 & 2.42 & **3.05** & **67.81** & **71.87** \\ \hline \hline \end{tabular} \end{table} Table 2: The benchmarking results of 11 3D object detectors on **KITTI-C**. We show the performance under each corruption and the overall corruption robustness \(\mathrm{AP}_{\mathrm{cor}}\) averaged over all corruption types. The results are evaluated based on the car class at moderate difficulty. Figure 3: The relative corruption error \(\mathrm{RCE}\) of 11 3D object detectors on **KITTI-C**. We show the overall results under all corruptions and the results under each level of corruptions. **Comparison of corruption types.** Based on Table 2 and Fig. 3, we can observe that weather-level and motion-level corruptions affect the performance of LiDAR-only and fusion models most, while all corruptions cause significant performance drop for camera-only models. For example, _Snow_ and _Rain_ lead to more than \(35\%\)\(\mathrm{RCE}\) for all models, demonstrating the threats of adverse weathers on 3D object detectors. Besides, _Moving Object_ and _Shear_ are also challenging for all models, while _Spatial Misalignment_ has a great impact on fusion models. On the other hand, most models exhibit negligible performance drop under sensor-level and object-level corruptions, mainly due to their ubiquity in the training dataset. **Comparison of 3D object detectors.** Due to the inferior performance of camera-only models, we mainly compare LiDAR-only and LiDAR-camera fusion models. We notice that for corruptions that affect both modalities (_e.g._, _Snow_, _Moving Object_, _Shear_), LiDAR-only models lead to better performance. But for those that only corrupt point clouds (_e.g._, sensor noises), fusion models are more competitive. This is due to that the accurate image data can endow fusion models with better robustness under point cloud noises, but when images are also corrupted, fusion models are affected by both inputs, resulting in inferior performance. To further validate this, we apply sensor noises to LiDAR and camera inputs at the same time. We show the performance of Focals Conv [13] under the concurrence of LiDAR and camera noises in Fig. 4. 
It can be seen that the accuracy of Focals Conv further drops in the presence of both LiDAR and camera noises, leading to worse performance than LiDAR-only models that cannot be affected by camera noises. The results demonstrate that although fusion models are more robust to noises of one modality, they are potentially exposed Figure 4: The performance of Focals Conv [13] under the concurrence of LiDAR and camera noises. \begin{table} \begin{tabular}{c|c|c c|c c c c|c c c} \hline \hline \multirow{2}{*}{**Corruption**} & \multicolumn{3}{c|}{**LiDAR-only**} & \multicolumn{3}{c|}{**Camera-only**} & \multicolumn{3}{c}{**LC Fusion**} \\ & PointPillars & SSN & CenterPoint & FCOS3D & PGD & DETR3D & BEVFormer & FUTR3D & TransFusion & BEVFusion \\ \hline \hline \multicolumn{2}{c|}{**None (\(\mathrm{mAP}_{\mathrm{clean}}\))**} & **27.69** & **46.65** & **59.28** & **23.86** & **23.19** & **34.71** & **41.65** & **64.17** & **66.38** & **68.45** \\ \hline \hline \multirow{4}{*}{**Weather**} & Snow & 27.57 & 46.38 & 55.90 & 2.01 & 2.30 & 5.08 & 5.73 & 52.73 & **63.30** & 62.84 \\ & Rain & 27.71 & 46.50 & 56.08 & 13.00 & 13.51 & 20.39 & 24.97 & 58.40 & 65.35 & **66.13** \\ & Fog & 24.49 & 41.64 & 43.78 & 13.53 & 12.83 & 27.89 & 32.76 & 53.19 & 53.67 & **54.10** \\ & Sunlight & 23.71 & 40.28 & 54.20 & 17.20 & 22.77 & 34.66 & 41.68 & 57.70 & 55.14 & **64.42** \\ \hline \multirow{4}{*}{**Sensor**} & Density & 27.27 & 46.14 & 58.60 & - & - & - & - & 63.72 & 65.77 & **67.79** \\ & Cutout & 24.14 & 40.95 & 56.28 & - & - & - & - & 62.25 & 63.66 & **66.18** \\ & Crosstalk & 25.92 & 44.08 & 56.64 & - & - & - & - & 62.66 & 64.67 & **67.32** \\ & FOV Lost & 8.87 & 15.40 & 20.84 & - & - & - & - & 26.32 & 24.63 & **27.17** \\ & Gaussian (L) & 19.41 & 39.16 & 45.79 & - & - & - & - & 58.94 & 55.10 & **60.64** \\ & Uniform (L) & 25.60 & 45.00 & 56.12 & - & - & - & - & 63.21 & 64.72 & **66.81** \\ & Impulse (L) & 26.44 & 45.58 & 57.67 & - & - & - & - & 63.43 & 65.51 & **67.54** \\ & Gaussian (C) & - & - & - & 3.96 & 4.33 & 14.86 & 15.04 & 54.96 & **64.52** & 64.44 \\ & Uniform (C) & - & - & - & 8.12 & 8.48 & 21.49 & 23.00 & 57.61 & 65.26 & **65.81** \\ & Impulse (C) & - & - & - & 3.55 & 3.78 & 14.32 & 13.99 & 55.16 & **64.37** & 64.30 \\ \hline \multirow{4}{*}{**Motion**} & Compensation & 3.85 & 10.39 & 11.02 & - & - & - & - & **31.87** & 9.01 & 27.57 \\ & Moving Obj. 
& 19.38 & 35.11 & 44.30 & 10.36 & 10.47 & 16.63 & 20.22 & **45.43** & 51.01 & **51.63** \\ & Motion Blur & - & - & - & 10.19 & 9.64 & 11.06 & 19.79 & 55.99 & 64.39 & **64.74** \\ \hline \multirow{4}{*}{**Object**} & Local Density & 26.70 & 45.42 & 57.55 & - & - & - & - & 63.60 & 65.65 & **67.42** \\ & Local Cutout & 17.97 & 32.16 & 48.36 & - & - & - & - & 61.85 & 63.33 & **63.41** \\ & Local Gaussian & 25.93 & 43.71 & 51.13 & - & - & - & - & 62.94 & 63.76 & **64.34** \\ & Local Uniform & 27.69 & 46.87 & 57.87 & - & - & - & - & 64.09 & 66.20 & **67.58** \\ & Local Impulse & 27.67 & 46.88 & 58.49 & - & - & - & - & 64.02 & 66.29 & **67.91** \\ & Shear & 26.34 & 43.28 & 49.57 & 17.20 & 16.66 & 17.46 & 24.71 & 55.42 & **62.32** & 60.72 \\ & Scale & 27.29 & 45.98 & 51.13 & 6.75 & 6.57 & 12.02 & 17.64 & 56.79 & 64.13 & **64.57** \\ & Rotation & 27.80 & 46.93 & 54.68 & 17.21 & 16.84 & 27.28 & 33.97 & 59.64 & 63.36 & **65.13** \\ \hline \multirow{2}{*}{**Alignment**} & Spatial & - & - & - & - & - & - & - & 63.77 & 66.22 & **68.39** \\ & Temporal & - & - & - & - & - & - & - & **51.43** & 43.65 & 49.02 \\ \hline \multicolumn{2}{c|}{**Average (\(\mathrm{mAP}_{\mathrm{core}}\))**} & **23.42** & **40.37** & **49.81** & **10.26** & **10.68** & **18.60** & **22.79** & **56.99** & **58.73** & **61.03** \\ \hline \hline \end{tabular} \end{table} Table 3: The benchmarking results of 10 3D object detectors on **nuScenes-C**. We show the performance under each corruption and the overall corruption robustness \(\mathrm{mAP}_{\mathrm{cor}}\) averaged over all corruption types. to corruptions from multiple sensors. **Comparison of LiDAR-only models.** Among the six LiDAR-only detectors, we find that SECOND [65], PointRCNN [51], and PV-RCNN [50] possess better relative corruption robustness than the others, whose \(\mathrm{RCE}\) is \(13.65\%\), \(13.61\%\), and \(13.99\%\). The worst model is 3DSSD, exhibiting a \(24.34\%\) performance drop. In general, there does not exist a clear margin of robustness between voxel-based and point-based detectors, or between one-stage and two-stage detectors, different from previous findings [33]. However, we notice that the worst two models PointPillars [30] and 3DSSD [66] are developed for improving the efficiency of 3D object detection, which may indicate a trade-off between corruption robustness and efficiency. ### Results on nuScenes-C We report the corruption robustness of 10 3D detectors on nuScenes-C in Table 3 under the mAP metric, and leave the results under the NDS metric in Appendix C. The model performance is consistent for both metrics. We further show the relative corruption error \(\mathrm{RCE}\) under each level of corruptions in Fig. 5. Similar to the results on KITTI-C, models that have higher clean accuracy generally achieve better corruption robustness. But differently, the nuScenes dataset provides multi-view images, thus the camera-only models achieve competitive clean accuracy with LiDAR-only models, enabling us to compare their performance. We provide more detailed analyses below. **Comparison of corruption types.** From Fig. 5, we can observe that motion-level corruptions are significantly more detrimental to LiDAR-only and LiDAR-camera fusion models. They give rise to more than \(50\%\) performance drop for LiDAR-only models and about \(30\%\) drop for fusion models. Similar to KITTI-C, all corruptions remarkably degrade the performance of camera-only models. 
A notable difference from KITTI-C is that most models are resistant to weather-level corruptions. We think that the adverse weathers (_e.g_., rain) contained in the nuScenes dataset enable the detectors to predict robustly under weather-level corruptions. Among all corruptions, _FOV Lost_ and _Motion Compensation_ impair the models most, mainly due to the large distortions of the LiDAR point clouds. **Comparison of 3D object detectors.** For different categories of 3D object detectors, camera-only models are more prone to common corruptions, whose performance drops more than \(40\%\) under \(\mathrm{RCE}\). On the contrary, LiDAR-only and fusion models exhibit less than \(20\%\) performance drop. The reason is that LiDAR point clouds are inherently noisy due to the ranging inaccuracy [7] and self-occlusion, such that the models trained on point clouds are relatively robust to corruptions. The results may suggest the indispensability of LiDAR point clouds for reliable 3D object detection. **Comparison of camera-only models.** Though camera-only detectors are greatly affected by corruptions, we find that multi-view methods outperform monocular methods in terms of both clean and corruption accuracy. From Fig. 5, the overall performance drop of FCOS3D and PGD is \(57\%\) and \(54\%\), while that of DETR3D and BEVFormer is \(46\%\) and \(45\%\), respectively. Since monocular methods directly predict 3D objects from single images without considering 3D scene structure, they are more prone to noises [62] and exhibit inferior performance. Besides, BEVFormer performs better than DETR3D, especially under object-level corruptions (_e.g_., _Shear_, _Rotation_), since it can capture both semantic and location information of objects in the BEV space while being less affected by varying object shapes [31]. **Comparison of LiDAR-camera fusion models.** Based on the above analysis, fusion models demonstrate superior corruption robustness on nuScenes-C. By carefully examining their performance, we find that there exists a trade-off between robustness under image corruptions and point cloud corruptions. Specifically, FUTR3D suffers from the largest performance drop (\(12.9\%\) on average) under _Gaussian_, _Uniform_ and _Impulse_ noises of images, compared with \(2.5\%\) of TransFusion and \(5.3\%\) of BEVFusion. However, under _Motion Compensation_ that significantly distorts point clouds, FUTR3D obtains the highest mAP of \(31.87\%\) while TransFusion only has \(9.01\%\) mAP. The reason behind this trade-off is that fusion models have varying reliance on images or point clouds, resulting in inconsistent robustness under the corresponding corruptions of different sensors. Figure 5: The relative corruption error \(\mathrm{RCE}\) of 10 3D object detectors on **nuScenes-C**. We show the overall results under all corruptions and the results under each level of corruptions. ## 6 Discussion and Conclusion In this paper, we systematically design 27 types of common corruptions in 3D object detection to benchmark corruption robustness of existing 3D object detectors. We establish three corruption robustness benchmarks--KITTI-C, nuScenes-C, and Waymo-C by synthesizing the corruptions on public datasets. By conducting large-scale experiments on 24 diverse 3D object detection models under corruptions, we draw some important findings, as summarized below: 1. In general, the corruption robustness of 3D object detection models is largely correlated with their clean performance, similar to the observation in [25]. 2. 
Among all corruption types, motion-level ones degrade the model performance most, posing a significant threat to autonomous driving. Weather-level corruptions are also influential for models trained on normal weather. 3. Among all 3D detectors, LiDAR-camera fusion models have better corruption robustness, especially under those corruptions that apply distortions to only one modality. However, they are also exposed to corruptions from both sensors, leading to degraded performance in this case. Besides, there is a trade-off between robustness under image corruptions and point cloud corruptions of fusion models. 4. Camera-only models are more easily affected by common corruptions, demonstrating the indispensability of LiDAR point clouds for reliable 3D detection or the necessity of developing more robust camera-only models. 5. In Appendix E, we further try several data augmentation strategies, including those applied to point clouds [14, 72] and images [69, 71]. The experiments validate that they can hardly improve corruption robustness, leaving robustness enhancement of 3D object detection an open problem for future research. We hope our comprehensive benchmarks, in-depth analyses, and insightful findings can be helpful for understanding the corruption robustness of 3D object detection models and improving their robustness in the future. ## Acknowledgement This work was supported by the National Key Research and Development Program of China (No. 2017YFA0700904), NSFC Projects (Nos. 62276149, 62061136001, 62076145, 62076147, U19B2034, U1811461, U19A2081, 61972224), Beijing NSF Project (No. JQ19016), BNRist (BNR2022RC01006), Tsinghua Institute for Guo Qiang, and the High Performance Computing Center, Tsinghua University. Y. Dong was also supported by the China National Postdoctoral Program for Innovative Talents and Shuimu Tsinghua Scholar Program. J. Zhu was also supported by the XPlorer Prize.
2304.13828
Time-Interleaved C-band Co-Propagation of Quantum and Classical Channels
A successful commercial deployment of quantum key distribution (QKD) technologies requires integrating QKD links into existing fibers and sharing the same fiber networks with classical data traffic. To mitigate the spontaneous Raman scattering (SpRS) noise from classical data channels, several quantum/classical coexistence strategies have been developed. O-band solutions place the QKD channel in the O-band for lower SpRS noise but with the penalty of higher fiber loss and can rarely reach beyond 80 km of fiber; another method is C-band coexistence with attenuated classical channels, which sacrifices the performance of classical channels for lower SpRS noise. In this work, a time-interleaving technique is demonstrated to enable the co-propagation of quantum and classical channels in the C-band without sacrificing either performance. By embedding QKD pulses in the gaps between classical data frames, the quantum channel is isolated from SpRS noise in both wavelength and time domains. C-band co-propagation of a polarization-encoding decoy-state BB84 QKD channel with a 100 Gb/s QPSK channel is experimentally demonstrated with quantum bit error rate (QBER) of 1.12%, 2.04%, and 3.81% and secure key rates (SKR) of 39.5 kb/s, 6.35 kb/s, and 128 b/s over 20, 50, and 100 km fibers, respectively. These results were achieved with the presence of classical launch power up to 10 dBm, which is at least one order of magnitude higher than reported works. We also demonstrated the co-propagation of a QKD channel with eight classical channels with total launch power up to 18-dBm (9-dBm per channel), which is the highest power of classical channels reported in C-band coexistence works.
Jing Wang, Brian J. Rollick, Bernardo A. Huberman
2023-04-26T21:10:12Z
http://arxiv.org/abs/2304.13828v2
Time-Interleaving Enabled Co-propagation of QKD and Classical Channels over 100-km Fiber with 10-dBm Classical Launch Power ###### Abstract The commercial success and wide deployment of quantum key distribution (QKD) technology depend on the integration of QKD links into existing fiber networks and sharing of the same fibers with classical data traffic. To mitigate the spontaneous Raman scattering (SpRS) noise from classical data channels, several strategies have been developed with their pros and cons, e.g., the placement of QKD in the O-band sacrifices the fiber loss and can rarely reach beyond 80 km; the attenuation of classical channels sacrifices the performance of classical channels. In this work, we developed a time-interleaving technique to enable the co-propagation of quantum and classical channels in the C-band without sacrificing either performance. By embedding QKD pulses in the gaps between classical data frames, we can isolate the quantum channel from Raman noise in both wavelength and time domains. We experimentally demonstrated the co-propagation of a polarization-encoding decoy-state BB84 QKD channel with a 100 Gb/s QPSK channel with 10-dBm launch power in the C-band over 100 km of fiber. Quantum bit error rate (QBER) of 1.12%, 2.04%, and 3.81% and secure key rates (SKR) of 39.5 kb/s, 6.35 kb/s, and 128 b/s are achieved after 20, 50, and 100 km fibers with the presence of 10-dBm classical launch power. The dispersion walk-off effect of SpRS noise is also experimentally investigated. + Footnote †: preprint: APS/123-QED ## I Introduction The security of today's cryptographic algorithms is based on computational complexity, which is no longer secure against quantum computers running Shor's and Grover's algorithms [1; 2; 3]. While Post-Quantum Cryptographic (PQC) algorithms have been developed to deal with the emerging challenges of quantum computers, their security is still questionable. In 2022, two contenders of the NIST competition, post-quantum signature scheme Rainbow and Supersingular Isogeny Key Encapsulation (SIKE) have been broken [4; 5]. Quantum key distribution (QKD) is a promising candidate to address the emerging challenges of quantum computing since its security is guaranteed by the laws of physics [6; 7; 8; 9]. Whereas most QKD research focused on achieving longer distances and higher key rates [10; 11; 12; 13], few innovations have been done from the deployment perspective. So far, most reported QKD systems need dedicated dark fibers, as shown in Fig. 1(a), since the spontaneous Raman scattering (SpRS) noise from classical channels could easily overwhelm a QKD link. On the other hand, dark fibers are scarce and expensive resources and it is cost-prohibitive for network operators to reserve fibers for a single purpose. Therefore, sharing existing fiber networks with classical data traffic is essential for the commercial success of QKD technology. Several quantum/classical coexistence technologies have been developed to mitigate the SpRS noise from classical channels. Townsend first proposed the wavelength division multiplexing (WDM) of QKD and classical data channels by placing the QKD channel in the O-band and classical traffic in the C-band [14], as shown in Fig. 1(b). Thanks to the large wavelength separation, the quantum channel is out of the spectrum of SpRS noise from classical channels. The feasibility of this method was proven by using continuous-wave (CW) lasers around 1550 nm to emulate classical data traffic [15; 16; 17]. 
It was experimentally verified that SpRS noise in the O-band is orders of magnitude smaller than that in the C-band [18] and that coexistence with Tb/s classical data traffic at more than 20 dBm launch power is possible [19; 20; 21; 22]. Several world-record results have been reported, e.g., the highest classical data rate of 7.168 Tb/s and longest fiber distance of 80 km [18], the highest classical launch power of 25 dBm [21], and the highest coexistence secure key rate [22]. This method, however, is limited by the high fiber loss in the O-band and can rarely reach fiber distances beyond 80 km, even using G.654 ultra-low loss fibers [19; 21]. Moreover, network compatibility might also be an issue, since most deployed reconfigurable optical add-drop multiplexers (ROADMs) only support C-band routing/switching.

Figure 1: Existing solutions to the coexistence of quantum and classical channels in a shared fiber. (a) Dedicated dark fiber. (b) Quantum channel in the O-band. (c) Attenuate classical channels. (d) Time-division multiplexing of quantum and classical data frames.

Fig. 2 shows the state-of-the-art coexistence works in terms of fiber distance and classical launch power. O-band results are labeled by circles, which allow high-power classical channels but are bounded to 80 km of fiber distance.

Figure 2: State-of-the-art in terms of fiber distances and classical launch power for QKD/classical coexistence in the same fiber.

Another alternative is to leave both the quantum and the classical channels in the C-band while reducing the power of the classical channels, as shown in Fig. 1(c). It was proven that normal QKD operation is impossible in the presence of even one classical channel with 0-dBm launch power [23]. To alleviate SpRS noise, one has to attenuate the classical channels. Early experiments used CW lasers to emulate classical data channels [24; 25; 26], then scaled up to 100 Mb/s [27], 1 Gb/s [28; 29] and 10 Gb/s [30; 31], where classical channels are attenuated to just match the receivers' sensitivities. For high data rates beyond 100 Gb/s, SpRS noise is so strong that normal QKD operation is impossible unless the classical power is attenuated below the receiver's sensitivity [32]. Although these attenuated classical channels are boosted by optical amplifiers at the receiver, they would fail to meet the distance or bit error rate (BER) requirements in real networks, so this method will not work in real-world deployment. Compared with the O-band solution, C-band coexistence was achieved by sacrificing the performance of classical channels and has stringent limitations on the launch power, channel number, and total bit rate of classical data traffic. In Fig. 2, C-band results are labeled by triangles. There is a clear dependence between the maximum fiber distance and the allowed classical power. The dashed line shows a distance-power limit; all reported C-band results lie below this line. Time-division multiplexing (TDM) is a third method to combine quantum and classical data frames together [33; 34], as shown in Fig. 1(d). To enable packet switching of quantum payloads, classical wrappers, e.g. headers and trailers, are added before and after the quantum payload, respectively. It allows quantum and classical packets to occupy the same fiber alternately, but not in an efficient way. This is because QKD pulses are sparse in the time domain with narrow pulse widths, low repetition rates, and long periods, therefore their duty cycles are rather low.
For example, a QKD pulse train with 100-ps pulse width and 25 MHz repetition has a duty cycle of 0.25%. The rest 99.75% time between two consecutive quantum pulses is empty. To employ the time slots more efficiently, Townsend first proposed to send QKD pulses during the time slots of 0 bits of a co-propagating classical data stream and demonstrated QKD over a 10-km passive optical network (PON) with -2.7 dBm classical launch power [35]. Other reported works included the integration of QKD and classical channels using special fibers, e.g. multicore [36] and hollow fibers [37]. In general, solutions based on special fibers are subjected to high manufacturing costs and short fiber distances. In this paper, we demonstrate a time-interleaving technique to enable the co-propagation of QKD and classical channels in the C-band with minimal interference of SpRS noise. By embedding quantum pulses into the gaps between classical data frames, we can isolate QKD pulses from SpRS noise in the time domain by temporal gating. By placing the QKD channel in the C-band, this method leverages the low fiber loss and the ROADM compatibility of deployed fiber networks, while at the same time removing the power limit on classical channels. Thus it allows the quantum/classical co-propagation in the C-band without sacrificing either performance. We demonstrated the C-band co-propagation of a polarization-encoding decoy-state BB84 QKD channel with a 10-dBm classical 100-Gb/s QPSK channel over 100 km of fiber. Quantum bit error rates (QBER) of 1.12%, 2.04%, and 3.81% and secure key rates (SKR) of 39.5 kb/s, 6.35 kb/s, and 128 b/s are achieved over 20, 50, and 100 km of fibers with the presence of 10-dBm classical launch power. It should be noted that the relatively low SKRs are limited by the slow response and long dead time of our low-cost single-photon detectors (SPDs). In Fig. 2, this work is labeled by a blue pentagram, which is the only outlier above the distance-power limit of C-band coexistence works. This paper is organized as follows. Section II shows the architecture and operation principles of the time-interleaving technique. Section III describes the experimental setup. Section IV presents the experimental results and investigates the dispersion walk-off between QKD pulses and SpRS noise. Finally, section V concludes the paper. ## II Operation principles The operation principles of the time-interleaving technique are shown in Fig. 3. At Alice, the classical transmitter (Tx) and quantum transmitter (QTx) are synchronized to generate classical data frames and QKD pulses and interleave them in the time domain. The QKD pulses are embedded in the gaps between classical data frames, so the long time period between consecutive QKD pulses is exploited to carry classical data frames. Since they use different wavelengths, QKD and classical channels can be separated in both wavelength and time domains. After fiber propagation, they are first separated by a WDM at Bob then both spectral filtering and temporal gating techniques are exploited to isolate the quantum pulses from SpRS noise. A narrow bandpass filter (BPF) blocks the out-of-band SpRS noise from entering the quantum receiver and gated SPDs eliminate out-of-window SpRS counts. ## III Experimental setup The experimental setup is shown in Fig. 4. A polarization-encoding decoy-state BB84 QKD system with three intensities [38] is shown in red boxes, whereas classical systems are in blue. 
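Before walking through the hardware, the timing budget that the interleaving scheme relies on can be made concrete. The sketch below only re-derives numbers already quoted in the text (25 MHz repetition rate, 100-200 ps pulses, 15 ns gap windows, 4 ns SPD gates); all names are illustrative and this is a back-of-the-envelope check rather than the authors' code.

```python
# Timing budget of the time-interleaving scheme, using figures quoted in the text.
rep_rate = 25e6          # QKD pulse repetition rate [Hz]
pulse_width = 200e-12    # QKD pulse width [s] (100-200 ps in the text)
gap_window = 15e-9       # gap carved into the classical traffic [s]
spd_gate = 4e-9          # SPD gate width [s]

period = 1.0 / rep_rate                 # 40 ns between consecutive QKD pulses
duty_cycle = pulse_width * rep_rate     # fraction of time occupied by QKD pulses
classical_fraction = 1.0 - gap_window / period  # time still usable for data frames

print(f"pulse period        : {period*1e9:.0f} ns")
print(f"QKD duty cycle      : {duty_cycle:.2%}")        # 0.50% for 200 ps pulses
print(f"classical occupancy : {classical_fraction:.1%}")  # ~62.5% with a 15 ns gap
print(f"gate/gap margin     : {(gap_window - spd_gate)*1e9:.0f} ns")
```

With a narrower gap window matched to the pulse width, as the text notes is sufficient under careful wavelength planning, the classical occupancy approaches 100%.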
An external cavity laser (ECL) at 193.9 THz (ITU-T Ch39, 1546.12 nm) is used as the light source, followed by an intensity modulator (\(IM_{1}\)) for pulse generation and decoy state preparation. A 10-GSa/s Tektronix arbitrary waveform generator (AWG) drives \(IM_{1}\) to generate 200-ps pulses with a 25-MHz repetition rate and 0.5% duty cycle. We use a polarization modulator (Pol-M) design in [39, 40, 41], which consists of a circulator, a phase modulator (PM), and a Faraday mirror (FM). The polarization beam splitter (PBS) before the Pol-M ensures linear input polarization. The polarizations are encoded in two non-orthogonal and conjugate bases, rectilinear and diagonal. The PM is driven by a Keysight function generator to prepare four polarization states 0\({}^{\circ}\) (H), 45\({}^{\circ}\) (D), 90\({}^{\circ}\) (V), and \(-45^{\circ}\) (AD) by applying voltages of 0, \(V_{\pi}/2\), \(V_{\pi}\), and 3\(V_{\pi}/2\) (\(V_{\pi}\)is the half-wave voltage). A variable optical attenuator (\(VOA_{1}\)) controls the pulse intensity at point A to the single-photon level. We prepared two classical channels, one 10 Gb/s intensity modulated-direct detection (IM-DD) OOK link, and one coherent 100 Gb/s QPSK link. Both have their wavelengths tunable in the C-band to investigate the wavelength dependence of SpRS noise. An optical switch selects the target classical channel to be interleaved with QKD pulses. To emulate gaps between classical data frames, \(IM_{2}\) curves windows on continuous data traffic to accommodate QKD pulses. With a repetition rate of 25 MHz, two consecutive quantum pulses are separated by 40 ns. To facilitate the following investigation of dispersion walk-off between QKD pulses and SpRS noise, a wide gap window of 15 ns is used. It will be shown that the required gap window depends on fiber length and wavelength differences between classical and quantum channels and 15 ns is unnecessary. With careful wavelength planning, a narrow gap window slightly more than the pulse width is enough to accommodate QKD pulses. An erbium-doped fiber amplifier (EDFA) and \(VOA_{2}\) control the launch power of classical channels. A 100-GHz BPF after the EDFA eliminates the broadband amplified spontaneous emission (ASE) noise. In experiments, the classical launch power at point A can be up to 10 dBm, which is at least one order of magnitude higher than reported C-band coexistence results [28; 29; 30; 31; 32]. Three fiber distances are tested in experiments, 20, 50, and 100 km. For fiber lengths of 20 and 50 km, a 10 Gb/s OOK channel co-propagates with the QKD channel; for 100 km fiber, a 100 Gb/s QPSK channel is sent with the QKD channel. The synchronization channel is omitted in the experiment since an optical clock link at 25 MHz needs less than -10 dBm launch power and the synchronization pulses can be delayed in the time domain to make them out of the gating window of SPDs. Therefore, excluding the synchronization channel in experiments has a negligible impact on the QKD performance. A dense wavelength-division multiplexer (DWDM) with a 100-GHz grid multiplexes the quantum and classical channels at Alice's site. After co-propagation through the fiber, they are separated by another DWDM at Bob. The quantum channel is further isolated by spectral filtering and temporal gating. A narrowband filter with a bandwidth of \(\Delta\nu_{B}\)=20 GHz is implemented by a wavelength-selective switch (WSS) to block the out-of-band SpRS noise from entering the quantum receiver. 
The four SPDs work in the Geiger mode with a gate width of \(\Delta\tau\)=4 ns to eliminate the out-of-window SpRS noise counts. They have a detection efficiency of \(\eta_{D}\)=20% at 1550 nm and a dead time of 10 \(\mu\)s. The dark count rate is 150 counts per second, or \(Y_{0}=6\times 10^{-6}\) per gate. A beam splitter (BS) randomly selects the measurement basis, then two PBS take measurements in the rectilinear and diagonal bases, respectively. During the key sifting, Alice and Bob compare their bases and discard those bits prepared and measured in different bases. The optical misalignment of the QKD system is \(e_{mis}\)=0.5-1%. An optical switch selects the classical receiver, either the 10 Gb/s IM-DD receiver or the 100 Gb/s coherent receiver. The coherent receiver consists of a local oscillator, 90\({}^{\circ}\) hybrids, balanced detectors, ADCs, and digital signal processing (DSP). In experiments, a pair of Acacia (Cisco) CFP2-DCO coherent transceivers are used for the 100 Gb/s QPSK link. To fight against the photon number splitting (PNS) attack, three pulse intensities are used for decoy states [42; 43], with mean photon numbers per pulse of \(\mu_{1}=0.85\), \(\mu_{2}=0.04\), and \(\mu_{3}=10^{-4}\) for signal, decoy, and vacuum states, respectively. Their emission probabilities are \(P_{\mu_{1}}=0.9\), \(P_{\mu_{2}}=P_{\mu_{3}}=0.05\). The experimental parameters are summarized in Table 1.

Figure 3: Architecture of time-interleaving of QKD pulses with classical data frames.

Figure 4: Experimental setup of coexistence of a polarization-encoding decoy-state BB84 QKD channel with classical 10 Gb/s IM-DD or 100 Gb/s coherent QPSK channels. ECL: external cavity laser. PC: polarization controller. IM: intensity modulator. PBS: polarization beam splitter. PM: phase modulator. FM: Faraday mirror. Pol-M: polarization modulator. RNG: random number generator. VOA: variable optical attenuator. DWDM: dense wavelength division multiplexer. WSS: wavelength selective switch. SPD: single photon detector. IQM: in-phase quadrature modulator. EDFA: erbium-doped fiber amplifier. LO: local oscillator. DSP: digital signal processing. BERT: bit error rate test.

Since the time-interleaving technique allows continuous QKD operation without interruption or downtime, the key size can be arbitrarily long. We followed the secure key rate estimation in [44] in the asymptotic limit of infinite key size, shown in Eq. 1. \(R\) is the secure key rate in bits per pulse. \(q\) depends on the implementation of the BB84 protocol and is 0.5 in our case since Alice and Bob use the same bases half of the time. \(\mu\) denotes the intensity of signal states. \(Q_{\mu}\) and \(E_{\mu}\) are the gain and QBER of signal states, and they are measured in experiments. \(Q_{1}^{L}\) is the lower bound of the gain of single-photon states, \(e_{1}^{U}\) is the upper bound of the error rate of single-photon states, and they are estimated using decoy state protocols [44]. \(f_{EC}\) is the error correction efficiency and we use \(f_{EC}=1.2\). \(H_{2}(x)=-x\log_{2}(x)-(1-x)\log_{2}(1-x)\) is the Shannon binary entropy function. \(\mu_{1}\), \(\mu_{2}\), \(\mu_{3}\) and \(P_{\mu_{1}}\), \(P_{\mu_{2}}\), \(P_{\mu_{3}}\) are optimized to maximize the SKR.

\[R\geq q\{-Q_{\mu}f_{EC}(E_{\mu})H_{2}(E_{\mu})+Q_{1}^{L}[1-H_{2}(e_{1}^{U})]\} \tag{1}\]

## IV Experimental results

Three scenarios of quantum/classical co-propagation are tested in the experiments.
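As a reference for the key-rate figures reported below, Eq. (1) can be evaluated directly. The sketch uses the binary entropy and error-correction efficiency defined above, but the gain, QBER, and decoy-state bounds plugged in are illustrative placeholders, not the measured values; the actual bounds come from the decoy-state analysis of [44], which is not reproduced here.

```python
import math

def h2(x):
    """Shannon binary entropy H2(x)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def skr_per_pulse(q, Q_mu, E_mu, Q1_lower, e1_upper, f_ec=1.2):
    """Asymptotic secure key rate of Eq. (1), in bits per signal pulse."""
    return q * (-Q_mu * f_ec * h2(E_mu) + Q1_lower * (1.0 - h2(e1_upper)))

# Illustrative numbers only (not the measured values of the experiment):
q = 0.5         # basis-sifting factor for standard BB84
Q_mu = 2.0e-3   # gain of signal states
E_mu = 0.0112   # QBER of signal states (the 1.12% figure of the 20 km case)
Q1_L = 1.5e-3   # decoy-state lower bound on single-photon gain
e1_U = 0.015    # decoy-state upper bound on single-photon error rate

R = skr_per_pulse(q, Q_mu, E_mu, Q1_L, e1_U)
rep_rate = 25e6  # pulses per second
print(f"R = {R:.2e} bits/pulse  ->  {R * rep_rate / 1e3:.1f} kb/s")
```

Multiplying the bits-per-pulse rate by the 25 MHz repetition rate is also how the kb/s figures in Table 2 follow from the values quoted in bits per pulse.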
For the classical channel, the 10 Gb/s OOK channel is used for 20 and 50 km of fiber, and a 100 Gb/s QPSK channel is used for 100 km of fiber. Fig. 5 shows the performance of classical channels and there is no performance penalty due to the presence of the quantum channel. Fig. 5(a) shows the BER of 10 Gb/s OOK channel as a function of the received optical power. The eye diagrams are shown in the insets. For BER=\(10^{-9}\), there is a power penalty of 1.7 dB after 20 km of fiber compared with the back-to-back (B2B) case. After 50 km of fiber, the eye diagram is severely spread by dispersion and error-free transmission is impossible. Fig. 5(b) shows the BER of 100 Gb/s QPSK channel after 100 km fiber. Thanks to the local oscillator, the coherent receiver has a much better receiver sensitivity and there is no power penalty after 100 km of fiber with the help of DSP for dispersion compensation. Fig. 5(c) shows the Raman cross-section of a classical channel centered at 1548.52 nm (Ch36, 193.6 THz). To reveal the spectrum of Raman noise, the central peak of the pump wavelength is filtered out. Two local minimums are located 200-300 GHz away from the pump wavelength on both sides. The anti-Stokes noise on the shorter wavelength side is smaller than the Stokes noise on the longer wavelength side. The QBER and SKR performance of three scenarios is summarized in Table 2. Fig. 6 shows QBER and SKR as a function of the launch power of the classical channel. The quantum channel is fixed at 1546.12 nm (Ch39, 193.9 THz), whereas the classical channel is tuned in the C-band to investigate the wavelength dependence of Raman noise. The wavelength choice of the classical channel is determined by the availability of DWDM mux/demux in our lab. Fig. 6(a) and (b) show the QBER and SKR of the QKD channel co-propagating with a 10 Gb/s OOK over 20 km fiber. The key rates are shown in both bits per pulse and bits per second, where the relatively low key rate is limited by the slow response and long dead time of our low-cost SPDs. Without time-interleaving, QBER increases rapidly with the classical launch power and reaches 10% at 0-dBm, making regular QKD operation impossible beyond this power level. The dashed line indicates the lowest possible QBER with the classical channel off. With the help of time-interleaving, we are able to keep the QBER increasing slowly with classical power. A zoom-in of QBER is shown in the inset. The best QBER (1.12%) and SKR (39.5 kb/s) are achieved When the classical channel is placed at Ch36 (193.6 THz, 1548.52 nm). This is because the QKD channel (Ch39, 193.9 THz) is 300 GHz higher than the classical channel and located at the minimum of the SpRS noise. Since anti-Stokes noise is smaller, classical channels with longer wavelengths than the QKD channel contribute less Raman noise. Classical channels at Ch21, 28, and 33 make similar performances as Ch36, whereas classical channels at Ch52 and 62 make more Raman noise. The classical channel at Ch62 makes the worst QBER/SKR performance since it has the shortest wavelength and is the most far away from the quantum channel. With the presence of a 10-dBm launch power, we are able to keep QBER below 2.83% and SKR above 18 kb/s for classical channels across the C-band. Compared with reported results of C-band coexistence [28; 29; 30; 31; 32], our classical launch power of 10 dBm is at least one order of magnitude higher. Fig. 6(c) and (d) show the case of co-propagating with a 10 Gb/s OOK channel over 50 km fiber. 
In this case, the classical channel at Ch62 makes too much noise and does not allow secure key generation when the launch power is 10 dBm. The best performance is achieved by the classical channel at Ch36 again, with a QBER of 2.04% and an SKR of 6.35 kb/s. The worst performance is for Ch52, with a QBER of 4.14% and an SKR of 640 b/s. Fig. 6(e) and (f) show the case with a 100 Gb/s QPSK channel over 100 km fiber. In this case, only classical channels at Ch36 and 44 allow secure key generation. Classical channels at other wavelengths are too far away from the quantum channel, and the 15 ns gap window is not wide enough after 100 km of fiber due to the dispersion walk-off between QKD pulses and SpRS noise. QBER and SKR performance for 20 km, 50 km, and 100 km fibers are summarized in Table 2. Successful QKD over 20, 50, and 100 km of fiber with QBER less than 2.83%, 4.14%, and 4.33% is demonstrated in the presence of a 10-dBm classical channel.

\begin{table} \begin{tabular}{l|l} \hline \(q_{X}\), \(q_{Z}\) & 0.94, 0.06 \\ \(\mu_{1}\), \(\mu_{2}\), \(\mu_{3}\) & 0.85, 0.04, 10\({}^{-4}\) \\ \(P_{\mu_{1}}\), \(P_{\mu_{2}}\), \(P_{\mu_{3}}\) & 0.9, 0.05, 0.05 \\ \(\Delta\tau\) & 4 ns \\ \(\Delta\nu_{B}\) & 20 GHz \\ \(\eta_{D}\) & 0.2 \\ \(Y_{0}\) & \(6\times 10^{-6}\) \\ Dead time & 10 \(\mu\)s \\ \(e_{mis}\) & 0.5-1\% \\ \(f_{EC}\) & 1.2 \\ \hline \end{tabular} \end{table} Table 1: Experimental parameters

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Fiber length} & \multicolumn{1}{c|}{20 km} & \multicolumn{1}{c|}{50 km} & \multicolumn{1}{c|}{100 km} \\ \hline \multicolumn{2}{|c|}{Classical channel} & 10 Gb/s OOK & 10 Gb/s OOK & 100 Gb/s QPSK \\ \hline \multicolumn{2}{|c|}{Launch power} & \multicolumn{4}{c|}{10 dBm} \\ \hline \multirow{4}{*}{Best} & & Ch36 & Ch36 & Ch36 \\ \cline{2-5} & QBER & 1.12\% & 2.04\% & 3.81\% \\ \cline{2-5} & Key Rate & 39.5 kb/s & 6.35 kb/s & 128 b/s \\ \cline{2-5} & & 1.58e-3 & 2.54e-4 & 5.1e-6 \\ \hline \multirow{4}{*}{Worst} & & Ch62 & Ch52 & Ch44 \\ \cline{2-5} & QBER & 2.83\% & 4.14\% & 4.33\% \\ \cline{1-1} \cline{2-5} & Key Rate & 18 kb/s & 640 b/s & 61.4 b/s \\ \cline{1-1} \cline{2-5} & & 7.19e-4 & 2.56e-5 & 2.45e-6 \\ \hline \end{tabular} \end{table} Table 2: Three experimental scenarios over 20, 50, and 100 km of fibers

Figure 5: BER performance of classical channels and Raman cross-section. (a) BER vs received optical power of 10 Gb/s OOK. (b) BER vs received optical power of 100 Gb/s QPSK. (c) Raman cross-section of a classical channel at Ch36 (193.6 THz, 1548.52 nm). To reveal the spectrum of SpRS noise, the classical wavelength peak is filtered out.

Figure 6: QBER and SKR of the QKD channel in three test scenarios. (a, b) Co-propagating with a 10 Gb/s OOK channel over 20 km of fiber. (c, d) Co-propagating with a 10 Gb/s OOK channel over 50 km of fiber. (e, f) Co-propagating with a 100 Gb/s QPSK channel over 100 km of fiber.

Fig. 7 shows the dispersion walk-off of the SpRS noise. QKD pulses at \(\lambda_{Q}\) are embedded into the gaps between classical data frames at \(\lambda_{C}\). A 15-ns gap window is used in our experiment to investigate the dispersion walk-off effect, but in a real deployment, it is not necessary to use such a wide gap. Spontaneous Raman scattering converts incident photons from \(\lambda_{C}\) to \(\lambda_{Q}\) along the fiber. Suppose a photon is converted at fiber distance d; it travels at the speed of \(\lambda_{C}\) for fiber length d before the conversion and at the speed of \(\lambda_{Q}\) for the rest of the fiber, L-d, after the conversion. Since \(\lambda_{C}\) and \(\lambda_{Q}\) have different speeds, SpRS noise photons generated at different locations arrive asynchronously at the fiber end. The dispersion-caused walk-off effect spreads the SpRS noise in the time domain. Although classical data frames have steep edges, noise frames after fiber propagation have gradual slopes and trapezoidal waveforms, so the gaps between noise frames are smaller than the gaps between the original data frames.
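The walk-off spread described above can be estimated from the chromatic dispersion of the fiber. The sketch below assumes a standard single-mode fiber dispersion parameter of about 17 ps/(nm km), which is a typical value and our assumption rather than a parameter quoted in the paper; the wavelength separations correspond to roughly 300 GHz (Ch36 vs. Ch39) and to a channel near Ch28 on the 100 GHz ITU grid.

```python
# Rough estimate of how much the SpRS noise edges are smeared by dispersion,
# and of the gap window needed to keep QKD pulses clear of the noise.
# D is a typical SMF value assumed here, not a measured parameter of the paper.
D_ps_nm_km = 17.0        # chromatic dispersion [ps/(nm*km)] (assumed)
pulse_width_ns = 0.2     # QKD pulse width [ns]

def walkoff_ns(length_km, delta_lambda_nm, D=D_ps_nm_km):
    """Arrival-time spread of SpRS photons generated along the whole fiber."""
    return D * length_km * delta_lambda_nm * 1e-3  # ps -> ns

for L, dl in [(20, 2.4), (50, 2.4), (100, 2.4), (100, 8.8)]:
    spread = walkoff_ns(L, dl)
    needed = pulse_width_ns + spread  # rough minimal gap to keep the pulse clear
    print(f"L={L:4d} km, d_lambda={dl:4.1f} nm: "
          f"spread ~ {spread:5.1f} ns, gap needed ~ {needed:5.1f} ns")
```

A separation of about 8.8 nm over 100 km gives a spread of roughly 15 ns, consistent with the gap being almost closed for Ch28 in Fig. 8(c), while a 300 GHz (about 2.4 nm) separation leaves ample margin inside the 15 ns window even at 100 km.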
Fig. 8 shows the noise gap window measured by SPDs after 20, 50, and 100 km of fibers. The dispersion walk-off effect is determined by two factors, the fiber length and the wavelength difference between \(\lambda_{C}\) and \(\lambda_{Q}\). Longer fiber length and a larger wavelength difference make more spreading of the SpRS noise, leading to more gentle slopes and smaller gaps between noise frames. Fig. 8(a) shows the noise gap after 20 km of fiber. The classical channel wavelength is tuned to Ch21, 24, 26, 28, 33, and 36. Due to the symmetry of dispersion, without loss of generality, we only show wavelengths longer than the quantum channel. Ch21 is farthest from \(\lambda_{Q}\), so its SpRS noise has the most gentle slopes and the smallest gaps between noise frames. Ch36 is closest to \(\lambda_{Q}\), so its SpRS noise has the steepest edges and the widest gaps. Fig. 8(b) and (c) show the noise gap after 50 and 100 km of fibers. Longer fibers make more gentle slopes and smaller gaps between noise frames. For the 100 km fiber, the noise gap is almost closed for the classical channel at Ch28. This is why, in Fig. 6(e) and (f), only classical channels at Ch36 and 44 allow SKR generation. To minimize the dispersion walk-off, the classical channel should be spectrally close to the QKD channel, ideally 200-300 GHz lower than the QKD channel to minimize the SpRS noise. In our experiments, the classical channel at Ch36 makes the minimum Raman noise and also has the least dispersion walk-off, so it gives the best QBER and SKR performance. For long fiber distances, a wider gap window is required to accommodate the walk-off effect. To validate the effective noise suppression in the gap windows, we measured SpRS noise inside and outside of the gap window. Fig. 8(d) shows the noise counts in and out of the gap window, measured by a time histogram over 40 ns with 400 bins (100 ps/bin). The noise counts out of the gap window show a wavelength dependence similar to the Raman cross-section shown in Fig. 5(c), whereas the noise counts inside the gap are kept low thanks to the time-interleaving technique.

## V Conclusion

In conclusion, we present the time-interleaving technique to enable the co-propagation of classical and QKD channels in the C-band over 100 km of fiber. By embedding the QKD pulses into the gaps between classical data frames, we can isolate QKD pulses from SpRS noise in both wavelength and time domains by spectral filtering and temporal gating. We demonstrated the co-propagation of a polarization-encoding decoy-state BB84 QKD channel and a 10 Gb/s OOK or 100 Gb/s QPSK classical channel in the C-band over 20, 50, and 100 km of fiber. With the presence of a 10-dBm classical channel, successful QKD operations with QBER of 1.12%, 2.04%, and 3.81%, and SKRs of 39.5 kb/s, 6.35 kb/s, and 128 b/s are achieved over 20, 50, and 100 km of fibers, respectively.
Compared with reported works placing the QKD channel in the O-band, our method leverages the low fiber loss in the C-band for longer fiber distances. Meanwhile, compared with other C-band works, our method removes the power limit of classical channels, and the 10-dBm launch power is at least one order of magnitude higher than in other C-band coexistence works. Dispersion walk-off between classical and quantum channels is also experimentally investigated. It is concluded that the best place for a QKD channel is 200-300 GHz higher than the classical channel, where both Raman noise and the dispersion walk-off effect are minimized. Although the proposed time-interleaving technique handles the co-propagation case quite well, it cannot be used for the counter-propagation case, since the back-scattered noise photons arrive asynchronously with respect to QKD pulses. In this work, we only demonstrated one classical channel, but time-interleaving can also be applied to multiple classical channels with appropriate wavelength planning. This is the topic of our future research, and several guidelines have been developed in this work.

1. Since anti-Stokes noise is smaller than Stokes noise, it is always better to put classical channels on the longer wavelength side of the QKD channel.
2. To mitigate dispersion walk-off, it is better to put classical/quantum channels close by in the optical spectrum. Longer fiber distances and larger wavelength differences make more severe walk-offs and require larger gap windows.
3. Classical channels with short wavelengths far away from the QKD channel are the worst choices.
4. If there have to be classical channels with shorter wavelengths than the QKD channel, they can be put 200-300 GHz away from the quantum wavelength, which is the local minimum of SpRS noise.
2302.06615
Self-mediated exploration in artificial intelligence inspired by cognitive psychology
Exploration of the physical environment is an indispensable precursor to data acquisition and enables knowledge generation via analytical or direct trialing. Artificial Intelligence lacks the exploratory capabilities of even the most underdeveloped organisms, hindering its autonomy and adaptability. Supported by cognitive psychology, this work links human behavior and artificial agents to endorse self-development. In accordance with reported data, paradigms of epistemic and achievement emotion are embedded into machine-learning methodology contingent on their impact on decision making. A study is subsequently designed to mirror previous human trials, which artificial agents are made to undergo repeatedly towards convergence. Results demonstrate causality, learned by the vast majority of agents, between their internal states and exploration to match those reported for human counterparts. The ramifications of these findings are pondered for both research into human cognition and betterment of artificial intelligence.
Gustavo Assunção, Miguel Castelo-Branco, Paulo Menezes
2023-02-13T18:20:44Z
http://arxiv.org/abs/2302.06615v1
# Self-mediated exploration in artificial intelligence inspired by cognitive psychology ###### Abstract Exploration of the physical environment is an indispensable precursor to data acquisition and enables knowledge generation via analytical or direct trialing. Artificial Intelligence lacks the exploratory capabilities of even the most underdeveloped organisms, hindering its autonomy and adaptability. Supported by cognitive psychology, this works links human behavior and artificial agents to endorse self-development. In accordance with reported data, paradigms of epistemic and achievement emotion are embedded to machine-learning methodology contingent on their impact when decision making. A study is subsequently designed to mirror previous human trials, which artificial agents are made to undergo repeatedly towards convergence. Results demonstrate causality, learned by the vast majority of agents, between their internal states and exploration to match those reported for human counterparts. The ramifications of these findings are pondered for both research into human cognition and betterment of artificial intelligence. _Keywords--_ exploration, artificial emotion, general artificial intelligence Introduction The extensive observation of links connecting separate epistemic and achievement emotional states to exploratory behavior in cognitive psychology [5, 32, 31, 9] support our work twofold. First, internal state functions are constructed consolidating this literature and accurately reflecting emotional variation, which is not present in current AI yet boasts the ability to serve as an exploratory drive. This is paired with deep learning methodology loosely emulating the neurophysiology underlying emotion-induced attention mechanisms and voluntary action. The ability to perform a baseline comparison against human behavior motivates this design, falling back on cognitive psychology to guide that assessment of the AI emotion-mediated exploratory process. Scrutiny of epistemic and achievement states in humans based on variable difficulty classification and completion tasks, as examined by cognitive psychology, has been double faceted. From one perspective, studies posit internal cognitive conditions including but not limited to incongruity, expectancy and self-appraisal (e.g. of value and control), as originating of emotional variation when compounded [20, 24, 32]. With explicit reproduction of real conditions, our artificial emotional formulae contrast with predominant AI methodology, which arbitrarily defines internal states to serve some specific purpose. Approaches applying model behavior difference as an adequacy criterion [14, 33], divergence in transition probability as reinforcement [1], or inferring states from intrinsic reward [36], while clearly showcasing effects congruent with emotion in real life, only consider inducive conditions indirectly and when fitting to their application specific narrative. When assessing outcome, cognitive psychology commonly observes epistemic and achievement emotions respectively transitioning into a confused and curious demeanor [23] or generating heightened motivation and pursuit of success [29]. Confronted with information contradictory of internalized knowledge, as is the case with high-confidence errors, people manifest surprisal then supplanted by the former mentioned conditions. Withal, experiencing a positive outcome in a task predisposes humans into seeking scenarios conducive to similar internal reactions. 
Consequently, both cognitive paths induce exploratory behavior [32], albeit with potentially different objectives. Considering the aforementioned conditions are reproducible with deep learning methodology, so can epistemic and achievement emotions be explicitly computed for AI, following those paths and mediating exploration. We maintain that replicating cognitive conditions promotive of epistemic and achievement emotion is achievable within deep learning methodology by considering standard performance metrics or other scores as condition determinants. Testing accuracy reflects the adequacy of a model towards some task by gauging overall correctness over an unseen dataset. Ergo, it can be employed as a pointer of error and achievement. In this arrangement, an overall escalation in accuracy scores can be interpreted as increasing success, whereas de-escalation entails a decrease of the latter. Variations in the feeling of pride should therefore be matched by variations in accuracy, corresponding to personal achievement or lack thereof [30]. Hence, an adequate accuracy-pride matching would plausibly entail a curve of positive slope and unknown convexity with small variations (depicted in Fig. 1).

Figure 1: **Curves demonstrating how the emotion of pride may correlate with accuracy.** Example curves following a positive prediction of pride based on increasing accuracy, as described by cognitive psychology research [32, 31]. Considering how increasing task accuracy equates to personal achievement, pride should follow as a natural reaction. Ergo, changes in pride should follow the same direction as those of accuracy. Considering how a direct variation between these two factors would be highly unlikely in real-life, using a non-linear function to describe their relationship is a more plausible possibility, validated by psychological findings.

Further factoring in confidence when considering performance metrics such as accuracy can broaden the set of representable emotions. The onset of high-confidence errors triggers the feeling of surprise, derivative of the inherent cognitive incongruity mentioned previously. In addition, insecure attainment of success is discordant with ordinary procedure and likewise inductive of surprise [11]. Nevertheless, either scenario can instead bring about surprise reduction when compounded with low or high confidence scoring, respectively. A saddle-like behavior may thus describe this feeling, as polarized variations of accuracy and confidence together imply intense values of surprise, whilst matching magnitudes of the two indicate reduced or emotional lack thereof (depicted in Fig. 2). This view regarding surprise and pride specifically is widely backed by cognitive psychology literature [32, 31, 20, 24], supporting the adequacy of its explicit implementation as artificial emotion in AI.
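The paper's exact scoring functions are not reproduced in this excerpt, but the qualitative descriptions above (monotone pride in accuracy, saddle-shaped surprise in accuracy and confidence, Figs. 1 and 2) admit a simple illustrative instantiation. The functional forms and constants below are assumptions chosen only to match those descriptions, not the authors' formulae.

```python
import numpy as np

def pride(accuracy: float) -> float:
    """Monotonically increasing, non-linear mapping from accuracy to pride
    (one plausible shape for the curves of Fig. 1)."""
    return float(np.clip(accuracy, 0.0, 1.0) ** 2)

def surprise(accuracy: float, confidence: float) -> float:
    """Saddle-like mapping in the spirit of Fig. 2: high-confidence errors and
    low-confidence successes score high; matched accuracy/confidence scores low."""
    a = np.clip(accuracy, 0.0, 1.0)
    c = np.clip(confidence, 0.0, 1.0)
    saddle = 4.0 * (c - 0.5) * (0.5 - a)     # hyperbolic-paraboloid term in [-1, 1]
    return float(np.clip(saddle, 0.0, 1.0))  # keep only the surprise-raising quadrants

# High-confidence error -> strong surprise; confident success -> pride, no surprise.
print(surprise(accuracy=0.1, confidence=0.95), pride(0.1))    # ~0.72, 0.01
print(surprise(accuracy=0.95, confidence=0.95), pride(0.95))  # 0.0, ~0.90
```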
When probing for effects of emotion on human behavior, cognitive psychology has largely relied on tasks [12], such as classification and trivia, composed so as to induce scenarios which partly or fully demonstrate the investigated correlation. The adulteration of common knowledge statements presented in a veracity assessment task demonstrated the causation of exploratory behavior by epistemic and achievement states [32, 31]. When uninformed human participants were presented purposefully incorrect statements, the confidence inherent to their personal knowledge associated with the mistakes performed led to the onset of surprise. Complementarily, correct responses prompted a sense of pride. Both scenarios impacted exploration, demonstrated by requests for additional information on the implicated topics. While AI-directed cognitive tasks are substantially dissimilar from those humans are presented with in experimental scenarios [17], this does not invalidate their adaptation to a machine-friendlier format so comparable goals may be achieved. This is particularly relevant, considering deep learning methodology is a valid framework on which to explore a range of phenomena covered by Psychology and Neuroscience [7, 6].

Figure 2: **Multiple perspective surface view demonstrating how the emotion of surprise correlates with accuracy and confidence.** A potential surface representative of how surprise fluctuates with polarized variations of task accuracy and agent confidence, as described by cognitive psychology research [32, 31]. High-confidence errors, as a result of low accuracy during moments of raised confidence, trigger cognitive incongruity and confusion in turn conducive to surprise bursts. Low-confidence success likewise corresponds with unexpectedness, also inducing this emotion. Thus, while several possibilities may follow this description, a saddle-like behavior seems the most plausible representation of surprise based on accuracy and confidence scores.

Hence our approach achieved this by first ensuring the task-oriented deep learning model had a near perfect performance in a simple classification task, as this symbolized a high level of confidence post-training. By then presenting it with novel data with partially adulterated labels (see Fig. 4b), both high-confidence errors and internally successful situations were to be expected. Thus, epistemic and achievement emotions could be elicited, bringing about exploratory behavior and allowing the whole system to be regarded as an AI participant in a cognitive psychology experiment.

Figure 3: **Our system processes data via a task-oriented module, itself governed by the emotion-based decisions of an actor-critic module.** The proposed system employs a task-oriented module and a RL actor-critic module to associate emotion and exploration in a way conducive to improved performance in a given task. **a**, The task-oriented module first samples one data instance from the environment, to perform a simple classification task. It does this via a pre-trained neural model, whose convolutional layers extract meaningful visual information. **b**, The loaded data encompasses handwritten digit images from a dataset partially adulterated so that half of its labels will not match with their respective instances' visual content. **c**, The actor-critic module is composed of two separate neural models, for the actor and the critic respectively. The variable accuracy resulting from the task-oriented model is compounded with a random high confidence score, to compute an epistemic or achievement emotion, according to reports in cognitive psychology research. The actor model \(\theta\) receives this emotional score (either of pride or surprise) as its sole input and decides on an appropriate exploratory rate for the task-oriented model. The critic model also receives a computed emotional score as input to its branch \(\omega_{s}\), in addition to the actor's chosen exploration rate on its \(\omega_{a}\) branch. The resulting merged features are processed by \(\phi\) to generate a feedback signal scrutinizing the actor's decision and the critic's own performance. **d**, The AI system performs this routine continuously, sampling a new instance whose task-oriented evaluation triggers an emotional response, then processed into the actor-chosen exploratory rate. The latter determines the size of a same-type data batch to be analyzed in the following step.
In precis, the results produced by this paradigm demonstrate a causal relationship wherein the epistemic and achievement emotions of surprise and pride serve as mediators of exploratory behavior, similarly to the findings reported by Vogl _et al._[32]. Hence, this work represents a first instance of support for the development of artificial emotion and its integration in AI learning procedures. The impact boasted over knowledge acquisition, processing and overall behavior is undeniable and thus its implementation would be beneficial for AI autonomy. Moreover, we corroborate observations of human behavior, employing a framework which may well interface neuroscience and psychology further in future research.

## 2 Results

### Neural Models

In order to carry out our psychology-like experiment, three models were implemented in tandem pertaining to task-oriented (Fig. 4a and Methods), actor and critic (Fig. 4c and Methods) neural circuitry. This combination constituted each artificial participant, wherein a feedforward convolutional module comprises the first element, enhancing visual cues representative of the content in images received as input from the task environment. Data resulting from this segment was then reduced into vector embeddings, to be ascribed a class using standard deep learning methodology. Summarily, the task-oriented module classified images into specific classes in an attempt to match their original or potentially tampered labels. The actor and critic modules are similarly based on feedforward methodology, instead focused on generating adequate embeddings to be reduced as either an action or rectifying signal, respectively. These are parsed from the emotional state of the artificial agent, taken as a sole input for the actor and in conjunction with the derived action for the critic. The rationale behind this arrangement rests strongly on the critic's encoding of action and state collectively. Despite cortical and limbic regions not being present explicitly, once agent state is activated by formulae-driven surprise or pride, the critic module is prompted to signal the actor regarding task-oriented data observations which led to that emotional exacerbation. The process enacts a continuous reinforcement of artificial neurons to become more responsive to stimuli triggering an emotional reaction in that participant. Consequently, this induces attentional shifting, as the actor will warrant the task-oriented module to perform further intake of the implicated data, in order to mitigate this emotional exacerbation. Naturally, this latter communication is done in the form of an exploratory drive signal, whose variation in terms of either epistemic or achievement emotion will be codified by the actor's decision policy. While not an exact representation of the basal ganglia and related structures, this arrangement boasts several similarities both architecturally and in terms of functioning, with an actor-critic structure governing the focus and rate of knowledge exploration of another task-oriented module.

### Learning Cycle

Carrying out a cognitive experiment with artificial models first requires these to be familiarized with the tasks to be performed.
Thus, the initial phase of our experiment regarded the task-oriented module alone and its instruction on image categorization to 10 distinct classes. In this phase, unscathed data is provided to the model for learning, which it achieves by adapting its weights to iteratively reduce the error between the inferred and real label of each instance. This process is repeated until predictions are near perfect and with low error, guaranteeing the high level of confidence retained by task-oriented modules in the next phase of the experiment. Learning a correlation between surprise or pride and exploratory behavior involved all three modules of the proposed system (depicted in Fig. 4d) and constituted the main phase of our cognitive experiment. A separate set of data with partially adulterated labels (4b) was made available to the task-oriented model, for classification at instance-wise steps mediated by the actor-critic combo. Here, an item is first picked randomly and processed to generate a corresponding label. Despite predictions matching real image content, the partial adulteration of dataset labels entails a portion of unavoidably incorrect predictions. Confidence in these is, nonetheless, considerable due the initial phase training process, meaning the system is able to experience both successful classification as well as incur high-confidence errors. Such circumstances induce emotional variation in accordance with the formulae described previously. This information is then considered by the actor, based on which it will decide how much more of the content depicted in the randomly picked image should be analyzed subsequently, if any. Processing of this later batch results in further emotional variation, whose comparison with single instance states can yield insight into emotional progression during the cognitive task, in addition to its relationship with exploratory fluctuation overall. Both the emotional state of the system and the corresponding actor-derived exploratory rate are taken in by the critic module for adequacy assessment. This process is largely dependent on whether or not the chosen rate improves the system's condition by contributing towards its objective. The latter is specified indirectly in terms of the reward provided, which varies analogously to common human functioning. This follows a standard assumption that participants typically intend to perform well in the activities they perform, maximizing success. In accordance, an artificial agent is given a basis reward whose polarity corresponds to that of the difference between explored batch accuracy and single instance accuracy, deeming exploration useful only if yielding improvement in terms of task performance. Additionally, the system is provided with a sparse reward matching the variation of epistemic/achievement emotion which occurs during a step. This serves to either minimize surprise or maximize pride, complying with the free-energy principle which illustrates a necessity of self-organizing agents to reduce uncertainty in future outcomes [13]. The reduction may stem from knowledge diversification, in turn boosted by an exploratory increase, or near complete reliance on current knowledge, wherein exploration is largely avoided. Variations in nature/nurture naturally influence emotion and decision making [22]. For the sake of generalizability, this cycle was applied to several artificial agents, all with randomly introduced noise over emotional formulae and actions chosen over time. 
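One step of the cycle just described can be summarized in a short sketch. The reward terms mirror the description above (a basis reward for whether the explored batch improves on the single-instance accuracy, plus a sparse term for reducing surprise or increasing pride); the classifier, scoring function, and actor are left as placeholder callables, and all names are illustrative rather than taken from the authors' code.

```python
from typing import Callable, Tuple

def learning_step(
    sample_and_classify: Callable[[int], float],  # n images of one class -> accuracy
    emotion: Callable[[float], float],            # accuracy -> surprise (or pride) score
    actor: Callable[[float], float],              # emotion score -> exploration rate in [0, 1]
    max_batch: int = 32,
    minimize_emotion: bool = True,                # True for surprise, False for pride
) -> Tuple[float, float]:
    """One emotion-mediated exploration step; returns (exploration_rate, reward)."""
    acc_single = sample_and_classify(1)           # classify one randomly drawn instance
    e_single = emotion(acc_single)                # emotional reaction to that outcome

    rate = actor(e_single)                        # actor decides how much more to analyze
    batch_size = max(1, int(round(rate * max_batch)))
    acc_batch = sample_and_classify(batch_size)   # analyze a same-type batch
    e_batch = emotion(acc_batch)

    # Basis reward: did exploration improve task performance? (ties treated as unhelpful)
    reward = 1.0 if acc_batch > acc_single else -1.0
    # Sparse emotional term: reward surprise reduction or pride increase.
    delta_e = e_batch - e_single
    reward += -delta_e if minimize_emotion else delta_e

    return rate, reward
```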
Moreover, each system was reset every 20 steps, following Vogl's 20 item procedure [32]. Resetting marked the beginning of a learning episode, with each system performing several of these to solidify the robustness of our observations on emotion-exploration relationships. ### Validity of Results Execution of the learning cycle revealed strong associations between exploratory behavior and epistemic or achievement emotions, when implemented over 250 distinct artificial agents for surprise and pride separately. Moreover, results were analogous to the findings reported in the original cognitive psychology study we strived to emulate [32], supporting their validity and that of our AI-based approach. Model convergence was required to ensure behaviors learned by the artificial agents were not random. This was achieved (Fig. 5 middle column) as cumulative reward increases over time and plateaus at later episodes for both surprise and pride experiments. Specifically for surprise (top row), initial reward is restricted to \([-8.58,10.77]\), peaks at \(max_{r}^{s}=19.87\) and ends within the range of \([-4.37,19.77]\), with an early-stage dip of minimum \(min_{r}^{s}=-13.64\). For pride, initial reward is encompassed by \([-15.97,7.02]\), within which \(min_{r}^{p}\) is the minimum. Final reward here varies at \([-7.46,18.95]\), though the overall maximum value \(max_{r}^{p}=19.05\) is achieved shortly before. Average cumulative reward across agents also increases for both emotions, demonstrating stable but slight growth for pride as opposed to a short depression in earlier episodes followed by steady increase for surprise. Overall, this trend indicates agents successfully learn to correspond states to actions in a way which improves their stance in the task environment. In addition to episodic cumulative reward, the success of the learning cycle can be further corroborated by the emotional fluctuation observed in agents over time (Fig. 5 first column). Naturally, initial variation is well-balanced for both emotions, as the number of increases matches that of decreases over the first learning episode. Nonetheless, over time agents appear to favor surprise reduction or stasis (top trend), as bursts of this emotion become on average \(\Delta s=38.52\%\) less frequent by the final episode. It should also be noted how stasis is progressively preferential in the first 10 episodes, yet falls back as surprise decreases become more prominent later in the cycle. As for pride (bottom trend), average emotion variation among agents is minor yet still favoring an up wards tendency, with pride decreases occurring \(\Delta p=5.90\%\) fewer times between the first and last learning episodes. Moreover, there is no significant indicator of preference for pride increase over stasis, as the former's quantity remains mostly unchanged throughout the cycle whilst the observed trend increase is mainly fostered by the latter. Considering model convergence was achieved, in conjunction with biologically consistent minimization of surprise or maximization of pride, these results can collectively provide substantial support for the relationships observed between exploration and epistemic or achievement emotion. Figure 5: **Results for both surprise and pride, mirroring similar findings in cognitive psychology.** Leftmost column: Episodic mean of emotion differential between single sample and subsequent batch analysis steps, across all implemented agents over the entire learning cycle. 
This shows a clear decrease for surprise (top) and a slight increase for pride (bottom), both of which are biologically congruent. Moreover, this surprise minimization and pride maximization are in accordance with the free-energy principle, further supporting their validity. Middle column: Mean cumulative reward obtained by agents at each episode of the cycle. While there is a clearer increase for surprise than for pride, both indicate agent convergence and learning of a useful relationship between surprise/pride and exploratory behavior. Rightmost column: Mean actor behavior at the end of the learning cycle, correlating surprise or pride with exploration. A positive variation of exploratory behavior with surprise increase strongly resonates with Vogl’s findings [32], which report the same behavior. As for pride, the obtained weak relationship further supports the conclusion that pride is not a strong precursor of exploration, as also demonstrated by Vogl’s diverging results for this emotion. ### Emotion vs Exploration Evidence for the emotional impact on exploratory behavior (Fig. 5 third column) came from the substantial number of artificial agents which, despite bearing emotional differences and added action noise, displayed similar behavior after undergoing the same learning cycle. As with the emotional variation results, a causation effect is most evident for the surprise experiment (top). Averaging the decision-making behavior of all 250 agents revealed a 15.4% increase of exploration in response to greater surprise. This global trend is quite similar to the mean behavior displayed by the 217 agents who learned a positive correlation, largely outweighing the remaining 33 agents who instead learned negative correlations. Notwithstanding the variability of actor behavior, all instances were monotonic and either displayed a considerable increase (positive) or a more limited decrease (negative). Pride (bottom) behaves largely opposite to surprise in terms of its relationship with exploration. While instances were monotonic, as with surprise, a deflating effect was instead observed across the full set of agents, whose behaviors encompassed a large number of weak positive correlations and a few, yet sizeable, negative correlations. This receding effect was reflected in the modest 2.8% decrease of exploration observed for increasing pride. Despite 222 agents displaying a slight exploratory increase with the emotion, positive change proved minimal, as represented by the mean behavior of this actor set. In contrast, negative change is clearly substantial and is mainly supported by 22 correlations decreasing exploration by between 25% and 75% towards null. The remaining 6 agents, while displaying a more restrained reduction, also contribute to reducing exploratory behavior. Consequently, the global average was offset enough not to display an increasing curve, despite the abundance of agents supporting such a positive correlation. ### Relationship Strength The progression of relationship strength between exploration and the emotions of pride and surprise can be further demonstrated by measuring how each data pair correlates throughout a learning cycle. Accordingly, Spearman's correlation coefficient \(\rho\)[37] can be obtained for each episode in an agent's cycle, assessing whether the observed monotonic relationship between an emotion and exploration (Fig. 5 third column) becomes increasingly robust over time. For both experiments, per-episode averaging of all 250 agent sample pairs (Fig. 6) demonstrated considerable variability in the strength of correlation between exploration and surprise or pride. Hence, a sliding window encompassing 40 episodes was applied to smooth these trends and clarify strength progression. While both display near-zero coefficients in earlier episodes, this \(\overline{\rho}\) increases for surprise while decreasing for pride, resulting in \(\rho_{surprise}=0.461\) and a near-zero negative \(\rho_{pride}\) by the end of the cycles. A moderate positive correlation is evidenced for surprise and exploration, while pride weakly and negatively associates with this behavior. ## 3 Discussion Adaptive AI has recently been experiencing a surge of research interest in various competencies [35], in no small part thanks to its applicability in social and industrial settings. Based on emotional variation, this work focused on developing a deep learning framework for adaptive decision-making over a fundamental trait of autonomous behavior: exploration. While the architectural design was inspired by neural circuitry, the foundation for its operation stemmed from cognitive psychology. Observations of human emotional behavior enabled us to create realistic emulations of epistemic and achievement states, which were then integrated into artificial agents as drivers of exploratory behavior. Moreover, for an agent to manifest this causation, a learning cycle inspired by similar experimentation with human participants was devised, wherein the agent ascertains the outcome of its decision-making in a task scenario. The task itself was addressed with standard deep learning methodology for image classification (although the process is task agnostic), mediated by an actor-critic reinforcement learning architecture that resembles basal ganglia functioning. Lastly, this framework was extended to a considerable number of artificial agents, so that the observed correlations would be generalizable to the same extent as their cognitive psychology counterparts. Figure 6: **Evolution of the correlation between exploration and its causal emotional score, over a learning cycle.** Agent episodic mean of Spearman’s correlation coefficient between actor-chosen exploratory rate and its causal surprise or pride score (pale), smoothed by a moving window of 40 samples (bold). As the learning cycle progresses, the positive relationship between surprise and exploration becomes more overt. Contrastingly, the negative relationship between this behavior and the emotion of pride does not progress as much, with the coefficient remaining closer to zero. Results from our experiment strongly resonate with reports on emotion-mediated human exploratory behavior and learning. The substantial minimization of epistemic surprise, in addition to conforming with the free-energy principle, was shown to be congruent with the observed increase in correlation strength between this emotion and exploration. Contrarily, maximization of achievement pride was somewhat negligible. In addition, this second observation was paralleled by the weak dampening relationship obtained for pride over exploration, as its strength stagnated closer to zero. Notwithstanding the latter, both experiments successfully produced artificial agents capable of self-mediating their exploratory behavior, exploiting internal emotional drives towards improved task performance. Studies exploring emotion in AI seldom consider both robust psychological and neurophysiological bases for its formulation, and fewer still employ it as a driver of learning factors.
Regardless, our work relates with literature such as [21], wherein Mazzaglia _et al._ developed a latent dynamics model endowed with Bayesian surprise as the dissimilarity from its posterior to prior beliefs, which rewards exploration when occurring. Schillaci _et al._ also presented a process for estimating the change in prediction error (PE) as a metric of learning progress [26]. This enabled agents to shift attention towards more interesting change-inducing goals when progress is inadequate. In [10], authors suggest a form of intrinsic rewarding reliant on competence progress, analogous to pride, based on which agents can explore their goal space. While all reported efficient and effective exploration, none of these approaches attempt to re-create the conditions under which the topics were observed in studies of other academic fields, instead focusing solely on deep learning purposes. On a separate note, in terms of inspiration, our work is not unlike [25], where psychological findings were considered as a basis for designing AI experiments. Unlike most related research, our work also bears considerable interdisciplinary interest in addition to being advantageous towards autonomous AI and deep learning. By building agents which emulate core neural circuitry for executive functioning and emotional behavior, there is increasing plausibility that their sentiment guided exploratory tendencies are more useful for understanding this type of process in the brain [19]. This is aided by the re-creation of experimental conditions under which psychological studies assess the same emotion-exploration relationships in humans. Despite the necessary adaptations for the latter to be possible with standard AI methodology, the resulting framework becomes appropriate for corroboration of previous findings as well as for evaluation of existing hypotheses on human behavior and learning, otherwise not easily examinable. Additionally, approaches such as this, drawing inspiration from biological functioning, could also serve as a means to postulate novel theories on how cognition and/or perception develop from basic neural activity [28]. The duplicated study of Vogl _et al._[32] consistently demonstrated the causation effect which onset surprise boasts over exploration of knowledge, as evidenced by the successively positive path coefficients obtained when assessing surprise to curiosity, and curiosity to exploration effects. Additionally, within-person correlation coefficients of 0.285 or 0.262 were reported for this emotion and exploration, in first and second versions of the study, with even high values if considering curiosity intermediately. The \(15.4\%\) mean exploratory increase leveraged by our artificial agents over growing surprise validates the postulated path relationships, as agents clearly learn that adopting the same behavior displayed by humans grants them a better standing over time, when placed under equivalent test conditions. Our Spearman's correlation results also parallel these studies, as the non-windowed coefficient mean across agents reaches \(0.311\) by the final episode, being ostensibly close to the within-person correlation value of the first study, with the most participants, and reasonably near that of the second study. Contrastingly, results reported for the connection between exploration of knowledge and the feeling of pride lack the consistency observed for surprise across both studies. 
While negative correlation coefficients of \(-0.073\) and \(-0.177\) were obtained in the first and second studies, respectively, these near zero values indicate a weaker correlation between pride and exploration, if any. Moreover, path coefficients of either study were contradictory, first indicating a positive causation effect and a negative dampening impact secondly. In addition to also nearing zero (i.e. pride having a faint influence over exploration), coefficients were lower in the first study, positing a causation effect as less likely than the dampening of exploratory behavior. Again, these observations were validated in our AI experiment, as Spearman's correlation took longer to deviate from null, compared to the surprise experiment, and yet stagnated at a lower absolute value. Finally, the \(2.8\%\) mean exploratory decrease over growing pride supports the higher likelihood of this emotion's dampening effect over exploration. While this was due to a smaller amount of negative yet overt correlations, the insignificance of a considerably larger amount of positive relationships further presupposes that pride influence over exploration is negligible. It could also be postulated that exploratory decrement during surges of this emotion is a possibility given how pride as a unique positive emotion may be damaging for cognitive performance [3], of which exploratory behavior is a key aspect [4]. Yet, additional experimentation would be required to test such hypotheses. Reporting our findings meets Vogl's appeal for conceptual replication of their results to bolster generalizability [32; 31]. Employing an image classification task is quite different from the general knowledge trivia scenario devised by those authors, further supporting the conclusion that observed emotion-exploration correlations were not triggered by inherent characteristics of input stimuli. Likewise, cognitive incongruity was induced differently as half of data instances were assigned randomly incorrect labels at reset, rather than simply being labelled correct/incorrect. This meant contradictory information was a possibility, as samples with visually distinct content could be assigned the same label. Vogl _et al._ also stressed the importance of considering various indicators of knowledge exploration to attest the validity of an epistemic/achievement emotional origin. Unlike deep learning where exploration may be directly specified as a hyperparameter or numerical signal, Psychology relies on observations of human behavior to assess whether any exploratory process occurs and to what extent [11; 32; 5]. As a consequence of implementing models which objectively derive exploration from emotional scoring, this multi-indicator requisite was irrelevant in our approach, with no impact over findings. Finally, implementing artificial agents as participants in a within-"person" experiment is fundamentally distinct from human trialing and also constitutes further variability in comparison with that original study. This work promotes cognitive psychology as a major source of knowledge for autonomous AI research. As demonstrated, experimental conditions can be adapted for artificial agents to either develop behavioral traits useful for learning, or assess their legitimacy if already present in agent demeanor. Contrarily, comparison of results obtained via AI methodology and human observation should be approached with care, as artificial agents are unable to replace biological participants in experimental scenarios. 
This is because, even with advancements in autonomy and general intelligence, AI agents are likely to remain unconscious and refrained to supporting humans in complex tasks in any foreseeable future[17]. Instead, as supported by our results, observation of artificial agents can serve a corroborating usefulness or provide further scrutiny for hypotheses on human cognition and behavioral traits [34]. Our system addresses each emotion individually as a precursor of exploration, whereas emotional states are most typically overlapping and having a combined effect over general behavior [2]. Therefore, it would be interesting for subsequent iterations of this approach to focus on expanding the emotional basis of exploration to account for that overlap, further benefiting from and validating cognitive psychology hypotheses. For instance, epistemic and achievement states could be employed in tandem, equitably or via weighted contributions, as inputs to an actor module. This technique could entail either feature merging at a midstream level, or an already multi-emotional input derived from a separate module. Additionally, our proposed framework retains the competence for studying other behavioral traits and their relationship with emotional drives. Mediating exploitation or engagement through variable emotion during cognitive operation could prove beneficial, similarly to how agents here learned to explore when seemingly more useful for their own reward objective. Finally, we speculate that further research on these topics would push AI closer towards autonomy and general intelligence. ## 4 Methods ### Task Dataset The MNIST dataset [8] is a collection of hand-written digit grayscale images of size \(28x28\), encompassing several variations of \(0\) through \(9\) each of which with its respective numerical label. The training portion of the set is mostly well-balanced in terms of class instances, encompassing a total of \(60,000\) images. Likewise, the testing set is also well-balanced, yet including only \(10,000\) images. The training portion of the MNIST dataset was divided in half, with one part being solely used for the pre-experiment training of the task-oriented net work. The testing set was also employed only for assessment of this model's performance. The second half of the training dataset was employed in the main surprise/pride experiments, by first being adulterated so exactly 50% of its instances had random numerical labels different from the specific digits they represented. Evidently, this was done at random indices, so there would be good sparseness of correct and incorrect labels. Thus, the pre-trained task-oriented network, despite being technically correct when classifying any of the \(30,000\) images used in the experiment, would be met with disparate labels around 50% of the time and consequential emotional triggering. ### System Architecture In order to loosely simulate human neural functioning during the experiment, agent architecture employed two separate modules, specifically one for the task at hand and another for the processing of emotion into an exploratory rate. The task-oriented model was pre-trained for hand-written digit recognition, following standard supervised learning procedure, and receives as input \(28x28\) images of that data. In the main experiment scenario, these images were merely progressed through the model for classification. As for the actor and critic models, both received as input a single value pertaining to the current emotional score of the agent. 
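To make the two-module layout concrete, the sketch below shows how such an actor and critic could be assembled in Keras, following the topology described in the next paragraph (one hidden layer for the actor; separate fully-connected branches merged into a three-layer perceptron for the critic). The layer widths, activations, and all names are illustrative assumptions rather than the authors' code.

```python
# A rough Keras sketch (not the authors' implementation) of the emotion-driven
# actor and critic: the actor maps the scalar emotion score to an exploratory
# rate, and the critic scores (emotion score, exploratory rate) pairs.
from tensorflow import keras
from tensorflow.keras import layers

def build_actor(hidden=64):
    emotion = keras.Input(shape=(1,), name="emotion_score")
    h = layers.Dense(hidden, activation="relu")(emotion)          # single hidden layer
    rate = layers.Dense(1, activation="sigmoid", name="exploratory_rate")(h)
    return keras.Model(emotion, rate, name="actor")

def build_critic(hidden=64):
    emotion = keras.Input(shape=(1,), name="emotion_score")
    action = keras.Input(shape=(1,), name="exploratory_rate")
    e = layers.Dense(hidden, activation="relu")(emotion)           # state branch
    a = layers.Dense(hidden, activation="relu")(action)            # action branch
    x = layers.Concatenate()([e, a])                               # merge the two branches
    for _ in range(3):                                             # three-layer perceptron
        x = layers.Dense(hidden, activation="relu")(x)
    q = layers.Dense(1, name="q_value")(x)
    return keras.Model([emotion, action], q, name="critic")
```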
The actor encompassed only one hidden fully-connected layer before the output, while the critic additionally received the actor-chosen exploratory rate, being composed by fully-connected separate branches for each input. These were merged and passed onto a three-layer perceptron before output. ### Task-Oriented Module The proposed technique is task agnostic, meaning the obtained results should hold or remain similar despite the variability of this task-oriented module. Regardless, for the sake of simplicity and to remain close to the original cognitive psychology experiment, a classification task was carried out by a simple VGG-like neural network architecture [27]. This encompassed two convolutional layers, respectively with 32 and 64 filters and a stride of 3, each of which following by max pooling, and generating high-level visual features which are passed on to a double-layered perceptron for final classification. A 0.5 dropout rate in-between the convolutional and perceptron sections of this networks also helps to optimize its performance. For the pre-experiment training of the task-oriented network, data from the MNIST dataset was put through this model for 50 epochs. Optimization was achieved using the Adam algorithm [15] with default parameters. Post-training, the model achieved a 99.2% accuracy with as low as 0.03 loss for the testing data of MNIST, demonstrative of its excellency in this recognition task. Finally, it is plausible to assume a model boasting these metrics would equate to a biological being with high levels of confidence derived from continuous success. ### Actor-Critic Module Considering our system required a continuous evaluation of its own emotional state \(s\), as well as learning to map it to the most appropriate action \(a\) as per basal ganglia functioning, we employed actor-critic modules in our artificial agents to emulate this behavior. Specifically considering we employ a form of directed exploration in our agent task, a deterministic approach would more adequately fit this behavior. Hence, deep deterministic policy gradients (DPG) [18] were implemented as AI parallels of the basal ganglia. This type of reinforcement learning methodology assumes separate neural networks, with one parameterized by \(\omega\) as the critic model \(Q_{\omega}\), and another by \(\theta\) as the actor model \(\mu_{\theta}\) (deterministically mapping \(s\) to \(a\)), attempting to maximize reward \(r(s,a)\). Both networks were composed of multi-layer perceptrons in our implementation. The gradient is obtained from applying the chain rule to a performance objective, here being the expected return \(J(\mu_{\theta})\): \[\nabla_{\theta}J=\mathbb{E}\left[\nabla_{\theta}\mu_{\theta}(s)\nabla_{a}Q(s,a \mid\omega)\mid_{a=\mu_{\theta}(s)}\right]\] Given how this gradient describes the performance of the action policy, it is used for updating the parameters of the actor model, with a learning rate of 0.001. In addition, we also implemented a replay buffer to reduce the variance from temporal correlations. In it, a \((s,a,r(s,a),s^{\prime})\) tuple is recorded with each agent step. Subsequently, a 64-long batch of index \(i\) is obtained from the buffer to update network parameters. Besides the replay buffer, target networks were also used to regularize learning. 
These networks, respectively \(\mu^{\prime}(s)\) and \(Q^{\prime}(s,a)\), copied the weights of their actor and critic counterparts to compute the temporal difference target by summing their outputs with the reward for each sample from the buffer. The loss of the critic model was based the output \(Q(s_{i},a_{i})\) obtained for the \(i^{th}\) sample and this target, representing the maximum future reward given by that sample's reward signal \(r_{i}\) and the critic value \(Q^{\prime}\) associated with the best possible action in the coming sample, discounted by a factor \(\gamma=0.99\). Thus, for each batch: \[L =\frac{1}{N}\underset{i}{\Sigma}(target-Q(s_{i},a_{i}\mid\omega)^{2})\] \[target =r_{i}+\gamma\cdot Q^{\prime}(s_{i+1},\mu^{\prime}(s_{i+1}\mid \theta^{\prime})\mid\omega^{\prime})\] Gradients from this loss function are then used to update critic network parameters, with a learning rate of 0.002. Subsequently, at the end of a training step, the target networks are also updated with the newly calculated actor and critic weights, respectively. These updates are softened by a factor \(\tau=0.005\). ### Emotion Functions While multiple options could be considered for representing surprise and pride (see Fig. 1 and Fig. 2), a single set of functions was selected randomly for the main experiments, out of those fitting the requirements defined by psychology research. As described, these factored in performance metrics of the task-oriented module, given the impact of action outcome over emotion [16]. Considering accuracy has been observed to positively predict pride [32], in addition to being readily available and already bound to a \([0,1]\) interval, the variation of this achievement emotion \(P\) over increasing accuracy \(a\) would plausibly entail a curve of positive slope and unknown convexity with occasional fluctuations. Thus, a possible example of the pride function could be: \[P\colon[0,1] \rightarrow[0,1]\] \[a \mapsto Clip\left[\left(100\cdot C_{1}\right)^{-\left(a-1\right)^{2}}+ \mathcal{N}(\mu,\sigma^{2})\right]\] With \(C_{1}>1\) and the Gaussian noise \(\mathcal{N}\) accounting for some variability related with personality differences. Moreover, clipping ensures bounding of the emotion to the range of \([0,1]\) as well. On another note, surprise is positively predicted by high-confidence errors [32], meaning another metric is necessary besides accuracy to represent its behavior. Thus, a confidence score \(c\) bound to the interval \([0.8,1]\) was introduced to simulate the high levels of confidence expected of a newly trained well-performing task-oriented module. The saddle-like rough surface of this epistemic emotion \(S\) could therefore be obtained from: \[S\colon[0,1]^{2}\rightarrow[0,1]\] \[c,a \mapsto Clip\left[\mathcal{T}\left(\mathcal{R}\left(a^{2}-c^{2} \right)\right)+0.5+\mathcal{N}(\mu,\sigma^{2})\right]\] Here, the \(45^{\circ}\pm C_{2}\) rotation \(\mathcal{R}\) around the surface's saddle point, with \(C_{2}\in[-20^{\circ},20^{\circ}]\), allied with a translation \(\mathcal{T}\) to each domain's interval midway make it so low-confidence success and high-confidence mistakes are met with high surprise, whereas the opposite scenarios induce less or none of this emotion. Likewise to pride, noise is also introduced for greater variability and clipping ensures bonding to the \([0,1]\) interval. 
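As an illustration of the two scoring functions defined above, the following minimal sketch evaluates one possible pride and surprise implementation. Treating the rotation \(\mathcal{R}\) as an explicit 2-D rotation of the (accuracy, confidence) coordinates about the domain midpoints is an implementation assumption, and the constants used are placeholders rather than the experiment's exact values.

```python
# A minimal numerical sketch (not the authors' code) of the pride and surprise
# scoring functions: pride increases with accuracy, while surprise is high for
# high-confidence errors and low-confidence successes, each with Gaussian
# personality noise and clipping to [0, 1].
import numpy as np

def pride(a, C1=2.0, noise_sd=0.03, rng=None):
    """Pride P(a) for accuracy a in [0, 1]."""
    rng = rng or np.random.default_rng()
    p = (100.0 * C1) ** (-(a - 1.0) ** 2) + rng.normal(0.0, noise_sd)
    return float(np.clip(p, 0.0, 1.0))

def surprise(c, a, C2_deg=0.0, noise_sd=0.03, rng=None):
    """Surprise S(c, a) for confidence c in [0.8, 1] and accuracy a in [0, 1]."""
    rng = rng or np.random.default_rng()
    theta = np.deg2rad(45.0 + C2_deg)          # 45 deg +/- C2 rotation of the saddle
    da, dc = a - 0.5, c - 0.9                  # translate to the domain midpoints
    x = da * np.cos(theta) - dc * np.sin(theta)
    y = da * np.sin(theta) + dc * np.cos(theta)
    s = (x ** 2 - y ** 2) + 0.5 + rng.normal(0.0, noise_sd)
    return float(np.clip(s, 0.0, 1.0))

# A confident mistake scores higher surprise than a confident success:
print(surprise(c=0.99, a=0.0), surprise(c=0.99, a=1.0))
```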
For either function, \(\mathcal{N}(0,0.03)\) was employed, along with random combinations of \(C_{1}\) and \(C_{2}\), to generate varied artificial agents with individual differences that still follow the same overall pattern. ### Main Experiments The described processes were carried out using the Keras framework on a TensorFlow backend. An Nvidia RTX 2080 GPU was used to accelerate training and the learning cycle of the main experiments. As in the training of the task-oriented module, the learning cycle of both the surprise and pride experiments employed the Adam optimizer with the previously specified actor and critic learning rates. Each artificial agent was also restricted to sampling batches of at most 64 instances, with the actual batch size determined by the chosen exploratory rate. Moreover, all agents underwent 100 episodes during the learning cycle, with each episode encompassing 20 steps. ## Acknowledgments This work has been partially supported by FCT under grant 2020.05620.BD, and OE - National funds of FCT/MCTES under project UIDP/00048/2020.
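Tying the Methods together, the following sketch outlines how a single agent's learning cycle (100 episodes of 20 steps, batches of at most 64 images) could be organized. Every name, the reset value of the emotion score, and the stand-in reward signal are assumptions rather than the authors' implementation.

```python
# A compact, schematic sketch of one agent's learning cycle: the actor chooses
# an exploratory rate from the current emotion score, that rate controls how
# many images are sampled, the task model's accuracy/confidence produce the
# next emotion score, and the actor-critic networks are updated on the result.
import numpy as np

def run_learning_cycle(actor, update_networks, task_model, images, labels,
                       emotion_fn, n_episodes=100, steps_per_episode=20,
                       max_batch=64, rng=None):
    rng = rng or np.random.default_rng()
    for episode in range(n_episodes):
        emotion = 0.5                                   # assumed neutral score at each reset
        for step in range(steps_per_episode):
            rate = float(actor.predict(np.array([[emotion]]), verbose=0)[0, 0])
            batch_size = max(1, int(rate * max_batch))  # exploration sets the sample size
            idx = rng.choice(len(images), size=batch_size, replace=False)
            preds = task_model.predict(images[idx], verbose=0)
            accuracy = float(np.mean(preds.argmax(axis=1) == labels[idx]))
            confidence = float(np.mean(preds.max(axis=1)))
            next_emotion = emotion_fn(confidence, accuracy)   # surprise or pride score
            reward = accuracy                                  # stand-in reward signal
            update_networks(emotion, rate, reward, next_emotion)
            emotion = next_emotion
```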
2310.09737
Dynamics and frictional dissipation from treading in the puddle
It was recently established that dogs share the same lapping technique as cats by flicking their tongue against the water surface and then yanking it back, dragging up a column of water. This liquid column appears frequently in daily life and industrial applications, such as walking through a puddle and roller printing. While governed by the Navier-Stokes equation, its dynamics are often studied by numerical means, which hinders a full understanding of the rich mixture of physics behind, for instance, the competition of surface and potential energies, and how the pinch-off is affected by the kinetic energy and water jet when a large cylinder is used. Combined with simple models, we elucidate the mechanism that drives the change of morphology and derive analytic expressions for the critical height and upper radius for the liquid column when transiting between three stages. Stage I is characterized by a static and reversible profile for the column whose upper radius r_t equals that of the cylinder. The column becomes irreversible and $r_t$ starts shrinking upon entering stage II. It is not until r_t stops shrinking that the column neck accelerates its contraction and descends toward the pool, the quantitative behavior of which is among the successful predictions of our theory. Pinch-off dominates the second half of stage III without its usual signature of self-similarity. This is discussed and explained with an interesting incident involving a water jet similar to that made by a dropping stone.
Chung-Hao Chen, Zong-Rou Jiang, Tzay-Ming Hong
2023-10-15T05:09:25Z
http://arxiv.org/abs/2310.09737v2
# Dynamics and frictional dissipation from walking in puddles ###### Abstract Liquid columns lifted on an infinite pool is a problem that has long been an interest. However, most of the studies didn't take into consideration that the top of the liquid column can contract. This effect is especially important when the pulling speed is low so that the lifting process can be seen as a quasi-static motion. In this study, we have found 3 stages when the liquid column is lifted up. ## I Introduction Stretching a liquid column is a common phenomenon in ordinary life. Examples such as shoes leaving a puddle, dogs or cats drinking water[1], Jesus lizard walking on a pond[2], or even the process of roller printing[3]. Due to academic interests and application value, systems such as a liquid column stretched by a cylinder were studied intensely previously. The stability condition of liquid columns was first studied in 1973 by J. F. Padday. et al. [4] The stability condition of a liquid column formed by withdrawing a wetted rod from a free liquid surface was discussed. Shape factor \(\beta^{\prime}\) was introduced to determine the threshold height for the liquid column being unstable. However, the physical picture behind these conditions was not well understood. In 1989, Ashutosh Sharma and Eli Ruckenstein [6] conducted a study on the stability condition of a hole in a liquid sheet, which can be seen as an inverse scenario of a stretching liquid column. By using the free energy approach, they find out the threshold radius that determines whether the hole shrinks or expands in a given thickness of the liquid sheet. They first mention that the qualitative characteristics can be obtained by treating the hole as a cylinder with the same radius when calculating free energy. However, the relation between threshold radius and thickness is still only determined by numerical approach as the analytical solution is only obtained in some special cases such as neglecting gravity force. In 1997, G. Debregeas and F. Brochard-Wyar[7] examined the threshold radius predicted by Taylor and Michael for a liquid column between a plate and pool to expand or shrink. However, most of them assume the top of the water column is fixed.[9] Recently in 2013, E. S. Benilov[8] took into consideration the extra degree of freedom that the top of water column \(R_{t}\) is not fixed. However, they made a wrong prediction and didn't study the motion in the unstable stage. In our experiment, we not only find the threshold height \(H^{\star}\) that the water column begins to be unstable but the relation between \(H^{\star}\) and cylinder diameter \(R\). Furthermore, we have found it has two different motions in an unstable stage. 1. The Water column shrinks but keeps the shape of the surface the same. Shrinking speed of \(R_{t}\) and the neck \(R_{min}\) are the same. 2. After a specific time duration, \(R_{t}\) will stop shrinking, resulting in an even faster pinch-off of the neck. The scaling laws and self-similarity are also examined. In the experiment, we observed that the boundary of the interface between liquid and metal cylinder \(r_{top}\) will only start to shrink when the lifted height exceeds a threshold height \(H\star\). In the shrinking process, the shrinking speed of \(r_{top}\) and the neck with minimum radius \(r_{min}\) are the same. When \(r_{top}\) is equal to a specific diameter \(r_{c}\), it stops shrinking. 
The kinetic energy is suddenly transferred to \(r_{min}\), and the neck begins to pinch off, forming a liquid jet that penetrates the pool surface. ## II Experimental setup and results We use aluminum cylinders of 14 different diameters ranging from 1 to 26 mm. Initially dipped in the water pool, the cylinder is raised at a constant speed as slow as 0.5 mm/s by a stepper motor to form a water column when pulled above the pool surface, as shown in Fig. 1. The levelness of the cylinder is checked by the level gauge attached at its top. In the meantime, we employ a high-speed camera at 8000 fps to capture the evolution of the column profile. Figure 1: A water column of height \(H\) is generated when an aluminum cylinder is pulled out of the pool. While \(\theta\) and \(r_{t}\) denote the contact angle and radius at the top of the water column, \(r_{m}\) and \(z_{m}\) are the radius and height at its neck. As shown in Fig. 2, the profile of the water column undergoes three stages, from being reversible in (a) to being irreversible in (b) and (c, d), which we shall denote by stages I, II, and III. The threshold height \(H^{\star}\) that marks the transition from stage I to II is plotted as a function of \(R\) in Fig. 3 and appears to level off at large \(R\). We also record how \(r_{t}\) shrinks with time \(t\) in stage II in Fig. 4 for several different sizes of cylinders. The constant upper radius \(r_{c}\), characteristic of stage III, is found to increase, like \(H^{\star}\), when a larger cylinder is used. Their values for different \(R\) are collected and plotted in Fig. 5. Figure 2: The profile of the water column goes through three stages: (a) \(r_{t}\) remains fixed at \(R\) as \(H\) is raised, (b) \(r_{t}\) starts shrinking after \(H\) reaches and is kept at \(H^{\star}\) - lasting about 0.1\(\sim\)0.3 s, and (c, d) \(r_{t}\) appears to stop shrinking at \(r_{c}\), while the column starts to pinch off - lasting about 0.004\(\sim\)0.006 s. Figure 3: Threshold height \(H^{\star}\) vs. \(R\). The red solid line represents the theoretical prediction of Eq.(3). Figure 4: The \(r_{t}\) shrinks with \(t\) in stage II for \(R\)=5, 7, 9, 11, and 13 mm. The red solid lines are fits to the prediction in Eq.(5). Figure 5: A wider cylinder renders larger \(H^{\star}\) and \(r_{c}\) values, whose relationship closely follows the red solid line predicted by Eq.(20). ## III Theoretical models ### Stage I: reversible An analytic expression of \(H^{\star}\) is possible by optimizing the total energy of the system with respect to \(r_{t}\): \[E_{\rm total}=2\pi r_{t}H\sigma_{lg}+\pi r_{t}^{2}\rho g\frac{H^{2}}{2}+(\sigma_{ls}-\sigma_{gs}-\sigma_{lg})\pi r_{t}^{2} \tag{1}\] where the water column has been approximated by a uniform cylinder, and \(\sigma_{lg}\), \(\sigma_{ls}\), and \(\sigma_{gs}\) are the surface tension constants for the liquid-vapor, liquid-solid, and vapor-solid interfaces. We know from Young's formula that \(\Delta\sigma\equiv\sigma_{gs}+\sigma_{lg}-\sigma_{ls}\) is positive definite. Therefore, the quadratic form of Eq.(1) bends downward, with its maximum at \[r^{\star}=\frac{H^{\star}\sigma_{lg}}{\Delta\sigma-\frac{\rho g H^{\star 2}}{2}} \tag{2}\] when \(H\) is small, as shown numerically in Fig. 6. Since \(r^{\star}<R\), \(r_{t}\) will remain at \(R\) to avoid costing energy until \(H\) is big enough to render \(r^{\star}\geq R\) and trigger an irreversible shrinkage of \(r_{t}\).
The threshold value \(H^{\star}\) can thus be obtained by setting Eq.(2) equal \(R\): \[H^{\star}=\frac{\sigma_{lg}\Big{[}\sqrt{1+\frac{2\rho gR^{\star}2\Delta\sigma }{\sigma_{lg}^{2}}}-1\Big{]}}{\rho gR}. \tag{3}\] It fits nicely the experimental data in Fig. 3 with R-square = 0.96 and explains the saturation at large \(R\). ### Stage II: irreversible and quasi-static Once \(H\) exceeds \(H^{\star}\), the top radius of water column \(r_{t}\) starts to shrink. The shrinkage rate can be modeled by \[\frac{dE_{\rm total}}{dr_{t}}=-A\frac{dr_{t}}{dt} \tag{4}\] where \(A\) is a phenomenological constant. Solving this differential equation gives \[r_{t}(t)=\frac{\alpha}{2\beta}-Be^{\frac{2\beta}{\Delta}t} \tag{5}\] where \(\alpha\) and \(-\beta\) denote the coefficients of \(r_{t}\) and \(r_{t}^{2}\) in Eq.(1), and the constant \(B\) signifies a perturbation to tip the balance and drive the system rightward from the unstable state at the top of the red line in Fig. 6. Equation (5) agrees well with the experimental data in Fig. 4 with fitting parameters \(A\) and \(B\) that vary with \(R\). Although stage II is irreversible, we argue that it is quasi-static based on two observations: first, the shape of the water column remains unchanged because \(r_{t}\) and \(r_{m}\) exhibit the same shrinking speed and their correlation is strongly linear in Fig. 7. Second, the contact angle \(\theta\) and \(z_{m}\) defined in Fig. 1 are not affected by the receding motion, as shown in the inset of Fig. 7. As a result, we can assume the profile or \(r(z)\) is still dictated by the minimization of its surface energy and gravitational energy at each step of the way: \[E=\sigma_{lg}\int_{0}^{H^{\star}}2\pi r\sqrt{1+r^{\prime\,2}}\ dz+\int_{0}^{H^ {\star}}\rho g\pi r^{2}z\ dz \tag{6}\] By use of the Euler-Lagrange equation, we obtain \[\frac{\sigma_{lg}}{\rho g}\Big{[}\frac{1}{\sqrt{1+r^{\prime\,2}}}-\frac{rr^{ \prime\prime}}{(\sqrt{1+r^{\prime\,2}})^{3}}\Big{]}+rz=0 \tag{7}\] which cannot be solved analytically. To obtain an approximate solution, we appeal to the constraints on \(r^{\prime}\) that (a) \(r^{\prime}\rightarrow-\infty\) as \(z\to 0\), (b) \(r^{\prime}\to 0\) as \(z\to z_{m}\), and (c) \(r^{\prime}=\cot\theta\) as \(z=H^{\star}\). The simplest guess that satisfies all these conditions is that \[r^{\prime}=\frac{z^{2}-z_{m}^{2}}{z}\frac{H^{\star}}{H^{\star 2}-z_{m}^{2}} \cot\theta. \tag{8}\] Our choice of \((z^{2}-z_{m}^{2})\) over \((z-z_{m})\) to satisfy constraint (b) is based on setting \(z\to z_{m}\) in Eq.(7) to obtain \[\frac{\sigma_{lg}}{\rho g}(1-rr^{\prime\prime})+rz=0. \tag{9}\] From Fig. 2(b), we know \(rr^{\prime\prime}\approx 20\gg 1\) so that Eq.(2) can be further simplified to give \[r^{\prime}=\frac{\rho g}{\sigma_{lg}}(\frac{z^{2}-z_{m}^{2}}{2}). \tag{10}\] Comparing the coefficients of Eqs.(8) and (10) gives \[\frac{\rho g}{2\sigma_{lg}}=\frac{H^{\star}\cot\theta}{z_{m}(H^{\star 2}-z_{m} ^{2})} \tag{11}\] that uniquely determines \(z_{m}\) as a function of \(\theta\), \(H^{\star}\) and \(\sigma_{lg}\). This is consistent with the empirical finding that \(z_{m}\) is roughly independent of time in the inset of Fig. 7. Finally, after solving Eq.(8) with the boundary condition \(r(z=H^{\star})=r_{t}\), we set \(z=z_{m}\) to obtain: \[r_{t}=r_{m}+\Big{[}\frac{1}{2}-\frac{z_{m}^{2}}{H^{\star 2}-z_{m}^{2}}\ln( \frac{H^{\star}}{z_{m}})\Big{]}H^{\star}\cot\theta \tag{12}\] Since \(z_{m}\) is a constant from Eq.(11), the linear relation between \(r_{t}\) and \(r_{m}\) in Fig. 7 is derived. 
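As a quick numerical illustration of Eqs. (11) and (12) (not taken from the experiment), the snippet below solves Eq. (11) for \(z_{m}\) and evaluates the constant offset between \(r_{t}\) and \(r_{m}\) predicted by Eq. (12). The water properties, contact angle, and threshold height used here are assumed values, and taking the lower root of Eq. (11) is purely for illustration.

```python
# Solve Eq. (11) for the neck height z_m and evaluate the r_t - r_m offset of
# Eq. (12).  All parameter values below are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

rho, g, sigma_lg = 998.0, 9.81, 0.072     # water density, gravity, surface tension (SI)
H_star = 8.0e-3                           # threshold height, assumed 8 mm
theta = np.deg2rad(60.0)                  # contact angle, assumed 60 degrees
cot = 1.0 / np.tan(theta)

# Eq. (11): rho*g/(2*sigma_lg) = H* cot(theta) / ( z_m (H*^2 - z_m^2) )
f = lambda zm: rho * g / (2 * sigma_lg) - H_star * cot / (zm * (H_star**2 - zm**2))
# Eq. (11) generally admits two positive roots; bracket and take the lower one.
z_m = brentq(f, 1e-6 * H_star, H_star / np.sqrt(3))

# Eq. (12): once z_m is fixed, r_t - r_m is a constant offset
offset = (0.5 - z_m**2 / (H_star**2 - z_m**2) * np.log(H_star / z_m)) * H_star * cot
print(f"z_m = {z_m*1e3:.3f} mm,  r_t - r_m = {offset*1e3:.3f} mm")
```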
Figure 6: Equation (1) is numerically plotted against \(r_{t}\). Initially, the system falls on the right side of the blue line when \(H<H^{\star}\), and it is energetically unfavorable for \(r_{t}\) to decrease from \(R\). As \(H=H^{\star}\), \(r_{t}=R\) becomes sitting at the top of the red line which is an unstable point. Any perturbation is enough to tip the balance and trigger the shrinking process. ### Stage III: irreversible and pinch-off Since stage III lasts for only about 0.006 s, \(r_{t}\) appears to be stationary at \(r_{c}\). What needs to be answered is why the Rayleigh-Plateau instability is triggered at this stage and how. We know that the pull from the cylinder, \(2\pi r_{t}\sigma_{lg}\sin\theta\), is not big enough to sustain the weight of water in stage II, which is why the column has to keep shedding its load. Since the former is proportional to \(r_{t}\), while the latter is roughly to \(r_{t}^{2}\) when \(H^{\star}\) is fixed, their difference decreases as \(r_{t}\) shrinks. We conjecture that they become equal at \(r_{t}=r_{c}\), and the momentum to shrink that is carried over from stage II sets off the pinch-off phenomenon. Although the development of a singular neck in stage III is a dynamic process, \(r_{c}\) is predetermined at its borderline with stage II where a quasi-static approximation is feasible. So we can still perform the variational method on the potential energy to find how \(r_{c}\) is decided by \(H^{\star}\): \[E=\sigma_{lg}\int_{0}^{H^{\star}}2\pi r\sqrt{1+{r^{\prime}}^{2}}dz+\lambda\int _{0}^{H^{\star}}\pi r^{2}dz \tag{13}\] where the surface energy is assumed to dominate the gravitational energy since the volume of the water column has thinned down by more than fourfold since stage I. The Lagrange multiplier is incorporated to make sure the water volume equals \[\int_{0}^{H^{\star}}\pi r^{2}dz=\frac{2\pi r_{c}\sigma_{lg}\sin\theta}{\rho g}. \tag{14}\] Using the second form of the Euler-Lagrange equation, we obtain \[\frac{r}{\sqrt{1+{r^{\prime}}^{2}}}+\frac{\lambda r^{2}}{2\sigma_{lg}}=C \tag{15}\] where the constant \(C\) must equal \(r_{c}\sin\theta+\lambda r_{c}^{2}/2\sigma_{lg}\), \(r_{m}+\lambda r_{m}^{2}/2\sigma_{lg}\), and \(\lambda r(0)^{2}/2\sigma_{lg}\) at the same time for \(z=H^{\star}\), \(z_{m}\), and \(0\), respectively. Since the pool radius \(r(0)\) can be much bigger than \(r_{c}\) and \(r_{m}\), these three expressions can only be equivalent when \[r_{c}\sin\theta\approx r_{m}\approx\frac{\lambda r(0)^{2}}{2\sigma_{lg}}. \tag{16}\] Plug the expression of \(r^{\prime}\) from Eq.(15) in \(z=\int dr/r^{\prime}\) and set \(z=H^{\star}\) to obtain \[\frac{H^{\star}}{r_{c}}=\Big{[}\int_{\frac{r_{m}}{r_{c}}}^{1}+\int_{\frac{r_{ m}}{r_{c}}}^{\frac{r(0)}{r_{c}}}\Big{]}\frac{dx}{\sqrt{\left[\frac{x}{\sin \theta+\frac{\lambda r_{c}}{2\sigma_{lg}}\left(1-x^{2}\right)}\right]^{2}}-1} \tag{17}\] which is to be solved in conjunction with Eq.(14): \[\frac{2\sigma_{lg}\sin\theta}{\rho gr_{c}}=\Big{[}\int_{\frac{r_{m}}{r_{c}}}^ {1}+\int_{\frac{r_{m}}{r_{c}}}^{\frac{r(0)}{r_{c}}}\Big{]}\frac{x^{2}dx}{ \sqrt{\left[\frac{x}{\sin\theta+\frac{\lambda r_{c}}{2\sigma_{lg}}\left(1-x^{ 2}\right)}\right]^{2}}-1} \tag{18}\] where the factor \(\frac{\lambda r_{c}}{2\sigma_{lg}}=\frac{r^{2}\sin\theta}{r^{2}(0)}\ll 1\) from Eq.(16). 
Notice that both equations are dominated by the upper limit, \(x_{0}=\frac{r(0)}{r_{c}}\gg 1\), of their second integral near which \[\frac{1}{\sqrt{\left[\frac{x}{\sin\theta+\frac{xr_{c}}{2\sigma_{lg}}\left(1- x^{2}\right)}\right]^{2}}-1}\approx\frac{\sin\theta-\frac{\lambda r_{c}}{2 \sigma_{lg}}x^{2}}{x} \tag{19}\] where \(\frac{\lambda r_{c}}{2\sigma_{lg}}x_{0}^{2}=\sin\theta\). This greatly simplifies the integration to give \(\sin\theta\ln x_{0}\) and \(\sin\theta x_{0}^{2}/4\) for Eqs.(17) and (18), respectively. Combining these two results immediately predicts that \[\frac{H^{\star}}{r_{c}}\approx\frac{\sin\theta}{2}\ln\frac{8\sigma_{lg}}{\rho g r _{c}}. \tag{20}\] which agrees well with the experimental data in Fig. 5. During the pinch-off phenomenon, the bottleneck is found to plunge toward the pool surface before breaking up, as shown in Fig. 2(c, d). To understand the outcome in Fig. 8, Eq.(12) turns out to be applicable if we treat \(z_{m}\) and \(r_{m}\) as the variables while fixing \(r_{t}\) at \(r_{c}\). This success implies that their relationship is mostly dictated by the geometric constraints from the contact angle at both ends of the water column. It is worth checking the properties that are often associated with the pinch-off phenomenon because, rather than self-induced, we conjecture that it is being hastened by the incoming flow field from the shrinking motion in stage II. The first characteristic we examine is self-similarity, which seems to be missing in Fig. 10 because Figure 7: Both \(r_{t}\) and \(r_{m}\) shrink with time in regime II, but their values appear to be linearly proportional, as verified by the red solid fitting line from Eq.(12). Inset shows that \(z_{m}\) and \(\theta\) are not sensitive to time. the re-scaled contours of the profile fail to converge to a master curve. Nevertheless, the shrinkage of the neck radius in Fig. 9 still obeys the same power law \(r_{m}\propto\tau^{2/3}\) as in ordinary water column where \(\tau\equiv t_{c}-t\) and \(t_{c}\) is the pinch-off time. ## IV Conclusion and discussions In this study, we found 3 different stages of the liquid column's motion when lifted up by a cylinder. In stage I, the top of the water column \(r_{top}\) is fixed to the cylinder's radius \(R\). When \(r_{top}\) starts to shrink, the threshold height \(H^{\star}\) is determined by minimizing total energy. As \(R\) becomes larger, \(H^{\star}\) grows linearly at first and eventually saturated to a constant. In stage II, \(r_{top}\) starts to shrink. The motion of \(r_{top}\) change with time is described. We have discovered that the shrinking speed of \(r_{top}\) and the neck \(r_{min}\) should be the same, and also proved by a theory. In stage III, \(r_{top}\) stops shrinking. The relation between \(x_{min}\) and \(y_{min}\) is described. ###### Acknowledgements. We acknowledge the financial support from the National Science and Technology Council in Taiwan under Grant No. 111-2112-M007-025.
2301.08368
A Compact Source of Positron Beams with Small Thermal Emittance
We investigate electrostatic traps as a novel source of positron beams for accelerator physics applications. Penning-Malmberg (PM) traps are commonly employed in low-energy antimatter experiments. Positrons contained in the trap are cooled to room temperature or below. We calculate the thermal emittance of the positrons in the trap and show that it is comparable to or better than the performance of state-of-the-art photocathode guns. We propose a compact positron source comprised of a PM trap, electrostatic compressor, and rf accelerator that can be built and operated at a fraction of the cost and size of traditional target-based positron sources, albeit at a reduced repetition rate. We model the acceleration of a positron bunch up to an energy of 17.6 MeV with a final thermal emittance of 0.60 $\mu$m-rad and bunch length of 190 $\mu$m. This system may be useful for acceleration physics studies, such as investigations of flat-beam sources for linear colliders and positron plasma wakefield acceleration.
Rafi Hessami, Spencer Gessner
2023-01-20T00:14:59Z
http://arxiv.org/abs/2301.08368v2
# A Compact Source of Positron Beams with Small Thermal Emittance ###### Abstract We investigate electrostatic traps as a novel source of positron beams for accelerator physics applications. Penning-Malmberg (PM) traps are commonly employed in low-energy antimatter experiments. Positrons contained in the trap are cooled to room temperature or below. We calculate the thermal emittance of the positrons in the trap and show that it is comparable to or better than the performance of state-of-the-art photocathode guns. We propose a compact positron source comprised of a PM trap, electrostatic compressor, and rf accelerator that can be built and operated at a fraction of the cost and size of traditional target-based positron sources, albeit at a reduced repetition rate. We model the acceleration of a positron bunch up to an energy of 17.6 MeV with a final thermal emittance of 0.60 \(\mu\)m-rad and bunch length of 190 \(\mu\)m. This system may be useful for acceleration physics studies, such as investigations of flat-beam sources for linear colliders and positron plasma wakefield acceleration. ## I Introduction Positron beams are traditionally produced by sending high-energy electron beams into a high-Z target, capturing positrons from the resulting electromagnetic shower, and cooling the positrons in a damping ring before reacceleration [1]. This process requires significant experimental infrastructure and hardware. As a result, there are relatively few laboratories producing positron beams for accelerator physics experiments [2]. Research into advanced positron sources has been recognized as an area-of-need for future accelerator R&D [3]. One research area impacted by the lack of positron beam sources is Plasma Wakefield Acceleration (PWFA). PWFA is a promising technique for accelerating charged particles at high gradients. Preserving the quality of positron beams while accelerating them in plasma is an unsolved challenge [4; 5; 6; 7; 8; 9]. The question of how best to accelerate a positron beam in plasma can only be resolved by committing significant experimental and computational resources to the task. New types of positron sources will expand access to positron beams which can be used for these experiments. We propose a novel, compact, electrostatic positron source for accelerator physics research. Previous research has explored electron beams from ultra-cold plasmas (UCP) [10; 11] and magneto-optical traps (MOT) [12; 13]. Our concept is the first to examine this possibility for positron beams. The positron source is based on the electrostatic Penning-Malmberg (PM) trap, commonly employed in low-energy antimatter experiments [14]. These traps have the advantage of providing cold, low-emittance beams, although the repetition rate of these devices is too low to be useful for High Energy Physics applications such as Linear Colliders. The trap is combined with a short linac to compress and accelerate the beam such that the final energy and bunch length is suitable for injection in a plasma wake. While positron PWFA experiments are the motivation for this concept, the compact positron source would be of great interest to any facility that desires positron beams for physics studies. ## II Overview of the electrostatic positron beam source In this section, we provide a description of the electrostatic beam source and explain how properties of the electrostatic trap impact beam parameters like bunch length and emittance. The review of electrostatic traps by Danielson, et. al. 
provides a detailed overview of these systems [14]. ### Positron Sources Positrons for electrostatic traps are typically produced by \(\beta\)-decay emitters such as \({}^{22}\)Na. The emitters are sold as small encapsulated sources that can be attached to a vacuum beamline. The primary limitation of the encapsulated source is that they contain a limited amount of radioactive material for safe handling and produce at most \(10^{9}\) positrons per second [15]. An alternative method for generating positrons for the compact source employs a small, 9 MeV electron accelerator and impacts the beam on a high-Z target [16]. This creates low-energy positrons from an electromagnetic shower, but the initial beam energy is low enough as to not activate the target material which reduces shielding requirements. This approach is being pursued by the GBAR experiment at CERN with a goal \(10^{10}\) positrons per second from the target [16]. For both the encapsulated radioactive source and the compact accelerator-based source, the positrons have a large kinetic energy relative to the depth of the electrostatic trap and a large energy spread. In order to trap the positrons, the beam must first be sent through a moderator which slows the positrons. A commonly employed moderator is solid neon with an efficiency of \(10^{-2}\)[17]. Therefore, the flux of slow positrons into the trap is about \(10^{7}\) positrons per second for an encapsulated radioactive source and \(10^{8}\) positrons per second for the accelerator-based source. ### The Electrostatic Trap The positrons enter an electrostatic trap consisting of a series of ring electrodes surrounded by a solenoid magnet. The ring electrodes create the axial potential well that traps the positrons longitudinally, while the solenoid provides radial confinement. The depth of the well needs to be greater than the space charge potential of the positrons in the trap, given by \[\Delta\phi=\frac{enr_{p}^{2}}{4\varepsilon_{0}}\left[1+2\ln\left(\frac{r_{w}}{r _{p}}\right)\right], \tag{1}\] for positron density \(n\), plasma radius \(r_{p}\), and trap radius \(r_{w}\)[14]. The properties of the beam inside the electrostatic trap are defined by the trap's parameters. In particular, the radial extent of the positrons in the trap, and therefore the density of the positrons in the trap are defined by the magnetic field and the rotation rate of the positron plasma. The rotation rate is a free parameter which can be imposed upon the positron plasma through a "rotating wall" electrode [18]. In this scenario, the positron plasma is a uniform cylinder of charge extending to radius \(r_{p}\) with the density given by \[n=\frac{2\varepsilon_{0}B\omega_{r}}{e}, \tag{2}\] where \(B\) is the solenoid field and \(\omega_{r}\) is the rotation rate of the positron plasma. For our calculations and simulations, we selected for the desired output beam parameters and designed a hypothetical trap around those values, consistent with parameters achieved by traps utilized in existing experiments. The trap parameters and beam parameters used in the simulation are shown in Table 1. We note that the parameters we chose for our simulation are conservative. For example, we assume a solenoid field of 1 T whereas the GBAR experiment employs a 5 T magnet [19], and a trap temperature of 273 K whereas GBAR's cryo-cooled trap can produce positron plasmas as cold as 10 K via cyclotron radiation cooling. 
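As a quick consistency check of Eqs. (1) and (2), the short script below reproduces the rotating-wall density, the implied total positron number, and the space-charge potential from the Table 1 inputs; small differences from the tabulated values stem from rounding of those inputs.

```python
# Evaluate Eqs. (1) and (2) with the Table 1 trap parameters.
import numpy as np
from scipy.constants import e, epsilon_0

B, omega_r = 1.0, 3.2e6                      # solenoid field [T], rotation frequency [rad/s]
r_p, r_w, L_p = 1.3e-3, 4e-2, 5e-2           # plasma radius, trap radius, plasma length [m]

n = 2 * epsilon_0 * B * omega_r / e                                    # Eq. (2)
N = n * np.pi * r_p**2 * L_p                                           # positrons in a uniform cylinder
dphi = e * n * r_p**2 / (4 * epsilon_0) * (1 + 2 * np.log(r_w / r_p))  # Eq. (1)

print(f"n = {n:.2e} m^-3,  N = {N:.1e},  space-charge potential = {dphi:.1f} V")
```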
The trap temperature in our simulation is achieved using room-temperature nitrogen buffer gas for cooling [20]. The externally imposed \(\omega_{r}\) is roughly the same as GBAR's at around 3 MHz. The only constraint on \(\omega_{r}\) is that it is much less than the cyclotron frequency \(\Omega_{c}\). Since the Debye length of the positron plasma is much smaller than the plasma radius, the positron plasma is well-approximated as a uniform density cylinder with radius \(r_{p}\) and length \(l_{p}\)[21]. ## III Analytic equation for trap emittance Starting from the standard equation for normalized emittance \[\epsilon_{n}=\frac{1}{mc}\sqrt{\langle x^{2}\rangle\langle p_{x}^{2}\rangle- \langle xp_{x}\rangle^{2}} \tag{3}\] we derive an analytic expression for the transverse emittance in a single plane (here we consider the \(x\)-plane) of a positron beam at rest in the electrostatic trap. The only coherent motion of the positron plasma is the rotation about the axis, but since the thermal velocity is much \begin{table} \begin{tabular}{l l r} \hline Parameter & Symbol & Value \\ \hline Trap radius & \(r_{w}\) & 4 cm \\ Trap length & \(l_{w}\) & 10 cm \\ Magnetic field & \(B\) & 1 T \\ \(e^{+}\) plasma radius & \(r_{p}\) & 1.3 mm \\ \(e^{+}\) plasma length & \(r_{l}\) & 5 cm \\ Temperature & \(T\) & 273 K \\ Number of positrons & \(N\) & \(10^{8}\) \\ Space charge potential & \(\Delta\phi\) & 22.4 V \\ Debye length & \(\lambda_{D}\) & 60.6 \(\mu\)m \\ Cyclotron frequency & \(\Omega_{c}\) & 175.6 GHz \\ Rotation frequency & \(\omega_{r}\) & 3.2 MHz \\ Transverse emittance & \(\varepsilon_{x,y}\) & 0.11 \(\mu\)m-rad \\ \end{tabular} \end{table} Table 1: Parameters used to define the initial plasma distribution inside the trap. Figure 1: Depiction of the beamline used in the simulation. The end of the trap is denoted by A, the ends of the electrostatic accelerator are denoted by B and C, and the ends of the 3 GHz linac are denoted by D and E. greater than the rotational velocity \(v_{th}>>\omega_{r}r_{p}\), we can safely ignore \(x-p_{x}\) correlations. This assumption holds down to positron beam temperatures of a few Kelvin for the trap parameters considered here. The single-plane transverse emittance reduces to \(\epsilon_{x}=\sigma_{x}\sigma_{px}/mc\) and it remains to calculate \(\sigma_{px}\) and \(\sigma_{x}\). The momentum spread is purely thermal \[\sigma_{px}=\sqrt{mk_{B}T}, \tag{4}\] while \(\sigma_{x}\) is derived from the uniform positron density extending out to the edge of the plasma cylinder \(r_{p}\) \[\sigma_{x}^{2}=\frac{\langle x^{2}n(r)\rangle}{\langle n(r)\rangle}=\frac{r_{p }^{2}}{4}, \tag{5}\] with \(n(r)=n\), the constant beam density, cancelling out of the equation. Utilizing Equation 2 and the finite plasma length \(L_{p}\), we can rewrite \(r_{p}\) purely in terms of trap parameters \[r_{p}=\sqrt{\frac{qN}{2\pi\omega_{r}\epsilon_{0}BL_{p}}}, \tag{6}\] which gives \[\sigma_{x}^{2}=\frac{qN}{8\pi\epsilon_{0}B\omega_{r}L_{p}}. \tag{7}\] Combining equations 7, 4, and 3, we derive an equation for the normalized, thermal beam emittance defined solely in terms of trap parameters and bunch charge \[\epsilon_{th}=\frac{1}{mc}\sqrt{\frac{qNmk_{B}T}{8\pi\epsilon_{0}B\omega_{r}L _{p}}}. \tag{8}\] For the parameters in our simulation, we find a single-plane thermal emittance of \(0.11\)\(\mu\)m-rad, which is comparable to or better than the performance of state-of-the-art photocathode guns. 
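Equation (8) is straightforward to evaluate numerically; the snippet below does so with the Table 1 parameters and returns a value of order 0.1 \(\mu\)m-rad, with the small difference from the quoted 0.11 \(\mu\)m-rad presumably due to rounding of the tabulated inputs.

```python
# Evaluate the thermal emittance of Eq. (8) from the Table 1 trap parameters.
import numpy as np
from scipy.constants import e, m_e, k, epsilon_0, c

N, T = 1e8, 273.0                      # positron number, temperature [K]
B, omega_r, L_p = 1.0, 3.2e6, 5e-2     # field [T], rotation frequency [rad/s], plasma length [m]

eps_th = np.sqrt(e * N * m_e * k * T / (8 * np.pi * epsilon_0 * B * omega_r * L_p)) / (m_e * c)
print(f"thermal emittance ~ {eps_th*1e6:.2f} um-rad")
```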
The single-plane, thermal beam emittance results are encouraging, but do not describe the full dynamics of the beam in the trap. The positron beam is cooled in a strong magnetic field which generates correlations in the beam phase space that create angular momentum-dominated beams. Following the formalism in Ref [22], we define the transverse beam \(\mathbf{\Sigma}\) matrix as \[\mathbf{\Sigma}=\begin{bmatrix}\langle X\tilde{X}\rangle&\langle X\tilde{Y} \rangle\\ \langle Y\tilde{X}\rangle&\langle Y\tilde{Y}\rangle\end{bmatrix}, \tag{9}\] with \[\langle X\tilde{X}\rangle=\begin{bmatrix}\langle x^{2}\rangle&\langle xp_{x} \rangle\\ \langle xp_{x}\rangle&\langle p_{x}^{2}\rangle\end{bmatrix}, \tag{10}\] and \[\langle X\tilde{Y}\rangle=\begin{bmatrix}\langle xy\rangle&\langle xp_{y} \rangle\\ \langle yp_{x}\rangle&\langle p_{x}p_{y}\rangle\end{bmatrix}. \tag{11}\] The transverse emittance \(\varepsilon_{4D}\) describes all four dimensions of the transverse phase space and is given by \[\varepsilon_{4D}=\det(\mathbf{\Sigma})=\varepsilon_{eff}^{2}-\mathcal{L}^{2}, \tag{12}\] where \(\varepsilon_{eff}\) is the effective emittance in one plane and angular momentum \(\mathcal{L}=\frac{1}{2mc}\langle xp_{y}-yp_{x}\rangle\). The thermal emittance is related to the full transverse emittance by \(\varepsilon_{th}=\sqrt{\varepsilon_{4D}}\), and the effective single-plane emittance is \[\varepsilon_{eff}=\sqrt{\varepsilon_{th}^{2}+\mathcal{L}^{2}}. \tag{13}\] Figure 2: Longitudinal phasespace at demarcated positions along the beamline. The initial distribution corresponds to the beam inside the trap. Positions A through E correspond to the start and end of accelerator components described in Figure 1. The effective single-plane emittance will be dominated by angular momentum when \(\mathcal{L}\gg\varepsilon_{th}\). Intuitively, this means that although the volume of the beam in phase space \(\varepsilon_{th}\) is small, there are no projections of the beam phase space into the \(x-y\) plane such that \(\varepsilon_{x}=\varepsilon_{th}\) and \(\varepsilon_{y}=\varepsilon_{th}\). However, it is possible to manipulate the beam to minimize either \(\varepsilon_{x}\) or \(\varepsilon_{y}\) and produce a flat beam [22]. The amplitude of the angular momentum \(\mathcal{L}\) is given by \[\mathcal{L}=\frac{eB\sigma_{r}^{2}}{2mc}. \tag{14}\] For our parameters of \(B=1\) T and \(\sigma_{r}=0.65\) mm, we find \(\mathcal{L}\approx 250\)\(\mu\)m-rad. This is over 3 orders of magnitude greater than the thermal emittance, implying that this is indeed an angular-momentum dominated beam with \(\mathcal{L}\gg\varepsilon_{th}\). Such beams may be useful for tests of Linear Collider transport systems which employ flat beams from damping rings. ## IV Beamline design and simulation Figure 1 illustrates the beamline used to longitudinally compress and accelerate the beam. The entire beamline is encapsulated by a 1 T solenoid. The simulations of the beamline were performed with the General Particle Tracer (GPT) code [23]. The beam begins in the electrostatic trap with zero longitudinal energy. The initial bunch distribution is a uniform cylinder [14], and the longitudinal extent of the beam is defined by the position of the trap electrodes. The beam in the trap has a bunch length \(\sigma_{z}=14.4\) mm (50 mm uniform distribution). The bunch length is long compared to millimeter-scale bunches produced by photocathodes, and much longer than the micron-scale bunches required for PWFA experiments. 
Therefore, the beam must be longitudinally compressed as it is accelerated. Figure 2 shows the evolution of the longitudinal phase space along the beamline. Initial compression and acceleration of the long positron bunch is accomplished with a low-field electrostatic buncher inside the trap. A harmonic bunching potential is applied by ring electrodes, such that they provide an accelerating field that decreases linearly along the bunch from the tail to the head [24]. The bunching potential is 10 cm long and the bunch initially occupies the central portion of the potential (2.5 cm to 7.5 cm). The voltage drop across the buncher is 2 kV. Figure 3 shows the longitudinal field \(E_{z}\) as a function of position in the accelerator. The buncher creates a longitudinal focus 7 cm beyond the end of the trap at a longitudinal position of 17 cm in the simulation, immediately after position B in Fig. 3. A pulsed, 100 kV electrostatic accelerator extends from 16.4 cm to 26.4 cm (positions B to C). The high voltage pulse is provided by a nanosecond pulse generator. The accelerating pulse is timed with the beam such that the field is applied when the beam is between the two accelerating plates. The beam experiences a uniform accelerating field, but positrons at the back of the bunch experience the field for a longer period of time and gain energy relative to particles at the head of the bunch. The beam exits the electrostatic accelerator traveling roughly half the speed of light and undergoes velocity bunching as it travels toward the rf cavity. The second longitudinal focus is at \(z=0.50\) m with \(\sigma_{z}=1.3\) mm (position D). At this point, the bunch is short enough for injection into the RF cavity. The entrance to the s-band accelerator structure is located at \(z=0.50\) (position D). The capture phase of the s-band structure is set to both accelerate and longitudinally compress the beam to the final bunch length \(\sigma_{z}=190\)\(\mu\)m and energy of 17.6 MeV. Figure 4 shows the bunch length and emittance along the accelerator. There is an abrupt increase in the emittance from 0.11 \(\mu\)m-rad to 0.60 \(\mu\)m-rad at the start of the s-band cavity due to defocusing rf fields. Further studies will examine the possibility of tailoring the solenoidal magnet field Figure 3: Plot of the longitudinal field along the length of the beamline. The trap extends from \(z=0\) cm to \(z=10\) cm (Position A), the electrostatic accelerator extends from \(z=16.4\) cm (position B) to \(z=26.4\) cm (Position C), and the 3 GHz linac extends from \(z=50\) cm (Position D) to \(z=1.547\) cm (Position E). Figure 4: Bunch length and emittance along the beamline. The trap extends from \(z=0\) cm to \(z=10\) cm (Position A), the electrostatic accelerator extends from \(z=16.4\) cm (position B) to \(z=26.4\) cm (Position C), and the 3 GHz linac extends from \(z=50\) cm (Position D) to \(z=1.547\) cm (Position E). to suppress emittance growth at this location. Table 2 shows the output beam parameters. These parameters are comparable to those achieved by the AWAKE electron accelerator [25] for injection in a proton beam-driven plasma wakefield. ## V Conclusions and Future Work The electrostatic trap and beamline described here is capable of producing useful positron beams in a compact footprint. Such a device will enable access to positron beams for accelerator physics studies at universities and national laboratories that currently lack infrastructure for positron beam generation. 
Although the repetition rate of this positron source is too low for High Energy Physics applications, it is sufficient for studies at PWFA facilities, including the AWAKE facility which produces an experimental shot once every thirty seconds [26]. Further studies will be undertaken to explore tailored solenoidal magnetic fields that suppress emittance growth at the start of the rf cavity. We also plan to study remoderation of the positron beam to remove intrinsic angular momentum at the cost of reduced bunch charge [27]. The brighter positron beams produced by remoderation may prove useful as a compliment to Ultrafast Electron Diffraction (UED) experiments [28] where the positive beam charge can be used to reduce systematics when used in tandem with electron beams. The ultimate application of this technology would be a positron source for a damping ring-free collider [29]. This would require multiplexing of the compact positron source. Multiplexing of positron sources has been previously considered to meet the demands of the NLC collider concept [30]. However, given the repetition rate of existing compact positron sources, this would require thousands of sources operating simultaneously, so research in this direction should focus on increasing the repetition rate of a single source. ## VI Acknowledgements Many individuals helped to provide background on positron sources for this project. We thank Dirk Peter Van Der Werf, Samuel Niang, and Laszlo Liszkay for showing us the GBAR experiment at CERN. Thank you to David Cooke, David Cassidy, Allen Mills, and Cliff Surko for background on positrons from electrostatic traps. Thank you to Pietro Musumeci for background on UED systems. Klaus Floettman and Bas van der Geer provided input on simulations in ASTRA and GPT, respectively. Thank you to the AWAKE electron source group Seongyeol Kim, Mohsen Dayyani Kelisani, Steffen Doebert, and Edda Gschwendtner from CERN for their useful discussions and support.
2302.11369
Direct Optimization of Fast-Ion Confinement in Stellarators
Confining energetic ions such as alpha particles is a prime concern in the design of stellarators. However, directly measuring alpha confinement through numerical simulation of guiding-center trajectories has been considered to be too computationally expensive and noisy to include in the design loop, and instead has been most often used only as a tool to assess stellarator designs post hoc. In its place, proxy metrics, simplified measures of confinement, have often been used to design configurations because they are computationally more tractable and have been shown to be effective. Despite the success of proxies, it is unclear what is being sacrificed by using them to design the device rather than relying on direct trajectory calculations. In this study, we optimize stellarator designs for improved alpha particle confinement without the use of proxy metrics. In particular, we numerically optimize an objective function that measures alpha particle losses by simulating alpha particle trajectories. While this method is computationally expensive, we find that it can be used successfully to generate configurations with low losses.
David Bindel, Matt Landreman, Misha Padidar
2023-02-22T13:39:33Z
http://arxiv.org/abs/2302.11369v1
# Direct Optimization of Fast-Ion Confinement in Stellarators ###### Abstract Confining energetic ions such as alpha particles is a prime concern in the design of stellarators. However, directly measuring alpha confinement through numerical simulation of guiding-center trajectories has been considered to be too computationally expensive and noisy to include in the design loop, and instead has been most often used only as a tool to assess stellarator designs post hoc. In its place, proxy metrics, simplified measures of confinement, have often been used to design configurations because they are computationally more tractable and have been shown to be effective. Despite the success of proxies, it is unclear what is being sacrificed by using them to design the device rather than relying on direct trajectory calculations. In this study, we optimize stellarator designs for improved alpha particle confinement without the use of proxy metrics. In particular, we numerically optimize an objective function that measures alpha particle losses by simulating alpha particle trajectories. While this method is computationally expensive, we find that it can be used successfully to generate configurations with low losses. ## 1 Introduction Alpha particles are born in stellarators as a product of the fusion reaction. Born with 3.5 MeV, alpha particles carry a substantial amount of energy which, if confined, will heat the plasma and sustain the reaction. On the other hand, poor confinement of the alphas can have destructive effects on the plasma-facing components, and detract from plasma self-heating. Hence, confinement of fast ions is, and has been, a focal point in stellarator design [29, 23, 7, 30, 37]. Stellarator design is generally split into two stages. In the first stage the plasma shape is optimized such that the magnetohydrodynamic (MHD) equilibrium meets specified performance criteria, such as particle confinement, stability, and/or reduced turbulence. The second stage is then devoted to finding electromagnetic coil shapes and currents which generate the desired magnetic field. Due to the computational expense of simulating particle trajectories for long times, typically stage-one configurations are designed using proxy metrics for confinement, such as quasisymmetry (QS) [10, 32, 51], \(\Gamma_{c}\)[42, 5, 37, 7] and epsilon effective, \(\epsilon_{eff}\)[41]. Recently, numerical optimization of a QS metric has been particularly successful in improving particle confinement in stellarators, leading to configurations with less than 1% fast-ion losses [32, 55, 30]. Despite the success of QS and other proxies, it is unclear what is being sacrificed by using proxies to design the device rather than relying on exact calculations. For example, since QS is a sufficient condition for confinement, rather than a necessary condition, it may be overly stringent. Similarly, proxies in general only approximate the true goal of improving particle confinement, and do not capture the goal holistically or exactly. In this study we opt for a direct approach to achieve fast-ion confinement: we optimize stellarator designs by simulating fast-ion trajectories and minimizing the empirical loss of energy. 
Our model takes the form \[\begin{split}\underset{\mathbf{w}\in\mathbb{R}^{n\mathbf{w}}}{ \text{minimize}}&\mathcal{J}(\mathbf{w}):=\mathbb{E}_{\mathbf{x},v _{\parallel}}[\mathcal{J}_{\text{energy}}(\mathbf{x},v_{\parallel},\mathbf{ w})]\\ & B_{-}^{*}\leq B(\mathbf{x},\mathbf{w})\leq B_{+}^{*}\quad \forall\mathbf{x}\in\mathcal{P}\end{split} \tag{1}\] The objective \(\mathcal{J}\) measures the expected value of the energy lost, \(\mathcal{J}_{\text{energy}}\), due to alpha particles born with a random initial position, \(\mathbf{x}\), and parallel velocity, \(v_{\parallel}\), drifting through the last closed flux surface of the plasma. The decision variables \(\mathbf{w}\in\mathbb{R}^{n_{\mathbf{w}}}\) are Fourier coefficients representing the shape of the plasma boundary. Motivated by physical and engineering requirements, the infinite dimensional nonlinear bound constraints restrict the strength of the magnetic field \(B\) to an interval \([B_{-}^{*},B_{+}^{*}]\) at each point throughout the plasma volume \(\mathbf{x}\in\mathcal{P}\). By varying the shape of the plasma boundary we seek MHD equilibria that minimize the loss of alpha particle energy. The expected energy lost, \(\mathcal{J}(\mathbf{w})\), is computed empirically from Monte Carlo simulation of collision-less guiding center trajectories by use of an approximation for the alpha particle energy in terms of confinement time. Due to the lack of analytical derivatives, we solve eq.1 using derivative-free optimization methods. In this document, we discuss practical challenges such as the noisy objective computation, high computational cost, and choice of derivative-free optimization algorithm. Our numerical results show that the approach is indeed effective at finding desirable configurations, and that the configurations we find are visibly not quasi-symmetric. To the author's knowledge, only two stellarators have been designed by simulating alpha particle losses within the design loop: the ARIES-CS stellarator [29] and a design by Gori et. al. [20]. In the design of ARIES-CS, the average confinement time of \(\sim 500\) particles was included as a term in an optimization objective. The initial particle locations were held fixed during the optimization, leading to an "effective and robust" technique. Similarly, Gori et al. included the average confinement time of reflected particles in their optimization objective. To mitigate the high computational cost and time required to simulate particle trajectories both studies limited the particle simulation to a fixed number of toroidal transits. Despite the empirical success of these designs, there is not a clear description of the methods used and challenges faced. As part of our work we bring light to this approach. The paper is structured as follows. In Section2 we discuss the life cycle of alpha particles and the relevant physics to our numerical simulations. Section3 describes the computational workflow for modeling and evaluating candidate stage-one designs. In Section4 we mathematically formulate our design problem as an optimization problem. Section5 compares methods of computing the objective function via the simulation of alpha particle trajectories. Numerical results are presented in Section6, prior to a brief discussion of future research directions in Section7. 
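Before turning to the physical model, the following Python-style sketch illustrates the structure of a single evaluation of the objective in eq. (1): solve an equilibrium for the boundary \(\mathbf{w}\), trace a sample of alpha particles, and average the energy-loss measure. The three callables are hypothetical placeholders for the equilibrium solver, birth-distribution sampler, and guiding-center tracer described in the following sections; this is a schematic of the workflow, not the authors' actual implementation.

```python
import numpy as np

def objective(w, solve_equilibrium, sample_birth, trace_particle,
              n_particles=3500, t_max=1e-2, seed=0):
    """Monte Carlo estimate of the expected alpha energy loss J(w).

    solve_equilibrium(w)                   -> field object for boundary coefficients w
    sample_birth(field, rng)               -> initial position x and parallel velocity v_par
    trace_particle(field, x, v_par, t_max) -> confinement time T, capped at t_max
    All three are assumed, user-supplied callables.
    """
    rng = np.random.default_rng(seed)
    field = solve_equilibrium(w)
    total = 0.0
    for _ in range(n_particles):
        x, v_par = sample_birth(field, rng)
        T = trace_particle(field, x, v_par, t_max)
        total += 3.5 * np.exp(-2.0 * T / t_max)   # per-particle energy-loss measure (MeV)
    return total / n_particles
```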
## 2 Physical Model We consider toroidal plasma configurations that are static MHD equilibria, satisfying \(\mu_{0}^{-1}(\nabla\times\mathbf{B})\times\mathbf{B}=\nabla p\), where \(p\) is the pressure and \(\mathbf{B}\in\mathbb{R}^{3}\) is the magnetic field. It is assumed that nested toroidal flux surfaces exist. For the numerical experiments in this work, we adopt the low \(\beta\) (plasma pressure divided by magnetic pressure) limit of \(p\approx 0\) and \(\nabla\times\mathbf{B}\approx 0\) for simplicity, but the methods here are fully applicable to MHD equilibria with substantial pressure and current. A convenient coordinate system for MHD equilibria is Boozer coordinates \(\mathbf{x}=(s,\theta,\zeta)\), where \(s\) is the toroidal flux normalized to be \(1\) at the plasma boundary, and \(\theta\) and \(\zeta\) are poloidal and toroidal angles. The domain of the coordinates is \((s,\theta,\zeta)\in\mathcal{P}:=[0,1]\times[0,2\pi)\times[0,2\pi/n_{\text{fp}})\) for a stellarator with \(n_{\text{fp}}\) field periods. Motion of alpha particles in the equilibrium is modeled using the collisionless guiding center equations. For the case considered here of low \(\beta\), these equations are \[\begin{split}\frac{d\mathbf{R}}{dt}&=v_{\parallel} \mathbf{b}+\frac{m}{qB^{3}}\left(v_{\parallel}^{2}+\frac{v_{\perp}^{2}}{2}\right)\mathbf{B}\times\nabla B,\\ \frac{dv_{\parallel}}{dt}&=-\frac{v_{\perp}^{2}}{2B}\mathbf{b}\cdot\nabla B.\end{split} \tag{2}\] Here, \(\mathbf{R}\) is the guiding center location, \(t\) is time, \(m\) is the particle's mass, \(q\) is the particle's charge, \(B=|\mathbf{B}|\) is the field strength, \(\mathbf{b}=\mathbf{B}/B\), and \(v_{\parallel}\) and \(v_{\perp}\) are the components of velocity parallel and perpendicular to \(\mathbf{B}\). The magnetic moment \(\mu=v_{\perp}^{2}/(2B)\) is conserved, as is the speed \(v=\sqrt{v_{\parallel}^{2}+v_{\perp}^{2}}\). Trapped particles, which have sufficiently small \(|v_{\parallel}/v_{\perp}|\), experience reversals in the sign of \(v_{\parallel}\). Particles that do not experience \(v_{\parallel}\) sign reversals are called "passing". Alpha particles are born isotropically with an energy of 3.5 MeV. We consider two models for the initial spatial distribution. The first model is based on the local fusion reaction rate, resulting in alpha particle birth throughout the plasma volume. The second model distributes alpha particles across a single specified flux surface. Either way, after birth, alpha particle guiding centers are followed for a specified amount of time, or until they exit the plasma boundary surface, at which time they are considered lost. The birth distribution of alpha particles is derived in a standard manner [14, 5, 30, 37], as follows. For calculations in which alpha particles are born throughout the volume, the spatial birth distribution is proportional to the local reaction rate [50], \(f(s,\theta,\zeta)\propto n_{D}n_{T}(\overline{\sigma v})_{DT}\). Here, \(D\) and \(T\) subscripts indicate deuterium and tritium, \(n_{D}\) and \(n_{T}\) are the species densities, which we assume to be equal, and \((\overline{\sigma v})_{DT}\) is the Maxwellian-averaged fusion cross-section, computed in [50] by \[(\overline{\sigma v})_{DT}(s)=3.68\times 10^{-18}T_{i}^{-2/3}(s)\exp(-19.94T_{i}^{-1/3}(s))\ \mathrm{m^{3}\,sec^{-1}}, \tag{3}\] where \(T_{i}\) is the ion temperature in keV. 
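To make eq. (2) concrete, here is a minimal right-hand side for the vacuum guiding-center system that could be handed to a standard ODE integrator such as `scipy.integrate.solve_ivp` with the RK45 method. The callables `B_vec` and `grad_modB`, returning the field vector and the gradient of \(|\mathbf{B}|\) at a point, are assumed to be supplied by an interpolated field model; the sketch is written in Cartesian-like coordinates for readability rather than the Boozer-coordinate form actually used for tracing.

```python
import numpy as np

def guiding_center_rhs(t, y, B_vec, grad_modB, mu, q, m):
    """Vacuum guiding-center equations, eq. (2).

    y = (R_x, R_y, R_z, v_par); mu = v_perp^2 / (2B) is the conserved
    magnetic moment. B_vec and grad_modB are assumed callables.
    """
    R, v_par = y[:3], y[3]
    Bv = B_vec(R)
    B = np.linalg.norm(Bv)
    b = Bv / B
    gradB = grad_modB(R)
    v_perp2 = 2.0 * mu * B
    drift = (m / (q * B**3)) * (v_par**2 + 0.5 * v_perp2) * np.cross(Bv, gradB)
    dR_dt = v_par * b + drift
    dvpar_dt = -(0.5 * v_perp2 / B) * np.dot(b, gradB)
    return np.concatenate([dR_dt, [dvpar_dt]])
```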
Within the numerical experiments, we assume the following density and temperature profiles: \[n_{D}(s) =n_{T}(s)=2^{20}(1-s^{5})\ \mathrm{m^{-3}}, \tag{4}\] \[T_{i}(s) =12(1-s)\ \mathrm{keV}. \tag{5}\] These density and temperature profiles reflect plausible reactor parameters [29, 3], and the fact that temperature profiles in experiments are typically more peaked than density profiles. In this study the temperature and density profiles are held fixed in order to focus on the optimization of particle trajectories. The radial birth distribution of particles is thus proportional to \[f_{s}(s)\propto(1-s^{5})^{2}(1-s)^{-2/3}\exp(-19.94(12(1-s))^{-1/3}). \tag{6}\] as depicted in Figure 1 (left). Alternatively, to only consider particles born on a single flux surface, the localized initial radial distribution can be expressed as \(f_{s}(s)=\delta(s-s_{0})\), where \(s_{0}=0.25\) is used in numerical experiments. For either initial radial distribution, particles are initialized uniformly over flux surfaces. This uniformity is expressed by a determinant of the Jacobian from Boozer to Cartesian coordinates \(\sqrt{g}\), \[f_{\theta,\zeta}(\theta,\zeta\,|\,s)\propto|\sqrt{g}|. \tag{7}\] Figure 1 (right) shows \(f_{\theta,\zeta}\) for configuration \(\mathbf{A}\), which will be discussed in Section 6. Lastly, the isotropic velocity birth distribution corresponds to a uniform distribution of \(v_{\parallel}\) over \([-v_{\max},v_{\max}]\), where \(v_{\max}=\sqrt{2E/m}\) and \(E=3.5\) MeV. Defining the associated distribution \[f_{v_{\parallel}}(v_{\parallel})=\frac{1}{2v_{\max}}, \tag{8}\] the total birth distribution is \[f(s,\theta,\zeta,v_{\parallel})=f_{s}(s)f_{\theta,\zeta}(\theta,\zeta\,|\,s)f_{v _{\parallel}}(v_{\parallel}). \tag{9}\] Several mechanisms exist by which trapped particles are lost [18, 15, 19, 9, 46]. "Ripple trapped" particles, those trapped in a single field period or in coil ripple, typically experience a nonzero average radial magnetic drift and so are quickly lost. Other trapped particle trajectories may resemble the banana orbits of a tokamak, but with radial diffusion due to imperfect symmetry. Particles that transition between these two types of trapped states make additional radial excursions. Particles with wide banana orbits may also be directly lost. Generally, passing particles are not lost unless they are born very close to the plasma boundary. ## 3 Modeling and optimization software To evaluate candidate stage-one stellarator designs we rely on the SIMSOPT code [31]. SIMSOPT is a framework for stellarator modeling and optimization which interfaces with MHD equilibrium solvers such as VMEC[24] and SPEC[25], and houses infrastructure for defining magnetic fields, computing coordinate transformations, tracing particles, and computing properties of fields and equilibria. Certain rate-limiting computations in SIMSOPT, such as evaluating magnetic fields, are executed in C++. For ease of use, however, Python bindings are used through the PyBind11 library, allowing users to interface with SIMSOPT solely through the Python interface. In order to design stage-one configurations we first find an ideal MHD equilibrium by evaluating VMEC with a prescribed plasma boundary shape, current profile, and pressure profile. Subsequently, the magnetic field is transformed to Boozer coordinates which is used within the guiding center equations when tracing particles. Figure 1: (left) Radial probability density \(f_{s}(s)\) derived from the fusion reaction rate. 
(right) Density over Boozer coordinates \(\theta\) and \(\zeta\), \(f_{\theta,\zeta}\) for configuration \(\mathbf{A}\) which will be discussed in Section 6. The plasma boundary is paramterized as a Fourier series in the poloidal and toroidal angles \(\theta\) and \(\phi\), \[\begin{split} R(\theta,\phi)&=\sum_{n=0}^{n_{\text{ mode}}}R_{0,n}\cos(-n_{\text{fp}}n\phi)+\sum_{m=1}^{n_{\text{mode}}}\sum_{n=-n_{\text{ mode}}}^{n_{\text{mode}}}R_{m,n}\cos(m\theta-n_{\text{fp}}n\phi),\\ Z(\theta,\phi)&=\sum_{n=1}^{n_{\text{mode}}}Z_{0,n} \sin(-n_{\text{fp}}n\phi)+\sum_{m=1}^{n_{\text{mode}}}\sum_{n=-n_{\text{mode}}}^ {n_{\text{mode}}}Z_{m,n}\sin(m\theta-n_{\text{fp}}n\phi),\end{split} \tag{10}\] where \(n_{\text{mode}}=\{0,1,2,\ldots\}\) can be increased to achieve more complicated boundary representations. Field period symmetry with \(n_{\text{fp}}\) periods and stellarator symmetry have been assumed. Upon computing the equilibrium, a Boozer-coordinate representation of the magnetic field is computed using the BoozXform code, via SIMSOPT. Working in Boozer coordinates reduces the number of interpolations required to integrate the guiding center equations. Initial particle positions and parallel velocities can then be generated, and particles traced using the vacuum guiding center equations in Boozer coordinates up to a terminal time \(t_{\text{max}}\) or until stopping criteria are satisfied. The guiding center equations are solved using the adaptive Runge-Kutta scheme RK45. VMEC and the particle tracing codes allow parallelism through MPI. While VMEC can be run efficiently on a single core, particle tracing is embarrassingly parallel and benefits from the use of numerous cores. Even with dozens of MPI processes, particle tracing can take anywhere from seconds to minutes of wall-clock time to complete. In addition, there are substantial costs, typically around \(\sim 20\) seconds, associated with running VMEC, computing the Boozer transform, and interpolating the fields required for tracing. Timing results for simulating particle orbits are shown in Figure 2. In total, computing an equilibrium and tracing enough particles Figure 2: Wall-clock time required to trace a single particle until a terminal time \(t_{\text{max}}\) using a single processor on a computing cluster. Timing results were averaged over 2000 particles randomly generated throughout a four field period configuration, all of which were confined to their terminal time \(t_{\text{max}}\). The total time of an objective evaluation also includes the fixed time of evaluating VMEC, computing the Boozer transformation, and building interpolants of the \(\mathbf{B}\)-field, which took 19.07 seconds for this configuration. to evaluate the objective function, defined in Section 4.3, often takes between 30sec and 130sec of wall-clock-time, depending on the terminal trace time, the configuration, and the number of particles. The optimization process, which can be run on a single node, or multiple nodes on a computing cluster, is time consuming, often running for one to two days. For example, solving an optimization problem would consume 26 hours of wall-clock-time when using 48 MPI processes and a computational budget of 1000 function evaluations which each require tracing 3500 particles to 10ms. This poses a serious challenge in performing the optimization. In Section 7 we discuss future work that could reduce this burden. 
## 4 Optimization model formulation We now outline a mathematical optimization problem that seeks stellarator configurations with good confinement of fast-ions. By varying the shape of the plasma boundary we minimize the energy lost due to alpha particles exiting the last closed flux surface. In the following, we describe the salient characteristics of the problem: the representation of decision variables, nonlinear constraints on the magnetic field strength, and an objective that quantifies the confinement of alpha particle energy. ### Decision variables The independent decision variables for optimization are the Fourier coefficients \(R_{m,n},Z_{m,n}\) which define the shape of plasma boundary in VMEC via eq.10. The number of modes used in the boundary description is controlled by the parameter \(n_{\text{mode}}\in\{0,1,2,\ldots\}\). Increasing \(n_{\text{mode}}\) increases the complexity of the boundary shape allowing for potential improvements in confinement, while setting \(n_{\text{mode}}=0\) only allows the major radius \(R_{0,0}\) to vary. The total number of decision variables satisfies \(n_{\mathbf{w}}=4n_{\text{mode}}^{2}+4n_{\text{mode}}\). The major radius of a design is central to particle trajectories simulation, since the Larmour radius and guiding center drifts scale with the square of the ratio of major radius to aspect ratio, \(\propto(R_{0,0}/A)^{2}\). Standardization of the device size is thus necessary in order to have realistic particle losses, and to prevent the optimization from shrinking the aspect ratio arbitrarily. In confinement studies, device size is typically standardized by constraining the minor radius or the plasma volume. We opt to constrain the minor radius _implicitly_ to \(a\approx a^{*}:=1.7\)m (the minor radius of ARIES-CS), by fixing the major radius, fixing the toroidal flux, and constraining the field strength. In particular, we fix the major radius based on the target aspect ratio \(A^{*}:=7\), \[R_{0,0}=a^{*}A^{*}. \tag{11}\] In Section 4.2, the toroidal flux and mean field strength will be selected to encourage the design to have an aspect ratio near \(A^{*}\). If the design achieves the aspect ratio of \(A^{*}\), it would also have an average minor radius of \(a^{*}\). Otherwise, the minor radius will only be near \(a^{*}\). The decision variables are collected into the vector \(\mathbf{w}\in\mathbb{R}^{n_{\mathbf{w}}}\) via \(\mathbf{w}=(R_{0,1},\ldots,Z_{0,0},\ldots)\). ### Nonlinear constraints Engineering limitations on electromagnetic coils and the associated support structure place an upper limit on the magnetic field strength. For low-temperature superconductors, the field strength is limited to be no more than 15T in the coil and approximately 5T throughout the plasma volume [29]. To achieve reactor relevant scaling of the magnetic field, we fix the toroidal flux so that if the plasma has an average minor radius of \(a^{*}\), the volume-averaged magnetic field strength is \(B^{*}:=5\)T, \[\Psi_{T}=\pi(a^{*})^{2}B^{*}. \tag{12}\] The value of toroidal flux set in eq.12 is used as an input parameter to the MHD equilibrium calculations, and does not need to be treated as a constraint in the optimization. When paired with the major radius constraint, eq.11, the toroidal flux constraint, eq.12, to zeroth order fixes the ratio of the the squared aspect ratio to volume-averaged magnetic field strength, i.e. \(A^{2}/B\approx(A^{*})^{2}/B^{*}\). 
Thus by placing bound constraints on the field strength we can constrain the range of the aspect ratio. In addition, bound constraints on the field strength are necessary in order to constrain the mirror ratio \(\max_{\mathbf{x}}B(\mathbf{x})/\min_{\mathbf{x}}B(\mathbf{x})\), which we find increases to unphysically large values when left unconstrained in optimization. We globally bound the field strength, \[B^{*}_{-}\leq B(\mathbf{x})\leq B^{*}_{+}\quad\forall\,\mathbf{x}\in\mathcal{P}. \tag{13}\] The upper and lower bounds \(B^{*}_{+}=B^{*}\frac{2r^{*}}{r^{*}+1}\) and \(B^{*}_{-}=B^{*}\frac{2}{r^{*}+1}\) enforce that the mirror ratio is at most \(r^{*}:=1.35\), similar to W7-X and the Compact Helical System (CHS) [8, 44]. The upper bound on the field strength is derived from material properties and tolerances in coil engineering, and the lower bound is motivated by requirements on confinement and transport-based phenomena. The constraints eq. 13 are "soft constraints" by nature, in that a small violation of the constraints is tolerable. To handle the infinite-dimensional constraints, eq. 13, we discretize the domain of the constraint into a uniform \(n_{s}\times n_{\theta}\times n_{\zeta}\) grid. We then apply the magnetic field constraints at each of the \(n_{\text{grid}}=n_{s}n_{\theta}n_{\zeta}\) grid points \(\mathbf{x}_{i}\), \[\begin{split} B^{*}_{-}-B(\mathbf{x}_{i})\leq 0&\quad i=1,\ldots,n_{\text{grid}},\\ B(\mathbf{x}_{i})-B^{*}_{+}\leq 0&\quad i=1,\ldots,n_{\text{grid}},\end{split} \tag{14}\] totaling \(2n_{\text{grid}}\) nonlinear simulation-based constraints. ### Optimization objective Fast-ion optimization has two principal goals: minimizing the thermal energy lost from the system, and dispersing or concentrating the load of fast ions on the plasma-facing components. We focus solely on the first goal of achieving excellent confinement of energy, noting that this also makes progress towards the second goal. The confinement of fast ions is often measured by the loss fraction, the fraction of particles lost within a terminal time \(t_{\text{max}}\). While the loss fraction measures particle confinement, it does not reflect the fact that particles lost quickly, with energy of nearly 3.5 MeV, contribute more to the heat flux on plasma-facing components and detract more from plasma self-heating than particles lost at late times, which have slowed substantially. If collisions were included in the particle tracing calculations, the energy loss fraction could be computed straightforwardly. However, particle tracing is often done without collisions, because it is easier to implement and because efficient algorithms can be applied in the collisionless case [1, 2]. Therefore, here we describe a physically motivated objective function that places greater weight on minimizing prompt losses within collisionless calculations. Fusion-produced alpha particles primarily experience collisions with electrons, during which they deposit most of their energy. This follows from the fact that the slowing-down collision frequency [50] for alpha particles with background electrons is higher than the slowing-down collision frequency with ions as long as the alpha energy exceeds \(\sim 50T_{e}\) [22, page 40]. If reactor temperatures satisfy \(T_{e}\leq 16\) keV, then collisions with electrons dominate until the alphas have slowed to \(\leq 0.8\) MeV. 
This process can be described by \[\frac{dv}{dt}=-\nu_{s}^{\alpha/e}v, \tag{15}\] where \(\nu_{s}^{\alpha/e}\) is the alpha-electron slowing-down collision frequency, which is approximately independent of alpha energy [50]. \(\nu_{s}^{\alpha/e}\) will vary with time as the particle traverses regions of different density and temperature. We neglect this complexity treating \(\nu_{s}^{\alpha/e}\) as approximately constant, in which case the solution of eq.15 becomes \[v(t)=v(0)e^{-\nu_{s}^{\alpha/e}t}. \tag{16}\] The slowing-down time, \(1/\nu_{s}^{\alpha/e}\), is typically on the order of \(100\)ms for plausible reactor parameters. Assuming an initial energy of \(3.5\) MeV, the energy lost associated with an alpha particle lost at time \(\mathcal{T}\) is \(3.5e^{-2\nu_{s}^{\alpha/e}\mathcal{T}}\)MeV. In Figure3 we see that the energy decay model eq.16 is almost identical to the mean energy of alpha particles at any given time. Data for Figure3 was generated using _collisional tracing_ in the ANTS code [14]. \(20,000\) particles were traced from each of \(10\) configurations: the National Compact Stellarator eXperiment (NCSX) [56], Advanced Research Innovation and Evaluation Study - Compact Stellarator (ARIES-CS) [40], a quasi-axisymmetric (QA) stellarator developed at New York University (NYU) [16, 17], the Chinese First Quasi-axisymmetric Stellarator (CFQS) [38], a quasi-helically (QH) symmetric stellarator developed at the Max Planck Institute for Plasma Physics (IPP) [45], a QA stellarator developed at IPP [23], the Helically Symmetric eXperiment (HSX) [4], Wistell-A [6], the Large Helical Device (LHD) [26], and the Wendelstein 7-X (W7-X) [28]. Scattered are alpha particle energies at they moment they are lost. The mean of the particle energies (solid black line) is shown against the energy model eq.16 (dashed red line). The accuracy of the energy model in predicting the mean energy justifies its use as an optimization objective. Figure 3: Energy of alpha particles at the time they are lost. Data points were generated by tracing particles _with collisions_ from \(10\) configurations: NCSX, ARIES-CS, a QA from NYU, CFQS, a QH from IPP, a QA from IPP, HSX, Wistell-A, LHD, and W7-X. \(20,000\) particles were traced for each configuration. The solid black line indicates the regressed mean of the data, and the dashed red line is the energy decay model \(3.5\exp(-2t\nu_{s}^{\alpha/e})\) where the slowing-down time \(1/\nu_{s}^{\alpha/e}\approx 0.057\)sec was computed analytically using the volume averaged density and temperature [50]. The energy model very closely matches the mean particle energy. We take the expectation of this energy measure to compute our optimization objective, replacing \(\nu_{\mathrm{s}}^{\alpha/e}\) by the inverse of the fixed tracing time \(t_{\mathrm{max}}\): \[\mathcal{J}_{\mathrm{energy}}(\mathbf{x},v_{\parallel},\mathbf{w})=3.5e^{-2 \mathcal{T}(\mathbf{x},v_{\parallel},\mathbf{w})/t_{\mathrm{max}}} \tag{17}\] We write the confinement time as \(\mathcal{T}(\mathbf{x},v_{\parallel},\mathbf{w})\) to explicitly denote its dependence on the initial particle position, parallel velocity, and decision variables. For a particle that is lost at time \(t\) the confinement time is calculated as \(\mathcal{T}=\min\{t,t_{\mathrm{max}}\}\). 
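A one-line implementation of the per-particle measure in eq. (17) makes the weighting of prompt versus late losses concrete; note that a particle confined for the full tracing window still contributes \(3.5e^{-2}\approx 0.47\) MeV, consistent with the global-minimum value of \(\mathcal{J}_{1/4}\) quoted in Section 6.

```python
import numpy as np

def energy_loss_MeV(T, t_max):
    """Energy-loss weight for a particle with confinement time T = min(t, t_max), eq. (17)."""
    return 3.5 * np.exp(-2.0 * np.asarray(T, dtype=float) / t_max)

t_max = 1e-2                                   # 10 ms tracing window
print(energy_loss_MeV(0.0, t_max))             # 3.5   (prompt loss, full 3.5 MeV counted)
print(energy_loss_MeV(0.5 * t_max, t_max))     # ~1.29 (lost halfway through the window)
print(energy_loss_MeV(t_max, t_max))           # ~0.47 (confined to t_max)
```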
To compute our optimization objective, the expected energy lost, \(\mathcal{J}(\mathbf{w})=\mathbb{E}[\mathcal{J}_{\mathrm{energy}}(\mathbf{x},v _{\parallel},\mathbf{w})]\) we integrate \(\mathcal{J}_{\mathrm{energy}}\) against the distribution \(f(\mathbf{x},v_{\parallel})\) of initial particle positions and parallel velocities, \[\mathcal{J}(\mathbf{w}):=\int_{\mathbf{x}}\int_{v_{\parallel}}3.5e^{-2 \mathcal{T}(\mathbf{x},v_{\parallel},\mathbf{w})/t_{\mathrm{max}}}\ f( \mathbf{x},v_{\parallel})\ dv_{\parallel}\,d\mathbf{x}. \tag{18}\] In Section5 we discuss three possible methods of computing this integral, by Monte Carlo (MC), by Simpson's rule, and by Quasi-Monte Carlo (QMC) [36]. As a simple alternative to this objective we can also minimize the energy lost from particles born on a single flux surface. This has the advantage of reducing the dimension of the objective computation. Hence we define the surface objective as \[\mathcal{J}_{s}(\mathbf{w}):=\int_{\theta,\zeta}\int_{v_{\parallel}}3.5e^{-2 \mathcal{T}(\mathbf{x},v_{\parallel},\mathbf{w})/t_{\mathrm{max}}}\ f(\theta, \zeta,v_{\parallel}|s)\ dv_{\parallel}\,d\theta\,d\zeta. \tag{19}\] Previous stellarator designs which leveraged optimization of empirical alpha particle losses, ARIES-CS and a design by Gori et. al., used the expected value of confinement time, and the conditional expectation of the confinement time over particles which bounce as optimization objectives. In this study, we opt to use the energy loss objective \(\mathcal{J}\) rather than mean confinement time due to the interpretation as energy. However, the mean confinement time and \(\mathcal{J}\) may be related through Jensen's inequality, \[\mathcal{J}_{\mathrm{energy}}(\mathbb{E}[\mathcal{T}])\leq\mathbb{E}[\mathcal{ J}_{\mathrm{energy}}(\mathcal{T})]=\mathcal{J}(\mathbf{w}). \tag{20}\] By a straightforward computation, \(\mathbb{E}[\mathcal{T}]\leq-\frac{t_{\mathrm{max}}}{2}\ln(\frac{\mathcal{J}( \mathbf{w})}{3.5})\). Hence maximizing the mean confinement time should reduce \(\mathcal{J}\) and similarly minimizing \(\mathcal{J}\) should increase the mean confinement time. The set of local minima for these two objectives is not in general the same. However, if there exist configurations with \(0\%\) losses, then the objectives share the set of global minimizers. ## 5 Numerical computation of objective Monte Carlo quadrature and deterministic numerical quadrature methods can be used to approximate the integral eq.18. Whether spawned on a mesh or randomly according to some distribution, particles with initial position and parallel velocity \((\mathbf{x}_{i},(v_{\parallel})_{i})\) are traced through time until breaching the last closed flux surface, \(s=1\), at some time \(t\leq t_{\mathrm{max}}\), or until the terminal tracing time is reached \(t=t_{\mathrm{max}}\). The confinement time is calculated as \(\mathcal{T}=\min\{t,t_{\mathrm{max}}\}\), which can be converted to the approximate energy lost due to potential particle ejection via eq.17. Quadrature methods combine the integrand values computed from evaluation points as a weighted sum, \[\mathcal{J}(\mathbf{w})\approx\sum_{i=1}^{N}\omega_{i}\mathcal{J}_{\mathrm{ energy}}(\mathbf{x}_{i},(v_{\parallel})_{i},\mathbf{w})f(\mathbf{x}_{i},(v_{ \parallel})_{i},\mathbf{w}) \tag{21}\] where the weights \(\{\omega_{i}\}_{i=1}^{N}\) and nodes \(\{\mathbf{x}_{i},(v_{\parallel})_{i}\}_{i=1}^{N}\) are determined by the quadrature method. 
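As a sketch of how eq. (21) is evaluated in the simplest Monte Carlo case, the snippet below averages the per-particle losses for samples drawn from the birth distribution itself (so the weights reduce to \(1/N\)) and attaches a bootstrap standard error, which gives a feel for the sampling noise discussed next. The confinement times would come from the particle tracer; the values used here are placeholders.

```python
import numpy as np

def mc_objective(conf_times, t_max, n_boot=200, seed=0):
    """Monte Carlo estimate of eq. (21) from sampled confinement times,
    plus a bootstrap standard error of the estimate."""
    rng = np.random.default_rng(seed)
    vals = 3.5 * np.exp(-2.0 * np.asarray(conf_times, dtype=float) / t_max)
    boots = [rng.choice(vals, size=vals.size, replace=True).mean() for _ in range(n_boot)]
    return vals.mean(), float(np.std(boots))

# Placeholder confinement times: most particles confined to t_max, a few prompt losses.
t_max = 1e-2
times = np.concatenate([np.full(980, t_max),
                        np.random.default_rng(1).uniform(0, t_max, 20)])
print(mc_objective(times, t_max))
```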
We briefly explore three different methods for approximating our objectives: MC, QMC, and Simpson's rule [11]. MC quadrature samples \(N\) nodes randomly from some density, \(\{\mathbf{x}_{i},(v_{\parallel})_{i}\}_{i=1}^{N}\sim g(\mathbf{x},v_{\parallel})\), and approximates the integral via eq. 21 with weights \(\omega_{i}=(g(\mathbf{x}_{i},(v_{\parallel})_{i})N)^{-1}\). In our setting, \(f_{\theta,\zeta}(\theta,\zeta|s)\) varies depending on the MHD equilibrium computed from \(\mathbf{w}\). To simplify the sampling procedure, we opt to sample \(\theta\) and \(\zeta\) from a uniform distribution. Hence, initial particle positions and velocities are sampled from \[g(s,\theta,\zeta,v_{\parallel}):=f_{s}(s)f_{v_{\parallel}}(v_{\parallel})n_{\mathrm{fp}}/4\pi^{2}. \tag{22}\] The standard deviation, and hence convergence rate, of the MC estimator is \(\sigma/\sqrt{N}\), where \(\sigma\) is the standard deviation of \(\mathcal{J}_{\mathrm{energy}}f/g\). On one hand MC is slow to deliver accurate estimates, but on the other hand it does not rely on smoothness assumptions to achieve its convergence rate, unlike Simpson's rule. When used in the optimization loop, Monte Carlo methods can be applied in two ways: by regenerating the samples \(\{(\mathbf{x}_{i},(v_{\parallel})_{i})\}_{i=1}^{N}\) at each iteration, or by generating the samples once and holding them fixed throughout the optimization. We denote the former method as generic MC. The latter method is known as the Sample Average Approximation method (SAA) [52]. A great benefit of using SAA is that it forms deterministic optimization problems which can be solved by any conventional optimization method. The principal drawback of SAA is the slight bias it incurs in the solution, similar to quadrature methods. When using generic MC to compute the optimization objective, stochastic optimization methods must be used to solve the optimization problem. Stochastic solvers tend to converge slowly, but arrive at unbiased solutions. Quasi-Monte Carlo methods are a deterministic analog of MC methods. Similar to MC, they approximate integrals as sample averages. However, the points used in the sample average are not truly random; rather they are _low discrepancy sequences_, deterministic point sets designed to mimic random samples while covering the domain more evenly. Quasi-Monte Carlo methods boast a convergence rate of \(O(1/N)\), when using \(N\) points in the approximation, which is an impressive improvement over MC and SAA. The constant in the convergence rate depends on the _total variation_ of the integrand, a measure of its rate of change, rather than its variance. Since the integrand in eq. 18 depends on the confinement time, which is non-smooth, and perhaps even discontinuous in \(\mathbf{x}\) and \(v_{\parallel}\), the total variation of the integrand is large, and so QMC may not outperform MC until the number of samples is large. Simpson's rule uses quadratic interpolation of a function on a mesh to approximate the function's integral. High-order quadrature methods, like Simpson's rule, achieve high-order convergence rates when the integrand can be well-approximated by a low-degree polynomial. However, since particle confinement times may jump chaotically under small perturbations in \(\mathbf{x},v_{\parallel}\), Simpson's rule and other high-order quadrature schemes are not expected to achieve a high-order convergence rate. In Figure 4, we compare the approximation quality of four methods of computing \(\mathcal{J}_{1/4}\): generic MC, SAA, Simpson's rule, and QMC. 
Figure4 (right) shows the relative error of MC, Simpson's rule and QMC in approximating the objective \(\mathcal{J}_{1/4}\) at a single point. Given the limits on sample size requirements, MC achieves similar accuracy to Simpson's rule and QMC. QMC performs slightly better than MC, but does not reliably do so at the sample sizes shown. Figure4 (left) shows the objective approximations over a one dimensional slice of space near an arbitrary configuration \(\mathbf{w}_{0}\). Spatially, we find that SAA provides a smooth approximation to the objective, which is beneficial for optimization. For this reason we use SAA to compute the objectives in the numerical experiments. Unfortunately, due to the extraordinarily high standard deviation of the confinement times typically \(>2000\) points are required to reduce the noise in the objective enough so that it can be tractably minimized by an optimization routine. The standard deviation of the confinement times is often of the same order of magnitude as the mean, though it decreases as the loss fraction decays to zero. In future work, variance reduction techniques [35, 34, 21] should be used to improve the accuracy of the objective computation and reduce the computational burden associated with tracing particles. ## 6 Numerical results In this section we explore numerical solutions of eq.1. We show physical properties of two, four field period vacuum configurations: configuration \(\mathbf{A}\) was optimized using the surface initialization loss \(\mathcal{J}_{1/4}\), and configuration \(\mathbf{B}\) optimized using the volumetric initialization loss \(\mathcal{J}\). We find that minimizers of \(\mathcal{J}_{1/4}\) also perform well under \(\mathcal{J}\), and that quasi-symmetry need not be satisfied for good confinement. Furthermore, we analyze the local relationship of particle losses with a quasi-symmetry metric, finding that reducing the violation of quasi-symmetry can increase particle losses. While our numerical solutions are vacuum configurations, the optimization model and numerical methods can be applied to finite-\(\beta\) configurations as well. Due to the computational expense of repeated particle tracing, our configurations were optimized with a terminal trace time of \(t_{\max}=10\)ms. A three dimensional view of the configurations is shown in fig.5. The data that support the findings of this study are openly available at the following [https://github.com/mishapadidar/alpha_particle_opt](https://github.com/mishapadidar/alpha_particle_opt). ### Methods Initial experimentation demonstrated that the optimization landscape contains many local solutions. To this end it was useful to search the optimization space by generating a host of initial points with varied rotational transform values, \(\iota\). Starting points for the fast-ion optimization were generated by solving the optimization Figure 4: (Left) Approximations of the objective function \(\mathcal{J}_{1/4}\) using Simpson’s rule, QMC, SAA, and MC across a one dimensional slice of space, around a point \(\mathbf{w}_{0}\). The curves were computed by tracing \(4096\) particles per point, with particles on a \(16^{3}\) mesh for Simpson’s rule. The shaded region is the \(95\%\) confidence interval for the objective value computation with MC. The black line (Actual) represents the actual value of the objective, and is computed using MC with \(32,000\) samples. (Right) Relative error of MC, QMC, and Simpson’s rule in computing the objective at a single point. 
The MC curve represents the expected relative error of the MC estimator given the sample size, and was computed by bootstrapping. problem, \[\underset{\mathbf{w}}{\text{minimize}}\ \ (A-A^{*})^{2}+(\iota-\iota^{*})^{2}+\sum_{i=1}^{n _{\text{grid}}}\max(B(\mathbf{x}_{i})-B_{+}^{*},0)^{2}+\max(B_{-}^{*}-B(\mathbf{ x}_{i}),0)^{2}, \tag{23}\] in SIMSOPT using concurrent function evaluations to compute forward difference gradients, and the default solver in Scipy's least-squares optimization routine [54]. Optimal solutions were found to the problem within 5% error for each target rotational transform \(\iota^{*}\). The decision variables were characterized by \(n_{\text{mode}}=1\), i.e \(n_{\mathbf{w}}=8\). The fast-ion optimization was initialized from the solutions of eq.23. The magnetic field bound constraints eq.14 were treated with a quadratic penalty method with penalty weights all equal to one, \[\underset{\mathbf{w}}{\text{minimize}}\ \ \ \mathcal{J}_{\text{ penalty}}(\mathbf{w}):=\mathcal{J}(\mathbf{w})+\sum_{i=1}^{n_{\text{grid}}} \max(B(\mathbf{x}_{i})-B_{+}^{*},0)^{2}+\max(B_{-}^{*}-B(\mathbf{x}_{i}),0)^{2}, \tag{24}\] with an analogous form for using the surface objective \(\mathcal{J}_{1/4}\). The particle loss objective \(\mathcal{J}(\mathbf{w})\) was computed using SAA, since it provided a reasonably smooth approximation of the objective. The penalty method was used because the field strength constraints are "soft constraints" -- they do not need to be satisfied exactly. Powell's BOBYQA algorithm [48] within the Python package PDFO [49] was used to solve eq.24. BOBYQA is a derivative-free trust region method that uses local quadratic approximations of the objective to make progress towards a minimum. BOBYQA performed particularly well in this problem due to its ability to handle computational noise and use samples efficiently [12]. Empirically we find that the efficiency of the optimization with terminal time \(t_{\max}\) can be substantially improved by warm-starting the optimization from a solution with near-zero losses at a shorter value of \(t_{\max}\), say \(t_{\max}/10\). The optimization up to the terminal time \(t_{\max}=10\)ms was performed solving a sequence of optimization problems, where at each step \(t_{\max}\) and the number of Fourier modes were increased: \((t_{\max},n_{\text{mode}})=\)(0.1ms, 1), (1ms, 1), (1ms, 2), (10ms, 2), (10ms, 3). For \(t_{\max}=0.1,1,10\)ms we use \(10^{4},7^{4},6^{4}\) particles, respectively, and \(8,48,48\) MPI processes to trace particles. Particles were traced until reaching the terminal tracing time of Figure 5: Three dimensional views of configuration \(\mathbf{A}\) (left) and configuration \(\mathbf{B}\) (middle). Cross sections of configuration \(\mathbf{A}\) (solid) and \(\mathbf{B}\) (dashed) at four cylindrical angles \(\phi\) across a field period (right). \(t_{\max}\), or until the particle reached the \(s=1\) flux surface or \(s=0.01\) flux surface. Particles reaching the \(s=1\) flux surface were deemed lost, while particles reaching the \(s=0.01\) flux surface were deemed to be confined to the terminal time \(t_{\max}\). The \(s=0.01\) stopping criteria is currently required as part of the tracing code in SIMSOPT, but should not be used in future work. ### Two solutions We present two solutions found by solving eq.1. 
Configuration \(\mathbf{A}\), with solution vector \(\mathbf{w_{A}}\), was found by minimizing the surface initialization objective \(\mathcal{J}_{1/4}\) which measures the energy lost by particles born on the \(s=0.25\) flux surface. Configuration \(\mathbf{B}\), with solution vector \(\mathbf{w_{B}}\), was found by minimizing the energy lost by particles born throughout the entire volume, i.e. objective \(\mathcal{J}\). Properties of configuration \(\mathbf{A}\) and \(\mathbf{B}\) can be seen in Table1. All configurations presented in this section were scaled to same \(1.7\)m minor radius and \(B_{0,0}(s=0)=5.7\)T field strength on the magnetic axis as the ARIES-CS configuration [29]. Configuration \(\mathbf{A}\)_almost_ reaches the global minimum value of \(\mathcal{J}_{1/4}\), attaining an objective value of \(\mathcal{J}_{1/4}(\mathbf{w_{A}})=0.475\) and a loss fraction of \(0.0046\) for particles born on the \(s=0.25\) flux surface; a global minimum would have zero particle losses and \(\mathcal{J}_{1/4}=0.473\). Configuration \(\mathbf{A}\) also reports a low loss fraction for particles born, according to \(f\), throughout the volume, \(0.022\). Similarly, configuration \(\mathbf{B}\) has a loss fraction of \(0.0215\) for particles born throughout the volume and a loss fraction of \(0.0094\) for particles born on the \(s=0.25\) flux surface. While the two configurations were optimized for different objectives, both configurations show good performance in both objectives. Optimizing using the surface loss \(\mathcal{J}_{1/4}\) reduces the dimension of the integral eq.19 and potentially the variance of the objective. Since improvement in the two objectives is highly correlated, in future work the surface loss objective could be used in place of the volume loss objective \(\mathcal{J}\), unless confinement times are largely dependent on the radial birth distribution due. Neither configuration \(\mathbf{A}\) nor \(\mathbf{B}\) has active constraints at the solution, and so the constraints do not limit the performance of the solutions. We do find however, that in general the constraints on the field strength are active throughout the optimization. Without constraining the field strength, the mirror ratio becomes unphysically large, the contours of \(B\) close poloidally, and solutions become approximately Quasi-Isodynamic [53]. In Figure6 we compare the alpha particle loss curves of configuration \(\mathbf{A}\) and \(\mathbf{B}\) to those of the stellarator configurations introduced in Figure3, as well as the QA and QH configurations from Landreman and Paul (labeled LP-QA and LP-QH)[32]. To compute the curves, \(5000\) particles born throughout the volume (left) or on the \(s=0.25\) flux surface (right), were traced until the terminal time \(10\)ms or until either they crossed the \(s=1\) flux surface and were considered lost, or reached \(s=0.01\) and were considered confined. Our configurations demonstrate good particle confinement up to the terminal time \(t_{\max}=10\)ms used in the optimization. Configuration \(\mathbf{A}\) and \(\mathbf{B}\) outperform all but LP-QA, LP-QH and Wistell-A in terms of particle losses from the \(s=0.25\) flux surface, and are only outperformed by Wistell-A and LP-QH in terms of losses of volumetrically initialized particles. The lowest loss fraction from the \(s=0.25\) flux surface, \(0\%\), is that of LP-QH. The QS \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Config. 
& Aspect Ratio & Mirror ratio & Mean \(\iota\) & Volume loss fraction & \(s=1/4\) loss fraction \\ \hline \(\mathbf{A}\) & 6.67 & 1.33 & 0.856 & 0.022 & 0.0046 \\ \hline \(\mathbf{B}\) & 6.61 & 1.32 & 1.023 & 0.0215 & 0.0094 \\ \hline \end{tabular} \end{table} Table 1: Properties of configurations \(\mathbf{A}\) and \(\mathbf{B}\). Loss fractions were computed by tracing 10,000 particles to \(t_{\max}=10\)ms. optimization problem posed by Landreman and Paul is computationally much less expensive to solve, and has a much smoother objective than \(\mathcal{J}\) and \(\mathcal{J}_{1/4}\), allowing for solutions to be refined to a much higher degree with gradient-based optimization methods. ### Local analysis of Quasi-symmetry Neither configuration \(\mathbf{A}\) nor configuration \(\mathbf{B}\) are QS. This is seen most clearly in Figure 7 by viewing the contours of the magnetic field strength in Boozer coordinates. QS fields have the representation \(B=B(s,m\theta-n\zeta)\) for some numbers \(m,n\) in Boozer coordinates, implying that the contours of \(B\) are straight when viewed as a function of \(\theta\) and \(\zeta\)[27]. Near the magnetic axis only \(m=1\) is possible [13], and to preserve field period symmetry \(n\) must be a multiple of the number of field periods, \(n\in kn_{\text{fp}}\) for \(k\in\mathbb{N}\). Quasi-axisymmetry occurs when \(m=1\), \(n=0\) and quasi-helical symmetry occurs when \(m=1\), \(n\neq 0\). Exact quasi-symmetry is a sufficient condition for perfect confinement. In addition, Landreman and Paul [32] showed that even precisely quasi-symmetric configurations can have excellent confinement properties. However, in general it is not clear how particle confinement degrades when QS is broken, or how particle confinement improves as the violation of QS is reduced. To explore this relationship, we modify configuration \(\mathbf{A}\) to reduce the violation of QS and examine how the corresponding particle losses are affected. The degree of \((m,n)\)-quasi-symmetry of a configuration can be measured by the metric proposed in [32], which we denote \(Q_{m,n}(\mathbf{w})\). Configuration \(\mathbf{A}\) can be modified to have Figure 6: Collisionless loss curves for 5000 particles born throughout the volume with distribution \(f\) (left) or born on the \(s=0.25\) flux surface (right). Configurations were scaled to a 1.7m minor radius and a \(B_{0,0}(s=0)=5.7\)T field strength on the magnetic axis, like the ARIES-CS reactor [29]. Wistell-A reported less than 0.1% losses from the \(s=0.25\) flux surface, and LP-QH reported no losses from the \(s=0.25\) flux surface and a loss fraction of 0.0002 for particles distributed throughout the volume. reduced violation of \((m,n)\)-quasi-symmetry by moving the solution vector \(\mathbf{w_{A}}\) in the negative gradient direction of \(Q_{m,n}\). For small step sizes \(\alpha\), configurations with decision variables \[\mathbf{w}_{m,n}(\alpha)=\mathbf{w_{A}}-\alpha\nabla Q_{m,n}(\mathbf{w_{A}}) \tag{25}\] will have a lower departure from QS than configuration \(\mathbf{A}.\) As seen in Figure 8, as the violation of QS is reduced along this path, particle losses increase substantially, from approximately \(0.43\%\) to approximately \(1\%\), for all three types of QS considered \((m,n)=(1,0),(1,4),(1,-4)\). Locally, the violation of QS has an inverse relationship with confinement and so quasi-symmetric configurations may be isolated from non-quasi-symmetric configurations with low losses. 
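The local scan of eq. (25) amounts to a finite-difference gradient step on the quasi-symmetry metric followed by re-evaluation of the losses. A schematic sketch is given below; `qs_metric` and `loss_fraction` are hypothetical callables standing in for the quasi-symmetry residual of [32] and a particle-tracing loss estimate, respectively.

```python
import numpy as np

def qs_scan(w_A, qs_metric, loss_fraction, alphas, h=1e-6):
    """Evaluate losses along w(alpha) = w_A - alpha * grad Q_{m,n}(w_A), eq. (25).

    The gradient of the QS metric is approximated with forward differences;
    qs_metric and loss_fraction are assumed, user-supplied callables.
    """
    w_A = np.asarray(w_A, dtype=float)
    Q0 = qs_metric(w_A)
    grad = np.zeros_like(w_A)
    for i in range(w_A.size):
        step = np.zeros_like(w_A)
        step[i] = h
        grad[i] = (qs_metric(w_A + step) - Q0) / h
    return [(a, qs_metric(w_A - a * grad), loss_fraction(w_A - a * grad)) for a in alphas]
```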
## 7 Future work In the design of the ARIES-CS reactor, configurations with good alpha confinement were found by including alpha particle tracing calculations as part of an objective function within the optimization loop. In this study we have expanded upon this method, showing that it can be used to find configurations with low alpha losses. However, in its current form, fast-ion optimization is computationally expensive, often taking multiple days to complete. Proxy metrics, on the other hand, can be used to design stellarators in only a few hours on a computing cluster. To reduce the wall-clock time of fast-ion optimization we propose three improvements: the use of variance reduction techniques to reduce the number of traced particles, symplectic particle tracing algorithms to improve the speed and accuracy of confinement time calculations, and multi-fidelity optimization methods to reduce the number of times particle tracing needs to be performed altogether. Law et al. found in [33, 35] that combining variance reduction techniques such as importance sampling, control variates, and information reuse [43] can reduce the number of particles that must be traced by a factor of 100. Figure 7: Contours of \(B\) in Boozer coordinates on four flux surfaces \(s=0.05,0.25,0.5,1.0\), for configuration \(\mathbf{A}\) (left four) and configuration \(\mathbf{B}\) (right four). In addition, variance reduction techniques are relatively quick to implement, making them a natural first addition to fast-ion optimization methods. The time spent tracing can also be reduced by improving orbit integration time. Albert et al. [1, 2] showed that symplectic tracing algorithms can trace particle trajectories three times faster than adaptive integration algorithms, such as RK45, while maintaining the same statistical accuracy. Lastly, we propose using multi-fidelity optimization methods to reduce the number of expensive particle tracing simulations [39, 47]. Multi-fidelity optimization methods for fast-ion optimization would rely on "low-fidelity models" of \(\mathcal{J}\) to take reliable steps towards minima without performing many expensive particle tracing simulations. Low-fidelity models of the energy loss objective could leverage particle tracing simulations with larger step sizes or simply be proxies, such as quasi-symmetry metrics. In addition to improvements in optimization efficiency, there are improvements to be made in constructing objective functions. Thus far, particle tracing has only been used to measure confinement. However, now that particle losses can be tractably reduced, the destructive effects of alphas on plasma-facing components become a central design consideration. A "wall-loading" objective function could either concentrate or disperse the alpha particle load on the wall, depending on engineering considerations. ## 8 Acknowledgments We thank Max Ruth, Shane Henderson, and Rogerio Jorge for their useful discussions. This work was supported by a grant from the Simons Foundation (No. 560651, D.B.). Figure 8: Fraction of alpha particles lost as the violation of QS is reduced (right to left) along the line segment \(\mathbf{w_{A}}-\alpha\nabla Q_{m,n}(\mathbf{w_{A}})\), for three different types of QS: \((m,n)=(1,0),(1,4),(1,-4)\). Reducing the violation of QS increases the fraction of lost particles. The shaded region indicates the 95% confidence interval of the loss fraction.
2310.19524
Exploring Perceived Vulnerability of Pedestrians: Insights from a Forced-Choice Experiment
Individual differences in mobility (e.g., due to wheelchair use) during crowd movement are not well understood. Perceived vulnerability of neighbors in a crowd could affect, for example, how much space is given to them by others. To explore how pedestrians perceive people moving in front of them, in particular, how vulnerable they believe them to be, we asked 51 participants to complete a Two-Alternatives-Forced-Choice (2AFC) task in an internet browser. Participants were shown pairs of images, each showing a person, and then asked to select the person who appeared more vulnerable to them. For example, participants would choose between a male person in a wheelchair and a female person carrying a suitcase. In total, 16 different stimuli were used (male vs. female; no item/device, 1 suitcase, 2 suitcases, small backpack, large backpack, stroller, cane, and wheelchair), yielding n(n-1)/2 = 120 potential pairwise comparisons per participant. Results showed that wheelchair users appeared the most vulnerable and persons without any items/devices the least vulnerable. Persons carrying two suitcases were in the middle. These results informed the design of a main behavioral study (not reported here).
Paul Geoerg, Ann Katrin Boomers, Maxine Berthiaume, Maik Boltes, Max Kinateder
2023-10-30T13:23:38Z
http://arxiv.org/abs/2310.19524v1
# Exploring Perceived Vulnerability of Pedestrians: Insights from a Forced-Choice Experiment ###### Abstract Individual differences in mobility (e.g., due to wheelchair use) during crowd movement are not well understood. Perceived vulnerability of neighbors in a crowd could affect, for example, how much space is given to them by others. To explore how pedestrians perceive people moving in front of them, in particular, how vulnerable they believe them to be, we asked \(51\) participants to complete a Two-Alternatives-Forced Choice task (2AFC) in an internet browser. Participants were shown pairs of images each showing a person and then asked to select the person who appeared more vulnerable to them. For example, participants would choose between a male person in a wheelchair and a female person carrying a suitcase. In total \(16\) different stimuli (male vs female; no item/device, 1 suitcase, 2 suitcases, small backpack, large backpack, stroller, cane, and wheelchair), yielding \(n(n-1)/2=120\) potential pairwise comparisons per participant. Results showed that wheelchair users appeared the most vulnerable and persons without any items/devices the least vulnerable. Persons carrying two suitcases were in the middle. These results informed the design of a main behavioral study (not reported here). Pedestrian dynamics Accessibility Heterogeneous crowds Demographic change 2AFC Online study Perceived vulnerability Wheelchair user ## 1 Introduction This manuscript documents the background, rationale, procedure, and results of a pilot study designed to test the perceived vulnerability of pedestrians walking ahead of another person. The results of the pilot study informed the design of a larger behavioral study on pedestrian movement (not reported here, see [1]) Individual differences in mobility (e.g., due to wheelchair use) during crowd movement are not well understood. This has been recognized in several _review_ publications, e.g. [2, 3, 4]. While these have pointed out how, for example, certain functional limitations could affect pedestrian egress movement, they also report a lack of empirical data. Likely as a consequence, many engineering tools that aim to predict the evacuation performance of crowds are based exclusively on data from young adults without disabilities. However, previous work has found anecdotal evidence that pedestrians keep a larger distance from wheelchair users when walking in a crowd [5]. The perceived vulnerability of people walking next to each other (also called neighbors) in a crowd could potentially explain this effect on microscopic movement parameters (e.g., movement speed). More specifically, a relevant question that has not been answered is how the visibility of a disability (e.g., through recognition of an assistive device) or mobility-relevant properties (e.g., carrying heavy items, pushing strollers, etc.) shape how people see neighbors in a crowd. Individual pedestrians' behavioral reactions to visible mobility attributes may have cascading effects on micro- and macroscopic movement patterns in the crowd. Here, we explore two potential explanatory mechanisms: perceived vulnerability and perceived required space. That is, will people increase their interpersonal distance to wheelchair users because there is a social norm to be mindful of people who appear vulnerable or because wheelchair users simply appear to take up more space? 
The purpose of this pilot study was to collect subjective impressions from participants via an online study that informed the main behavioral study (a group of participants moving through a bottleneck). Participants reported ratings of perceived vulnerability of pedestrians and wheelchair users with different mobility attributes using a Two-Alternatives-Forced-Choice (2AFC) paradigm [6]. In a 2AFC paradigm, participants are shown two stimuli (e.g., images) and then need to select one. The 2AFC procedure allowed for generating unbiased responses and avoiding certain response patterns (e.g., anchoring) that arise with other procedures such as simple rating scales [6; 7; 8].

### Research Questions

The main research questions addressed in this pilot study were:

1. Are wheelchair users generally perceived to be more or less vulnerable than others?
2. Are there differences/similarities compared to other mobility-related attributes?

## 2 Methods

### Design

In each trial, two images showing a person were presented. The participant's task was to select the person that appeared more vulnerable to them. More specifically, they were given the following instructions: "Imagine that you are walking behind each person while walking in a crowd. Please click on the person you would be more cautious around" (Figure 1(c)).1 Participants then chose between images in which the following attributes were manipulated:

Footnote 1: We queried feedback on optimal item formulation from an expert in ecological psychology and experimental design.

1. Mobility attributes (8 levels): baseline (no mobility aid or luggage), 1 suitcase, 2 suitcases, small backpack, large backpack, stroller, cane, wheelchair
2. Visible gender attributes of the person shown (male or female)

This yielded a total of 16 unique stimuli (see 2.2). Stimuli were placed randomly either on the left or right side of the screen (see Figure 1(c)). Conditions in which two identical images were shown on the left and right side were excluded. In a 2AFC design, this translates into \(n(n-1)/2=120\) potential pairwise comparisons.

### Stimuli

Sixteen stimuli were generated for the main trials of this study (Figure 1(b)) as well as three stimuli for the practice trials (Figure 1(a)). Each stimulus was generated from a vector graphic based on two baseline models purchased from a commercial stock image database (shutterstock.com) and then loaded into an image editing software (Gimp v.2.10.32, www.gimp.org). The baseline images showed stylized views of an adult person, with either stereotypical male or female visual attributes. The "male" character had short hair, a jacket, and pants. The "female" character had long hair and was wearing a skirt. Both characters were Caucasian. The baseline models were then modified for varying mobility profiles. Care was taken so that the basic appearance (e.g., posture, size) of the baseline characters did not change, except for the changes associated with the mobility profiles. The three practice trial stimuli showed similar characteristics; however, their design differed so that no feature (e.g., assistive device) would match those of the main trials. They included a "female" character with crutches, a female character with grey hair, and an adult male character (Figure 1(a)).

### Procedure

The study was implemented using PsychoPy (v2021.1.4) and hosted online using Pavlovia.org (an online platform to run, share, and explore psychometric studies).
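For illustration, the full set of pairwise comparisons implied by this design can be generated in a few lines of Python. This is only a minimal sketch, not the study's PsychoPy implementation; the stimulus labels and the random seed are hypothetical.

```python
import itertools
import random

# Hypothetical stimulus labels: 2 gender attributes x 8 mobility attributes = 16 stimuli.
GENDERS = ["female", "male"]
MOBILITY = ["baseline", "1_suitcase", "2_suitcases", "small_backpack",
            "large_backpack", "stroller", "cane", "wheelchair"]
STIMULI = [f"{g}_{m}" for g in GENDERS for m in MOBILITY]

def build_trials(seed=0):
    """All unordered pairs of distinct stimuli: n(n-1)/2 = 120 for n = 16.
    Left/right placement is randomized per trial, as in the 2AFC design."""
    rng = random.Random(seed)
    trials = []
    for a, b in itertools.combinations(STIMULI, 2):
        left, right = (a, b) if rng.random() < 0.5 else (b, a)
        trials.append({"left": left, "right": right})
    rng.shuffle(trials)
    return trials

trials = build_trials()
print(len(trials))  # 120 pairwise comparisons per participant
```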
Participants needed to use an internet-connected device (either computer, tablet, or phone) to access the study. The study procedure itself is structured in a sequential order (Fig. 2). Participants had to give informed consent before starting the study procedure, were instructed, practiced three exemplary trials to get familiar with the study design, and then started the 120 decision trials. Last, participants were asked for demographic information (age group, gender identity, prior experience with people with disabilities, and movement in crowds). The procedure was approved by the NRC Research Ethics Board (REB 2021-88).

Figure 1: Pilot study: (a) stimuli practice trials; (b) stimuli main trials; and (c) example screenshot of 2AFC task.

Figure 2: Pilot study structure and procedure.

### Sample

Participants were recruited via e-mail invitation and posting on social media. \(51\) participants completed at least some of the main trials of the study. \(64.7\,\%\) (\(n=33\)) of the participants provided a complete data set and provided demographic information. Two participants (\(\approx 6.1\,\%\)) stated that they had a disability. Figure 3 shows the distribution of age, gender, and the amount of experience that participants had interacting with people with disabilities in their daily lives.

Figure 3: Distribution of (a) age, (b) gender, and (c) reported frequency of interaction with persons with disabilities.

## 3 Results and Discussion

In this study, we investigated how visible mobility attributes, such as using a wheelchair, carrying luggage, or pushing a stroller, affect the perceived vulnerability of pedestrians. 2AFC designs provide two metrics: which image was selected (choice) and how long it took to make that decision. The former can provide direct answers to the research questions (see section 1.1) and test for potential biases and stereotypical response patterns (e.g., a participant always clicking on the right stimulus). The latter can provide information on how difficult a decision was (the longer participants needed to decide between two stimuli, the harder the decision). In addition, response times can be used to filter non-credible responses (either too fast or too long; for a discussion in an adjacent area see [9]).

### Data processing

In order to exclude non-credible responses, we first filtered the data for response time by removing values above and below two standard deviations of the average (response time \(\overline{RT}=1.67\,\mathrm{s}\pm 7.62\,\mathrm{s}\)). Note that this approach has limitations (see [10] for a discussion). As a result, \(4.45\,\%\) (\(265\) of a total of \(5949\) responses) were removed from the original data set. Next, we plotted histograms of the left and right clicks of participants to identify potential biases (see Figure 6(a)). None of the participants appeared to systematically prefer left over right; consequently, no participants were excluded.

### Choice

Figure 4 shows the absolute frequencies with which each stimulus was selected. The following observations were made: Stimuli showing...
1. ...a person without any assistive device were consistently rated as the least vulnerable;
2. ...a person in a wheelchair appeared to be the most vulnerable, followed by those with a cane and stroller;
3. ...persons with either one or two suitcases appeared to be in the middle;
4. ...persons carrying larger and smaller backpacks appeared less vulnerable than those carrying suitcases but more than a person without any assistive device or travel item;
5. ...persons with stereotypically female attributes consistently appeared to be more vulnerable than those with male attributes. These differences were most pronounced for the stimuli showing _cane_ users, and smallest for stimuli showing _wheelchair_ and _stroller_ users.

This indicates that perceived vulnerability scales with the space required by an item: a person with a small backpack is perceived as less vulnerable than one with a large backpack, who in turn is perceived as less vulnerable than one with a stroller; in short, the larger the item, the more vulnerable the person appears. Interestingly, this effect does not seem to be linear. For example, the differences in perception from small to big backpacks and from one to two suitcases were larger compared to the differences between other conditions.

### Response times

Figure 6(b) shows the response times for each stimulus, when selected. The average response time across all stimuli was \(1.40\pm 1.79\,\mathrm{s}\) (median \(=0.81\,\mathrm{s}\)). Medians were consistently lower than averages, as expected for right-skewed response-time distributions. There were no noticeable differences in response times across stimuli (regardless of displayed gender attributes and items). Note that the number of data points varied notably because we only report the response times for the selected stimulus (see Figure 4). Figure 5 shows a heatmap of the average response time for each pairwise comparison. These data could indicate comparisons that were harder for participants to differentiate. The longest response times were reported for comparisons between male and female cane users (\(2.83\,\mathrm{s}\)), male with one suitcase and female with a large backpack (\(2.79\,\mathrm{s}\)), and male and female wheelchair users (\(2.74\,\mathrm{s}\)). The fastest response times were reported for comparisons between females with no items and female wheelchair users (\(0.47\,\mathrm{s}\)). This suggests the following patterns:

* Decisions between male and female figures were harder when they were otherwise similar in terms of mobility attributes.
* Decisions became easier when the contrast between items was larger (see, for example, the column for the male wheelchair user (\(m_{whee}\)) in Figure 5).

## 4 Conclusions

This study reports on a pilot experiment designed to provide insights and guidance for a larger behavioral study [1]. The goal was to identify mobility profiles (i.e., attributes of people moving in a crowd) that differ in perceived vulnerability and space requirements. To this end, we asked participants to compare images showing characters that differed in their appearance, such as their stereotypical gender attributes (male vs female) or the kind of travel items (suitcases and backpacks) and assistive devices (cane and wheelchair) they had with them.
We chose a 2AFC online study for this purpose, and while this approach has clear limitations, a clear pattern could be established that answered the research questions and informed the design of the behavioral study:

* Pedestrians in wheelchairs consistently appeared to be more vulnerable than any other mobility profile (research question 1). The inverse was true for pedestrians without any item/mobility device (research question 2).
* Other mobility-related attributes appeared to influence the perceived vulnerability as a function of their size, weight, and ease with which the items could be moved (e.g., characters with backpacks were, in general, seen as less vulnerable than those with suitcases and strollers). The only exception was the cane, which was likely interpreted as a clear sign of vulnerability, but to a lesser degree than the wheelchair (research question 2).
* Within each category of items, characters that appeared to be female were consistently rated to be more vulnerable.
* The more similar the displayed stimuli were, the longer participants needed to decide.

The present work has a number of limitations that should be considered. Firstly, we only reported on a selection of mobility profiles. The profiles were selected with the future behavioral study in mind, but the list is certainly not exhaustive. For instance, we did not vary the appearance of age (neither children nor seniors were displayed). Secondly, we did not measure perceived vulnerability directly, but asked participants to indicate the person they would be "more cautious" around. This approach was chosen given that the term 'vulnerable' may be interpreted differently based on the participants' own backgrounds. However, prior to testing, we solicited feedback on this formulation from native speakers of English who were experts in environmental psychology and experimental design. Finally, a large number of participants did not complete the full study; consequently, the data set might be imbalanced given the amount of missing demographic information. In addition, the influence of the participant's own mobility profile (e.g., gender, age, or living with a disability) could not be investigated. However, we believe that the limitations did not prevent this work from achieving its goal of informing the design of a behavioral study. In the behavioral study, groups of participants will be asked to move together through a bottleneck; critically, the mobility attributes of two participants at the center of the group will be manipulated. In wheelchair conditions, two participants will be wheelchair users; in the luggage condition, two participants will be carrying two suitcases each; while in control conditions, all participants will have similar mobility attributes.

Figure 4: Absolute frequencies of each stimulus being selected.

Figure 5: Heatmap of average response times as a function of pairwise comparisons.

Figure 6: (a) Histograms of left and right clicks for each participant ID, and (b) boxplots of response times for each stimulus, when selected.
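As a rough illustration of the analysis steps reported in Section 3 (response-time filtering at two standard deviations, the left/right bias check, choice frequencies, and pairwise response times), a pandas sketch is given below. The column names and toy rows are hypothetical and are not the study's actual data or scripts.

```python
import pandas as pd

# Hypothetical long-format response table; column names and rows are illustrative only.
df = pd.DataFrame({
    "participant": ["p01", "p01", "p02", "p02", "p03"],
    "left":   ["m_cane", "f_wheel", "m_1suit", "f_none",  "m_wheel"],
    "right":  ["f_cane", "m_wheel", "f_lback", "f_wheel", "f_wheel"],
    "chosen": ["f_cane", "f_wheel", "m_1suit", "f_wheel", "f_wheel"],
    "side":   ["right",  "left",   "left",    "right",   "right"],
    "rt":     [2.83, 2.74, 2.79, 0.47, 1.10],   # seconds
})

# 1) Remove responses outside mean +/- 2 SD of the response-time distribution.
m, s = df["rt"].mean(), df["rt"].std()
clean = df[df["rt"].between(m - 2 * s, m + 2 * s)]

# 2) Left/right click counts per participant (screen for position bias, cf. Figure 6a).
bias = clean.groupby(["participant", "side"]).size().unstack(fill_value=0)

# 3) Absolute selection frequency per stimulus (cf. Figure 4).
choices = clean["chosen"].value_counts()

# 4) Mean response time per unordered pair of stimuli (cf. the Figure 5 heatmap).
pairs = clean[["left", "right"]].apply(lambda r: tuple(sorted(r)), axis=1)
pair_rt = clean.assign(pair=pairs).groupby("pair")["rt"].mean()

print(bias, choices, pair_rt, sep="\n\n")
```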
2303.01253
Implementing engrams from a machine learning perspective: matching for prediction
Despite evidence for the existence of engrams as memory support structures in our brains, there is no consensus framework in neuroscience as to what their physical implementation might be. Here we propose how we might design a computer system to implement engrams using neural networks, with the main aim of exploring new ideas using machine learning techniques, guided by challenges in neuroscience. Building on autoencoders, we propose latent neural spaces as indexes for storing and retrieving information in a compressed format. We consider this technique as a first step towards predictive learning: autoencoders are designed to compare reconstructed information with the original information received, providing a kind of predictive ability, which is an attractive evolutionary argument. We then consider how different states in latent neural spaces corresponding to different types of sensory input could be linked by synchronous activation, providing the basis for a sparse implementation of memory using concept neurons. Finally, we list some of the challenges and questions that link neuroscience and data science and that could have implications for both fields, and conclude that a more interdisciplinary approach is needed, as many scientists have already suggested.
Jesus Marco de Lucas
2023-03-01T10:05:40Z
http://arxiv.org/abs/2303.01253v1
# Implementing engrams from a machine learning perspective: matching for prediction. ###### Abstract Despite evidence for the existence of engrams as memory support structures in our brains, there is no consensus framework in neuroscience as to what their physical implementation might be. Here we propose how we might design a computer system to implement engrams using neural networks, with the main aim of exploring new ideas using machine learning techniques, guided by challenges in neuroscience. Building on autoencoders, we propose latent neural spaces as indexes for storing and retrieving information in a compressed format. We consider this technique as a first step towards predictive learning: autoencoders are designed to compare reconstructed information with the original information received, providing a kind of predictive ability, which is an attractive evolutionary argument. We then consider how different states in latent neural spaces corresponding to different types of sensory input could be linked by synchronous activation, providing the basis for a sparse implementation of memory using concept neurons. Finally, we list some of the challenges and questions that link neuroscience and data science and that could have implications for both fields, and conclude that a more interdisciplinary approach is needed, as many scientists have already suggested. Engrams concept neuron autoencoder sparse memory neural networks ## 1 Introduction Neuroscience is probably the most challenging field of research today, given the intrinsic complexity and diversity of biological structures in the brain, the impact of advances in this field on our current and future lives, and the many challenges that are being addressed (Herrera et al., 2020). Our brain achieves incredible levels of performance compared to our current human-designed, electronic-based computers, both in terms of capabilities and energy consumption, so it seems natural to try to find inspiring challenges from the study of our brain to develop new ideas in computer science, even if in many cases these challenges in neuroscience will not be solved without new experimental breakthroughs. In fact the study of our brain has been a source of inspiration for artificial intelligence (AI) since its inception (Turing, 1950). Many advances in machine learning, from perceptrons (McCulloch and Pitts, 1943) to deep learning techniques (LeCun et al., 2015), have been inspired by the analogy with the structure of neural networks in our brains, even if we don't really know how they might handle learning tasks. It must also be said that the exploration of some of the most successful ideas in machine learning to establish an analogy with neural processes has been pursued extensively, but technical problems, such as the difficulty of establishing backpropagation in neural circuits, have not allowed these solutions to be applied by direct analogy to understanding our brain (Lillicrap et al., 2020). At present, both the neuroscientific and the machine learning scientific community promote the interest of this interplay at a global level (Richards et al., 2019; Zador et al., 2022). One such inspiring global challenge is to understand how our brains store and retrieve information, i.e. how our memory processes work. The "engram", a term proposed by Richard Semon (Semon, 1921) to refer to the physical substrate of our memory, is still a very active topic of research. 
After long studies trying to find the localisation of "engrams" in the brain (Eichenbaum, 2016), and despite significant advances in the knowledge of neural mechanisms in recent years, the reality is that we do not know the details of how our brain stores the memories it perceives (Josselyn and Tonegawa, 2020; Berlot, Popp and Diedrichsen, 2018; Gebicke-Haerter, 2014; Han et al., 2022; Fuentes-Ramos, Alaiz-Noya and Barco, 2021). In the field of machine learning, the issue of memory is considered somewhat secondary, as information is naturally stored digitally. However, the common interest in episodic memory, which is key for predictive tasks, and its relationship to attentional mechanisms (Vaswani et al., 2017) has significantly increased the potential convergence of the two fields in recent years. As a starting point, we have chosen a fascinating question related to this challenge: the possible existence of "concept cells" (Quiroga, 2012). Concept cells are individual neurons that selectively fire at an image or text that corresponds to a given identity, as measured in the brains of different people in many different examples. The opinion paper "No Pattern Separation in the Human Hippocampus" (Quian Quiroga, 2020) summarises the very interesting chain of developments in this field, triggered by his team's discovery in 2005 of what were initially referred to as the "Jennifer Aniston" neurons, the first example of an idea that had previously been proposed within the community without a clear scientific basis, the "grandmother cells" (Gross, 2002). The results were popularised by the media, as the images presented included celebrities such as Jennifer Aniston (JA in what follows), Halle Berry or Jackie Chan, to name but a few. While the arguments presented in the cited paper and the long list of accompanying references provide clear support for the basic hypothesis, i.e. that concept cells underlie the engrams that support conceptual memory, there is no explicit "mechanistic" model of how these engrams could be encoded in a "sparse" mode while at the same time leading to the activation of a single neuron.

## 2 An analogy for structures supporting engrams: autoencoders

Encoders, and in particular autoencoders ('Autoencoder', 2022), are a very interesting method that has benefited from the application of deep learning and convolutional neural network techniques. The basic idea is simple: to achieve a large reduction in dimensionality by applying a coding filter to large samples of data, such as images, and projecting them onto a multi-dimensional vector, the latent space. Our approach is also simple: we start with the scheme of an engram (memory) system for storing images, which includes four different interrelated parts:

- an encoding system, which receives as input images from a vision system
- a latent space, a set of nodes that store the vector values that are the output of the encoding system
- a decoding system capable of "recovering" the input image from a vector value in the latent space
- critically, a layer of "concept nodes" that connect all value points in the latent space that are related to the same concept

A basic didactic scheme of these ideas is shown in figure 1. The previous scheme, which can be easily implemented in a computer, already raises interesting questions about a possible analogous implementation in the neuronal circuits of the brain.
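To make the computational side of this scheme concrete, a minimal (and deliberately simplistic) autoencoder sketch in PyTorch is shown below. The layer sizes, the 32-dimensional latent space, and the reconstruction loss are arbitrary illustrative choices, not the architecture proposed in the text.

```python
import torch
import torch.nn as nn

class ImageAutoencoder(nn.Module):
    """Minimal dense autoencoder: images are compressed to a small latent
    vector (the 'latent space' above) and reconstructed by the decoder."""
    def __init__(self, n_pixels=28 * 28, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_pixels), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)          # point in the latent space
        return self.decoder(z), z    # reconstruction + latent code

model = ImageAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()               # one possible reconstruction ("matching") loss

x = torch.rand(64, 28 * 28)          # stand-in batch of flattened images
recon, z = model(x)
loss = loss_fn(recon, x)             # compare the reconstruction with the original input
opt.zero_grad(); loss.backward(); opt.step()
```

The comparison between `recon` and `x` is the "matching for prediction" step discussed in this note: the loss measures how well the compressed latent code allows the original input to be recovered.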
This memory system could be built with neurons, dendrites or even sets of dendritic spines as nodes, while connections would be established as synapses, which could be either binary (0/1, active/not active) or "analogical" valued connections. The first idea to be explored in this analogy is how the information might be encoded, and in particular what kind of loss function might make sense, and what level of compression might be appropriate. As a first example, consider that our eyes as sensors have a resolution of about 500 Mpixels, while our visual memory capacity is probably much more limited, both as a short-term memory and, even more so, as a long-term memory. Thus, a first question to be raised following this analogy is what could be considered a realistic compression factor for images corresponding to a given person, and what kind of loss function makes sense. This question implies an analysis of the possible structure of the coding layer, the latent space, and also of the decoding layer. This decoding structure may or may not be symmetrical to the encoding structure, in which case the loss function may have a different resolution. This question raises the very interesting point of what external information is perceived by our senses but lost when processed in our brain. And on the other hand, what artefacts might be created by our brain when it tries to recover the original information from the compressed information encoded in the latent spaces. The second, much trickier point, which is also related to the first question, is how encoding might occur physically in our brains, and how we might improve our computational encoders by analogy. In our computers, we use deep learning techniques, including backpropagation and convolutional filters, to train these autoencoders, and we "force" the learning process to apply the desired loss function, for example by using a cross-entropy comparison. Here we have no answer, but some more didactic considerations. After all, the process of matching the original image with the reconstructed one can be seen as an example of "prediction", since recognition is the basis for further action. This is a very interesting evolutionary argument to support the idea that autoencoders could be implemented in our brains and, more generally, that engrams supporting memory have appeared as structures in living organisms. The idea is to look for examples where the basic mechanism, i.e. matching a sensory input with a neural feedback, is implemented and this configuration is selected by a biological system. Figure 1: Basic scheme for storing information using auto-encoders in concept nodes. Note that the structure of the encoder/decoder system could be different for each type of information, defining different latent spaces. The identity of a concept, that is the concept node, is defined by the associations (values) of this concept node with the various vector values that the information corresponding to this concept receives in these latent spaces (pictures, texts, sounds, internal signals, etc.). There is another related argument to consider regarding autoencoders and their possible implementation as neural networks in animals: given their "simple" structure and scalability, it is possible that encoders could be considered as a basic structure that could be genetically defined, providing newborns with memory capacity and basic learning functions without the need for initial training on a large data set. 
There is a clear analogy with transfer learning methods in machine learning, where the initial architecture and weights of a new neural network for a specific task are provided by a previous neural network trained on a very large database of images. ## 3 Building a concept node: associating sparse memory over encoders In all nervous systems, including our brain, the external data is provided by perception, either images, including symbols, from vision, or sound, taste, smell, etc. There are also many internal data channels, including chemical and electrical signals. To define a complete JA neuron, all the related information must be linked. What is more, this information usually needs to be linked to a wider context. Returning to our computational scheme, we have a vector value in the latent space corresponding to a visual perception, a picture of JA, another vector value in the latent space corresponding to the text "Jennifer Aniston", and so on. The JA node simply links all these values together so that the complete concept JA can be recovered from these connections. This structure can be implemented as a noSQL database, where each node corresponds to a concept. For computers, this is an associative task, associating a key (label JA) with different values. For our neurons in the brain, we could speculate that the initial association of an image and a text, which will give rise to a label or key, could be triggered by synchronicity, following the well-known formula "neurons that fire together, wire together" (Hebb, 1949). The JA neuron would link the representations in the two different latent spaces of the image and the text, as they would fire at the same time. We can also imagine that when we see a new text with the words 'Jennifer Aniston', the vector value in the latent space for words will be the same, and in this way we can establish a link with the JA node. However, it is not obvious how we can use the existing autoencoder to identify a new image of JA and link it to the existing key, since the new vector value in the latent space of images will not in principle be close to the previous vector value for the first image. We need a computational solution that applies a supervised classification on top of an unsupervised solution, which is what the autoencoder is. There are several possible computational approaches to this problem. The first could be quite direct: we apply transfer learning starting from a pre-existing neural network that has been trained in supervised mode on a very large database of images that have been classified, and use the new images to refine these pre-existing categories. For example, ResNet (He et al., 2015) is a neural network that, once trained on the ImageNet database (Deng et al., 2009), we could use as a basis for transfer learning. Following this approach, we could modify the previous architecture and consider the latent space as a pre-classifier layer. A new image of JA would initially be given the same values to classify in the scheme (or thesaurus) used by the training database (in the previous case it would be WordNet (Miller, 1995) and the image would be classified as corresponding to a woman), and in this way it could be linked to the previous image, which already exists as a concept node. As new images are initially labelled, a supervised classification scheme begins to be defined. 
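A toy version of the concept-node index sketched above, in which a key links latent codes from different latent spaces and new codes are attached to an existing concept by proximity in the image latent space, might look as follows. The dimensions, the distance threshold, and the concept keys are hypothetical illustrations, not a proposed implementation.

```python
import numpy as np

# Toy concept-node index: each concept key links latent vectors from different
# modalities (hypothetical 32-d image codes and 16-d text codes).
concept_index = {
    "jennifer_aniston": {"image": [np.random.rand(32)], "text": [np.random.rand(16)]},
    "jackie_chan":      {"image": [np.random.rand(32)], "text": [np.random.rand(16)]},
}

def nearest_concept(z_image, index, threshold=2.0):
    """Assign a new image latent code to the concept whose stored image codes
    are closest (locality in the latent space); None if nothing is close enough."""
    best, best_dist = None, np.inf
    for key, spaces in index.items():
        for z in spaces["image"]:
            d = np.linalg.norm(z_image - z)
            if d < best_dist:
                best, best_dist = key, d
    return best if best_dist < threshold else None

# A slightly perturbed code stands in for a new picture of the same identity.
z_new = concept_index["jennifer_aniston"]["image"][0] + 0.01 * np.random.rand(32)
key = nearest_concept(z_new, concept_index)      # -> "jennifer_aniston"
if key is not None:
    concept_index[key]["image"].append(z_new)    # link the new evidence to the concept node
```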
A second approach, which is probably not too different in practice, could be to build the autoencoder in such a way that different images of the same concept correspond to very close points in the latent space, so that a concept is defined by locality in such a latent space, and the connection of one image to a previous one is defined by a minimum distance in this latent space. The structure of the latent space could be extended in this way to support hierarchical conceptualisation, providing a kind of generalisation. Several proposals, such as concept splatters (Grossmann, Groller and Waldner, 2022), have already been made to structure latent spaces in this direction. We can now begin to have an initial scheme for organising the different information associated with a given 'identity' such as JA. All visual information is "encoded" in a neural network oriented towards encoding images, and a corresponding latent space; all textual information is "encoded" in another neural network oriented towards encoding words, and a corresponding latent space; all information perceived as sound is "encoded" in another neural network, and so on. The "concept" neurons are implemented as indexes that connect the latent spaces and allow joint retrieval of the information associated with the "concept". The complete machine learning system to support engrams would be a combination of autoencoders embedded in deep convolutional neural network classifiers developed using transfer learning. The concept nodes would be indexes stored in a hierarchical noSQL database linking the points in the corresponding latent spaces. Translating this scheme to our brain, we can consider the ideas described long ago (Teyler and DiScenna, 1986; Treves and Rolls, 1994) and further developed in several papers proposing computational models involving the hippocampus (Kesner and Rolls, 2015), which is usually considered a central hub for many cognitive activities, including memory (Lisman et al., 2017). In our inspiring model, concept neurons, and presumably latent space encoding, could be located in hippocampal areas, while the corresponding encoding-decoding neural networks that support most of the information in the engram could be developed in the different cortical areas already identified by their different functional properties: visual cortex, language area, etc. This proposal overcomes the apparent conflict of a limited storage capacity due to the limited number of neurons in the hippocampus, whose main function would be to support indexing activity and associative connections to configure a conceptual space or cognitive map, as also discussed in recent work (Whittington et al., 2022). These associative connections would be established by synchrony, between points in different latent spaces, and by locality, between different points in the same latent space. ## 4 Global reflection and next questions Our ultimate goal is to learn from the knowledge and ideas of neuroscience to advance machine learning: we know that our brain is a much more energy-efficient machine than our computers, and that it is better at complex and abstract tasks. The complexity of the brain is reflected in the diversity and increasing number of publications on different topics in neuroscience. Similarly, publications on data science methods and applications have increased exponentially in recent years, following their success in solving problems that can be considered as artificial intelligence questions. 
In this sense, we have shown how a challenging question in neuroscience, the implementation of engrams in our brain, can trigger an interesting analysis from a computational point of view. Moreover, once a certain hypothesis in neuroscience is considered, whether correct or not, in this case the existence of concept neurons, different questions and possible solutions can be proposed from a data science point of view, stimulating the search for new ideas. The technical and scientific complexity of both fields, neuroscience and data science, makes interdisciplinarity a must in order to advance both fields together, as demanded by both communities. Developing such an exploration initially from only one perspective may generate many hypotheses, most of them wrong, but some of them may stimulate reflection from the other perspective. What we can be sure of is that the search for answers would benefit from more intense collaboration. Note that the proposal we have presented is not very original from the point of view of machine learning, since both autoencoders and NoSQL databases are well-known solutions for storing information. The key question is whether we could improve the design or combination of these computational tools by knowing how our brain works. Following a bottom-up analysis in neuroscience, the first question is to better understand neurons as cells, and also many other cells in the brain, to be able to explore their individual and collective properties either in simulations or in nanoprototypes, to understand their functionality, and to try to integrate these features into computational neural networks. In this respect, there are new possibilities to be explored with respect to the architecture of autoencoders, following the recent knowledge of the brain connectome and the relevant role of inhibitory neurons (Shapson-Coe et al., 2021), astrocytes (Labate and Kayasandik, 2023) or the consideration of computation at the dendritic level (Acharya et al., 2022). It's worth remembering, however, that we don't yet have a realistic simulation of a cell, and that neurons are a very complex and diverse type of cell. There are many topics being studied in the neuron as a cell, from metabolism to the origin of electrical potentials and excitability, even more at the dendritic level, and most of them could be crucial for understanding how neuronal circuits also work as complex systems. In any case, our main interest would be to find an idea of how the brain could assemble these cells and process the internal signals to be able to learn and memorise in such an efficient and powerful mode compared to our current techniques in machine learning. From our point of view, the main conclusion of this short note is that it would be interesting to explore, from a neuroscientific point of view and probably from an evolutionary perspective, an energy-efficient biological mechanism providing almost instantaneous basic pattern-matching capabilities, similarly to what autoencoders do using time-consuming and energy-intensive machine learning methods.
2301.00734
Nonlinear Non-Hermitian Landau-Zener-Stückelberg-Majorana interferometry
In this work, we have studied the non-Hermitian nonlinear LZSM interferometry in a non-Hermitian N-body interacting boson system in which the non-Hermiticity comes from the nonreciprocal tunnelings between the bosons. By using the mean-field approximation and the projective Hilbert space, the effect of nonreciprocity and nonlinearity on the energy spectrum, the dynamics, and the formation of the interference fringes has been studied. The different symmetries and the impact of the two different types of reciprocity, i.e. the in-phase tunneling and anti-phase tunneling, on the energy spectrum and the phase transition between the Josephson oscillation and the self-trapping have been investigated. For the LZSM interferometry, the strength of the nonreciprocity is found to play an essential role in the population of the projective state and the strengths of the interference patterns in the projective space, while the conditions of destructive and constructive interference under the weak-coupling approximation still depend only on the strength of the nonlinearity. Our result provides an application of the nonlinear non-Hermitian LZSM interferometry in studying the parameters of a non-Hermitian nonlinear two-level system which are related to the nonlinearity and the non-Hermiticity.
Xin Wang, H. D. Liu, L. B. Fu
2023-01-02T15:59:07Z
http://arxiv.org/abs/2301.00734v1
# Nonlinear Non-Hermitian Landau-Zener-Stuckelberg-Majorana interferometry

###### Abstract

In this work, we have studied the non-Hermitian nonlinear LZSM interferometry in a non-Hermitian N-body interacting boson system in which the non-Hermiticity comes from the nonreciprocal tunnelings between the bosons. By using the mean-field approximation and the projective Hilbert space, the effect of nonreciprocity and nonlinearity on the energy spectrum, the dynamics, and the formation of the interference fringes has been studied. The different symmetries and the impact of the two different types of reciprocity, i.e. the in-phase tunneling and anti-phase tunneling, on the energy spectrum and the phase transition between the Josephson oscillation and the self-trapping have been investigated. For the LZSM interferometry, the strength of the nonreciprocity is found to play an essential role in the population of the projective state and the strengths of the interference patterns in the projective space, while the conditions of destructive and constructive interference under the weak-coupling approximation still depend only on the strength of the nonlinearity. Our result provides an application of the nonlinear non-Hermitian LZSM interferometry in studying the parameters of a non-Hermitian nonlinear two-level system which are related to the nonlinearity and the non-Hermiticity.

Engrams concept neuron autoencoder sparse memory neural networks

## I Introduction

The quantum two-level system (TLS) is the most basic part of physical systems. Among them, the Landau-Zener (LZ) transition between two levels at an avoided crossing [1; 2; 3] has received widespread attention. When these two-level systems are under a strong periodic driving field, a series of LZ transitions occurs and the transition probability exhibits a periodic dependence on the phase (Stuckelberg phase) accumulated between transitions [1; 4]. The periodic change is called Landau-Zener-Stuckelberg-Majorana (LZSM) interferometry [5; 6]. With the development of research, LZSM interferometry has become an important phenomenon in quantum science and technology. On the one hand, LZSM interferometry is used for ultra-fast universal quantum control of a quantum-dot charge qubit [7] and for characterizing qubit dephasing [8], etc. On the other hand, it has been involved in many fields so far, such as molecular nanomagnets [9; 10], quasi-one-dimensional layered materials [11; 12], ultracold molecules [13], quantum noise [14], Bose-Einstein condensates [15; 16; 17; 18; 19], Rydberg atoms [20], etc. Interestingly, if a two-level system takes into account the nonlinear interaction, it may produce unexpected interference features [21; 22; 23; 24; 25; 26]. For the non-linear LZ model, the self-trapping phase transition may occur in LZSM interferometry [27; 28; 29; 30; 31], and there may be exceptional ring structures in the energy spectra [32; 33]. In recent years, non-Hermitian quantum systems with real energy spectra have received widespread attention in theory and experiment [34; 35; 36; 37; 38; 39; 40; 41]. There are two kinds of non-Hermiticity, and correspondingly two kinds of non-Hermitian Hamiltonians: those describing nonreciprocal systems with asymmetric coupling strengths [42; 43; 44; 45; 46] and those describing gain-loss systems [37; 38; 39; 40; 41]. Bender and Boettcher discovered a series of parity-time (PT)-symmetric Hamiltonians [47], which could result in real energy spectra.
Mostafazadeh generalized this type of Hamiltonian to a \(\eta\)-pseudo-Hermitian quantum theory which explains the conditions for the non-Hermitian system to have the real energy spectra (\(\eta\) is a positive Hermitian operator) [48; 49; 50]. The theory has been applied in many fields for more than ten years of development, such as quantum field theory [51; 52; 53; 54; 55], super-symmetric quantum mechanics [56; 57], non-commutative field theory [58], quantum information [59], etc. Especially, there always exists some exceptional points (EPs) in the real energy spectrum of the non-Hermitian system [60; 61], at which two or more eigenstates of the system coalesce. These EPs of the energy spectrum in the parameter space are closely related to the symmetry, topological properties, and phase transitions of the system [34; 35; 36]. Consequently, efforts have been put forward to extend the study of LZ problem to non-Hermitian system [62; 63; 64; 65; 66]. Therefore, for non-Hermitian systems and nonlinear LZSM interference, it is natural to ask how will the energy spectrum of the nonlinear LZ system changes if the non-Hermiticity emerges? Will non-linearity affect EPs? Since the populations of the bare states on the adiabatic eigenstates normally can not be normalized by a time-independent coefficient [66]. Can the interesting self-trapping effect in the case of nonlinear non-Hermitian still be observed? We shed lights on these questions in this paper. By setting up the projective Hilbert space, we show that the populations of the projective quantum states can still achieve LZSM interferometry and analyzed the influence of non-Hermicity and nonlinearity on the energy spectra and the interference. Then, we discussed the influence of non-Hermitian on the self-trapping effect. Finally, under the weak-coupling approximation of the projective quantum states, we further demonstrated the validity and accuracy of the proposed method. The structure of the paper is as follows. In Sec.II, we introduce a non-Hermitian \(N\)-body interacting boson system which is equivalent to a nonlinear nonreciprocal two-level system with periodic driving in the mean-field approximation, and discussed the energy spectrum of this two-level system, In Sec.III, the influence of nonlinear strength and non-Hermiticity on LZSM interferometry and the self-trapping effects has been studied. Under the weak-coupling limit, the non-Hermicity does not affect the conditions of destructive interference and constructive interference. Finally, the conclusions are summarized in Sec.IV. ## II Nonlinear nonhermitian two-level model The second quantized Hamiltonian of a nonreciprocal interacting-boson system is \[\hat{H_{0}}=\frac{\gamma}{2}(\hat{a}^{\dagger}\hat{a}-\hat{b}^{\dagger}\hat{b}) +\frac{\Delta_{2}}{2}\hat{a}^{\dagger}\hat{b}+\frac{\Delta_{1}}{2}\hat{a}\hat {b}^{\dagger}-\frac{c}{4N}(\hat{a}^{\dagger}\hat{a}-\hat{b}^{\dagger}\hat{b}) ^{2}, \tag{1}\] where annihilation operators \(\hat{a},\hat{b}\) and generation operators \(\hat{a}^{\dagger},\hat{b}^{\dagger}\) are for the different quantum states that are the left and right well in the double-well BEC system. \(\gamma=A\sin(\omega t)+\epsilon_{0}\) is the monochromatic driving field with amplitude \(A\), frequency \(\omega\), and offset \(\epsilon_{0}\). \(c\) is the interaction strength between bosons, \(\Delta_{i}\) (\(i=1,2\)) is the tunneling amplitude. 
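As a quick sanity check of Eq. (1), the Hamiltonian can be written as an \((N+1)\times(N+1)\) matrix in the two-mode Fock basis \(|n_a, N-n_a\rangle\). The sketch below, with arbitrary illustrative parameter values (the paper does not specify these), simply verifies that the matrix is non-Hermitian whenever \(\Delta_1\neq\Delta_2\), while its spectrum remains real in the in-phase case \(\Delta_1\Delta_2>0\), consistent with the discussion that follows.

```python
import numpy as np

def second_quantized_H(N, gamma, delta1, delta2, c):
    """Matrix of Eq. (1) in the two-mode Fock basis |n_a, N - n_a>, n_a = 0..N."""
    dim = N + 1
    H = np.zeros((dim, dim))
    for na in range(dim):
        nb = N - na
        H[na, na] = 0.5 * gamma * (na - nb) - c / (4 * N) * (na - nb) ** 2
        if nb >= 1:   # (Delta2/2) a^dagger b : |na, nb> -> |na+1, nb-1>
            H[na + 1, na] += 0.5 * delta2 * np.sqrt((na + 1) * nb)
        if na >= 1:   # (Delta1/2) a b^dagger : |na, nb> -> |na-1, nb+1>
            H[na - 1, na] += 0.5 * delta1 * np.sqrt(na * (nb + 1))
    return H

# Fixed snapshot of the drive gamma(t); values chosen only for illustration.
H = second_quantized_H(N=20, gamma=0.5, delta1=2.0, delta2=0.5, c=1.0)
print(np.allclose(H, H.T))                              # False: nonreciprocal tunneling
print(np.max(np.abs(np.linalg.eigvals(H).imag)))        # ~0 here (in-phase case, Delta1*Delta2 > 0)
```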
When the total number of bosons \(N\rightarrow\infty\), all particles are assumed to be in the same spin coherent state in the mean-field approximation [67, 68]. Considering that the quantum states of the non-Hermitian system are in a dual Hilbert space to keep the normalize condition [50], the selected coherent states need to be defined by both left and right states as \[\begin{split}|\Psi^{r}_{\infty^{\prime}}\rangle&= \frac{1}{\sqrt{N!}}(\alpha_{1}\hat{a}^{\dagger}+\beta_{1}\hat{b}^{\dagger})^{N }|\emptyset\rangle,\\ |\Psi^{l}_{\infty^{\prime}}\rangle&=\frac{1}{\sqrt{N!}}(\alpha_{2}\hat{a}^{\dagger}+\beta_{2}\hat{b}^{\dagger})^{N}|\emptyset \rangle,\end{split} \tag{2}\] Based on this, we derive the semi-classical Hamiltonian (see Appendix. A) \[\begin{split}\hat{H}_{M}&=\frac{\langle\Psi^{l}_{ \infty}|\hat{H}_{0}|\Psi^{r}_{\infty^{\prime}}\rangle}{N}\\ &=\frac{\gamma}{2}(\alpha_{1}\alpha_{2}^{*}-\beta_{1}\beta_{2}^{ *})+\frac{\Delta_{2}}{2}\alpha_{2}^{*}\beta_{1}+\frac{\Delta_{1}}{2}\alpha_{1} \beta_{2}^{*}-\frac{c}{4}(\beta_{1}\beta_{2}^{*}-\alpha_{1}\alpha_{2}^{*})^{2},\end{split} \tag{3}\] by the dynamical evolution of the semiclassical Hamiltonian [67] \[i\dot{\alpha}_{1}=\frac{\partial\hat{H}_{m}}{\partial\alpha_{2}^{*}},\qquad \quad i\dot{\beta}_{1}=\frac{\partial\hat{H}_{m}}{\partial\beta_{2}^{*}}, \tag{4}\] we can construct the following dimensionless Schrodinger equation \[i\frac{\partial}{\partial t}\begin{pmatrix}\alpha_{1}\\ \beta_{1}\end{pmatrix}=\hat{H}_{mF}\begin{pmatrix}\alpha_{1}\\ \beta_{1}\end{pmatrix}, \tag{5}\] with the MF Hamiltonian \[\hat{H}_{mF}=\begin{pmatrix}\frac{\gamma}{2}+\frac{c}{2}(\beta_{1}\beta_{2}^{ *}-\alpha_{1}\alpha_{2}^{*})&\frac{\Delta_{1}}{2}\\ \frac{\Delta_{2}}{2}&-\frac{\gamma}{2}-\frac{c}{2}(\beta_{1}\beta_{2}^{*}- \alpha_{1}\alpha_{2}^{*})\end{pmatrix}, \tag{6}\] and state \(|\psi^{r}\rangle=(\alpha_{1},\beta_{1})^{T}\). Therefore, the model Hamiltonian under periodic driving can be described by a nonlinear nonreciprocal two-level Hamiltonian \[\hat{H}=\frac{\Delta_{1}+\Delta_{2}}{4}\hat{\sigma}_{x}+\frac{\Delta_{1}- \Delta_{2}}{4}i\hat{\sigma}_{y}+\frac{\gamma(t)+c(\beta_{1}\beta_{2}^{*}- \alpha_{1}\alpha_{2}^{*})}{2}\hat{\sigma}_{z} \tag{7}\] where \(\hat{\sigma}_{x,y,z}\) are the Pauli matrices, \(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}\) are the probability amplitudes. The dynamic equations of the system are [50] \[i\frac{\partial}{\partial t}|\psi^{r}\rangle=\hat{H}|\psi^{r}\rangle,\quad i \frac{\partial}{\partial t}|\psi^{l}\rangle=\hat{H}^{\dagger}|\psi^{l}\rangle, \tag{8}\] where \(\langle\psi^{l}|\psi^{r}\rangle=1\) and the quantum states \[|\psi^{r}\rangle=\alpha_{1}\ket{\uparrow}+\beta_{1}\ket{\downarrow},\quad| \psi^{l}\rangle=\alpha_{2}\ket{\uparrow}+\beta_{2}\ket{\downarrow} \tag{9}\] are represented under the diabatic basis \(\{\ket{\uparrow},\ket{\downarrow}\}\) with spin eigenstates \(\ket{\uparrow}\) and \(\ket{\downarrow}\). For the adiabatic basis, the left and right instantaneous eigenstates of the time-dependent Hamiltonian \(\hat{H}\) are derived by[50] \[\hat{H}|\psi^{r}_{n}\rangle=E_{n}|\psi^{r}_{n}\rangle,\quad\hat{H}^{\dagger}| \psi^{l}_{n}\rangle=E^{*}_{n}|\psi^{l}_{n}\rangle, \tag{10}\] where \(\langle\phi^{l}_{m}|\phi^{r}_{n}\rangle=\delta_{nm}\) (\(n=1,2\)), the eigenenergies \(E_{n}(t)\) are determined by the quartic equation (see Appendix. B) \[E^{4}+cE^{3}+\frac{1}{4}(c^{2}-\gamma^{2}-\Delta_{1}\Delta_{2})E^{2}-\frac{c \Delta_{1}\Delta_{2}}{4}E-\frac{\Delta_{1}\Delta_{2}c^{2}}{16}=0. 
\tag{11}\] By solving equation (11), we draw the energy spectrum of the system (7) (see Fig.1 and Fig.2). The two parameters \[\Delta\equiv\sqrt{|\Delta_{1}\Delta_{2}|},\quad k\equiv\sqrt{|\Delta_{1}/ \Delta_{2}|} \tag{12}\] Figure 1: Time evolution of the energy levels for different offsets: (a) \(\epsilon_{0}=0\) and (b) \(\epsilon_{0}=5\), where \(A=10\), \(\omega=1\) and \(\Delta_{1}\Delta_{2}>0\). The time-dependent adiabatic energy levels (i.e., \(\Delta=1\)) are shown by the red (\(c=0\)) and black (\(c=3\)) dashed lines, while the diabatic energy levels (i.e., \(\Delta=0\) ) are shown by the blue (\(c=0\)) and green (\(c=3\)) solid lines. are introduced to describe the mean tunneling amplitude and the nonreciprocity. In the in-phase tunneling case \(\Delta_{1}\Delta_{2}>0\) as shown in Fig.1, the energy spectrum of the system (7) is the same as the Hermitian Hamiltonian \(\hat{H}_{h}=\frac{\Delta}{2}\hat{\sigma}_{x}+\frac{\gamma(t)+c(|\beta|^{2}-| \omega|^{2})}{2}\hat{\sigma}_{z}\). Therefore, the Hamiltonian \(\hat{H}\) and quantum states \(|\psi^{\prime}\rangle\) of the two nonreciprocal systems can be related to the Hermitian system by following relation \[\hat{H}_{h}=\hat{S}\hat{H}\hat{S}^{-1},\qquad|\psi\rangle=\hat{S}|\psi^{\prime }\rangle=\left(\begin{array}{c}\alpha_{1}\\ k\beta_{1}\end{array}\right). \tag{13}\] where \(\hat{S}=\left(\begin{array}{cc}1&0\\ 0&k\end{array}\right)\). Compared with \(\hat{H}_{h}\), the nonreciprocity, which only affects the eigenstates of the system, neither changes the eigenvalue nor destroys the symmetry of the system. In the anti-phase tunneling case \(\Delta_{1}\Delta_{2}<0\) as shown in Fig.2, the non-adiabatic energy levels have a series of degenerate points (EPs) when \(c=0\) (see the crossing points of red dash lines in Fig.2, and the imaginary parts of \(E_{n}\) are not shown). Interestingly, when the nonlinearity is added (\(c\neq 0\)), the EPs disappear and the near-degenerate regions are formed (see the black dashed lines in Fig.2). When considering the offset (\(\epsilon_{0}\neq 0\)), the near-degenerate regions disappear near the times \(\dot{t}_{n}=\frac{t_{1}+t_{2}}{\tau}+\frac{2\omega t}{\omega}\) (with \(n\) being an integer), the period changes from \(\frac{\omega t}{\omega}\) to \(\frac{2\omega t}{\omega}\), and the ring energy levels will tend to degenerate at times \(t_{1}+\frac{2\omega t}{\omega}\)(with \(m\) being an integer) as \(\epsilon_{0}\) increases as shown in Fig.2. Obviously, the nonlinearity affects the EPs. By equation (11), \(E_{n}=0\) is the root of the equation iff \(c\Delta_{1}\Delta_{2}=0\). Therefore, the existence of \(c\) does not allow the existence of EPs in the anti-phase tunneling case \(\Delta_{1}\Delta_{2}<0\). Next, we analyzed the cases of the existence of real roots of the energy spectrum. For the special cases \(c=0\), the eigenenergies of the system are \(\pm\sqrt{\gamma^{2}(t)+\Delta_{1}\Delta_{2}}\). It is easy to find that the EPs emerge at \(\gamma^{2}(t)=-\Delta_{1}\Delta_{2}\) in the anti-phase tunneling case \(\Delta_{1}\Delta_{2}<0\). For \(c\neq 0\), the nature (real or not) of the roots of the energy equation (11) depend on the sign of \[\delta=-c^{2}\gamma^{2}\Delta_{1}\Delta_{2}\xi, \tag{14}\] with \(\xi=((c^{2}-\gamma^{2}-\Delta_{1}\Delta_{2})^{3}-27c^{2}\gamma^{2}\Delta_{1} \Delta_{2})\). When \(\delta>0\), there are two real roots and a pair of conjugate complex roots. The system will always have real eigenenergies. 
When \(\delta<0\), the equation has four unequal real roots if \(c^{2}+2(\Delta_{1}\Delta_{2}+\gamma^{2})\) and \((\Delta_{1}\Delta_{2}+\gamma^{2})(2c^{2}+\Delta_{1}\Delta_{2}+\gamma^{2})\) are both positive. Otherwise, the equation has two pairs of unequal conjugate complex roots. Obviously, for the in-phase tunneling case \(\Delta_{1}\Delta_{2}>0\), there always exist real eigenenergies of the system. For the anti-phase tunneling case with \(\delta<0\), the conditions under which the energy equation has real roots can be simply described as \(\frac{\gamma^{2}}{\Delta^{2}}>1\) in \(f(\frac{c}{\Delta},\frac{\gamma}{\Delta})=[(\frac{c}{\Delta})^{2}-(\frac{\gamma}{\Delta})^{2}+1]^{3}+27(\frac{c}{\Delta})^{2}(\frac{\gamma}{\Delta})^{2}<0\). Interestingly, \(\frac{\gamma}{\Delta}=\pm 1\) are exactly the tangent lines of \(f(\frac{c}{\Delta},\frac{\gamma}{\Delta})=0\). Therefore, the condition is naturally satisfied (as shown in Fig.3), so we get the same conclusion as for \(\Delta_{1}\Delta_{2}>0\). Finally, we consider another two special cases: \(\gamma=0\) and \(\xi=0\). The eigenenergies are all complex only when \(\delta=0\), \(c(\Delta_{1}\Delta_{2}-\gamma^{2})=0\), \((\Delta_{1}\Delta_{2}+\gamma^{2})(2c^{2}+\Delta_{1}\Delta_{2}+\gamma^{2})=0\) and \(c^{2}+2(\Delta_{1}\Delta_{2}+\gamma^{2})<0\). For \(c\neq 0\) and \(\Delta_{1}\Delta_{2}\neq 0\), these conditions cannot be satisfied at the same time. In a word, the system always has real eigenenergies. These results on the nature of the eigenenergies can be explained by the symmetry related to the different types of nonreciprocity. For the in-phase tunneling case \(\Delta_{1}\Delta_{2}>0\), the symmetry of the system is unbroken since the system can be transformed into a Hermitian one with \(\hat{S}\). Therefore, real eigenenergies are guaranteed. This is, however, not a necessary result for the anti-phase case \(\Delta_{1}\Delta_{2}<0\). Although the nonlinearity \(c\) makes EPs disappear in the evolution of \(E_{n}\), the eigenvalues of one energy state are still complex. For these two cases, it is inevitable to have different effects on the evolution of states. So next we will analyze the dynamic evolution of the two cases based on the method of the projective Hilbert space.

Figure 2: Time evolution of the energy levels for different offsets: (a) \(\epsilon_{0}=0\) and (b) \(\epsilon_{0}=5\), where \(A=10\), \(\omega=1\) and \(\Delta_{1}\Delta_{2}<0\). The time-dependent adiabatic energy levels (i.e., \(\Delta=\sqrt{|\Delta_{1}\Delta_{2}|}=1\)) are shown by the red (\(c=0\)) and black (\(c=3\)) dashed lines, while the diabatic energy levels (i.e., \(\Delta=0\) ) are shown by the blue (\(c=0\)) and green (\(c=3\)) solid lines.

## III Nonlinear non-Hermitian LZSM interferometry

In the nonlinear Hermitian LZ system, the LZSM interference patterns can be destructive or constructive, which is determined by the Stuckelberg phases, and the nonlinearity can strongly change the features of the LZSM interferometry. As shown in Fig. 4, the interference pattern of \(|\alpha_{1}|^{2}\) is axisymmetric for the linear in-phase tunneling case (\(c=0\), \(\Delta_{1}\Delta_{2}>0\)). In the nonlinear case (\(c\neq 0\)), the symmetry of the interference pattern is destroyed (as shown in Fig. 4b). When \(c=0\) and \(\Delta_{1}\Delta_{2}<0\), the EPs make the interference patterns divergent and form a singular region (white area in Fig. 4c). It is hard to study the influence of each parameter on the features of the LZSM interferometry.
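Before turning to the projective-space analysis, note that the adiabatic levels in Figs. 1 and 2 follow directly from the quartic equation (11) and are easy to obtain numerically. A small sketch is given below; the drive parameters follow the caption of Fig. 2(a), while the particular split \(\Delta_1=1\), \(\Delta_2=-1\) is one illustrative choice consistent with \(\Delta=1\) and \(\Delta_1\Delta_2<0\).

```python
import numpy as np

def adiabatic_energies(gamma, c, delta1, delta2):
    """Roots of the quartic Eq. (11):
    E^4 + c E^3 + (c^2 - gamma^2 - D1 D2)/4 E^2 - c D1 D2 / 4 E - D1 D2 c^2 / 16 = 0."""
    d = delta1 * delta2
    coeffs = [1.0, c, 0.25 * (c**2 - gamma**2 - d), -0.25 * c * d, -d * c**2 / 16.0]
    return np.roots(coeffs)

# Scan one driving period of gamma(t) = A sin(w t) + eps0 (anti-phase case).
A, w, eps0, c = 10.0, 1.0, 0.0, 3.0
delta1, delta2 = 1.0, -1.0            # Delta1*Delta2 < 0, Delta = 1
for t in np.linspace(0.0, 2 * np.pi / w, 5):
    E = adiabatic_energies(A * np.sin(w * t) + eps0, c, delta1, delta2)
    print(f"t = {t:5.2f}  E = {np.round(E, 3)}")
```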
Next, we propose the concept of projective Hilbert space (see AppendixC for detail) and find the effect of the nonreciprocity \(k\). Through equations (8), without losing generality, the quantum state \(|\psi^{\prime}\rangle\) can be defined as \[|\psi^{\prime}\rangle=e^{\mu(\phi)+i\nu(\phi)}|\tilde{\psi}\rangle=e^{\mu(t)+i \nu(\phi)}\left(\begin{array}{c}\tilde{a}\\ \tilde{b}\end{array}\right), \tag{15}\] with the normalization relation \(\langle\tilde{\psi}|\tilde{\psi}\rangle=1\) (\(\mu\) and \(\nu\) are two real parameters), where \(|\tilde{\psi}\rangle=\left(\begin{array}{c}\tilde{a}\\ \tilde{b}\end{array}\right)\) is the quantum state in the projective Hilbert space. Then, we draw the normalized interference patterns \(|\tilde{a}|^{2}=|\alpha_{1}|^{2}/(|\alpha_{1}|^{2}+|\beta_{1}|^{2})\) (see Fig.5). Comparing with \(|\alpha_{1}|^{2}\), the regulation of the parameters on the \(|\tilde{a}|^{2}\) interference pattern are emerge when \(c=0\). This is because the LZSM interference is determined by the Stuckelberg phases. The phases accumulated in the evolution process are retained in the quantum states \(|\tilde{\psi}\rangle\) in the projective Hilbert space by removing the divergence caused by the non-Hermitian term \(e^{\mu(t)}\). In Fig.5, when \(c=0\), the populations of the corresponding the projective quantum states in the singular region of the quantum states are limited to the values affected by the nonreciprocity \(k\). To further reveal the influence of parameter \(k\), we next start from the simplest case with \(c=0\) and then analyze the case with \(c\neq 0\). Then, we demonstrated the validity and accuracy of the proposed method and numerical results in the weak-coupling limit. ### The effect of nonreciprocity and the projective quantum states in the linear non-Hermitian system Assuming \(c=0\), the Hamiltonian of the system (7) becomes \[\hat{H}_{mF}=\left(\begin{array}{cc}\frac{\gamma}{2}&\frac{\Delta_{1}}{2}\\ \frac{\Delta_{2}}{2}&-\frac{\gamma}{2}\end{array}\right), \tag{16}\] where \(\Delta_{1}\Delta_{2}<0\). Consider the quantum state \(|\psi^{\prime}\rangle=e^{\mu+i\nu}|\tilde{\psi}\rangle=e^{\mu+i\nu}\left( \begin{array}{c}\tilde{a}\\ \tilde{b}\end{array}\right)\), and Eq. (8), one can get \[\begin{split}\dot{\mu}&=-\frac{i}{2}\langle\tilde{\psi}|\hat{H} -\hat{H}^{\dagger}|\tilde{\psi}\rangle,\\ \dot{\nu}&=-\frac{1}{2}\langle\tilde{\psi}|\hat{H}+\hat{H}^{ \dagger}|\tilde{\psi}\rangle+i\langle\tilde{\psi}|\hat{\psi}\rangle,\end{split} \tag{17}\] Substituting Eq. (17) and the definition \(|\tilde{\psi}\rangle=\left(\begin{array}{c}\tilde{a}\\ \tilde{b}\end{array}\right)\equiv\left(\begin{array}{c}\sin\frac{\theta}{2 }e^{i\varphi}\\ \cos\frac{\theta}{2}\end{array}\right)\) into equation (8), we have (see AppendixC for details) \[\begin{split}\dot{\theta}&=-\Delta_{1}\sin\varphi\cos^{2 }\frac{\theta}{2}-\Delta_{2}\sin\varphi\sin^{2}\frac{\theta}{2},\\ \dot{\varphi}&=-\gamma-\frac{\Delta_{1}}{2}\cot \frac{\theta}{2}\cos\varphi+\frac{\Delta_{2}}{2}\tan\frac{\theta}{2}\cos \varphi,\\ \dot{\mu}&=\frac{\Delta_{2}-\Delta_{1}}{4}\sin \theta\sin\varphi,\\ \dot{\nu}&=\frac{\gamma}{2}-\frac{\Delta_{2}}{2}\tan \frac{\theta}{2}\cos\varphi.\end{split} \tag{18}\] For \(\epsilon_{0}=0\), when the time is long enough, the projective state will always be on a certain circle (\(\theta=0\)) of the Bloch sphere (see Fig.6). By Eq. (18), we can get the equation of the circle where the projective quantum state finally lies. 
Surprisingly, we find the correlation between \(k\) and \(\theta_{0}=\lim_{t\to\infty}\theta\) to be \[k^{2}=\tan^{2}\frac{\theta_{0}}{2}. \tag{19}\] Therefore, in combination with Fig. 5, we can explain why \(|\tilde{a}|^{2}\) is limited to a certain value in the singular region. ### The influence of interaction and non-Hermiticity on the population in the projective Hilbert space In the nonlinear Hermitian system [33], i.e., \(\Delta=\Delta_{1}=\Delta_{2}\), when \(\epsilon_{0}=0\) and \(A\ll\omega\), the population of the system exhibits either the self-trapping phase transition or Josephson oscillations under the different nonlinearities, and the boundary line is \(c/\Delta=2\) [67; 69]. Based on this, we next study the nonlinear non-Hermitian LZSM interference patterns for \(\epsilon_{0}=0\) with different nonlinearities \(c\), non-Hermitian parameters \(k\) and mean amplitudes \(\Delta\) [see Fig. 7 and Fig. 9]. Firstly, we consider the in-phase tunneling case \(\Delta_{1}\Delta_{2}>0\), where the symmetry of the system is unbroken. For the Hermitian Hamiltonian \(\hat{H}_{h}\), near the boundary of the two different oscillation regimes, the maximum population of the self-trapping region is \(0.5\), and the amplitude then gradually decreases with the increase of \(c/\Delta\). The populations of the state for the non-Hermitian Hamiltonian \(\hat{H}\) with \(\Delta_{1}\neq\Delta_{2}\) differ from those for the Hermitian Hamiltonian \(\hat{H}_{h}\) only by a weight of \(k\), as shown in Eq. (13). Therefore, we can get \(|\tilde{a}|^{2}=k^{2}|\tilde{b}|^{2}\) at the boundary, similar to the Hermitian case. Consequently, the boundary line \(c/\Delta=2\) (red dashed line in Fig. 7) between the two regions (self-trapping and Josephson oscillation) is the same as that in the Hermitian system. The amplitude of the population of the projective quantum state is determined by the nonreciprocity \(k\), as shown in Fig. 7(a) and (b). Then, we consider the dynamical evolution of the projective quantum state near the boundary. By Eqs.
(8) and (15), one can obtain \[\begin{split}\dot{\theta}^{r}=&\ \text{Im}A\sin\theta^{r}-\Delta_{1}\sin\varphi^{r}\cos^{2}\frac{\theta^{r}}{2}-\Delta_{2}\sin\varphi^{r}\sin^{2}\frac{\theta^{r}}{2},\\ \dot{\varphi}^{r}=&-\gamma-\text{Re}A-\frac{\Delta_{1}}{2}\cot\frac{\theta^{r}}{2}\cos\varphi^{r}+\frac{\Delta_{2}}{2}\tan\frac{\theta^{r}}{2}\cos\varphi^{r},\\ \dot{\mu}^{r}=&-\frac{\text{Im}A}{2}\cos\theta^{r}+\frac{\Delta_{2}-\Delta_{1}}{4}\sin\theta^{r}\sin\varphi^{r},\\ \dot{\nu}^{r}=&\ \frac{\gamma}{2}+\frac{\text{Re}A}{2}-\frac{\Delta_{2}}{2}\tan\frac{\theta^{r}}{2}\cos\varphi^{r},\end{split} \tag{20}\] with the right quantum state \(|\psi^{r}\rangle=\left(\begin{array}{c}\alpha_{1}\\ \beta_{1}\end{array}\right)=e^{\mu^{r}+i\nu^{r}}\left(\begin{array}{c}\tilde{a}\\ \tilde{b}\end{array}\right)=e^{\mu^{r}+i\nu^{r}}\left(\begin{array}{c}\sin\frac{\theta^{r}}{2}e^{i\varphi^{r}}\\ \cos\frac{\theta^{r}}{2}\end{array}\right)\), and \[\begin{split}\dot{\theta}^{l}=&-\text{Im}A\sin\theta^{l}-\Delta_{2}\sin\varphi^{l}\cos^{2}\frac{\theta^{l}}{2}-\Delta_{1}\sin\varphi^{l}\sin^{2}\frac{\theta^{l}}{2},\\ \dot{\varphi}^{l}=&-\gamma-\text{Re}A-\frac{\Delta_{2}}{2}\cot\frac{\theta^{l}}{2}\cos\varphi^{l}+\frac{\Delta_{1}}{2}\tan\frac{\theta^{l}}{2}\cos\varphi^{l},\\ \dot{\mu}^{l}=&\ \frac{\text{Im}A}{2}\cos\theta^{l}+\frac{\Delta_{1}-\Delta_{2}}{4}\sin\theta^{l}\sin\varphi^{l},\\ \dot{\nu}^{l}=&\ \frac{\gamma}{2}+\frac{\text{Re}A}{2}-\frac{\Delta_{1}}{2}\tan\frac{\theta^{l}}{2}\cos\varphi^{l},\end{split} \tag{21}\] with the left quantum state \(|\psi^{l}\rangle=\left(\begin{array}{c}\alpha_{2}\\ \beta_{2}\end{array}\right)=e^{\mu^{l}+i\nu^{l}}\left(\begin{array}{c}\tilde{a}^{l}\\ \tilde{b}^{l}\end{array}\right)=e^{\mu^{l}+i\nu^{l}}\left(\begin{array}{c}\sin\frac{\theta^{l}}{2}e^{i\varphi^{l}}\\ \cos\frac{\theta^{l}}{2}\end{array}\right)\), where \(A\equiv c(\alpha_{1}\alpha_{2}^{*}-\beta_{1}\beta_{2}^{*})\). By numerical simulation, we give the dynamical evolution of the projective right state on the Bloch sphere near the boundary \(c/\Delta=2\) in Fig. 8. Figure 6: The dynamical evolution trajectory of the projective right quantum state of the system (16) on the Bloch sphere with different nonreciprocity: (a) \(k=2\) and (b) \(k=1/2\). The numerical simulation parameters: \(\frac{\Lambda}{\lambda}=2.5\), \(\epsilon_{0}=0\) and the initial condition is \((\tilde{a},\tilde{b})=(0,1)\). The z-axis coordinates of the points of the red dashed circle on the Bloch sphere are \(z_{0}=\cos\theta_{0}=\frac{1-k^{2}}{1+k^{2}}\). Figure 7: The nonlinear non-Hermitian LZSM interference patterns with different nonreciprocities (a) \(k=2\) and (b) \(k=1/2\) for weak driving at \(\epsilon_{0}=0\) and the in-phase tunneling case \(\Delta_{1}\Delta_{2}>0\): the projective population \(|\tilde{a}|^{2}\) as a function of \(\Delta/\omega\) and \(c/\omega\) for \(A/\omega=0.05\) from the initial time \(t_{0}=0\) to \(t=2\pi/\omega\). The red dashed-dotted line (with slope 1/2) is plotted to denote the boundary between the different oscillations. When \(c/\Delta>2\), the projective states can only evolve on the surface of the Bloch sphere above the red dashed circle, as shown in Fig. 8 (b), (c), (e) and (f). The red circle represents the projective states for which the relative population difference \(|\tilde{b}|^{2}-|\tilde{a}|^{2}\) is \(\frac{1-k^{2}}{1+k^{2}}=\cos\theta_{0}\).
By \(|\tilde{a}|^{2}=k^{2}|\tilde{b}|^{2}\) and the normalization condition, \(\cos\theta_{0}=|\tilde{b}|^{2}-|\tilde{a}|^{2}\) labels the boundary between the self-trapping region and the Josephson oscillation region. As we discussed before, the nonreciprocity \(k\) does not affect the constructive and destructive interference, but it does affect the relative population difference of the state. When \(k\) is larger, the relative population difference at the boundary between the two regions is smaller [see the red circle in Fig. 8(a-c) and (d-f)] and the projective population probability \(|\tilde{a}|^{2}\) is smaller [see Fig. 7 (a) and (b)]. For the anti-phase tunneling case \(\Delta_{1}\Delta_{2}<0\), because of the existence of EPs in the linear case \(c=0\), the projective quantum states reach the self-trapping region no matter how weak the nonlinearity is. The trajectories of the projective states on the Bloch sphere will always be above the red dashed circles, which label the boundaries between the self-trapping region and the Josephson oscillation region, as shown in Fig. 9. The maximum population of the projective quantum state is still affected by the nonreciprocity \(k\), as shown in Eq. (19) and Fig. 10(a-d). Comparing Fig. 10(b) and (d) with Fig. 10(a) and (c), it is easy to see that the stronger the nonlinearity, the stronger the self-trapping effect. ### Weak-coupling limit of the projective quantum states: \(\Delta\ll\omega\) When the weak-coupling limit is considered, the adiabatic energy levels will be difficult to transition between in the near-degenerate region. However, in this approximation, we only assume \(|\tilde{a}^{g}(t)|^{2}\sim|\tilde{a}^{g}(t_{0})|^{2}\) and \(|\tilde{b}^{g}(t)|^{2}\sim|\tilde{b}^{g}(t_{0})|^{2}\), where \(g=r,l\). Assuming that the initial condition is \((\tilde{a}^{g}(t_{0}),\tilde{b}^{g}(t_{0}))=(0,1)\), the quantum state can always be written in the following form: \[|\psi^{g}(t)\rangle=e^{\mu^{g}(t)+i\nu^{g}(t)}\left(\begin{array}{c}0\\ 1\end{array}\right), \tag{22}\] where \(g=r,l\). Figure 8: The dynamics of the projective states represented by the trajectories in spherical coordinates \((\theta,\phi)\) on the Bloch sphere in the in-phase tunneling case \(\Delta_{1}\Delta_{2}>0\) with different strengths of nonlinearity and nonreciprocity: (a) \(c/\Delta=1.9,k=2\), (b) \(c/\Delta=2,k=2\), (c) \(c/\Delta=2.1,k=2\), (d) \(c/\Delta=1.9,k=1/2\), (e) \(c/\Delta=2,k=1/2\), and (f) \(c/\Delta=2.1,k=1/2\). The other parameters are chosen as \(\frac{\lambda}{\omega}=0.05\), \(\omega=3\), and the initial state is \((\tilde{a},\tilde{b})=(0,1)\). The z-axis coordinates of the red dashed circle on the Bloch sphere are \(z_{0}=\cos\theta_{0}=\frac{1-k^{2}}{1+k^{2}}\), and the z-axis coordinates of the green dashed circle on the Bloch sphere are \(z_{0}^{{}^{\prime}}=0\). Figure 10: The dynamics of the projective states represented by the trajectories in spherical coordinates \((\theta,\phi)\) on the Bloch sphere in the anti-phase tunneling case \(\Delta_{1}\Delta_{2}<0\) with different strengths of nonlinearity and nonreciprocity: (a) \(c/\Delta=0.1,k=2\), (b) \(c/\Delta=1,k=2\), (c) \(c/\Delta=0.1,k=1/2\), and (d) \(c/\Delta=1,k=1/2\). The other parameters are chosen as \(\frac{\lambda}{\omega}=0.05\), \(\epsilon_{0}=3\), and the initial state is \((\tilde{a},\tilde{b})=(0,1)\).
The z-axis coordinates of the red dashed circle on the Bloch sphere are \(z_{0}=\cos\theta_{0}=\frac{1-k^{2}}{1+k^{2}}\), and the z-axis coordinates of the green dashed circle on the Bloch sphere are \(z_{0}^{{}^{\prime}}=0\). Figure 9: The nonlinear non-Hermitian LZSM interference patterns with different nonreciprocities (a) \(k=2\) and (b) \(k=1/2\) for weak driving at \(\epsilon_{0}=0\) and the anti-phase tunneling case \(\Delta_{1}\Delta_{2}<0\): the projective population \(|\tilde{a}|^{2}\) as a function of \(\Delta/\omega\) and \(c/\omega\) for \(A/\omega=0.05\) from the initial time \(t_{0}=0\) to \(t=2\pi/\omega\). By Eqs. (8), (17) and (22), we get \(\dot{\mu}^{r}(t)+i\dot{\nu}^{r}(t)+\dot{\mu}^{l}(t)-i\dot{\nu}^{l}(t)=0\). This means \[\beta_{1}(t)\beta_{2}^{*}(t)-\alpha_{1}(t)\alpha_{2}^{*}(t)\sim\beta_{1}(t_{0})\beta_{2}^{*}(t_{0})-\alpha_{1}(t_{0})\alpha_{2}^{*}(t_{0}), \tag{23}\] Based on this approximation, we can transform the dynamics of the system from the Schrödinger picture to the Dirac picture by introducing the gauge transformation \(\tilde{\phi}(t)=U(t)\phi(t)\) (\(U(t)=\frac{\epsilon_{0}}{2i}-\frac{A\cos(\omega t)}{2i\omega}+\frac{\epsilon_{2}}{2}(\beta_{1}^{*}\beta_{2}^{*}-\alpha_{1}\alpha_{2}^{*})\)) with \(\tilde{\phi}(t)=[\tilde{\alpha}_{1},\tilde{\beta}_{1}]^{T}\) [33]. Under the new basis, the nonlinear dynamical Eqs. (8) become (assuming \(\Delta_{1}>0\)): \[i\frac{\partial}{\partial t}\left(\begin{array}{c}\tilde{\alpha}_{1}\\ \tilde{\beta}_{1}\end{array}\right)=\left(\begin{array}{cc}0&k\Omega\\ \frac{(-1)^{j}}{k}\Omega^{*}&0\end{array}\right)\left(\begin{array}{c}\tilde{\alpha}_{1}\\ \tilde{\beta}_{1}\end{array}\right), \tag{24}\] and \[i\frac{\partial}{\partial t}\left(\begin{array}{c}\tilde{\alpha}_{2}\\ \tilde{\beta}_{2}\end{array}\right)=\left(\begin{array}{cc}0&\frac{(-1)^{j}}{k}\Omega^{*}\\ k\Omega&0\end{array}\right)\left(\begin{array}{c}\tilde{\alpha}_{2}\\ \tilde{\beta}_{2}\end{array}\right) \tag{25}\] with \[\Omega=\frac{\Delta}{2}e^{i\Phi(t)},\quad\Phi(t)=\epsilon_{0}t-\frac{A\cos(\omega t)}{\omega}+ct, \tag{26}\] and \(j=1,2\) corresponding to the anti-phase case \(\Delta_{2}<0\) and the in-phase case \(\Delta_{2}>0\), respectively. \(\Omega\) denotes the field-induced Rabi frequency, where \(\Phi(t)\) is the relative phase of the two diabatic energy levels. The nonreciprocity \(k\) in front of \(\Omega\) corresponds to the weight of the populations of the projective quantum state. Thus, we can understand the fact that the maximum value of the populations in the self-trapping region changes with \(k^{2}\) in the in-phase case \(\Delta_{1}\Delta_{2}>0\). In a full cycle, \(\Phi(t)\) can be approximately written as \[\Phi(t)\simeq\int_{t_{1}}^{t_{3}}(\epsilon_{0}+c-n\omega)dt=\frac{2\pi}{\omega}(\epsilon_{0}+c-n\omega) \tag{27}\] with \(n=0,\pm 1,\pm 2,...\). When \(\Phi_{m}=2m\pi\), i.e., \(c+\epsilon_{0}\simeq(n+m)\omega=d\omega\) (\(m,d=0,\pm 1,\pm 2,...\)), the patterns are constructive, while the patterns are destructive when \(\Phi_{m}=(2m+\frac{1}{2})\pi\). By solving the nonlinear equation (8) and the linear equation (24), we can get the exact solution and the approximate solution, respectively. In Fig. 11, we show multi-period LZSM interference fringes with different characteristics in the in-phase tunneling case \(\Delta_{2}>0\). When \(c=0,1\), i.e., \(\Phi_{m}=2m\pi\), the patterns are constructive, and when \(c=0.5,1.5\), i.e., \(\Phi_{m}=(2m+\frac{1}{2})\pi\), the patterns are destructive.
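A minimal numerical sketch of the approximate dynamics in Eq. (24) is given below: it integrates the Dirac-picture equation for the right state with the Rabi term of Eq. (26), starting from \((\tilde{\alpha}_{1},\tilde{\beta}_{1})=(0,1)\), and prints the final projective population for a few nonlinearities. The drive parameters are illustrative choices, not the values used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps0, A, omega, Delta, k, j = 0.0, 10.0, 1.0, 0.1, 2.0, 2   # j = 2: in-phase case (illustrative)

def rhs(t, y, c):
    a, b = y[0] + 1j*y[1], y[2] + 1j*y[3]
    Phi = eps0*t - A*np.cos(omega*t)/omega + c*t                # Eq. (26)
    Om = 0.5*Delta*np.exp(1j*Phi)
    da = -1j*k*Om*b                                             # Eq. (24), first row
    db = -1j*((-1)**j/k)*np.conj(Om)*a                          # Eq. (24), second row
    return [da.real, da.imag, db.real, db.imag]

for c in (0.0, 0.5, 1.0, 1.5):
    sol = solve_ivp(rhs, (0.0, 20*2*np.pi/omega), [0, 0, 1, 0], args=(c,), max_step=0.05)
    a = sol.y[0, -1] + 1j*sol.y[1, -1]
    b = sol.y[2, -1] + 1j*sol.y[3, -1]
    print(f"c = {c}: |a|^2/(|a|^2 + |b|^2) = {abs(a)**2/(abs(a)**2 + abs(b)**2):.3f}")
```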
In all nonlinear cases, the two solutions are consistent. In Fig. 12, we show the anti-phase tunneling case \(\Delta_{2}<0\). Like the in-phase tunneling case, the constructive interference and destructive interference only depend on \(m\), and the nonreciprocity \(k\) only affects the maximal value of the projective population probability \(|\tilde{\alpha}|^{2}\). ## IV Conclusion In this work, we have studied the non-Hermitian nonlinear LZSM interferometry in which the non-Hermiticity arises from the nonreciprocal tunnelings between the bosons. By using the mean-field approximation and the projective Hilbert space, the effects of nonreciprocity and nonlinearity on the energy spectrum, the dynamics, and the formation of the interference fringes have been studied. The results show that different types of nonreciprocity correspond to different types of symmetries of the system. For the in-phase tunneling case \(\Delta_{1}\Delta_{2}>0\), the system can be transformed into a Hermitian one with a nonunitary transformation. It has the same energy spectrum and boundary between the Josephson region and the self-trapping region as the Hermitian one, while this is not a necessary result for the anti-phase case \(\Delta_{1}\Delta_{2}<0\). The EPs can only exist in its linear case \(c=0\) and the eigenvalues of one energy state will be complex in its nonlinear case. There is only a self-trapping region in this case since the evolution of the projective states will always be above the boundary when the nonlinearity exists. For the LZSM interferometry, the strength of the nonreciprocity \(k\) is found to play an essential role in the population of the projective state and to determine the maximal values and strengths of the interference patterns in the projective space. Finally, under the weak-coupling approximation, we found that the types and strengths of the nonreciprocity do not affect the conditions of destructive and constructive interference. These conditions only depend on the strength of the nonlinearity. Our results provide a possible way to study the parameters of a non-Hermitian nonlinear two-level system and its related external fields by LZSM interferometry. ###### Acknowledgements. We thank S. C. Li and F. Q. Dou for their helpful discussions. This work is supported by the National Natural Science Foundation of China (NSFC) (Grants Nos. 11875103, 12147206, 11725417, 12088101, 12047548, and U1930403), and the Science Challenge Project (Grant No. TZ2018005). ## Appendix A Semi-classical Hamiltonian In the non-Hermitian system, let \(\hat{H}\) be a non-Hermitian Hamiltonian with a complete biorthonormal eigenbasis \(\{|\psi^{r}_{n}\rangle,|\psi^{l}_{n}\rangle\}\); the orthonormalization of the quantum states is \[\langle\psi^{l}_{n}|\psi^{r}_{m}\rangle=\delta_{mn}. \tag{10}\] Similarly, for system (1), in the mean-field approximation, the coherent states should be written as \[|\Psi^{r}_{sc}\rangle = \frac{1}{\sqrt{N!}}(\alpha_{1}\hat{a}^{\dagger}+\beta_{1}\hat{b}^{\dagger})^{N}|\emptyset\rangle, \tag{11}\] \[|\Psi^{l}_{sc}\rangle = \frac{1}{\sqrt{N!}}(\alpha_{2}\hat{a}^{\dagger}+\beta_{2}\hat{b}^{\dagger})^{N}|\emptyset\rangle, \tag{12}\] According to the normalization condition \(\langle\Psi^{l}_{sc}|\Psi^{r}_{sc}\rangle=1\): \[\alpha_{1}\alpha_{2}^{*}+\beta_{1}\beta_{2}^{*}=1.
\tag{13}\] Then, applying the Hamiltonian of system (1) to the right quantum state \(|\Psi^{\prime}_{sc}\rangle\), one can obtain \[\hat{H}|\psi^{\prime}_{SC}\rangle=\left[\frac{\gamma}{2}\hat{a}^{\dagger} \hat{a}-\hat{b}^{\dagger}\hat{b}+\frac{\Delta_{2}}{2}\hat{a}^{\dagger}\hat{b} +\frac{\Delta_{1}}{2}\hat{a}\hat{b}^{\dagger}-\frac{c}{4N}(\hat{a}^{\dagger} \hat{a}-\hat{b}^{\dagger}\hat{b}^{\dagger})^{2}\right]\frac{1}{\sqrt{N!}} \sum_{r=0}^{N}C^{r}_{N}(\alpha_{1}\hat{a}^{\dagger})^{N-r}(\beta_{1}\hat{b}^{ \dagger})^{r}|\emptyset\rangle, \tag{14}\] When calculating the expectation value of an observable, the quantum states of the systems are normalized. So in the system (1), the expectation value of \(\hat{H}_{0}\) should be written as \[\begin{split}\langle\Psi^{\prime}_{sc}|\hat{H}_{0}|\Psi^{\prime} _{sc}\rangle=&\frac{N\gamma}{2}\sum_{r=0}^{N}\frac{(N-1)!}{(N-r -1)!r!}(\alpha_{1}\alpha_{2}^{*})^{N-r-1}(\beta_{1}\beta_{2}^{*})^{r}\alpha_{1} \alpha_{2}^{*}-\frac{N\gamma}{2}\sum_{r=0}^{N}\frac{(N-1)!}{(N-r)!(r-1)!}( \alpha_{1}\alpha_{2}^{*})^{N-r}(\beta_{1}\beta_{2}^{*})^{r-1}\beta_{1}\beta_{ 2}^{*}\\ +& N(\frac{\Delta_{2}}{2}\sum_{r=0}^{N}C^{r}_{N-1}(N -r)(\alpha_{1}\alpha_{2}^{*})^{N-r-1}(\beta_{1}\beta_{2}^{*})^{r}\alpha_{2}^{ *}\beta_{1}+\frac{\Delta_{1}}{2}\sum_{r=0}^{N}C^{r-1}_{N-1}r(\alpha_{1}\alpha _{2}^{*})^{N-r}(\beta_{1}\beta_{2}^{*})^{r-1}\alpha_{1}\beta_{2}^{*})\\ +&\sum_{r=0}^{N}C^{r-1}_{N-1}r(\alpha_{1}\alpha_{2}^{ *})^{N-r}(\beta_{1}\beta_{2}^{*})^{r-1}\alpha_{1}\beta_{2}^{*})-\frac{cN}{4}( \beta_{1}\beta_{2}^{*}-\alpha_{1}\alpha_{2}^{*})^{2}\\ =&\frac{N\gamma}{2}(\alpha_{1}\alpha_{2}^{*}-\beta_{1 }\beta_{2}^{*})+\frac{N\Delta_{2}}{2}(\alpha_{2}^{*}\beta_{1})+\frac{N\Delta_{ 1}}{2}(\alpha_{1}\beta_{2}^{*})-\frac{cN}{4}(\beta_{1}\beta_{2}^{*}-\alpha_{1} \alpha_{2}^{*})^{2},\end{split} \tag{15}\] The expectation value of each particle is \[\hat{H}_{M}=\frac{\langle\Psi^{\prime}_{sc}|\hat{H}_{0}|\Psi^{\prime}_{sc} \rangle}{N}=-\frac{c}{4}(\beta_{1}\beta_{2}^{*}-\alpha_{1}\alpha_{2}^{*})^{2}+ \frac{\Delta_{2}}{2}(\alpha_{2}^{*}\beta_{1})+\frac{\Delta_{2}}{2}(\alpha_{1} \beta_{2}^{*})+\frac{\gamma}{2}(\alpha_{1}\alpha_{2}^{*}-\beta_{1}\beta_{2}^{*}). \tag{16}\] ## Appendix B Derivation of the Energy level equation In the non-Hermitian system, the Hamiltonian \(\hat{H}\) has a complete biorthonormal eigenbasis \(\{|\psi_{n}^{\prime}\rangle,|\psi_{n}^{l}\rangle\}\) of satisfying \[\hat{H}|\psi_{n}^{\prime}\rangle=E_{n}|\psi_{n}^{\prime}\rangle, \tag{10}\] \[\hat{H}^{\dagger}|\psi_{n}^{\prime}\rangle=E_{n}^{\ast}|\psi_{n}^{\dagger}\rangle, \tag{11}\] \[\langle\phi_{m}^{l}|\psi_{n}^{\prime}\rangle=\delta_{mn},\qquad\quad(n=1,2,...) \tag{12}\] By equations (10), we can naturally conclude that the adiabatic basis of the system (7) satisfies \[F\alpha_{1}+\frac{i\Delta}{2}\beta_{1}=E\alpha_{1},\quad\frac{i\Delta}{2} \alpha_{1}-F\beta_{1}=E\beta_{1}, \tag{13}\] \[F^{\ast}\alpha_{2}-\frac{i\Delta}{2}\beta_{2}=E^{\ast}\alpha_{1},\quad-\frac {i\Delta}{2}\alpha_{2}-F^{\ast}\beta_{2}=E^{\ast}\beta_{2}, \tag{14}\] \[\alpha_{1}\alpha_{2}^{\ast}+\beta_{1}\beta_{2}^{\ast}=1. \tag{15}\] where \(F\equiv\frac{\gamma}{2}+\frac{c}{2}(\beta_{1}\beta_{2}^{\ast}-\alpha_{1} \alpha_{2}^{\ast})\). To derive non-trivial solutions of Eqs. (10) and (11), we must ensure that \(|\hat{H}-E\hat{I}|=0\) and \(|\hat{H}^{\dagger}-E^{\ast}\hat{I}|=0\) (\(\hat{I}\) is an identity matrix). 
Namely, \[E^{2}-F^{2}+\frac{\Delta^{2}}{4}=0, \tag{16}\] \[E^{\ast 2}-F^{\ast 2}+\frac{\Delta^{2}}{4}=0, \tag{17}\] By (13) and the complex conjugate of Eq. (14), we have \[\frac{\alpha_{1}\alpha_{2}^{\ast}}{\beta_{1}\beta_{2}^{\ast}}=-\frac{4(E+F)^{2}}{\Delta^{2}}, \tag{18}\] By the normalization (15) and Eq. (16), it becomes \[\beta_{1}\beta_{2}^{\ast}=\frac{E-F}{2E}, \tag{19}\] Therefore, \[F\equiv\frac{\gamma}{2}+\frac{c}{2}(\beta_{1}\beta_{2}^{\ast}-\alpha_{1}\alpha_{2}^{\ast})=\frac{\gamma}{2}-\frac{cF}{2E}. \tag{20}\] Substituting Eq. (20) into Eq. (16), we finally have \[E^{4}+cE^{3}+\frac{1}{4}(c^{2}-\gamma^{2}+\Delta^{2})E^{2}+\frac{c\Delta^{2}}{4}E+\frac{\Delta^{2}c^{2}}{16}=0. \tag{21}\] ## Appendix C The projective space for non-Hermitian quantum system Consider the following Schrödinger equation \[i\frac{d}{dt}|\psi(t)\rangle=\hat{H}|\psi(t)\rangle, \tag{10}\] where \(\hat{H}\) is generally a non-Hermitian Hamiltonian. Let us define \(|\psi(t)\rangle=e^{\mu+i\nu}|\tilde{\psi}(t)\rangle\) with the normalization relation \(\langle\tilde{\psi}(t)|\tilde{\psi}(t)\rangle=1\) (\(\mu\) and \(\nu\) are two real parameters). From Eq. (10) and its Hermitian conjugate, one can get \[\dot{\mu}=-\frac{i}{2}\langle\tilde{\psi}|\hat{H}-\hat{H}^{\dagger}|\tilde{\psi}\rangle, \tag{11}\] and \[\dot{\nu}=-\frac{1}{2}\langle\tilde{\psi}|\hat{H}+\hat{H}^{\dagger}|\tilde{\psi}\rangle+i\langle\tilde{\psi}|\dot{\tilde{\psi}}\rangle. \tag{12}\] One has to keep in mind that the above deduction is somewhat different from what would be obtained by using the adjoint equation of (10). In quantum theory with Hermitian Hamiltonian systems, \(|\psi(t)\rangle\) and \(|\tilde{\psi}(t)\rangle\) are equivalent, since the time evolution is unitary (probability preserving) and they differ only by a global phase. Under this equivalence, \(|\tilde{\psi}(t)\rangle\) can be employed as a vector on the so-called projective Hilbert space of the system. However, for a system with a non-Hermitian Hamiltonian, the time evolution is not unitary. Hence, although the state vectors differ only in norm, they may describe different system states. Nevertheless, we can still formally set up the projective Hilbert space for a non-Hermitian system by using \(|\tilde{\psi}(t)\rangle\) as a state on it. Based on the above definition, from Eqs. (11) and (12), we can see that one can obtain the norm increment and the global phase that the state acquires in its time evolution solely from the trajectory in the projective space; the latter is the same as for Hermitian systems. The global phase and its relation with the projective Hilbert space play a significant role in the geometric (topological) properties of Hermitian quantum systems. Therefore, it may be interesting to study the geometric properties of a non-Hermitian system from such a point of view. To make such a discussion concrete, we employ a two-level system describing the physics of two coupled sites with gain and loss, whose Hermitian counterpart also plays a role in illustrating the geometric properties of quantum systems.
The time evolution of such a two-level system is described by a \(2\times 2\) matrix Hamiltonian through the following equation, \[i\frac{d}{dt}\left(\begin{array}{c}a\\ b\end{array}\right)=\left(\begin{array}{cc}H_{11}&H_{12}\\ H_{21}&H_{22}\end{array}\right)\left(\begin{array}{c}a\\ b\end{array}\right), \tag{13}\] Then, following the definition \(|\psi(t)\rangle=e^{\mu+i\nu}|\tilde{\psi}(t)\rangle\), one can get \[\frac{d}{dt}(i\mu-\nu)\tilde{a}+i\frac{d}{dt}\tilde{a}=H_{11}\tilde{a}+H_{12}\tilde{b}, \tag{14}\] \[\frac{d}{dt}(i\mu-\nu)\tilde{b}+i\frac{d}{dt}\tilde{b}=H_{21}\tilde{a}+H_{22}\tilde{b}, \tag{15}\] Combining these with their complex conjugates, and considering \(|\tilde{a}|^{2}+|\tilde{b}|^{2}=1\), we can easily verify Eqs. (11) and (12). For convenience and without loss of generality, we then construct the vector in the projective space for a state \(|\psi(t)\rangle=\left(\begin{array}{c}a\\ b\end{array}\right)\) with \(|\tilde{\psi}(t)\rangle=\left(\begin{array}{c}\tilde{a}e^{i\varphi}\\ \tilde{b}\end{array}\right)\), \(\tilde{a}=\frac{|a|}{\sqrt{|a|^{2}+|b|^{2}}}\), \(\tilde{b}=\frac{|b|}{\sqrt{|a|^{2}+|b|^{2}}}\), and \(\varphi=\arg(a)-\arg(b)\). By denoting \(z=|\tilde{b}|^{2}-|\tilde{a}|^{2}\), which is just the relative population difference of the two levels, the state can then be mapped to a sphere, the so-called Bloch sphere, with the coordinates \((\varphi,z)\). From Eq. (12), we can obtain the evolution of the total phase \[\frac{d}{dt}\nu=-\frac{1}{2}\langle\tilde{\psi}|\hat{H}+\hat{H}^{\dagger}|\tilde{\psi}\rangle+\frac{1}{2}(1-z)\frac{d\varphi}{dt}. \tag{16}\] This equation is the same as that obtained for Hermitian systems by Aharonov and Anandan, except that in the dynamical part the Hermitian Hamiltonian \(\hat{H}\) is replaced by \((\hat{H}+\hat{H}^{\dagger})/2\). The second term on the right-hand side of the above equation is known as the geometric part. One can easily prove that, if the trajectory of the evolution is closed in the projective space, the geometric phase equals half the solid angle enclosed by the closed path on the Bloch sphere, which is just the so-called AA phase, the geometric phase of a cyclic state.
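The relations (11) and (12) above can be checked directly for a small example. The sketch below is a minimal illustration with an arbitrarily chosen non-Hermitian \(2\times 2\) matrix: it evolves \(|\psi(t)\rangle\), extracts \(\mu=\ln\lVert\psi\rVert\) numerically, and compares its time derivative with \(-\frac{i}{2}\langle\tilde{\psi}|\hat{H}-\hat{H}^{\dagger}|\tilde{\psi}\rangle\).

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[0.3 + 0.2j, 0.5],
              [0.8, -0.3 - 0.1j]])           # arbitrary non-Hermitian Hamiltonian (illustrative)
psi0 = np.array([1.0, 0.0], dtype=complex)

t, dt = 1.0, 1e-5
psi_m = expm(-1j*H*(t - dt)) @ psi0
psi_p = expm(-1j*H*(t + dt)) @ psi0
mu_dot_num = (np.log(np.linalg.norm(psi_p)) - np.log(np.linalg.norm(psi_m))) / (2*dt)

psi = expm(-1j*H*t) @ psi0
tpsi = psi / np.linalg.norm(psi)              # the projective-space state
mu_dot_ana = (-0.5j * (tpsi.conj() @ (H - H.conj().T) @ tpsi)).real
print(mu_dot_num, mu_dot_ana)                 # the two numbers should agree
```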
2308.06896
Investigation of Phonon Lifetimes and Magnon-Phonon Coupling in YIG/GGG Hybrid Magnonic Systems in the Diffraction Limited Regime
Quantum memories facilitate the storage and retrieval of quantum information for on-chip and long-distance quantum communications. Thus, they play a critical role in quantum information processing and have diverse applications ranging from aerospace to medical imaging fields. Bulk acoustic wave (BAW) phonons are one of the most attractive candidates for quantum memories because of their long lifetime and high operating frequency. In this work, we establish a modeling approach that can be broadly used to design hybrid magnonic high-overtone bulk acoustic wave resonator (HBAR) structures for high-density, long-lasting quantum memories and efficient quantum transduction devices. We illustrate the approach by investigating a hybrid magnonic system, where BAW phonons are excited in a gadolinium iron garnet (GGG) thick film via coupling with magnons in a patterned yttrium iron garnet (YIG) thin film. We present theoretical and numerical analyses of the diffraction-limited BAW phonon lifetimes, modeshapes, and their coupling strengths to magnons in planar and confocal YIG/GGG HBAR structures. We utilize Fourier beam propagation and Hankel transform eigenvalue problem methods and discuss the effectiveness of the two methods to predict the HBAR phonons. We discuss strategies to improve the phonon lifetimes, since increased lifetimes have direct implications on the storage times of quantum states for quantum memory applications. We find that ultra-high, diffraction-limited, cooperativities and phonon lifetimes on the order of ~10^5 and ~10 milliseconds, respectively, could be achieved using a CHBAR structure with 10mum lateral YIG dimension. Additionally, the confocal HBAR structure will offer more than 100-fold improvement of integration density. A high integration density of on-chip memory or transduction centers is naturally desired for high-density memory or transduction devices.
Manoj Settipalli, Xufeng Zhang, Sanghamitra Neogi
2023-08-14T02:26:10Z
http://arxiv.org/abs/2308.06896v2
Investigation of Phonon Lifetimes and Magnon-Phonon Coupling in YIG/GGG Hybrid Magnonic Systems in the Diffraction Limited Regime ###### Abstract Quantum memories facilitate the storage and retrieval of quantum information for on-chip and long-distance quantum communications. Thus, they play a critical role in quantum information processing (QIP) and have diverse applications ranging from aerospace to medical imaging. It is well established that quantized vibrations (phonons) in mechanical oscillators can behave quantum mechanically and play an important role in QIP. Bulk acoustic wave (BAW) phonons, which vibrate within the bulk of a material, are promising candidates for storing quantum information due to their long lifetimes. In this work, we investigate a hybrid magnonic system, where BAW phonons are excited in a Gadolinium Iron Garnet (GGG) thick film via coupling with quantized electron spin-waves (magnons) in a Yttrium Iron Garnet (YIG) thin film. Recent experiments on a millimeter scale YIG/GGG device show that the memories are limited by the phonon lifetime of 0.2 \(\mu\)s at room temperature. The phonon lifetime is expected to be longer in the millikelvin regime. However, the phonon lifetime could be limited by diffraction in that regime, especially for small scale devices. A complete understanding of the diffraction-limited performance of the hybrid magnonic devices is not available. We present theoretical and numerical analyses of the diffraction-limited BAW phonon lifetimes, modeshapes, and their coupling strengths to magnons in planar and confocal high-overtone bulk acoustic wave resonator (HBAR) structures. We utilize (1) Fourier beam propagation and (2) Hankel transform eigenvalue problem approaches and discuss the effectiveness of the two methods to predict the HBAR BAW phonons. Our analyses predicts the diffraction-limited phonon lifetime to be on the order of 100 milliseconds in confocal HBAR structures. We illustrate that a focusing dome could significantly improve the performance of hybrid magnonic HBAR structures, for quantum memory and transduction applications. ## I Introduction Hybrid quantum systems constitute a rapidly emerging research field due to their integration of diverse platforms, and ability to synergistically address individual limitations. Among various systems explored, hybrid magnonic systems offer unique advantages and as a result, received significant attention in recent years [1; 2]. In these systems, quantized spin waves (magnons) coherently couple with other information carriers, such as photons and phonons. The coupling presents avenues for both fundamental investigations and practical implementations [3; 4; 5; 6; 7; 8; 9; 10; 11]. Hybrid magnonic systems often leverage magnetic materials with high spin density, such as yttrium iron garnet (YIG). Such materials enable strong coupling of magnons with microwave photons trapped in a cavity, with coupling strengths surpassing their respective dissipation rates [3; 4; 5; 6; 7]. Several devices have been proposed that benefit from the strong magnon-photon coupling, with applications ranging from microwave-to-optical transduction [12; 13], quantum magnonics [14; 15; 16], to dark matter detection [17; 18] and more [19; 20; 21; 22; 23; 24]. In addition to magnon-photon coupling, magnon-phonon coupling in hybrid magnonic devices has attracted attention for coherent and quantum information processing applications [25; 26; 27; 28; 11]. 
Magnon-phonon coupling has already shown success for classical information processing applications [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41]. Phonons are particularly attractive due to their long lifetime compared to other information carriers [42; 43; 44; 45]. Coupling of magnons with mechanical phonons in YIG spheres has been shown to enable magnon-phonon entanglement [46], nonreciprocal phonon propagation [47; 48], and other applications. Separately, high-overtone bulk acoustic wave resonator (HBAR) phonons, which operate in the GHz regime, have been shown to enable resonant magnon-phonon coupling via the magnetoelastic effect [49]. The HBAR phonons are commonly supported by structures composed of YIG thin-films [50; 51; 52; 53] bonded to a thicker substrate, such as a gadolinium gallium garnet (GGG) film, which can host evenly distributed HBAR resonances. A recent study demonstrated a triply resonant photon-magnon-phonon system using a YIG/GGG hybrid magnonic HBAR device [54]. Such resonant coupling can enable long-lasting multimode phononic quantum memories and realize quantum information processing and transduction applications. In this article, we investigate a class of planar and confocal HBAR YIG/GGG hybrid magnonic structures for quantum transduction applications. The performance figure of merit for magnon-phonon transduction can be measured using the cooperativity, \(C=4g_{mb}^{2}/\kappa_{m}\kappa_{b}\), where \(g_{mb}\), \(\kappa_{m}\), and \(\kappa_{b}\) are the magnon-phonon coupling strength, magnon dissipation rate, and phonon dissipation rate, respectively. The lifetimes of magnons and phonons are given by \(\tau_{m}=1/\kappa_{m}\) and \(\tau_{b}=1/\kappa_{b}\), respectively. It is commonly accepted that the phonon lifetimes surpass the magnon lifetimes in hybrid magnonic devices. Past studies reported the magnon and phonon lifetimes in YIG/GGG HBAR systems to be \(\tau_{m}=0.07\mu\)s and \(\tau_{b}\sim 0.25\mu\)s, respectively [53; 54]. It is desirable to improve the system lifetime much beyond \(\sim 0.25\mu\)s for quantum memories and other information processing applications. At given temperature and frequency operating conditions, the phonon lifetime could be limited by material and diffraction losses. However, a complete understanding of the loss mechanisms in devices with given geometry or diverse operating conditions is not available. The phonon lifetime at room temperature is primarily limited by acoustic attenuation due to phonon-phonon interactions [55; 56]. The acoustic attenuation, \(\alpha\propto 1/\tau_{b}\), has been shown to follow a \(T^{4}\) temperature dependence. Due to the \(T^{4}\) behavior, these effects are less significant at low temperatures, such as the mK regime where many qubit systems operate. Ideally, \(\tau_{b}\) can increase by multiple orders of magnitude at cryogenic temperatures. However, it is important to consider the diffraction losses to decide if low-T operations are a viable solution to generate long-lifetime phonons. In this study, we investigate the effects of diffraction losses on HBAR phonon modes in hybrid magnonic structures and leave the study of material losses for future research. One of our main objectives is to establish a predictive approach to analyze diffraction losses in HBAR structures. However, we would like to emphasize that the knowledge of material losses in HBAR structures across various temperature and frequency operating conditions is limited.
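For orientation, the figures of merit quoted above translate into concrete numbers as in the short sketch below; the magnon-phonon coupling strength \(g_{mb}\) used here is a placeholder value for illustration only, not a measured quantity.

```python
import numpy as np

omega = 2*np.pi*9.825e9            # operating angular frequency (rad/s)
tau_m, tau_b = 0.07e-6, 0.25e-6    # magnon and phonon lifetimes reported for YIG/GGG HBARs (s)
kappa_m, kappa_b = 1/tau_m, 1/tau_b
g_mb = 2*np.pi*1.0e6               # hypothetical coupling strength (rad/s), for illustration

Q_b = omega*tau_b                  # phonon quality factor, Q = omega*tau_b
C = 4*g_mb**2/(kappa_m*kappa_b)    # cooperativity, C = 4 g_mb^2 / (kappa_m kappa_b)
print(f"Q_b ~ {Q_b:.2e},  C ~ {C:.2f}")
```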
There is a strong need for experimental characterization of acoustic attenuation in YIG/GGG material systems to unlock their full potential. One important design aspect of hybrid magnonic devices is their scalability: the number of memory or transduction centers that could be incorporated into a single chip, in addition to other on-chip circuitry. The YIG film represents the transduction center in the YIG/GGG HBAR structures of interest. The smaller the lateral area of the YIG film, the greater the number of such centers in a single chip and the scalability. We can define the scalability factor, \(S\), as \(S=\frac{A_{d}^{0}}{A_{d}}\). Here, \(A_{d}\) is the lateral area of the YIG film for planar HBAR structures and the cross-sectional area of the confocal dome surface for CHBAR structures in our study. \(A_{d}^{0}\) represents a reference value for the device. We consider the smallest known lateral area reported in the literature to be the reference, \(A_{d}^{0}=0.8\times 0.9\) mm\({}^{2}\)[54]. The scalability of the device can be increased by reducing the lateral area of the YIG film. However, the reduced aperture will lead to high diffraction and affect the phonon lifetime. In addition, a reduced magnon-phonon overlap in the YIG region could lead to a weaker magnon-phonon coupling. The reduced lifetime and coupling will decrease the cooperativity. Therefore, a complete understanding of the trade-off between scalability and diffraction-limited performance is essential to obtain long-living scalable phononic quantum memories using HBAR structures. Note that we only consider the lateral scalability and use the same thickness, \(527.2\mu\)m, for all structures, to keep the phonon free spectral range relatively unchanged. In this study, we investigate HBAR phonon modes, their diffraction-limited lifetimes, and the magnon-phonon coupling strengths in planar and confocal HBAR YIG/GGG hybrid magnonic structures and identify strategies to improve the performance and scalability of these structures. We take the magnon lifetime to be \(0.07\mu\)s for all structures considered in this study [54]. The phonon lifetime \(\tau_{b}\) or the quality factor (\(Q=\omega\tau_{b}\)) is of particular interest due to its direct relevance to the quantum memory storage time. Henceforth, we drop the subscript \(b\) from \(\tau_{b}\), unless otherwise needed to distinguish it from the magnon and photon lifetimes. We consider the magnon mode to be the fundamental mode or the Kittel mode, which has a well-defined analytical form for circular discs. However, no such analytical description exists for the HBAR phonon modes. We evaluate the existing numerical methods and discuss the appropriate methods to reliably model the HBAR phonon modes. One earlier method estimated phonon lifetimes in planar HBARs by decomposing the initial beam in a Bessel function basis. They calculated the overlap of the propagating Bessel functions with the initial profile after every round trip and extracted the lifetime from an exponential fit [57]. However, the predicted lifetime was smaller than the experimental observations because they ignored the effects of the lateral confinement due to the transducer. Finite element analysis (FEA), as implemented in the COMSOL Multiphysics [58] software, has been a popular method to study HBAR phonons in planar and confocal geometries.
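As a quick illustration of the scalability factor defined above, the sketch below compares the reference lateral area of Ref. [54] with a small planar device whose YIG disc has \(R_{\rm YIG}=10\,\mu\)m; this is straightforward arithmetic based on the definitions in the text.

```python
import numpy as np

A_ref = 0.8e-3*0.9e-3          # reference lateral area A_d^0 = 0.8 x 0.9 mm^2, in m^2
R_yig = 10e-6                  # YIG disc radius of a small planar HBAR, in m
A_d = np.pi*R_yig**2           # lateral YIG area of the planar device
S = A_ref/A_d                  # scalability factor S = A_d^0 / A_d
print(f"S ~ {S:.0f}")          # a few-thousand-fold smaller footprint than the reference
```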
A recent study estimated the diffraction loss in an epitaxial planar HBAR structure by measuring the power received at the opposite end to that of the actuator [59]. However, they considered half a round trip and also ignored the effect of localization due to the transducer. In general, acoustic wave amplitudes decay non-exponentially with time due to the mode mismatch between the input acoustic beam profile and the phonon mode of interest. Hence, it is important to consider how the transducer localization affects the field over multiple round trips to accurately determine the characteristic decay of the mode of interest, instead of only half a round trip. Another study used FEA to predict the modeshapes of a CHBAR for qubit coupling applications [60]. They considered phonon modes at low overtones, or long wavelengths; hence it was possible to perform 3D FEA simulations for this system. However, as we will discuss later, it becomes intractable to simulate bulk acoustic waves with overtone numbers \(n\sim 3000\) using FEM, since the sampling required for these small-wavelength modes is on the order of a billion nodes. All these studies discussed longitudinal HBAR phonons. However, forward volume magnetostatic modes, such as Kittel modes, strongly couple to shear phonons [53, 54, 61]. The FEA analysis becomes particularly expensive for shear phonons since they are not axi-symmetric. One cannot leverage the axi-symmetric 2D FEA modelling available in COMSOL Multiphysics for these systems. A past study simulated a 2D YIG/GGG HBAR system in COMSOL without leveraging axi-symmetry [54]. They ignored the different impacts of Gouy phase effects between 2D and 3D diffraction [62]. It is necessary to consider such effects to accurately predict lifetimes and mode shapes. Due to these reasons, we explore other approaches to model the shear phonon modes. We consider the Fourier beam propagation method (FBPM) [57, 63], which follows a Fox-Li-like [64] iterative approach to obtain the phonon modes and works with a plane-wave basis set. Although past studies used FBPM for longitudinal phonon modes, we illustrate that it can effectively model the shear phonon modes of interest. We provide a detailed description of this method and discuss an adaptive algorithm that allows us to overcome some of the challenges of the standard FBPM. In addition, we consider another method that uses the Hankel (HK) transform, which is a Fourier transform for axi-symmetric systems, and works with a Bessel function basis. Thus far, the HK method has only been implemented for Fabry-Perot optical cavities [65, 66]. We adapt this approach to simulate YIG/GGG HBAR structures by leveraging their axi-symmetry and isotropic material properties. ## Methods ### HBAR Configurations Figure 1 shows the two representative HBAR structures investigated in this work. The structure in Figure 1(a) includes a thick GGG film joined together with a YIG thin film at the bottom. We refer to this structure as the planar HBAR structure since it has planar top and bottom surfaces. Figure 1(b) is a focusing HBAR structure that includes a GGG dome structure at the top and a planar bottom surface. We refer to this structure as the confocal HBAR (CHBAR) structure. All configurations considered here have a GGG layer of thickness \(t_{\rm GGG}=527\mu\)m. This value refers to the length from the bottom surface of the GGG film to the midpoint of the dome for the confocal HBAR, as shown.
All YIG films are circular films or discs with thickness \(t_{\rm YIG}=200\) nm and radius of cross-section \(R_{\rm YIG}\). Thus, the total HBAR device thickness is \(t_{\rm HBAR}=t_{\rm GGG}+t_{\rm YIG}=527.2\mu\)m. We consider planar and confocal HBAR structures with several \(R_{\rm YIG}\)'s ranging from \(10\mu\)m to \(200\mu\)m. The width \(W\) of the structure is \(1200\mu\)m unless otherwise mentioned. The radius of cross-section of the dome, \(R_{\rm cross}\), and its radius of curvature, \(R_{\rm curv}\), vary for different structures considered. Altogether, we consider 8 planar HBARs with varying \(R_{\rm YIG}\), 18 CHBARs with varying \(R_{\rm curv}\) and fixed \(R_{\rm cross}\), and 10 CHBARs with varying \(R_{\rm cross}\) and fixed \(R_{\rm curv}\). ### Magnon Modes The YIG thin film hosts magnons, generated by a static magnetic field, \(\mathbf{B}_{0}\), and an oscillating RF magnetic field, \(\mathbf{B}_{\mathbf{RF}}\). \(\mathbf{B}_{0}\) is applied along the normal direction of the YIG film, while the oscillating \(\mathbf{B}_{\mathbf{RF}}\) acts in the YIG plane. The magnetic fields, \(\mathbf{B}_{0}\) and \(\mathbf{B}_{\mathbf{RF}}\), generate forward volume magnetostatic modes, as shown in Fig. 1. These modes precess in the \(x-y\) plane. The dynamic magnetization in the YIG region is given by \(\mathbf{m}_{\rm YIG}(x,y)=m_{0}(x,y)(\cos(\omega t)\mathbf{i}+\sin(\omega t)\mathbf{j})\). Here, \(m_{0}(x,y)\) represents the magnon modeshape and \(\omega\) is the frequency of precession. We adopt the following convention throughout the article: bold-font symbols represent vector fields and the corresponding normal-font symbols represent scalar values. We expect un-pinned spin waves at the top and bottom surfaces. This prediction can be justified using the following two arguments: (1) exchange interactions are expected to dominate at a thickness on the order of 200 nm and below and (2) as a consequence, the pinning effect is expected to disappear below a critical width on the order of 200 nm [67]. However, the magnetic dipolar interactions are expected to dominate over the exchange interactions at length scales on the order of micrometers [67]. Figure 1: **Representative HBAR configurations:** (a) Planar HBAR structure composed of a GGG thick film and a YIG thin film. (b) Confocal HBAR structure with a dome surface at the top and a planar bottom surface. The \(x\) axis is along the out-of-plane direction while the \(y\) and \(z\) axes point along the lateral and the normal direction of the structures, respectively. The origin of the coordinate axes is at the center of the bottom YIG surface. As a result, we expect full-pinning of the magnetic spin waves at the lateral boundaries of the YIG discs of radii ranging from \(10\mu\)m to \(200\mu\)m. Considering these two expected results, we assume that \(\mathbf{m}_{\text{YIG}}\) is constant throughout the thickness of the YIG film along the \(z\) axis. Accordingly, we describe the modeshape of the pinned spin waves, \(m_{0}(x,y)\), using truncated Bessel functions [68; 69]: \[m_{0}(x,y)=\begin{cases}J_{0}\left(\frac{R}{R_{\text{YIG}}}\zeta_{j}\right),&\text{if }R\leq R_{\text{YIG}}\\ 0,&\text{otherwise}\end{cases} \tag{1}\] where the radial coordinate \(R=\sqrt{x^{2}+y^{2}}\) and \(J_{0}\) is the \(0^{\text{th}}\)-order Bessel function. Here \(\zeta_{j}\), with \(j=0,1,2,...\), are the zeros of \(J_{0}\) and correspond to the fundamental and higher-order magnon modes, respectively.
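A minimal sketch of the truncated Bessel-function modeshape of Eq. (1) is given below; it builds the fundamental profile on a Cartesian grid. The grid extent and resolution are arbitrary illustrative choices.

```python
import numpy as np
from scipy.special import j0, jn_zeros

R_yig = 200e-6                             # YIG disc radius (m)
zeta0 = jn_zeros(0, 1)[0]                  # first zero of J_0 (~2.4048), fundamental mode

x = np.linspace(-300e-6, 300e-6, 513)      # illustrative grid
X, Y = np.meshgrid(x, x, indexing="ij")
R = np.hypot(X, Y)

m0 = np.where(R <= R_yig, j0(zeta0*R/R_yig), 0.0)   # Eq. (1), truncated at the disc edge
print(m0.max(), m0[R > R_yig].max())       # 1.0 at the centre, 0.0 outside the disc
```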
In this article, we focus on the fundamental magnon mode or the Kittel mode, whose amplitude is represented by the truncated function, \(J_{0}(\frac{R}{R_{\text{YIG}}}\zeta_{0})\). The modeshape is constant along the \(z\)-direction. ### Phonon Modes The precessing forward volume magnon modes, \(\mathbf{m}_{\text{YIG}}(x,y)\), generate precessing shear deformations in the YIG region. The dynamic shear strain results in a circularly polarized chiral phonon traveling wave in the GGG region. The helicity of the chiral phonon mode is determined by the precession direction of the Kittel mode. The dynamic shear displacements can be expressed as \(\mathbf{u}_{\text{YIG}}(x,y)=u_{0}(x,y)(\cos(\omega t)\mathbf{i}+\sin(\omega t )\mathbf{j})\), where \(u_{0}(x,y)\) is the phonon modeshape and \(\omega\) is the precession frequency. In the following, we refer to the shear displacements, \(\mathbf{u}_{\text{YIG}}(x,y)\), as \(\mathbf{u}_{0}(x,y)\). The traveling wave undergoes reflections at the top GGG surface and the reflected waves interfere with the forward traveling wave. When the forward and reflected helical propagating waves interfere, they form rotating standing shear wave modes at certain frequencies. While the traveling chiral phonons are helical, the standing waves are not helical. The top GGG surface does not induce a \(\pi\) phase shift to the reflected wave, unlike circularly polarized light reflecting off a mirror. As a result, we obtain standing shear wave modes with zero net helicity. In this article, we use (1) Fourier beam propagation and (2) Hankel transform eigenvalue problem approaches to analyze the shear wave phonon modes in the chosen HBAR configurations. #### (1) Phonon Modeshape Analysis: Fourier Beam Propagation The Fourier beam propagation method (FBPM), also known as the angular spectrum method, predicts the field displacements or profiles of propagating waves at distances away from the aperture [63; 70]. It has been widely used to analyze beam propagation in the field of optics [70]. More recently, the FBPM is used to study HBAR phonons in planar [57] and CHBAR structures [63], adapting an iterative method similar to the Fox-Li approach [64]. The advantage of FBPM lies in its simplicity and the ability to predict the field profiles at any target distances from the source without needing to calculate the behavior at intermediate distances. The mathematical formulation of FBPM for acoustic waves in anisotropic systems is well established [63]. Here, we implement a reformulated approach in which the propagation is calculated using projector and propagator operators. This reformulation allows us to achieve a seven-fold speed up of the computation time. Our implementation is particularly advantageous for isotropic systems such as YIG and GGG. The iterative procedure begins with an initial input displacement field which we assume to be \[u_{0}^{(1)}(x,y)=\begin{cases}J_{0}\left(\frac{R}{R_{\text{YIG}}}\zeta_{0} \right),&\text{if }R\leq R_{\text{YIG}}\\ 0,&\text{otherwise}\end{cases} \tag{2}\] where the superscript represents iteration index (\(i=1\)) and the subscript in \(u_{0}^{(1)}(x,y)\) refers to \(z=0\). Note that the lateral phonon modeshape is the same as the Kittel modeshape mentioned earlier in Eq. 1. In FBPM, the input beams for initial and the subsequent iterations are decomposed into plane-waves. The field displacements at intermediate distances is then obtained by multiplying phase factors to the decomposed beam. 
We decompose the input acoustic beam into plane-waves using Fourier decomposition: \[\tilde{\mathbf{u}}_{0}^{(i)}(k_{x},k_{y})=\text{FFT}[\mathbf{u}_{0}^{(i)}(x,y)]. \tag{3}\] Here, \(\tilde{\mathbf{u}}_{0}^{(i)}\) is defined on an \(N\times N\) (\(k_{x},k_{y}\)) grid, which is the Fourier conjugate of the spatial \(N\times N\) (\(x,y\)) grid. The index \(i\) represents the iteration number. We calculate the projections of \(\tilde{\mathbf{u}}_{0}^{(i)}(k_{x},k_{y})\) along the different polarization directions to obtain the amplitudes, \(A_{m}^{(i)}\), of the shear (\(m=1,2\)) and the longitudinal (\(m=3\)) modes: \[A_{m}^{(i)}=\mathbf{P}_{m}\cdot\tilde{\mathbf{u}}_{0}^{(i)}, \tag{4a}\] \[\text{with }\mathbf{P}_{m}=\frac{\Sigma_{cyc}(-1)^{|\text{sgn}(i-m)|}\hat{d}_{i}(\hat{d}_{m}\cdot\hat{d}_{i}-\hat{d}_{j}\cdot\hat{d}_{k})}{1+2\Pi_{cyc}(\hat{d}_{i}\cdot\hat{d}_{j})-\Sigma_{cyc}(\hat{d}_{i}\cdot\hat{d}_{j})^{2}}, \tag{4b}\] where \(\mathbf{P}_{m}\) is the projector onto polarization direction \(m\). Here, \(\hat{d}_{m}\) is the polarization vector for the plane-waves propagating along (\(k_{x},k_{y},k_{z,m}\)), (\(i,j,k\)) = (\(1,2,3\)) refer to the Cartesian directions, and \(\Sigma_{cyc}\) and \(\Pi_{cyc}\) are cyclic sum and product operators, respectively, that cycle through the variables \(i\), \(j\), and \(k\). We use the amplitudes \(A_{m}^{(i)}(k_{x},k_{y})\) to obtain the displacement field of the propagated beam, starting from the input beam, \(u_{0}^{(i)}(x,y)\). For a wave originating in the YIG thin film, the propagation distance to reach the upper GGG surface of the HBAR structures is \(t_{\text{HBAR}}\), as shown in Fig. 1. The displacement field of the propagated beam at the upper GGG surface is given by \[\mathbf{u}_{t_{\text{HBAR}}}^{(i)}(x,y)=\text{IFFT}[\Sigma_{m}A_{m}^{(i)}G_{m}], \tag{5a}\] \[\text{with }G_{m}(k_{x},k_{y})=\hat{d}_{m}(k_{x},k_{y})e^{ik_{z,m}(k_{x},k_{y})t_{\text{HBAR}}}. \tag{5b}\] Here, \(G_{m}(k_{x},k_{y})\) is the propagator for plane-waves traveling along \((k_{x},k_{y},k_{z,m})\) with polarization \(m\). Typically, \(k_{z,m}\) is derived from the respective slowness surfaces [63] for each polarization and each value of \((k_{x},k_{y})\). However, the functional relation can be simplified to \(k_{z,m}=\sqrt{\frac{\omega^{2}}{v_{m}^{2}}-k_{x}^{2}-k_{y}^{2}}\) for isotropic dispersions. We use this relationship for the propagating waves in our isotropic YIG/GGG HBAR structures. Here, \(\omega\) is the frequency of the initial wave and \(v_{m}\) is the velocity of phonons with polarization \(m\). Note that we need to compute \(k_{z,m}\), \(A_{m}\) (Eq. 4), and \(\mathbf{u}_{t_{\text{HBAR}}}(x,y)\) (Eq. 5) only during the first round trip, for a given \(\omega\). This aspect results in the seven-fold computational speed-up mentioned earlier. We show the properties of YIG and GGG in Table 1. It can be noted that the mechanical properties of YIG and GGG are similar. For this reason, we use GGG values for both YIG and GGG, for simplicity. This approximation can be further justified by considering that our structures are composed of GGG thick films with ultrathin YIG films bonded to them. We choose the longitudinal polarization \(\hat{d}_{3}\) to be along the unit vector \(\mathbf{k}\) and the shear polarizations, \(\hat{d}_{1,2}\), to be two mutually perpendicular unit vectors perpendicular to \(\mathbf{k}\).
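For a single scalar shear component in an isotropic medium, the decomposition and propagation steps of Eqs. (3)-(5) reduce to the standard angular-spectrum update sketched below. The grid size, the assumed shear velocity, and the handling of evanescent components are illustrative assumptions rather than the exact implementation used in this work.

```python
import numpy as np
from scipy.special import j0, jn_zeros

def propagate(u0, dx, distance, omega, v):
    # Scalar angular-spectrum pass: FFT (Eq. 3), multiply by exp(i k_z d) (Eq. 5b), inverse FFT.
    N = u0.shape[0]
    k = 2*np.pi*np.fft.fftfreq(N, d=dx)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    kz = np.sqrt(((omega/v)**2 - KX**2 - KY**2).astype(complex))   # evanescent parts decay
    return np.fft.ifft2(np.fft.fft2(u0)*np.exp(1j*kz*distance))

# usage: propagate the truncated-Bessel input over the HBAR thickness
N, dx, R_yig = 512, 2e-6, 200e-6
v2, omega = 3500.0, 2*np.pi*9.825e9            # assumed shear velocity (m/s) and frequency
x = (np.arange(N) - N/2)*dx
X, Y = np.meshgrid(x, x, indexing="ij")
R = np.hypot(X, Y)
u0 = np.where(R <= R_yig, j0(jn_zeros(0, 1)[0]*R/R_yig), 0.0)
u_top = propagate(u0, dx, 527.2e-6, omega, v2)
print(np.abs(u_top).max())
```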
Note that for isotropic systems with mutually perpendicular \(\hat{d}_{m}\)'s, as considered in this article, \(A_{m}\) reduces to \(A_{m}=\hat{d}_{m}\cdot\mathbf{\tilde{u}}\) at each \((k_{x},k_{y})\). The propagated plane-waves are periodic in the direction transverse to the beam propagation direction. When the diffracted waves reach the transverse boundaries of the computational domain, they introduce undesired reflections. To avoid these reflections, one needs to consider a sufficiently wide computational domain that can contain the waves even after multiple reflections. However, such large domains can significantly increase computational costs. An alternative approach is to introduce absorbing boundary regions that completely attenuate any waves that enter these regions [57]. In this study, we implement absorbing boundaries of thickness \(W_{\text{Abs}}=50\times\lambda\), where \(\lambda\) is the wavelength of the input field. We show the absorbing boundaries in Fig. 1, using blue shaded regions. To simulate the effect of the absorbing boundaries, we multiply \(\mathbf{u}_{t_{\text{HBAR}}}^{(i)}\) (Eq. 5) with a reflection operator \(R_{t_{\text{HBAR}}}\), defined as \[R_{t_{\text{HBAR}}}(x,y)=\begin{cases}1,&\text{if }R\leq W_{\text{eff}}/2\\ 0,&\text{otherwise}\end{cases} \tag{6}\] where \(W_{\text{eff}}=W-2W_{\text{Abs}}\) is the effective width of the simulation window without the absorbing boundaries. We propagate the attenuated reflected wave further through a distance \(t_{\text{HBAR}}\) to complete a full round trip. We implement the beam propagation by following the approach outlined in Eqs. 3-5b. After every round trip, we multiply the resulting complex displacement field by an additional phase \(R_{0,m}\) for each polarization \(m\): \[R_{0,m}(x,y)=\begin{cases}e^{i2k_{z0,m}t_{\text{YIG}}},&\text{if }R_{\text{YIG}}\leq R\leq W_{\text{eff}}/2\\ 1,&\text{if }R\leq R_{\text{YIG}}\\ 0,&\text{otherwise}\end{cases} \tag{7}\] where \(k_{z0,m}=k_{z,m}(k_{x}=0,k_{y}=0)\). The phase factor is introduced due to the finite width of the YIG film compared to the GGG width, \(W\). However, it is worth mentioning that this approximation is more appropriate for low-diffraction cases. We use it for all cases considered, to simplify the analysis. We use the resulting displacement field as the new input field for the next round trip iteration. We repeat this process for \(N\) round trips. We calculate the complex sum of the displacement fields, \(\mathbf{U}_{0}(x,y)=\Sigma_{i=1}^{N}\mathbf{u}_{0}^{(i)}(x,y)\) at \(z=0\), to include the effect of interference. We use the interference sum \(\mathbf{U}_{0}(x,y)\) obtained after \(N\) round trips as the initial beam of the next restart, \(\mathbf{u}_{0}^{(1)}(x,y)=\mathbf{U}_{0}(x,y)\), and restart the process. Such a restart process, using the interference sum as an input, ensures fast convergence to the desired mode. The other mode components are attenuated in the interference sum through destructive interference. We continue this process until the variation of the modeshape is within a chosen tolerance. We obtain a converged standing wave with displacement field, \(\mathbf{u}_{0}(x,y)\), as a final outcome. The frequency overtones for plane-waves traveling along the \(z\) direction in the HBAR structures are expected to be \(\omega_{m,n}=2\pi\times\frac{nv_{m}}{2t_{\text{HBAR}}}\), with \(n=1,2,3,...\infty\). The thickness of the structure is \(t_{\text{HBAR}}\), the wave velocity is \(v_{m}\), and \(m\) represents the polarization.
The overtones are separated by \(\frac{v_{m}}{2t_{\text{HBAR}}}\), known as the free spectral range (FSR). However, the diffracting waves traveling in the \(z\) direction do not have well-defined \(k_{z,m}\)'s, leading to Gouy phase effects [62], unlike plane-waves. The diffraction results in the shift of resonance frequencies from the monochromatic plane-wave overtones. In this work, we are interested in identifying the frequencies of the shear modes with \(m=2\). The reason is that the shear modes could couple with the Kittel magnon modes generated in the YIG thin-film (Eq. 1), in a rotating coordinate system. \begin{table} \begin{tabular}{l c c c} \hline & Young's modulus \(E\) (Pa) & Poisson's ratio \(\nu\) & Density \(\rho\) (kg/m\({}^{3}\)) \\ \hline YIG [71] & 0.2 \(\times\) 10\({}^{12}\) & 0.29 & 5170 \\ GGG [72] & 0.222 \(\times\) 10\({}^{12}\) & 0.28 & 7080 \\ \hline \end{tabular} \end{table} Table 1: Material properties of YIG and GGG. Note that it suffices to consider only \(m=1\) or \(m=2\) as the dominant shear polarization in the rotating coordinate system. To identify the resonance frequencies, we sweep through a frequency window of \(\omega_{0}\pm 5\times\mathrm{FSR}\) divided into 50 steps. Here, the frequency of interest, \(\omega_{0}=2\pi\times 9.825\) GHz, corresponds to the frequency of the \(2960^{th}\) overtone of a standing plane-wave in the HBAR structure. We choose this overtone to maintain the same frequency operating conditions as considered in a previous article that investigated similar HBAR structures [54]. For each frequency, we propagate the beam in the structure and calculate the intensities of the interference sums \((I=\int_{A}\mathrm{Re}[\mathbf{U}_{0}(x,y)]^{2}dA)\) at \(z=0\) after \(N\) round trips. We obtain the resonance frequencies by noting the frequencies for which the intensities are maximum. We consider a YIG/GGG HBAR structure with \(t_{\mathrm{YIG}}=200\) nm, \(R_{\mathrm{YIG}}=200\mu\)m, \(t_{\mathrm{GGG}}=527\mu\)m, and \(L_{x,\mathrm{GGG}}=L_{y,\mathrm{GGG}}=1200\mu\)m, for this analysis. We show the intensities of the propagated beam in Fig. 2(a), for the frequency range considered. The central intensity peak corresponds to \(\omega_{0}=2\pi\times\frac{2960v_{2}}{2t_{\mathrm{HBAR}}}\), the exact value of the \(2960^{\mathrm{th}}\) shear plane-wave overtone. The peaks form at resonance frequencies separated by FSR = 3.32 MHz, indicating the presence of multiple shear wave overtones around the frequency, \(\omega_{0}/2\pi\sim 9.825\) GHz. In Figs. 2 (b) and (c), we show the \(y\)-component of the normalized resonant modeshape, \(\mathrm{Re}[\mathbf{U}_{0y}(x,y)]\), near \(\omega_{0}\). We obtain these modeshapes after 800 round trips. This simulation included one restart after 400 round trips. This number is arbitrary, but we chose it after performing multiple tests. The basis of the selection is that the number is large enough that the Gouy phase effects on the interference sums are not significant. We compute the weighted deviation (WD) between the normalized displacement fields obtained before the 1st and the 2nd restarts, \(\mathbf{U}_{0}^{1}(x,y)\) and \(\mathbf{U}_{0}^{2}(x,y)\), respectively: \(|(\mathbf{U}_{0}^{1}(x,y)-\mathbf{U}_{0}^{2}(x,y))|/|\Sigma_{x,y}\mathbf{U}_{0}^{1}(x,y)|\). We monitor the WD between restarts to check for convergence. Figure 2(d) shows the weighted deviation between the displacement fields. As can be noted, the WD is much less than \(10^{-7}\), indicating that Figs.
2 (b) and (c) represent displacement fields converged within sufficient numerical tolerance. Some fringes remain in the modeshape profiles that are possible artifacts of our numerical analysis. These are the high frequency components resulting from the sampling of the sharp phase change induced by the YIG film. They are significantly lower in values, however, could be further reduced by either low-pass filtering or using a smoother phase change factor than \(R_{0,m}(x,y)\) (Eq. 7) used in this work. However, to obtain converged modes and identify the true modal frequency near an intensity peak of interest, we further narrow the frequency sweeping range with finer sampling (\(\sim 1\) kHz). To identify the true modal frequency near \(\omega_{0}\), we set the frequency to be \(\omega_{0}^{\mathrm{HBAR}}=\omega_{0}+\delta\omega\), with \(\delta\omega=2\pi\times 2.157\) kHz and restart the iterative process with Eq. 2 as the input beam. If such narrowing is not done, the modeshape can change significantly after each restart. This is due to the Gouy phase effects the beam incurs due to diffraction [62], as discussed earlier. This results in the detuning of overtones with respect to the plane-wave overtones. We illustrate the detuning effect in Fig. 3. We show the real and imaginary parts of the complex sums \(\mathbf{U}_{0y}^{(1)}(x,y)\) and \(\mathbf{U}_{0y}^{(2)}(x,y)\) computed before first and second restart (after 400 and 800 round trips), respectively. We calculate the complex sums at \(\omega_{0}\) without performing the \(\delta\omega\) correction discussed above. It can be noted that both the real and imaginary parts of \(\mathrm{U}_{0y}^{(1)}\) and \(\mathrm{U}_{0y}^{(2)}\) vary significantly during the round trips. Therefore, this approach does not result in converged mode shapes. Although the narrowed frequency sweeping helps with identifying the true resonant frequency, it is a tedious process to identify the necessary fine sampling. We need to restart the iterative process multiple times, particularly for high-diffraction cases. The results shown in Fig. 2, represent propagating waves in a low diffraction regime with Fresnel number \(N_{F}=213\). However, this work also considers wave propagation in HBAR structures with \(R_{\mathrm{YIG}}\) as low as \(10\mu\)m corresponding to a very low Fresnel number, \(N_{F}=1.06\) indicating a high-diffraction regime. We investigate these structures to discuss the effects of reduced actuator lateral area on the phonon modes and their lifetimes. The reduced area promises to increase scalability of the devices for mem Figure 2: **Resonant modes in planar HBAR structures:** (a) Multimode phonons, represented by the overtones of the fundamental mode, and their free spectral range (FSR). (b) Isometric view of the y-component of \(\mathbf{U}_{0}(x,y)\), \(\mathrm{U}_{0y}(x,y)\). (c). Top view of \(\mathrm{U}_{0y}(x,y)\) showing localization effect caused by the YIG film. (d) Weighted deviation (WD) between mode profiles before first and second restarts. ory applications. The high diffraction regime of these structures introduces significant Gouy phase effects on the propagating waves and detuning of frequencies. It becomes challenging to identify a resonant frequency by merely narrowing the frequency sweeping range. We implement an adaptive version of the FBPM algorithm to circumvent this challenge. We like to point out that the phonon modeshapes can be well predicted by using absorbing boundaries in the FBPM technique. 
However, their lifetime can be dependent on how the shape and size of the absorbing boundaries are defined. We do not carry out extensive investigation to narrow down the choice of the boundaries and left for a future investigation. #### Phonon Modeshape: Adaptive Fourier Beam Propagation The FBPM approach described above can be formulated as an eigenvalue problem as given below: \[\mathbf{RTu}_{0}(x,y)=\Lambda\mathbf{u}_{0}(x,y) \tag{8}\] where \(\mathbf{RT}\) is the round trip operator that includes all the transformations described in the previous section (Eqs. 3-7) and \(\Lambda\) is the eigenvalue corresponding to the eigenvector \(\mathbf{u}_{0}\). \(\Lambda\) is real for a standing wave mode. A complex \(\Lambda\) applies a global phase to the input beam profile after a round trip, which implies a traveling wave mode. The phase prevents the waves to interfere constructively over multiple round trips. It is possible to solve the eigenvalue problem for two-dimensional (2D) problems (e.g., if HBARs are represented by planar 2D structure), however, it cannot be directly solved for three-dimensional problem of our interest. We develop an iterative method to obtain the mode that satisfies Eq. 8 with a real \(\Lambda\). In the adaptive FBPM (a-FBPM) algorithm, we iteratively adjust the frequencies based on the phase difference incurred over a round trip to arrive at the desired standing wave modes. We outline the steps of the iterative method below. 1. Start with an input beam, \(\mathbf{u}_{0}^{(i)}(x,y)\), that travels in the HBAR structure, for the \(i^{th}\) round trip. The initial input beam (\(i=1\)) has an initial wavelength, \(\lambda^{(1)}=\frac{2t}{n}\) and frequency \(\omega^{(1)}=2\pi\times\frac{nve}{2t}\). Here, \(n\) is the overtone number, \(t=t_{\text{HBAR}}\) and \(v_{2}\) is the velocity of shear phonons. The wavelength and frequency are estimated based on how overtones are estimated for an open-ended column for \(n^{\mathbf{th}}\) overtone. The initial input modeshape is chosen to be a truncated Bessel function as shown in Eq. 2. 2. Calculate \(\mathbf{u}_{0}^{(i+1)}(x,y)=\mathbf{RTu}_{0}^{(i)}(x,y)\). 3. Estimate the real eigenvalue, \(\Lambda^{(i)}=\sqrt{\frac{I^{(i+1)}}{I^{(i)}}}\), where \(I^{(i)}=\int|\mathbf{u}_{0}^{(i)}|^{2}dxdy\) and \(I^{(i+1)}=\int|\mathbf{u}_{0}^{(i+1)}|^{2}dxdy\). 4. Calculate the residual, \(\mathrm{R}^{(i)}=||(\mathbf{u}_{0}^{(i+1)}(x,y)-\Lambda^{(i)}\mathbf{u}_{0}^{ (i)}(x,y))||/||\Lambda^{(i)}\mathbf{u}_{0}^{(i)}(x,y)||\). 5. If \(\mathrm{R}^{(i)}\leq\mathrm{tol}\), end simulation and output final eigenvalue and modeshape of resonant modes, else continue to the next step. We choose the tolerance to be \(\mathrm{tol}=10^{-6}\) for all the a-FBPM calculations of this study. 6. If \(\mathrm{R}^{(i)}>\mathrm{tol}\), estimate the global phase factor induced by the round trip operation \(\mathbf{RT}\), \(\theta^{(i)}=\mathrm{Arg}(\mathbf{u}_{0}^{(i)}(0,0))-\mathrm{Arg}(\mathbf{u}_{ 0}^{(i+1)}(0,0))\). 7. Set \(\omega^{(i+1)}=\omega^{(i)}+\frac{v_{2}\theta^{(i)}}{2t_{\text{HBAR}}}\) and \(\lambda^{(i+1)}\) accordingly (\(\lambda^{(i+1)}=2\pi nv_{2}/\omega^{(i+1)}\)). The steps 6 and 7 makes the algorithm adaptive. 8. Repeat the process until step 5 is satisfied or the maximum number of round trips \(N\) is reached, at which point we set \(\mathbf{u}_{0}^{(1)}(x,y)=\mathbf{U}_{0}(x,y)=\Sigma_{n=1}^{N}\mathbf{u}_{0}^ {(i)}(x,y)\), and we restart the iterative procedure. 
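To make the adaptive loop concrete, a schematic transcription of steps 1-8 is given below. It assumes a hypothetical `round_trip_at(omega)` factory that returns the frequency-dependent round-trip operator (for instance, built as in the earlier sketch); the function names and defaults are ours, not taken from the actual code.

```python
import numpy as np

def adaptive_fbpm(u0, round_trip_at, omega, v2, t_hbar,
                  n_restart=400, tol=1e-6, max_trips=5000):
    """Schematic a-FBPM iteration (steps 1-8). round_trip_at(omega) is an
    assumed interface returning the round-trip operator at frequency omega."""
    u, total = u0.copy(), np.zeros_like(u0)
    lam = 1.0
    for i in range(1, max_trips + 1):
        u_next = round_trip_at(omega)(u)                                   # step 2
        lam = np.sqrt(np.sum(np.abs(u_next)**2) / np.sum(np.abs(u)**2))    # step 3
        res = np.linalg.norm(u_next - lam * u) / np.linalg.norm(lam * u)   # step 4
        if res <= tol:                                                     # step 5
            return lam, u_next, omega
        c = u.shape[0] // 2                                                # on-axis sample
        theta = np.angle(u[c, c]) - np.angle(u_next[c, c])                 # step 6
        omega += v2 * theta / (2 * t_hbar)                                 # step 7 (adaptive)
        total += u_next
        u = u_next
        if i % n_restart == 0:                                             # step 8: restart
            u, total = total, np.zeros_like(u0)
    return lam, u, omega
```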
We demonstrate the effectiveness of the adaptive FBPM algorithm in predicting the resonant modes in HBAR structures in Fig. 4. We show the decay of the residual, R, as a function of the number of steps. We show the results for two planar HBAR structures with \(R_{\text{YIG}}=200~{}\mu\)m (red) and \(10~{}\mu\)m (blue), respectively. We calculate the complex interference sum after every \(NR\) round trips and restart the iterative procedure. The choice of the number, \(NR\), is an informed guess. We choose \(NR=400\) and \(40\) for the HBARs with Figure 3: **Lateral mode profiles obtained with FBPM showing Gouy phase effects:** The real (solid) and imaginary (dashed) components of the y-component of the interference sum before the first \(\mathrm{U}_{0y}^{(1)}\) and second restarts \(\mathrm{U}_{0y}^{(2)}\) at \(\omega_{0}=2\pi\times\frac{2960v_{2}}{2t_{\text{HBAR}}}\). The components deviate significantly between restarts. 200 \(\mu\)m and 10 \(\mu\)m, respectively. We choose a smaller number of round trips, \(NR\), for the 10 \(\mu\)m case, since the HBAR with \(R_{\rm YIG}=10\)\(\mu\)m represents a high-diffraction structure compared to the 200 \(\mu\)m case and the mode-shape decays rapidly for this case. As shown in Fig. 4, it takes a total of \(\sim\)650 and \(\sim\)1200 round trips (including restarts) to obtain converged resonant modes for the 10 \(\mu\)m and the 200 \(\mu\)m case, respectively. The step-like features of the residual decay coincide with the steps when we compute the interference sums and use it as input for the next restart of the iterative procedure. The jumps occur because some errors get canceled due to destructive interference when an interference sum is computed. In some cases, R can have a momentary rise due to accumulation of errors from previous round trips, however, the overall trend continues to decrease ultimately reaching convergence. As shown in Fig. 4, we continue the iterative process till R \(<10^{-8}\) to show continued convergence even after the tolerance is reached. #### iii.2.2 (2) Phonon Modeshape Analysis: Hankel Transform Eigenvalue Problem As we discussed previously, the challenges with the standard FBPM approach are that it is tedious to identify the converged mode by sweeping through frequencies and obtaining the convergent mode is a slow process. The a-FBPM approach avoids frequency sweeping and mostly overcomes the slow convergence issues. However, as we discuss later, a-FBPM still suffers from convergence challenges for some cases, particularly for the confocal HBAR. Here, we discuss an alternate method to the FBPM/a-FBPM methods, which leverages the axi-symmetry of the YIG/GGG HBAR system due to their isotropic material properties and the circular cross-section of the YIG film. Note that although the vector field of the shear acoustic phonon mode does not obey axi-symmetry, the scalar field describing the dominant shear-component of the \(\mathbf{u}_{0}\) is expected to obey axi-symmetry. Such a system can be modelled using the axi-symmetric equivalent of a Fourier transform, the Hankel (HK) transform, reducing the problem from a from the three-dimensional (3D) to a two-dimensional (2D) one. The 2D problem can be obtained by representing HBARs as planar 2D structure, for example. Here, we provide a brief description of the HK method. We encourage the interested reader to find a detailed description of the method elsewhere [65, 66]. 
The Hankel transform approach has been mostly used in the field of optics, e.g., Fabry-Perot cavities [65]; however, to the best of our knowledge, it has not been applied to acoustics problems. We cast the systems described in Fig. 1 into an axi-symmetric setting about the \(z\)-axis. We transform the \(x-y\) coordinates into the radial coordinate \(r\), with limits \(0\leq r\leq W/2\) discretized into \(N\) points as \[r_{j}=W/2\left(\frac{\zeta_{j}}{\zeta_{N}}\right) \tag{9}\] where \(\zeta_{j}\) (\(j=1,2,3,..,N\)) are the roots of the \(J_{1}\) Bessel function. The round trip operator for the HK approach, \(\mathbf{RT}_{\rm hk}\), is given as: \[\mathbf{RT}_{\rm hk}=\mathbf{R}_{0}\mathbf{P}\mathbf{R}_{t_{\rm HBAR}}\mathbf{P} \tag{10}\] where \(\mathbf{P}\) is the \(N\times N\) propagator matrix defined as: \[\mathbf{P} =(H^{+})^{-1}\tilde{G}H^{+}, \tag{11a}\] \[H_{ij}^{+} =\frac{W^{2}}{2\zeta_{N}^{2}}\frac{J_{0}(\zeta_{i}\zeta_{j}/\zeta_{N})}{\zeta_{N}^{2}J_{0}^{2}(\zeta_{j})}, \tag{11b}\] \[\tilde{G}_{ij} =\exp\left(-i\frac{2\zeta_{j}^{2}}{W^{2}}\right)\delta_{ij}. \tag{11c}\] Here, \(H^{+}\) is the Hankel transform whose matrix inverse is taken to be an approximation of the inverse Hankel transform, and \(\tilde{G}\) is the Green's function obtained from the Fourier transform of the Fresnel propagator in the paraxial approximation. \(\mathbf{R}_{t_{\rm HBAR}}\) and \(\mathbf{R}_{0}\) are \(N\times N\) diagonal reflection operators at \(z=t_{\rm HBAR}\) and \(z=0\), respectively, and are defined as \[\mathbf{R}_{0,jj} =R_{0,2}(r_{j}), \tag{12a}\] \[\mathbf{R}_{t_{\rm HBAR},jj} =R_{t_{\rm HBAR},2}(r_{j}). \tag{12b}\] Figure 4: **Obtaining resonant modes using the adaptive FBPM algorithm:** Decay of residual (R) of modeshapes as a function of number of round trips considered. Results are shown for two planar HBAR structures with \(R_{\rm YIG}=200\)\(\mu\)m (red, left and bottom axes) and 10 \(\mu\)m (blue, right and top axes). The circles with arrows point to the axis corresponding to each plot. Since the dimensionality of the problem is reduced from 3D to 2D, the Hankel eigenvalue problem \(\mathbf{RT}_{\rm hk}\,u(r)=\Lambda u(r)\) can be solved to obtain the axisymmetric scalar phonon modeshapes \(u(r)\) and their corresponding eigenvalues \(\Lambda\). This method, unlike the FBPM/a-FBPM approaches, does not require a Fox-Li-like iterative process and can also predict higher-order phonon modes. Although this approach has limited applicability due to its axi-symmetric constraints, it was helpful in obtaining phonon modes and lifetimes of the various HBAR structures considered in this study in a computationally inexpensive manner. The expedited analysis allowed us to analyze a larger number of HBAR structures, as we show in the Results and Discussion section.

#### Diffraction-Limited Phonon Lifetime

In this article, we estimate the diffraction-limited lifetime of phonons using three different methods: (**A**) Eigenvalue method, (**B**) Exponential curve fitting method and (**C**) Clipping method. (**A**) Eigenvalue method: In this method, we obtain the eigenvalues using the adaptive FBPM simulation and use them to estimate the phonon lifetime. The elastic energy contained in an acoustic beam is given by \(E\propto\int|\textbf{u}|^{2}dxdy\). We do not consider the \(z\)-dependence of **u** for the energy calculation, since the mode shape largely remains unaltered throughout the thickness. 
Starting with a cavity phonon mode with a total elastic energy \(E_{tot}\), the ratio of the elastic energy left in the cavity after one round trip (\(E_{in}\)) to \(E_{tot}\) is given as \(E_{in}/E_{tot}=\Lambda^{2}\). Here, \(\Lambda\) is the real eigenvalue obtained using the a-FBPM simulations. We assume an exponential decay of energy with propagation time, \(E_{in}=E_{tot}e^{-t_{\text{RT}}/\tau}\). Here, \(t_{\text{RT}}=2t_{\text{HBAR}}/v_{2}\) is the time taken by shear waves to complete a round trip. The lifetime \(\tau\) of the phonon mode can then be obtained from \[\tau=\frac{-2t_{\text{HBAR}}}{v_{2}\text{ln}(\Lambda^{2})}. \tag{13}\] Since these computations are performed on a finite \(x-y\) mesh grid, \(\Lambda\), and therefore \(\tau\), can be sensitive to the mesh density. Hence, it is important to select an optimal mesh density to ensure the convergence of \(\Lambda\), since it directly influences \(\tau\). In Fig. 5, we show the variation of \(\Lambda\) with mesh density \(N_{x}\) (\(=N_{y}\)) for both the low-diffraction (\(R_{\text{YIG}}=200\mu\)m) and the high-diffraction (\(R_{\text{YIG}}=10\mu\)m) cases. We find that the variation of \(\Lambda\) is much less than 1% for both cases when \(N_{x}\geq 1024\). Consequently, we choose \(N_{x}=N_{y}=1024\) to compute phonon modes and lifetimes for all cases presented in this article. (**B**) Exponential curve fitting method: In this method, we allow the initial beam to travel for multiple round trips and estimate the diffraction loss by evaluating the overlap of the mode profile with the initial input profile after each round trip. We estimate the overlap using the following function: \[I(t)=\frac{|\left\langle\textbf{u}_{0}(t)|\textbf{u}_{0}(0)\right\rangle|^{2} }{|\left\langle\textbf{u}_{0}(0)|\textbf{u}_{0}(0)\right\rangle|^{2}}. \tag{14}\] Here, \(t\) is an integral multiple of the round-trip time, \(t_{\text{RT}}\). The propagation is continued until \(I(t)<0.1\) or \(t>3\) ms, whichever is reached first. In Fig. 6, we show the decay of the overlap function, \(I(t)\), for beams traveling in a HBAR structure with \(R_{\text{YIG}}=200\mu\)m. We show the different decays in this figure to highlight the effect of the finite width of the transducer (YIG film) on the propagating beam. The transducer induces a localizing effect on the beam that can be modeled by introducing a phase factor. We find that \(I(t)\) decays at a much slower rate when the YIG-induced phase factor is considered, compared to the case without the phase factor. We fit the two decays with exponential functions and obtain the phonon lifetimes as \(\tau^{\text{w/o phase}}=0.1\) ms and \(\tau^{\text{w phase}}=14.1\) ms. The remarkable two-orders-of-magnitude difference between these two estimates highlights the importance of including the actuator phase effects in such an analysis. Subsequently, we included the phase effects in all our analyses to obtain the phonon lifetimes. A past study used a similar approach to estimate the lifetime of longitudinal HBAR phonons in an AlN/Sapphire structure [57]. However, the phonons were treated as superpositions of Bessel functions in a large simulation window without accounting for the effect of the localization induced by the AlN transducer. Interestingly, we find that the lifetime predicted when including the phase effects closely matches the \(\tau\) of 15.5 ms predicted by the eigenvalue method, discussed earlier. 
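For orientation, the two lifetime estimators just described reduce to a few lines of code once \(\Lambda\) or the overlap samples are available. The sketch below is only illustrative; the shear velocity is an assumed representative value, and the fitting call simply realizes the exponential-decay assumption behind Eq. 14 using SciPy.

```python
import numpy as np
from scipy.optimize import curve_fit

t_hbar, v2 = 527.2e-6, 3.5e3        # thickness (m) and assumed shear velocity (m/s)
t_rt = 2 * t_hbar / v2              # round-trip time

def lifetime_from_eigenvalue(lam):
    """Eq. 13: phonon lifetime from the round-trip eigenvalue (0 < lam < 1)."""
    return -2 * t_hbar / (v2 * np.log(lam**2))

def lifetime_from_overlap(overlaps):
    """Exponential fit of the per-round-trip overlaps I(t) of Eq. 14."""
    t = t_rt * np.arange(1, len(overlaps) + 1)
    (tau,), _ = curve_fit(lambda t, tau: np.exp(-t / tau), t, overlaps, p0=[1e-3])
    return tau
```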
Figure 5: **Eigenvalues of two planar HBAR structures with \(R_{\text{YIG}}=200\mu\)m (red) and \(10\mu\)m (blue):** Variation of simulated \(\Lambda\) for meshes with density ranging from \(128\times 128\) to \(2400\times 2400\) points, keeping the width \(W\) fixed at \(1200\mu\)m. The circles with arrows point to the axis corresponding to each plot. (**C**) Clipping method: In this method, we obtain the energy \(E_{in}\propto\int_{V_{in}}|\mathbf{u}|^{2}dxdy\) contained in a volume, \(V_{in}\), of a cylinder spanning the YIG disc and the thickness of the HBAR structure. We calculate \(E_{in}\) for a converged modeshape, obtained with the a-FBPM or Hankel method. With \(E_{in}\) calculated, \(\tau\) is obtained as \(\tau=\frac{-2t_{\text{HBAR}}}{v_{2}\ln(E_{in}/E_{tot})}\). This could be taken as a lower limit to the phonon lifetime since it assumes that all energy outside the cylinder of interest is lost after a round trip. This method is similar to the clipping method used in Fabry-Perot cavities [65, 66] and HBAR resonators excited opto-mechanically using photon beams [73]. In these optical systems, the clipping method is more applicable since the input photon beam spans the entire \(x-y\) plane and is clipped by the localizing finite cross-section mirrors or domes. In our system, the input acoustic beam is already laterally confined, so this method may have limited applicability. We still include the discussion here as a consistency check, since the method provides a lower bound for the predicted lifetimes.

### Magnon-Phonon Coupling

The magnon-phonon coupling strength, \(g_{mb}\), is given by \[g_{mb}=\frac{B}{\sqrt{A_{\nu}Q_{\eta}}}\left|\int_{V_{\text{YIG}}}\left[\frac{ du_{x}^{*}}{dz}m_{x}+\frac{du_{y}^{*}}{dz}m_{y}\right]d\mathbf{r}\right|. \tag{15}\] Here, \(B=7\times 10^{5}\) J/m\({}^{3}\) is the magnetoelastic constant of YIG, \((m_{x},m_{y})\) are the x-y components of the magnetization \(\mathbf{m}=\mathbf{m}_{\text{YIG}}\), and \((u_{x}^{*},u_{y}^{*})\) and \((\frac{du_{x}^{*}}{dz},\frac{du_{y}^{*}}{dz})\) represent the complex conjugates of the x-y components of the displacement \(\mathbf{u}\) and their corresponding shear strains, respectively. We integrate the coupling between the magnon and the phonon modes over the volume of the YIG film, \(V_{\text{YIG}}\), since the magnon modes reside in YIG. The term \(\sqrt{A_{\nu}Q_{\eta}}\) in the denominator is a normalizing factor, as we describe below. We normalize the magnon modes as \[\frac{M_{s}}{\gamma}\int_{V_{\text{YIG}}}\mathbf{m}_{\nu}^{*}(\mathbf{r}) \cdot(\mathbf{k}\times\mathbf{m}_{\nu^{\prime}}(\mathbf{r}))d\mathbf{r}=-iA_ {\nu}\delta_{\nu,\nu^{\prime}}, \tag{16}\] and the HBAR phonon modes as \[2\omega_{\eta}\rho\int_{V_{\text{HBAR}}}\mathbf{u}_{\eta}^{*}(\mathbf{r}) \cdot\mathbf{u}_{\eta^{\prime}}(\mathbf{r})d\mathbf{r}=Q_{\eta}\delta_{\eta, \eta^{\prime}}. \tag{17}\] Here, \(\mu_{0}M_{s}=0.175\) T is the saturation magnetization, \(\gamma=2\pi\times 28.5\) GHz/T is the gyromagnetic ratio, \(\rho=7080\) kg/m\({}^{3}\) is the GGG material density, \(\mathbf{k}\) is the unit vector along the z axis, and \(\nu\) and \(\eta\) are magnon and phonon mode indices, respectively, with \(\omega_{\eta}\) being the corresponding phonon mode frequency. \(V_{\text{HBAR}}\) is the volume of the entire HBAR structure. \(A_{\nu}\) and \(Q_{\eta}\) are the magnon and phonon normalization constants, respectively, and have the same dimensionality. 
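On a discrete grid, the coupling integral of Eq. 15 and the normalization of Eq. 17 become simple sums. The helper functions below are a schematic discretization under the assumption that the strain and magnetization profiles have already been sampled on a common grid inside the YIG film; the function names and argument conventions are ours, not those of any existing code.

```python
import numpy as np

def phonon_norm(u, omega_eta, dV, rho=7080.0):
    """Eq. 17: normalization constant Q_eta of a discretized phonon mode u
    sampled over the full HBAR volume with cell volume dV."""
    return 2.0 * omega_eta * rho * np.sum(np.abs(u)**2) * dV

def coupling_strength(dux_dz, duy_dz, m_x, m_y, dV_yig, A_nu, Q_eta, B=7e5):
    """Eq. 15: discretized magnon-phonon coupling. The arrays hold the shear
    strains (du/dz) and magnetization components on a grid inside the YIG
    film; A_nu and Q_eta are the normalization constants of Eqs. 16-17."""
    overlap = np.sum(np.conj(dux_dz) * m_x + np.conj(duy_dz) * m_y) * dV_yig
    return B / np.sqrt(A_nu * Q_eta) * np.abs(overlap)
```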
If zero diffraction is assumed, the acoustic energy is completely localized in the volume \(V_{in}\), resulting in a complete overlap of the lateral mode profiles of the magnon and phonon modes. We obtain the zero-diffraction magnon-phonon coupling strength to be \(g_{mb}^{0}/2\pi=1.13\) MHz, independent of \(R_{\text{YIG}}\). This prediction compares well with a previous experimental result of 1 MHz [53]; however, it is slightly higher than another experimental result of 0.75 MHz [54]. In HBAR structures, however, some acoustic energy can lie outside \(V_{in}\) due to diffraction, which can impact both \(\tau\) and \(g_{mb}\), as we discuss in the Results and Discussion section.

## III Results and Discussion

Here, we discuss the phonon lifetimes and the magnon-phonon coupling in the HBAR structures shown in Fig. 1, in the diffraction-limited regime. We investigate the diffraction-limited performance because we are interested in improving the scalability of hybrid magnonic devices. The diffraction effects play an increasingly important role as we reduce the radius of the YIG disc, since the reduced aperture increases the diffraction of the acoustic waves propagating into the GGG region.

### Diffraction-Limited Phonon Lifetime

In Fig. 7 (a), we show the variation of phonon lifetimes in planar HBAR structures with decreasing YIG radius. We calculate the lifetimes using the three methods discussed in the Methods section: (**A**) eigenvalue (red), (**B**) exponential fitting (blue, purple), and (**C**) clipping methods (green). Figure 6: **Decay of the overlap function, \(I(t)\), for propagating beams in a HBAR structure with \(R_{\text{YIG}}=200\mu\)m**. Solid lines indicate \(I(t)\) while dashed lines indicate their respective exponential fits. The finite width of the YIG film induces a localizing effect on the beam, modeled with a phase factor. \(I(t)\) decays at a much slower rate when the phase effect is included (blue, top and right axes) compared to the no-phase-effect case (red, bottom and left axes). As can be noted from Fig. 7, all methods predict that the lifetimes decrease with decreasing \(R_{\text{YIG}}\) due to increased diffraction. We obtain the highest and the lowest lifetimes to be \(\tau=15.5\)ms (\(Q=\omega\tau=9.57\times 10^{8}\)) and \(\tau=10.5\mu\)s (\(Q=6.48\times 10^{5}\)), for HBAR structures with \(R_{\text{YIG}}=200\mu\)m and \(10\mu\)m, respectively, using the eigenvalue method. These results indicate that the lifetimes are reduced by more than three orders of magnitude in the high-diffraction regime. The eigenvalue method (red) results in the highest lifetime estimates among the three methods considered. For the low-to-medium diffraction cases, the lifetimes predicted by the eigenvalue method (red) match closely with those predicted from the exponential fitting method (blue), when we include the phase effects due to the YIG film. This can be explained using the following arguments. The input beam can be described as a superposition of the eigenmodes of the HBAR cavity. Once the initial input beam undergoes multiple round trips, only the fundamental mode is likely to survive, while the rest of the mode components decay at a faster rate. When we operate at the overtone frequencies of one of the fundamental modes, the fundamental mode becomes the dominant mode. It can be argued that this dominant mode is the same as the fundamental eigenmode found by solving the eigenvalue equation, Eq. 8. 
Therefore, the decay of the overlap function of the input field is expected to have a similar character to that of the decay of the dominant eigenmode that it is composed of. As a result, we obtain similar lifetime values using the two different methods. However, for the highest diffraction case (\(R_{\text{YIG}}=10\mu\)m), we observe a 35-fold lower lifetime prediction from the exponential fit compared to the eigenvalue method. In the exponential fit method, we perform the exponential fitting of the acoustic overlap function, \(I(t)\), starting at \(t=0\). Due to high diffraction, a significant amount of the energy of the input acoustic beam spreads out laterally even before the YIG film introduces localizing effects on the propagating beam. This results in a rapid reduction of \(I(t)\) during the first few round trips. The loss of acoustic overlap during the initial transient phase results in lower \(\tau\) predictions. We expect that the two lifetime predictions would be closer if the exponential fit were obtained after a steady state has been achieved. Note that these exponential fitting predictions are computed by including the phase effects due to the YIG film. We also show the lifetimes predicted from the exponential fitting method when the phase effects due to the YIG film are ignored (purple). These effects were often ignored in past analyses of HBAR phonons; however, as we show in Fig. 7, they significantly affect the predictions. We obtain \(\tau^{\text{w/o phase}}\) values that are consistently lower than the \(\tau^{\text{w phase}}\) values, as also discussed in the Methods section (Fig. 6). The difference between the two predictions, \(\tau^{\text{w/o phase}}\) and \(\tau^{\text{w phase}}\), decreases monotonically from 135-fold to 1.35-fold as we decrease \(R_{\text{YIG}}\) from \(200\mu\)m to \(10\mu\)m. This is expected since the effect of the YIG film width on the mode diminishes as we approach \(R_{\text{YIG}}\to 0\). We also show the lifetime predictions from the clipping method (green), which assumes that the energy outside the cylindrical volume of interest, \(V_{in}\), is completely lost after a round trip. We find the lifetimes to be more than an order of magnitude lower than those predicted from the eigenvalue method for all cases considered. Figure 7: **Diffraction-limited performance of planar HBAR structures:** (a) Phonon lifetimes, \(\tau\), calculated from the eigenvalue (red), exponential fitting (blue, purple), and clipping methods (green). (b) Ratio between magnon-phonon coupling in HBAR structures in various diffraction regimes and that in the zero-diffraction limit, \(g_{mb}/g_{mb}^{0}\) (red). Decrease of \(g_{mb}\) is connected to the spread of modeshape in the high-diffraction regime, as we decrease \(R_{\text{YIG}}\). Increasing amount of acoustic energy spreads into the GGG region, causing reduced overlap between phonon and magnon mode profiles in the YIG region. The ratio of acoustic energy spreading out to the total energy, \(E_{out}/E_{tot}\) (black), increases in the high-diffraction regime. (c) Magnon-phonon cooperativity \(C\). \(R_{\text{YIG}}\) is varied keeping all other geometric factors fixed. Solid lines correspond to a-FBPM predictions, while the circles of the same color correspond to the respective HK predictions. We obtain the lifetime in the highest diffraction regime to be \(\tau=0.311\mu\)s and the respective quality fac 
tor, \(Q\) is \(1.92\times 10^{4}\). Interestingly, these values are still marginally greater than experimentally observed values of \(0.25\mu\)s and \(1.45\times 10^{4}\), respectively [54], for a device operating in the low-diffracting regime. The reason for this discrepancy is that the experimental values are obtained for HBAR devices operating at room temperatures. In this limit, the performance of HBAR structures is limited by the material attenuation effects [55, 56]. In our study, we ignore the material attenuation effects and only discuss the diffraction-limited performance. Our predictions will be more representative for devices operating in the cryogenic or potentially milli Kelvin regimes. For example, the HBAR devices could be envisioned to effectively couple with superconducting qubit systems, etc., which typically operate in the milli Kelvin regime. We like to address here the uncertainties associated with the numerical prediction of the HBAR phonon lifetimes using different methods. In the eigenvalue method, we obtain the lifetimes from the ratio between the HBAR thickness (\(t_{\text{HBAR}}\)) and a function of the phonon velocity and the eigenvalue, \(\Lambda\), as shown in Eq. 13. Following Eq. 13, an error propagation relation can be written as \[\left|\frac{\text{d}\tau}{\tau}\right|=\left|\frac{v_{2}\tau}{t_{\text{HBAR}} }\times\frac{\text{d}\Lambda}{\Lambda}\right|\propto\left|\tau\times\frac{ \text{d}\Lambda}{\Lambda}\right|. \tag{18}\] Here, the order of magnitude of the prefactor \(\frac{v_{2}}{t_{\text{HBAR}}}\sim 10^{7}\). However, the prefactor does not affect the predictions in our study since we keep the material and thickness fixed. Equation 18 suggests that the uncertainty \(\left|\frac{\text{d}\tau}{\tau}\right|\) increases monotonously with \(\tau\) when other factors are fixed. It can be deduced from Equation 18 that to predict a lifetime within \(x\%\) of variability, \(\Lambda\) has to be predicted to be within \(\frac{t_{\text{HBAR}}}{v_{m}\tau}\%\) of variability. This implies that we have to be increasingly more stringent with our \(\Lambda\) convergence criteria as \(\tau\) increases. Figure 8 shows that the conditions for desired accuracy (relative tolerance) needed for \(\Lambda\) predictions become more stringent for high-\(\tau\) (high-\(Q\)) systems. When \(\tau=0.1\mu\)s and the desired accuracy is set to 10%, \(\Lambda\) must be predicted within 15% accuracy, which is attainable following our numerical procedure. The accuracy requirement increases fast when the lifetime values are much higher. For example, when \(\tau=1\)ms, and the desired accuracy is set to 10% of \(\tau\) value, \(\Lambda\) must be predicted within \(\sim\)\(10^{-3}\%\) of accuracy. For our HBAR with \(R_{\text{YIG}}=200\mu\)m system, we obtain \(\left|\frac{\text{d}\tau}{\tau}\right|=7.9\times 10^{-4}\) with \(\left|\frac{\text{d}\Lambda}{\Lambda}\right|=4.7\times 10^{-7}\) between last two restarts. On the other hand for the HBAR with \(R_{\text{YIG}}=10\mu\)m system, \(\left|\frac{\text{d}\tau}{\tau}\right|=3.9\times 10^{-7}\) with \(\left|\frac{\text{d}\Lambda}{\Lambda}\right|=3.8\times 10^{-8}\). However, such high accuracy is challenging to achieve numerically because of the computational expense involved in simulating a large number of iterations. We discuss an alternative HK approach to expedite the numerical analysis. 
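As a worked illustration of Eq. 18, the convergence requirement on \(\Lambda\) for a target lifetime accuracy can be evaluated directly. The sketch below uses an assumed representative shear velocity; it simply rearranges Eq. 18 and reproduces the two accuracy levels quoted above.

```python
t_hbar, v2 = 527.2e-6, 3.5e3     # HBAR thickness (m) and assumed shear velocity (m/s)

def lambda_tolerance(tau, rel_tau_accuracy=0.1):
    """Eq. 18 rearranged: relative accuracy required on Lambda to predict tau
    within the requested relative accuracy."""
    return t_hbar / (v2 * tau) * rel_tau_accuracy

print(lambda_tolerance(0.1e-6))  # ~0.15   -> Lambda needed to ~15% for tau = 0.1 us
print(lambda_tolerance(1e-3))    # ~1.5e-5 -> ~1e-3 % for tau = 1 ms, as quoted above
```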
#### Magnon-Phonon Coupling in Diffraction-Limited Regime We now turn to discuss the effect of diffraction on the magnon-phonon coupling strength, \(g_{mb}\). We compute the ratio between coupling strength, \(g_{mb}\), in the planar HBAR structures, and that in the zero-diffraction limit, \(g_{mb}^{0}\), to estimate the effect of diffraction on magnon-phonon coupling. In Fig. 7(b), we show the variation of the ratio between coupling strengths, \(g_{mb}/g_{mb}^{0}\), in planar HBAR structures with decreasing YIG radius. The ratio is equal to 1 for low-diffraction cases with \(R_{\text{YIG}}\gtrsim 100\mu\)m, as expected. As we reduce \(R_{\text{YIG}}\) to \(40\mu\)m, \(g_{mb}\) only changes by 5% from the \(g_{mb}^{0}\) limit. Note that this case typically corresponds to a strongly diffracting regime, however, the effect of diffraction is minimized. This is because the YIG disc helps to localize the beam and thus, preserve the magnon-phonon overlap and the coupling strength. However, we observe a sharp decline of \(g_{mb}\) for \(R_{\text{YIG}}\leq 40\mu\)m. \(g_{mb}\) drops to \(77.4\%\) of \(g_{mb}^{0}\) for HBAR with \(R_{\text{YIG}}=20\mu\)m and \(55.7\%\) of \(g_{mb}^{0}\) for \(R_{\text{YIG}}=10\mu\)m, the highest diffraction case considered here. The decrease of \(g_{mb}\) can be explained in the following way. In the zero diffraction limit, the acoustic energy of a propagating beam is completely localized in the volume \(V_{in}\), of a cylinder with radius equal to \(R_{\text{YIG}}\) and the thickness of the HBAR structure. This results in a complete overlap of lateral mode profiles of magnon and phonon modes. However, as we decrease \(R_{\text{YIG}}\), the modeshape spreads out beyond \(V_{in}\), due to diffraction. Correspondingly, some acoustic energy also leaks out of \(V_{in}\) and spreads into the GGG region. As a result, there is reduced overlap between the phonon and the magnon mode profiles in the YIG region. We refer to the energy of the mode lying outside \(V_{in}\) as \(E_{out}\). We show the change of the ratio \(E_{out}/E_{tot}\) with decreasing \(R_{\text{YIG}}\) in Fig. 7(b) (black). As we can see the ratio of acoustic energy of the eigen Figure 8: **Relative tolerance of \(\Lambda\) at various \(\tau\)** required to predict \(\tau\) with 10% accuracy. mode residing outside \(V_{in}\) to the total energy increases in the high-diffraction regime. Finally, we combine the phonon lifetimes and the magnon-phonon coupling strength to determine the performance figure of merit of the HBAR structures, defined by the magnon-phonon cooperativity, \(C=4g_{mb}^{2}/\kappa_{m}\kappa_{b}=4g_{mb}^{2}\tau_{m}\tau_{b}\). We show the variation of \(C\) with \(R_{\rm YIG}\) in Fig. 7 (c). Using \(\tau\) predicted from the eigenvalue method, we obtain a monotonically decreasing diffraction-limited \(C\) ranging from \(21.9\times 10^{4}\) to \(46.4\), as we decrease \(R_{\rm YIG}\) from \(200\mu\)m to \(10\mu\)m. On the other hand, using \(\tau\) predicted from the clipping method, \(C\) is obtained in the range between \(1.2\times 10^{4}\) and \(1.4\). Note that even the HBAR with \(R_{\rm YIG}=10\mu\)m has a high cooperativity. This implies that if material acoustic attenuation effects are eliminated (e.g. operating in the milli Kelvin regime), one can achieve remarkable scalability with planar HBAR structures. 
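For completeness, the figure of merit used above simply combines the quantities computed so far. The magnon lifetime entering it is not specified in this section, so the value used in the example call below is purely an assumed placeholder for illustration.

```python
import math

def cooperativity(g, tau_m, tau_b):
    """C = 4 g^2 tau_m tau_b, with g in rad/s and both lifetimes in seconds."""
    return 4.0 * g**2 * tau_m * tau_b

# Example with the quoted g_mb/2pi = 1.13 MHz and tau_b = 15.5 ms; the magnon
# lifetime tau_m = 70 ns is an assumed placeholder, not a value from the text.
print(cooperativity(2 * math.pi * 1.13e6, 70e-9, 15.5e-3))  # order 1e5
```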
However, the performance of the HBAR structure with \(R_{\rm YIG}=10\mu\)m is limited by a lifetime of \(10.5\mu\)s (\(0.31\mu\)s) predicted by the eigenvalue (clipping) methods and only \(55.7\%\) of maximum coupling strength \(g_{mb}^{0}\). Thus, it is desirable to pursue design approaches to further improve these performance parameters. In addition to the a-FBPM predictions discussed so far, we also show predictions from HK method in Fig. 7 using circles, with colors matching to their corresponding a-FBPM predictions. Note that we use \(|\Lambda|\) instead of \(\Lambda\) in Eq. 13 to calculate \(\tau\) using the HK method. As can be noted from Fig. 7, \(\tau\), \(g_{mb}\), and \(C\) predicted by the HK method are in an excellent agreement with a-FBPM predictions, for all \(R_{\rm YIG}\) cases considered. The close match validates our implementation of the HK method. As we have discussed earlier, a-FBPM has broader applicability compared to HK method, and can be applied for systems with anisotropy in material properties and transducer shapes. However, we find that it is an expensive and uncertain process to converge the residual R as the rates of convergence for different HBAR structures are slow and variable. Also, one needs to select the number of round trips before restarting the simulation using a trial-and-error approach. In comparison, HK method is computationally inexpensive to make predictions. Additionally, it can be applied to the YIG/GGG HBAR systems since they are isotropic with YIG transducer being a circular thin film disc. Henceforth, we only present predictions from the HK method in this article. #### Enhancing Scalability and Performance in Diffraction-Limited Regime The discussions presented in the previous subsections show that the phonon lifetimes are reduced by three orders or magnitude in the high-diffraction regime, when \(R_{\rm YIG}=10\mu\)m, compared to low-diffraction cases. The magnon-phonon coupling is also reduced by a factor of two. The phonon lifetime reduction plays a dominant role and results in a three orders of magnitude reduction of the cooperativity, the performance figure of merit of the HBAR structures. Here, we discuss a strategy to mitigate the diffraction losses and improve the performance of the HBAR structures. We illustrate that the use of focusing dome-like surface structures could significantly improve the performance figure of merit of HBAR structures. Confocal HBAR structures have been demonstrated to have Q-factors on the order of \(10^{7}\) while resulting in \(10^{3}\)-fold reduction in device volumes [73]. However, they have not been employed in hybrid magnomechanical systems. Our results show that a confocal geometry could enhance the phonon lifetimes and improve \(g_{mb}\) due to lowered mode volume of phonons resulting from reduced lateral spread of the phonon modes. We first discuss a strategy to identify the shape of the dome that will result in the highest \(g_{mb}\). We find that \(g_{mb}\) is highest when there is maximum overlap between the phonon and the magnon modeshapes. We assume that the CHBAR phonons can be described by Laguerre-Gaussian (LG) functions. The fundamental mode, LG\({}_{00}\), is of the form: \(u_{y}\propto e^{-R^{2}/\omega_{0}^{2}}\) due to the axi-symmetry of our system. Here, \(R\) is the radial coordinate \(\sqrt{x^{2}+y^{2}}\). 
\(w_{0}\) is the waist of the Gaussian beam at the \(z=0\) of the HBAR structure and is an adjustable parameter that can be varied by changing the dome radius of curvature \(R_{\rm curv}\), \(t_{\rm HBAR}\), or \(\omega\). We ignore the effects of lateral extent of the structure and the dome, and the localizing effects of the YIG film. We obtain theoretical estimates of the overlap of the CHBAR phonon modeshapes with Bessel magnon modes and extract the optimal dome shape using the condition where the overlap is maximum. We normalize the modeshapes according to Eqs. 16, 17 and calculate the coupling strengths using Eq. 15. We show our predicted values of the \(g_{mb}/g_{mb}^{0}\) ratio in Fig. 9, as a function of the beam waist, \(w_{0}\). We find that \(g_{mb}/g_{mb}^{0}\) ratio peaks at \(w_{0}/R_{\rm YIG}~{}\sim 0.65\). Interestingly, we find that this optimal ratio, \(w_{0}/R_{\rm YIG}\), is constant and does not depend on \(R_{\rm YIG}\), for all the CHBAR structures considered in this article. If we decrease \(w_{0}\) lower than \(w_{0}~{}\sim 0.65R_{\rm YIG}\), the coupling strength decline sharply. On the other hand, the coupling decrease slowly if \(w_{0}\) is increased beyond the optimal value. We use this optimal \(w_{0}\) value to provide an initial estimate of the radius of curvature, \(R_{\rm curv}\), of the dome [74]: \[R_{\rm curv} = \frac{1}{Re[1/(q_{0}+t_{\rm HBAR})]}, \tag{19a}\] \[\text{with }q_{0} = \frac{i\pi w_{0}^{2}}{\lambda}. \tag{19b}\] This expression indicates that we can get multiple estimates of \(R_{\rm curv}\) by varying \(t_{\rm HBAR}\), \(\lambda\) and \(w_{0}\). Using Eqs. 19 (a-b), we obtain the analytical estimate to be \(R_{\rm curv}/t_{\rm HBAR}{\sim}1.5\) that corresponds to \(w_{0}/R_{\rm YIG}\sim\)0.65 and the peak coupling value as shown in Fig. 9. Note that this \(R_{\rm curv}\) estimate is obtained only to maximize \(g_{mb}/g_{mb}^{0}\), and the analysis does not consider the effects of the dome shape on phonon lifetimes and cooperativities. Also, this analysis ignores YIG localization effects and assumes phonon modeshapes to be perfectly Gaussian. To obtain a better picture of the overall performance of these CHBAR structures, we carry out a numerical study using the HK method. We include the effects of the lateral extents of the dome and the device, the attenuation window, and the localization effects of the YIG film, which are not considered for the analytical predictions. We modify the equations for analyzing CHBAR structures in the following way. We modify Eq. 6 and the radial form of Eq. 11c to include the phase induced by the dome surface as \[R_{t_{\rm CHBAR},m}(x,y)=\begin{cases}e^{ik_{x0,m}r^{2}/R_{\rm curv}},&\text{if } r\leq R_{\rm cross}\\ 1,&\text{if }R_{\rm cross}<r\leq\frac{W_{\rm eff}}{2}\\ 0.&\text{otherwise}\end{cases} \tag{20}\] We sweep across various values centered around the analytical estimate, \(R_{\rm curv}/t_{\rm HBAR}\)\(\sim\)1.5, to identify the optimal \(R_{\rm curv}\) for the CHBAR structures. We choose the CHBAR structure with \(R_{\rm YIG}=10\mu\)m and the dome \(x-y\) cross-section radius, \(R_{\rm cross}=60\mu\)m, to perform the analysis. We show the phonon lifetime, magnon-phonon coupling and cooperativity in Fig. 10, to highlight the effect of the focusing dome on the performance of CHBAR structure with \(R_{\rm YIG}=10\mu\)m. We show the variation of \(\tau\) with \(R_{\rm curv}/t_{\rm HBAR}\) in Fig. 10(a). 
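As a quick check of the analytical estimate, Eq. 19 can be evaluated at the optimal waist found above. In the sketch below, the wavelength follows from an assumed shear velocity and the thickness matches the structure considered earlier; with these assumed inputs the ratio comes out near 1.5, consistent with the estimate quoted in the text.

```python
import numpy as np

# Assumed operating parameters, roughly consistent with the values in the text.
v2, f0 = 3.5e3, 9.825e9            # shear velocity (m/s, assumed) and frequency (Hz)
lam = v2 / f0                      # acoustic wavelength
t_hbar, R_yig = 527.2e-6, 10e-6    # thickness and YIG radius (m)

w0 = 0.65 * R_yig                  # optimal waist from Fig. 9
q0 = 1j * np.pi * w0**2 / lam      # Eq. 19b
R_curv = 1.0 / np.real(1.0 / (q0 + t_hbar))   # Eq. 19a
print(R_curv / t_hbar)             # ~1.5, the analytical estimate quoted above
```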
The predicted \(\tau\) shows a peak at \(R_{\rm curv}/t_{\rm HBAR}=1.09\), a value less than the analytical prediction for maximum \(g_{mb}\). The peak value reaches \(\tau\)\(\sim\)218ms using the eigenvalue method (red circles). Incidentally, this value is \(\sim\)500 times larger than the value at \(R_{\rm curv}/t_{\rm HBAR}\)\(\sim\)1.5, justifying the necessity of performing the sweep. The \(\tau\) peak value for the CHBAR structure obtained using the eigenvalue (clipping) method is \(\sim\)\(1.7\times 10^{4}\) (\(\sim\)\(7.3\times 10^{4}\)) factor greater than the corresponding planar HBAR \(\tau\) prediction, shown with red horizontal line in Fig. 10(a). We show the predictions from the clipping method using green circles. To obtain these predictions, we take \(V_{in}\) to be the volume defined by the dome's lateral cross-sectional area and the structure thickness. The predicted \(\tau\) peaks at \(\sim\)22.7ms, lower than those predicted by the eigenvalue method, as expected. We note a sharp reduction in \(\tau\) is observed for \(R_{\rm curv}/t_{\rm HBAR}<1\). This corresponds to the situation when the center of curvature of the dome is inside the thickness of the HBAR structure, resulting in a negative '\(g-\)parameter', given by \(g=1-\frac{t_{\rm HBAR}}{R_{\rm curv}}\). It has been shown that one cannot obtain real and finite Gaussian beam solutions for a negative \(g-\)parameter [73]. We find that this is the case for the modes corresponding to \(R_{\rm curv}/t_{\rm HBAR}<1\), that is, they are lossy modes which do not have well-defined Gaussian characters. We show the corresponding \(g_{mb}/g_{mb}^{0}\) estimates in Fig. 10(b). We obtain a peak value of 0.99 at \(R_{\rm curv}/t_{\rm HBAR}\)\(\sim\)1.4, a value close to the analytical estimate of \(\sim\)1.5. However, \(\tau\) is reduced by more than two orders of magnitude at this point. A key aspect to note for these systems is that reducing the phonon mode volume by means of focusing does not necessarily improve \(g_{mb}\) beyond its maximum achievable value, \(g_{mb}^{0}\), which represents the case when there is complete lateral overlap of phonon and magnon modes. We use the \(\tau\) and \(g_{mb}\) values to calculate the figure of merit, \(C\), for all the CHBAR structures considered. As can be seen from Fig. 10(c), the \(C\) predictions reach peak value at \(R_{\rm curv}/t_{\rm HBAR}=1.09\). This further illustrates that \(\tau\) dominates the performance of these structures, since there is a ceiling to the achievable \(g_{mb}\). The peak \(C\) is calculated to be \(2.5\times 10^{6}\) (\(2.6\times 10^{5}\)) using the eigenvalue (clipping) method. This CHBAR structure shows several orders of magnitude improvements in \(\tau\) and \(C\) compared to its planar counterpart, and reaches up to 90% of \(g_{mb}^{0}\) with a scalability factor of \(S\sim\)64, limited by the \(x-y\) cross-sectional area of the dome. Note that we only vary \(R_{\rm curv}\) for this analysis keeping the \(R_{\rm cross}\) fixed at 60\(\mu\)m which limits the lateral scalability of these devices. We perform additional analysis to further improve the scalability of the CHBAR devices. We reduce the cross-sectional area of the dome while keeping \(R_{\rm curv}/t_{\rm HBAR}=1.09\) and \(R_{\rm YIG}=10\mu\)m fixed. We show the variation of \(\tau\), \(g_{mb}\), and \(C\) with \(R_{\rm cross}/R_{\rm YIG}\) in Fig. 11. 
As we decrease \(R_{\rm cross}/R_{\rm YIG}\) from 6 to 1, the phonon lifetime is reduced by more than 4 orders of magnitude from \(\sim\)218 Figure 9: **Determination of dome-shape that results in the highest magnon-phonon coupling, \(g_{mb}\), in CHBAR structures:** Variation of \(g_{mb}/g_{mb}^{0}\) with the waist of the fundamental phonon mode. \(g_{mb}/g_{mb}^{0}\) is maximum when \(w_{0}/R_{\rm YIG}\sim 0.65\). ms (22.7 ms) to \(4.83\mu\)s (0.134\(\mu\)s) as predicted by the eigenvalue (clipping) method. The magnon-phonon coupling shows a sharp decline for \(R_{\rm cross}/R_{\rm YIG}<3\) reaching the lowest value of \(g_{mb}/g_{mb}^{0}\sim\)0.32 at \(R_{\rm cross}/R_{\rm YIG}=1\). While the coupling strengths are minimally affected for \(R_{\rm cross}/R_{\rm YIG}\geq 3\). The cooperativity, \(C\), vs \(R_{\rm cross}/R_{\rm YIG}\) follows a similar trend to that of \(\tau\), as shown in Fig. 11(c). Using the predictions from the eigenvalue (clipping) method, we obtain that \(C\) is decreased by more than 5 orders of magnitude from \(2.5\times 10^{6}\) (\(2.6\times 10^{5}\)) to 7.2 (0.19) as we decrease \(R_{\rm cross}/R_{\rm YIG}\). Additionally, we find that the performance of these CHBAR structures are lowered below those of planar HBAR values, shown with solid lines, if \(R_{\rm cross}/R_{\rm YIG}<2\). Considering all the different aspects, we find that the optimal geometry for the CHBAR structures is represented by the condition \(R_{\rm cross}/R_{\rm YIG}=4.5\). The phonon lifetime is long and the lifetime variation is relatively flat in the range between \(4.5\) and \(6\). The numerical analysis of this CHBAR structure (\(R_{\rm curv}/t_{\rm HBAR}=1.09\), \(R_{\rm YIG}=10\mu\)m, \(R_{\rm cross}/R_{\rm YIG}=4.5\)) using the eigenvalue (clipping) method predicts that the diffraction-limited lifetime of \(\tau=\)144.7 ms (11.1 ms) and a scalability of \(S=113\) could be achieved at a cooperativity \(C=1.64\times 10^{6}\) (\(1.25\times 10^{5}\)). The analysis and the results presented in Figs. 10 and 11 provide a proof-of-concept that a focusing dome improves the performance of hybrid magnonic HBAR structures, for quantum memory and transduction applications. Figure 11: **Performance of CHBAR structures with varied radius of cross-section, \(R_{\rm cross}\), of the dome-shape:** (a) Phonon lifetimes, \(\tau\), calculated from the eigenvalue (red) and clipping methods (green), (b) Ratio between magnon-phonon coupling in HBAR structures in various diffraction regimes and that in the zero-diffraction limit, \(g_{mb}/g_{mb}^{0}\) and (c)Magnon-phonon cooperativity, \(C\), calculated using the HK method. \(R_{\rm cross}\) is varied keeping \(R_{\rm YIG}=10\mu\)m and \(R_{\rm curv}/t_{\rm HBAR}=1.09\) fixed. Solid lines represent corresponding values for a planar HBAR structure with \(R_{\rm YIG}=10\mu\)m, while the circles of the same color correspond to the CHBAR values. Figure 10: **Performance of CHBAR structures with varied radius of curvature, \(R_{\rm curv}\), of the dome-shape:** (a) Phonon lifetimes, \(\tau\), calculated from the eigenvalue (red) and clipping methods (green), (b) Ratio between magnon-phonon coupling in HBAR structures in various diffraction regimes and that in the zero-diffraction limit, \(g_{mb}/g_{mb}^{0}\) and (c) Magnon-phonon cooperativity, \(C\), calculated using the HK method. \(R_{\rm curv}\) is varied keeping \(R_{\rm YIG}=10\mu\)m and \(R_{\rm cross}=60\mu\)m fixed. 
Solid lines represent corresponding values for a planar HBAR structure with \(R_{\rm YIG}=10\mu\)m, while the circles of the same color correspond to the CHBAR values.

## Conclusion and outlook

In summary, our study provides key insights into the diffraction-limited performance of YIG/GGG HBAR hybrid magnonic devices. Additionally, we establish a prediction approach that can be used for designing scalable, long-lasting quantum memories and efficient quantum transduction systems. We present analytical and numerical analyses of the diffraction-limited BAW phonon lifetimes, modeshapes, and the magnon-phonon coupling strengths in planar and confocal HBAR structures. We use (1) Fourier beam propagation (FBPM) and (2) Hankel transform (HK) eigenvalue problem approaches to analyze the shear wave phonon modes in the HBAR structures. The FBPM approach has been widely used to analyze beam propagation in the field of optics, and more recently, to study HBAR phonons in planar and confocal HBAR (CHBAR) structures. Here, we implement a reformulated approach that allows us to achieve a seven-fold speed up of the computation time. However, we find that the number of round trips needed in the FBPM method to converge the phonon modes is often large and unpredictable. We implement an adaptive FBPM (a-FBPM) approach that largely overcomes the slow convergence issues of the standard approach. The a-FBPM still suffers from convergence challenges for the confocal HBAR structures. We implement the HK method, which leverages the axi-symmetry of the YIG/GGG HBAR system and is computationally inexpensive. The HK approach has been mostly used in the field of optics, e.g., Fabry-Perot cavities; however, to the best of our knowledge, it has not been applied to acoustics problems. Our analysis predicts that the diffraction-limited \(\tau\) of a planar HBAR structure with lateral YIG dimension \(R_{\mathrm{YIG}}=200\mu\)m is on the order of milliseconds. A recent study reported that the room temperature performance of a YIG/GGG HBAR structure is limited by a phonon lifetime of 0.25\(\mu\)s [54]. That HBAR structure had a larger YIG lateral area (0.72 mm\({}^{2}\)) than the HBARs considered in our study. The difference in \(\tau\) values implies that the performance of the previously studied system is not limited by diffraction losses, since diffraction effects are less dominant for larger YIG lateral areas. Instead, the performance is likely to be limited by material losses. Assuming that the material-limited lifetime could be pushed to \(\sim\)0.1 ms at mK temperatures, we find that the planar HBAR structures are not affected by diffraction effects even at \(R_{\mathrm{YIG}}=50\mu\)m, which already offers a significant 50-fold scalability. The scalability can be further improved by scaling down the YIG film lateral area. Increased lifetimes have direct implications for the storage times of quantum states for quantum memory applications, whereas scalability is naturally desired to accommodate multiple on-chip memory centers or to have other on-chip hardware alongside the HBAR structures. We acknowledge that a full analysis of the material-limited lifetimes is still necessary to obtain a comprehensive understanding of the shear waves in YIG and GGG material systems. This may include a clear identification of the operation regimes (e.g., Akhiezer and Landau-Rumer) at various temperatures and frequencies of interest. The performance of planar HBAR devices is heavily dependent on maintaining perfectly parallel surfaces. 
We illustrate that the focusing dome structure allows us to mitigate the sensitivity to parallelism and to significantly improve the phonon lifetime and scalability of these systems. To identify the shape of the dome structure, we theoretically estimate the overlap of the CHBAR phonon modeshapes with the magnon modes and extract the shape parameters from the condition that the overlap is maximum. We refine the estimate with numerical analysis and obtain a radius of curvature and cross-section radius for which both \(\tau\) and \(C\) are optimal and the coupling is close to its peak value. Overall, we find that ultra-high diffraction-limited cooperativities and phonon lifetimes, on the order of \(\sim\)10\({}^{5}\) and \(\sim\)10 ms, respectively, could be achieved using a CHBAR structure with \(R_{\mathrm{YIG}}=10\mu\)m. In addition to enhanced \(\tau\) and \(C\), the confocal HBAR structure is predicted to provide a more than 100-fold improvement in scalability. Our study provides key insights into how the lateral area of the YIG film and the dome structure determine the diffraction-limited performance. Various other parameters, such as the thickness of the YIG and GGG regions, geometrical misalignment, and imperfections, can also play an important role in determining the performance. Additionally, the photon and magnon lifetimes and their coupling strength could also play an important role in the overall device performance. We assume that these parameters remain invariant in our study. However, we acknowledge that a comprehensive analysis is necessary to understand and optimize the device performance. Such an analysis could be performed using the current state-of-the-art machine learning techniques, which is a promising research direction for the future. Anharmonic phonon interactions at the surfaces and interfaces, and the coupling of bulk acoustic waves to surface acoustic waves, are other interesting directions to explore. The current study only considers the coupling of the fundamental magnon mode to the fundamental phonon mode for the given structure. However, it is possible that higher-order mode coupling could play a major role, considering the broadband nature of the phonon modes and the improved lifetimes introduced by the dome structure. It will be interesting to explore the applicability of YIG/GGG HBAR structures for quantum transduction applications by coupling with other quantum information carriers such as superconducting qubits, which also operate in the microwave frequency regime and at milli Kelvin temperatures. Our study is concerned with coherent states, and it will be important to explore how the insights provided here translate to systems dealing with non-classical states, e.g. squeezed states, cat states, and Fock states.

## Acknowledgements

We are indebted to Prof. Xufeng Zhang for sharing valuable insights regarding the outstanding challenges of hybrid magnonic systems and several helpful discussions. We gratefully acknowledge funding from the Quantum Explorations in Science & Technology (QuEST) grant provided by the CU Boulder Research & Innovation Office (RIO) in partnership with the College of Engineering and Applied Science, the College of Arts and Sciences, JILA, and the National Institute of Standards and Technology (NIST). We acknowledge the computing resources provided by the RMACC Summit supercomputer, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University. 
The Summit supercomputer is a joint effort of the University of Colorado Boulder and Colorado State University.
2302.02473
Level-p-complexity of Boolean Functions using Thinning, Memoization, and Polynomials
This paper describes a purely functional library for computing level-$p$-complexity of Boolean functions, and applies it to two-level iterated majority. Boolean functions are simply functions from $n$ bits to one bit, and they can describe digital circuits, voting systems, etc. An example of a Boolean function is majority, which returns the value that has majority among the $n$ input bits for odd $n$. The complexity of a Boolean function $f$ measures the cost of evaluating it: how many bits of the input are needed to be certain about the result of $f$. There are many competing complexity measures but we focus on level-$p$-complexity -- a function of the probability $p$ that a bit is 1. The level-$p$-complexity $D_p(f)$ is the minimum expected cost when the input bits are independent and identically distributed with Bernoulli($p$) distribution. We specify the problem as choosing the minimum expected cost of all possible decision trees -- which directly translates to a clearly correct, but very inefficient implementation. The library uses thinning and memoization for efficiency and type classes for separation of concerns. The complexity is represented using (sets of) polynomials, and the order relation used for thinning is implemented using polynomial factorisation and root-counting. Finally we compute the complexity for two-level iterated majority and improve on an earlier result by J.~Jansson.
Julia Jansson, Patrik Jansson
2023-02-05T20:05:19Z
http://arxiv.org/abs/2302.02473v4
# Level-\(p\)-complexity of Boolean Functions using Thinning, Memoization, and Polynomials ###### Abstract This paper describes a purely functional library for computing level-\(p\)-complexity of Boolean functions, and applies it to two-level iterated majority. Boolean functions are simply functions from \(n\) bits to one bit, and they can describe digital circuits, voting systems, etc. An example of a Boolean function is majority, which returns the value that has majority among the \(n\) input bits for odd \(n\). The complexity of a Boolean function \(f\) measures the _cost_ of evaluating it: how many bits of the input are needed to be certain about the result of \(f\). There are many competing complexity measures but we focus on level-\(p\)-complexity -- a function of the probability \(p\) that a bit is \(1\). The level-\(p\)-complexity \(D_{p}(f)\) is the minimum expected cost when the input bits are independent and identically distributed with Bernoulli\((p)\) distribution. We specify the problem as choosing the minimum expected cost of all possible decision trees -- which directly translates to a clearly correct, but very inefficient implementation. The library uses thinning and memoization for efficiency and type classes for separation of concerns. The complexity is represented using polynomials, and the order relation used for thinning is implemented using polynomial factorisation and root-counting. Finally we compute the complexity for two-level iterated majority and improve on an earlier result by J. Jansson. 10.1017/xxxxx ## 1 Introduction Boolean functions are wide-spread in mathematics and computer science and can describe yes-no voter systems, hardware circuits, and predicates (O'Donnell, 2014). A Boolean function is a function from \(n\) bits to one bit, for example majority (\(\mathit{maj}_{n}\)), which returns the value that has majority among the \(n\) inputs for odd \(n\). We are interested in the cost of evaluating Boolean functions: in the context of vote-counting after an election the cost is the number of votes we need to count before we know the outcome for certain. ### Vote counting example In US elections a presidential candidate can lose even if they win the popular vote. One reason for this is that the outcome is not directly determined by the majority, but rather majority iterated two times.1 Our running example is a very much simplified case: consider 3 states with 3 voters in each. Footnote 1: The actual presidential election is a direct majority vote among the electors who are not formally bound by their state’s outcome. \[\underbrace{\underbrace{x_{(1,1)},x_{(1,2)},x_{(1,3)}}_{m_{1}=ma\hat{j}_{3} \ (...)},\underbrace{x_{(2,1)},x_{(2,2)},x_{(2,3)}}_{m_{2}=ma\hat{j}_{3}\ (...)},x_{(3,1)},x_{(3,2)},x_{(3,3)}}_{m_{3}=ma\hat{j}_{3}\ (...)}}_{ma\hat{j}_{3}(m_{1},m_{2},m_{3})}\] We first compute the majority in each "state" of three bits, and then the majority of \(m_{1}\), \(m_{2}\), and \(m_{3}\). For example we see here \(\mathbf{0},\mathbf{1},\mathbf{0}\) which gives \(m_{1}=\mathbf{0}\), then \(\mathbf{1},\mathbf{0},\mathbf{1}\) which gives \(m_{2}=\mathbf{1}\), and \(\mathbf{0},\mathbf{1},\mathbf{0}\) again which gives \(m_{3}=\mathbf{0}\). 
The final majority is \(\mathbf{0}\): \[\underbrace{\underbrace{\mathbf{0},\mathbf{1},\mathbf{0}}_{m_{1}=\mathbf{0}},\underbrace{\mathbf{1},\mathbf{0},\mathbf{1}}_{m_{2}=\mathbf{1}},\underbrace{ \mathbf{0},\mathbf{1},\mathbf{0}}_{m_{3}=\mathbf{0}}}_{ma\hat{j}_{3}=\mathbf{ 0}}}_{ma\hat{j}_{3}=\mathbf{0}}\] But if we switch the first and 8th bit (perhaps through gerrymandering) we get another example with the changed bits marked in red: \[\underbrace{\underbrace{\mathbf{1},\mathbf{1},\mathbf{0}}_{m_{1}=1}, \underbrace{\mathbf{1},\mathbf{0},\mathbf{1}}_{m_{2}=\mathbf{1}},\underbrace{ \mathbf{0},\mathbf{0},\mathbf{0}}_{m_{3}=\mathbf{0}}}_{ma\hat{j}_{3}=\mathbf{ 1}}}_{ma\hat{j}_{3}=\mathbf{1}}\] This changes \(m_{1}\) from \(\mathbf{0}\) to \(\mathbf{1}\) without affecting \(m_{2}\), or \(m_{3}\). But now the two-level majority is changed to \(\mathbf{1}\), just from the switch of two bits. Both examples have four \(\mathbf{1}\)'s and five \(\mathbf{0}\)'s but the result is different based on the positioning of the bits. In our case the two-level majority is \(\mathbf{1}\) even though there are fewer \(\mathbf{1}\)'s than \(\mathbf{0}\)'s. This means that the \(\mathbf{0}\)'s "lose" even though they won the "popular vote". ### Related work We use binary decision trees to describe the evaluation order of Boolean functions. The depth of the decision tree corresponds to the number of votes needed to know the outcome for certain. This is called deterministic complexity. Another well-known notion is randomized complexity, and the complexity bounds of iterated majority have been studied in Landau et al. (2006), Leonardos (2013) and Magniez et al. (2016). Iterated majority on two levels corresponds to the Boolean function for US elections as described above, and we are particularly interested in this function. Other relevant concepts are certificate complexity, degree of a Boolean function, and communication complexity (Buhrman and De Wolf, 2002). Complexity measures related specifically to circuits are circuit complexity, additive, and multiplicative complexity (Wegener, 1987). Thus, there are many competing complexity measures but we focus on level-\(p\)-complexity -- a function of the probability that a bit is 1 (Garban and Steif, 2014). Level-\(p\)-complexity is more complicated than deterministic complexity but is easier to compute than other more general complexity measures like full randomized complexity. Moreover, level-\(p\)-complexity has many interesting properties, as can be seen in (Jansson, 2022). This paper presents a purely functional library for computing level-\(p\)-complexity of Boolean functions in general, and for two-level iterated three-bit majority in particular. The implementation is in Haskell but should work also in other languages. ### Motivation To get a feeling for what the end result will look like we start with two examples which will be explained in detail later: the level-\(p\)-complexity of 2-level iterated majority \(\textit{maj}_{3}^{2}\) and of a 5-bit function we call \(\textit{sim}_{5}\), defined in Fig. 1.12. The complexity is a piecewise polynomial function of the probability \(p\) and \(\textit{sim}_{5}\) is the smallest arity Boolean function we have found which has more than one polynomial piece contributing to the complexity. Polynomials are represented by their coefficients: for example, \(P\left[5,-8,8\right]\) represents \(5-8x+8x^{2}\). The function \(\textit{genAlgThinMemo}\) uses thinning and memoization to generate a set of minimal cost polynomials. 
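As a small, stand-alone illustration of this coefficient representation (a sketch using a bare list of \(\mathit{Double}\) rather than the library's \(\mathit{Poly}\) type; the helper name \(\mathit{evalPolyL}\) is ours), the snippet below evaluates \(P\left[5,-8,8\right]\) at a few points with Horner's rule:

```
-- Sketch only: evaluate a coefficient-list polynomial with Horner's rule.
-- [5,-8,8] stands for 5 - 8*x + 8*x^2, as in the text.
evalPolyL :: [Double] -> Double -> Double
evalPolyL cs x = foldr (\c acc -> c + x * acc) 0 cs

main :: IO ()
main = mapM_ (print . evalPolyL [5, -8, 8]) [0, 0.5, 1]
-- prints 5.0, 3.0, 5.0: the polynomial is 5 at the endpoints and 3 at x = 1/2
```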
Footnote 2: The function \(\textit{sim}_{5}\) is referred to as \(f_{AC}\) in (Jansson, 2022). \(\textit{ps5}=\textit{genAlgThinMemo}\,5\,\textit{sim}_{5}::\textit{Set}\,(\textit{Poly}\,\mathbb{Q})\) \(\textit{check5}=\textit{ps5}\;\texttt{==}\;\textit{fromList}\,[P\,[2,6,-10,8,-4],\,P\,[4,-2,-3,8,-2],\,P\,[5,-8,9,0,-2],\,P\,[5,-8,8]]\) For our running example, \(\mathit{maj}_{3}^{2}\), a crude estimate indicates we would have \(10^{111}\) decision trees to search and very many polynomials. Thus the computation would be intractable if it were not for the combination of thinning, memoization, and symbolic comparison of polynomials. Thanks to symmetries in the problem there turns out to be just one dominating polynomial: \[\begin{array}{l}\mathit{ps9}=\mathit{genAlgThinMemo}\;9\;\mathit{maj}_{3}^{2}::\mathit{Set}\left(\mathit{Poly}\,\mathbb{Q}\right)\\ \mathit{check9}=\mathit{ps9}\;\texttt{==}\;\mathit{fromList}\left[\,P\left[4,4,6,9,-61,23,67,-64,16\right]\right]\end{array}\] The graph, shown later in Fig. 4, shows that only 4 bits are needed in the limiting cases of \(p=0\) or \(p=1\) and that a bit more than 6 bits are needed in the maximum at \(p=1/2\). ### Contributions This paper presents a Haskell library for computing level-\(p\)-complexity of Boolean functions in general, and for \(\mathit{maj}_{3}^{2}\) in particular. The level-\(p\)-complexity of \(\mathit{maj}_{3}^{2}\) was conjectured in Jansson (2022), but could not be proven because it was hard to generate all possible decision trees. This paper fills that gap, by showing that the conjecture is false and by computing the true level-\(p\)-complexity of \(\mathit{maj}_{3}^{2}\). The strength of our implementation is that it can calculate the level-\(p\)-complexity for Boolean functions quickly and correctly, compared to tedious calculations by hand. Our specification uses exhaustive search and considers all possible candidates (decision trees). Some partial candidates dominate (many) others, which may be discarded. Thinning (Bird and Gibbons, 2020) is an algorithmic design technique which maintains a small set of partial candidates which provably dominate all other candidates. We hope that one contribution of this paper is an interesting example of how a combination of algorithmic techniques can be used to make the intractable tractable. The code in this paper is available on GitHub3 and uses packages from Jansson et al. (2022). Footnote 3: The paper repository is at [https://github.com/juliajansson/BoFunComplexity](https://github.com/juliajansson/BoFunComplexity). ## 2 Background To explain what level-\(p\)-complexity of Boolean functions means we introduce some background about Boolean functions, decision trees, cost and complexity. 
### Boolean functions A Boolean function \(f:\mathbb{B}^{n}\,\rightarrow\,\mathbb{B}\) is a function from \(n\) Boolean inputs to one Boolean output. The Boolean input type \(\mathbb{B}\) could be \(\left\{\mathit{False},\mathit{True}\right\},\left\{\mathit{F},\mathit{T}\right\}\) or \(\left\{0,1\right\}\) and from now on we use \(\mathbf{0}\) for false and \(\mathbf{1}\) for true in our notation. The easiest example of a Boolean function is the function which is constant \(\mathbf{0}\) or constant \(\mathbf{1}\). The usual logical gates like _and_ and _or_ are very common Boolean functions. Another example is the dictator function (also known as first projection), which is defined as \(\mathit{dict}_{n}\left[x_{1},...,x_{n}\right]=x_{1}\) when the dictator is bit 1. A naive implementation of Boolean functions could be as functions \(f:\left[\mathbb{B}\right]\rightarrow\mathbb{B}\), but that turns out to be inefficient. Instead we use Binary Decision Diagrams _BDD_s (Bryant, 1986) as implemented in Masahiro Sakai's excellent Hackage package4. In the complexity computation, we only need two operations on Boolean functions which we capture in the following type class interface: Footnote 4: [https://github.com/msakai/haskell-decision-diagrams](https://github.com/msakai/haskell-decision-diagrams) **class**_BoFun bf_**where** _isConst_::_bf_ \(\rightarrow\)_Maybe_ \(\mathbb{B}\) _setBit_ ::_Index_ \(\rightarrow\)\(\mathbb{B}\)\(\rightarrow\)_bf_ \(\rightarrow\)_bf_ **type**_Index_ = \(\mathbb{N}\) The use of a type class here means we keep the interface to the BDD implementation minimal, which makes proofs easier and gives better feedback from the type system. The first method, _isConst_ \(f\), returns _Just_\(b\) iff the function \(f\) is constant and always returns \(b::\mathbb{B}\). The second method, _setBit_\(i\)\(b\)\(f\), restricts a Boolean function (on \(n+1\) bits) by setting its \(i\):th bit to \(b\). The result is a "subfunction" on the remaining \(n\) bits, abbreviated \(f_{b}^{i}\), and illustrated in Figure 1. As an example, for the function \(\mathit{and}_{2}\) we have that _setBit_\(i\)**0**_and_\({}_{2}=\mathit{const}\)**0** and _setBit_\(i\)**1**_and_\({}_{2}=\mathit{id}\). For \(\mathit{and}_{2}\) we get the same result for \(i=1\), or 2 but for the dictator function it depends if we pick the dictator index or not. We get _setBit_\(1\)_b_dict_\({}_{n+1}=\mathit{const}_{n}\)_b_, since the result of the dictator function is already decided. Otherwise, if \(i\neq 1\), we get _setBit_\(i\)_b_dict_\({}_{n+1}=\mathit{dict}_{n}\) irrespective of the value of \(b\) since only the value of the dictator bit matters. This behaviour is shown in Figure 2. ### Decision trees Consider a decision tree that picks the \(n\) bits of a Boolean function \(f\) in a deterministic way depending on the values of the bits picked further up the tree. Decision Figure 1: The tree of subfunctions of a Boolean function \(f\). For brevity _setBit_\(i\)_b_\(f\) is denoted \(f_{b}^{i}\). This tree structure is also the call-graph for our generation of decision trees. Note that this is related to, but not the same as, the decision trees. trees are referred to as algorithms in (Jansson, 2022), (Garban and Steif, 2014) and (Landau et al., 2006). Given a natural number \(n\) and a Boolean function \(f\), a decision tree \(t\) describes one way to evaluate the function \(f\). 
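Before turning to the decision-tree datatype, the interface above can be illustrated with a deliberately naive instance: a sketch (ours, not the library's) in which a Boolean function is just an arity paired with a truth function on bit lists, with \(\mathbb{B}\) rendered as _Bool_. The library instead instantiates _BoFun_ with BDDs, which avoids the brute-force enumeration used here.

```
-- A naive, brute-force BoFun instance, for illustration only (the library
-- uses BDDs). A function is an arity n plus a predicate on n-bit lists.
type Index = Int

class BoFun bf where
  isConst :: bf -> Maybe Bool
  setBit  :: Index -> Bool -> bf -> bf

data Naive = Naive Int ([Bool] -> Bool)

instance BoFun Naive where
  -- Check constancy by evaluating all 2^n inputs.
  isConst (Naive n f)
    | all (== head outs) outs = Just (head outs)
    | otherwise               = Nothing
    where outs = map f (sequence (replicate n [False, True]))
  -- Fix bit i (1-based, as in the paper) to b; the result is the
  -- subfunction on the remaining n-1 bits.
  setBit i b (Naive n f) = Naive (n - 1) (\xs -> f (take (i-1) xs ++ [b] ++ drop (i-1) xs))

and2 :: Naive
and2 = Naive 2 and

main :: IO ()
main = do
  print (isConst (setBit 1 False and2))  -- Just False : setBit i 0 and2 is const 0
  print (isConst (setBit 1 True  and2))  -- Nothing    : setBit i 1 and2 is the identity
```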
The Haskell datatype is as follows: ``` dataDecTree=ResB|PickIndexDecTreeDecrederderiving(Eq,Ord,Show) ``` Parts of the "rules of the game" in the mathematical literature is that you must return a _Res_ult if the function is constant and you may only _Pick_ an index once. We can capture most of these rules with a type family version of the _DecTree_ datatype (here expressed in _Agda_ syntax). Here we use two type indices: \(t\,{:}\,\)_DecTree_\(n\,f\) is a decision tree for the Boolean function \(f\), of arity \(n\). The _Res_ constructor may only be used for constant functions (but for any arity), while _Pick_\(i\) takes two subtrees for Boolean functions of arity \(n\) to a tree of arity \(suc\;n=n+1\). ``` dataDecTree:(n:N)-(f:BoolFunn)-Setwhere Res:(b:B)-DecTreen(constnb) Pick:{f:BoolFun(suc\;n)-(i:Fin(suc\;n))-DecTreen(setBiti\(0\)f)-DecTreen(setBiti\(1\)f)-DecTree(suc\(n\))f setBit:Fin(suc\(n\))-B-B-B-B-B-B-B-B-B-B-B-B-B-B-B-B-B-B--B- **class**_TreeAlg_a_**where** _res_::B_-a_ _pic_::_Index_-a_-a_-a_-a_foldDT::_TreeAlg_a_-DecTree_-a_foldDT_(_Res_b_)_-res_b_foldDT_(_Pick_i_to_t_1)_-pic_i_(_foldDT_t_0)_(_foldDT_t_1)_ The _TreeAlg_ class is used to define our decision trees but also for several other purposes. (In the implementation we additionally require some total order on \(a\) to enable efficient set computations.) We see that our decision tree type is the initial algebra of _TreeAlg_ and that we can reimplement a generic version of _ex1_ which can be instantiated to any _TreeAlg_ instance: **instance**_TreeAlg_DecTree_**where**_res_=Res_;_pic_=_Pick_; _ex1_::_TreeAlg_a_-a_-a_ex1_-pic_1(_pic_3(_res_0)(_pic_2(_res_0)(_res_1)))_(_pic_2(_pic_3(_res_0)(_res_1))(_res_1))_ ### Cost The cost of a given \(x\in\mathbb{B}^{n}\) for some decision tree \(t\) is the length of the path from root to leaf when the input to \(f\) is \(x\). This computation can be defined as an instance of _TreeAlg_: **type**_CostFun_=B\({}^{n}\)-Int** **instance**_TreeAlg_CostFun_**where**_res_=resC_;_pic_=_pickC_resC_::B_-CostFun_resC_b_=_const_0_pickC_:_index_-CostFun_-CostFun_pickC_i_0\(c\)1_-_1_+_if index_i_i_then_c_1_-_1_+_else_c_0_t_cost_::DecTree_-CostFun_cost_=_foldDT_ Figure 3: An example of a decision tree for _maj\({}_{3}\)_. The root node branches on the value of bit 1. If it is **0**, it picks bit 3, while if it is **1**, it picks bit 2. It then picks the last remaining bit if necessary. We get that \(\mathit{cost}\,\mathit{ex1}\)\(\left[\mathbf{1},\mathbf{0},\mathbf{1}\right]\) is 3, while \(\mathit{cost}\,\mathit{ex1}\)\(\left[\mathbf{1},\mathbf{1},\mathbf{0}\right]\) is 2, as can be seen in Figure 3. Taking the maximum of the cost over all \(x\in\mathbb{B}^{n}\) gives us the depth of the decision tree \(t\). This can also be defined as an instance of \(\mathit{TreeAlg}\). \begin{tabular}{l} **type**_MaxCost_ = \(\mathbb{N}\) \\ _pickMC_ \(i\)_\(m_{1}\)_\(m_{2}=1+\mathit{max}\)_\(m_{1}\)_\(m_{2}\) \\ **instance**_TreeAlg MaxCost_**where**_res_=_const_\(0\); \(\mathit{pic}=\mathit{pickMC}\) \\ _maxCost_ : DecTree_ \(\rightarrow\)_MaxCost_ \\ _maxCost_ = _foldDT_ \\ \end{tabular} By evaluating _maxCost ex1_ we get that the maximum cost is 3 for this example. Another kind of cost is _expected_ cost where we let the bits be independent and identically distributed. We use the distribution \(\pi_{p}\) for the input bits which means that they are i.i.d. with Bernoulli distribution with parameter \(p\in[0,1]\)(Garban and Steif, 2014). 
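Since the listing above has not survived extraction cleanly, the following self-contained sketch (reconstructed from the surrounding prose, not copied from the library) gathers the pieces seen so far before we move on to expected cost: the _DecTree_ datatype, a fold in the style of _foldDT_, the example tree _ex1_, and the _cost_ and _maxCost_ functions, reproducing the values quoted in the text.

```
-- A reconstruction (sketch) of the decision-tree type and the first two
-- cost notions; the library factors these through the TreeAlg class.
type Index = Int

data DecTree = Res Bool | Pick Index DecTree DecTree
  deriving (Eq, Ord, Show)

-- Generic fold over decision trees (the role played by foldDT).
foldDT :: (Bool -> a) -> (Index -> a -> a -> a) -> DecTree -> a
foldDT res _    (Res b)        = res b
foldDT res pick (Pick i t0 t1) = pick i (foldDT res pick t0) (foldDT res pick t1)

-- The example tree ex1 from the text (a decision tree for maj3).
ex1 :: DecTree
ex1 = Pick 1 (Pick 3 (Res False) (Pick 2 (Res False) (Res True)))
             (Pick 2 (Pick 3 (Res False) (Res True)) (Res True))

-- Cost of one input: the length of the root-to-leaf path taken.
cost :: DecTree -> [Bool] -> Int
cost = foldDT (\_ _ -> 0)
              (\i c0 c1 x -> 1 + if x !! (i - 1) then c1 x else c0 x)

-- Maximum cost over all inputs, i.e. the depth of the tree.
maxCost :: DecTree -> Int
maxCost = foldDT (const 0) (\_ m0 m1 -> 1 + max m0 m1)

main :: IO ()
main = do
  print (cost ex1 [True, False, True])   -- 3
  print (cost ex1 [True, True,  False])  -- 2
  print (maxCost ex1)                    -- 3
```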
As for the other cost notions, expected cost is also implemented as an instance of \(\mathit{TreeAlg}\). \begin{tabular}{l} **type**_ExpCost_\(a=\mathit{Poly}\)_\(a\) \\ **instance**_Ring_\(a\Rightarrow\mathit{TreeAlg}\)_(_ExpCost_\(a\))_**where**_res_=_resPoly_; \(\mathit{pic}=\mathit{pickPoly}\) \\ _expCost_ :: Ring_\(a\Rightarrow\mathit{DecTree}\)\(\rightarrow\)_Poly_\(a\) \\ _expCost_ = _foldDT_ \\ \end{tabular} Note that the expected cost of any decision tree for a Boolean function of \(n\) bits will always be a polynomial. We represent polynomials as lists of coefficients: \(P\)\(\left[1,2,3\right]\) represents \(\lambda p\to 1+2*p+3*p^{2}\). The polynomial implementation relies heavily on material from Jansson et al. (2022). This includes the polynomial ring operations (\((+)\), \((-)\), \((*)\)), \(\mathit{gcd}\), \(\mathit{divMod}\), symbolic derivative, and ordering. The _res_ and _pic_ functions are as follows: \begin{tabular}{l} _resPoly_ :: Ring_\(a\Rightarrow\mathbb{B}\)\(\rightarrow\)_\(a\) \\ _resPoly_ \_b_ = _zero_ \\ _pickPoly_ :: Ring_\(a\Rightarrow\mathit{Index}\)\(\rightarrow\)_Poly_\(a\)\(\rightarrow\)_Poly_\(a\) \\ _pickPoly_ \_i_\(p_{0}\)_\(p_{1}=\mathit{one}+\left(\mathit{one}-\mathit{xP}\right)*p_{0}+\mathit{xP}*p_{1}\) \\ \end{tabular} Here \(\mathit{zero}=P\left[\right]\) and \(\mathit{one}=P\left[1\right]\) represent \(\mathit{const}\,0\) and \(\mathit{const}\,1\) respectively while \(\mathit{xP}=P\left[0,1\right]\) is "the polynomial \(p\)". For \(\mathit{pickPoly}\_\,p_{0}\)\(p_{1}\) we first have to pick one bit and then if this bit is \(\mathbf{0}\) (with probability \(\mathbb{P}(x_{i}=\mathbf{0})=(1-p)\)) we get \(p_{0}\) which is the polynomial for this case. If the bit is instead \(\mathbf{1}\) (with probability \(\mathbb{P}(x_{i}=\mathbf{1})=p\)) we get \(p_{1}\). The expected cost of the decision tree _ex1_ is \(2+2p-2p^{2}\). ### Complexity Now that we have introduced some notions of cost of decision trees, we can introduce complexity of a Boolean function which is the minimum of the cost over all decision trees for the given function. Using _maxCost_ we specify the concept of deterministic complexity as \(D\left(f\right)=\min_{t}\left(\mathit{maxCost}\ t\right)\) where \(t\) ranges over all the decision trees for the function \(f\). The type is \(D:\mathit{BoFun}\)_\(\mathit{bf}\)_\(\rightarrow\)_\(\mathit{bf}\)_\(\rightarrow\)_\(\mathbb{N}\) and to calculate it we need to generate all the decision trees. The level-\(p\)-complexity is defined using \(\mathit{expCost}\) as \(D_{p}(f)=\min_{t}\left(\mathit{evalPoly}\left(\mathit{expCost}\ t\right)\ p\right)\) Thus, for each probability \(p\) and Boolean function \(f\) the minimum over all the decision trees will give the smallest expected cost. If we flip the argument order we can see that \(D_{p}(f)\) takes a Boolean function \(f\) to a function from \(p\) to the smallest expected cost. As _expCost_ always returns a polynomial, the level-\(p\)-complexity is a continuous, piecewise polynomial, function of \(p\). In our implementation, we do not implement a special type for piecewise polynomials, we just represent them as sets of polynomials, and leave the last minimum to the surrounding code. More about the implementation is explained in Section 3. 
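To see the polynomial arithmetic in action, here is a minimal sketch (with _Poly_ as a bare coefficient list rather than the library's polynomial type from Jansson et al. (2022)) that recomputes the expected cost of _ex1_ by writing out its nested picks directly:

```
-- Sketch: expected cost with polynomials as coefficient lists.
-- [a0,a1,a2,...] represents a0 + a1*p + a2*p^2 + ...
type Poly = [Integer]

addP :: Poly -> Poly -> Poly
addP xs []         = xs
addP [] ys         = ys
addP (x:xs) (y:ys) = (x + y) : addP xs ys

mulP :: Poly -> Poly -> Poly
mulP [] _      = []
mulP (x:xs) ys = addP (map (x *) ys) (0 : mulP xs ys)

resPoly :: Bool -> Poly
resPoly _ = []                    -- a Res leaf costs nothing (the zero polynomial)

pickPoly :: Poly -> Poly -> Poly
pickPoly p0 p1 = addP [1] (addP (mulP [1, -1] p0) (mulP [0, 1] p1))
                                  -- 1 + (1-p)*p0 + p*p1

-- Expected cost of ex1, with its two subtrees spelled out as nested picks.
expCostEx1 :: Poly
expCostEx1 =
  pickPoly (pickPoly (resPoly False) (pickPoly (resPoly False) (resPoly True)))
           (pickPoly (pickPoly (resPoly False) (resPoly True)) (resPoly True))

main :: IO ()
main = print expCostEx1           -- [2,2,-2], i.e. 2 + 2p - 2p^2 as stated above
```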
### Examples of Boolean functions and their costs For the constant functions, we already know the result so \(\mathit{maxCost}\left(\mathit{Res}\ b\right)=0\) and \(\mathit{expCost}\left(\mathit{Res}\ b\right)=\mathit{zero}\) and then \(D(\mathit{const}_{n})=D_{p}(\mathit{const}_{n})=0\). For the dictator function, there is only one minimizing decision tree irrespective of input: the one that picks bit \(1\) first. After asking the first bit the function is reduced to the constant function as can be seen in Figure 2 and we get the optimal decision tree \(\mathit{optTree}=\mathit{Pick}\ 1\left(\mathit{Res}\ \mathbf{0}\right)\left( \mathit{Res}\ \mathbf{1}\right)\). The results are \[\mathit{maxCost}\ \mathit{optTree}=1+\mathit{max}\ 0\ 0=1\] \[\mathit{expCost}\ \mathit{optTree}=\mathit{one}+\left(\mathit{one}-xP \right)*\mathit{zero}+xP*\mathit{zero}=\mathit{one}\.\] This then gives \(D(\mathit{dict}_{n})=1\), and similarly \(D_{p}(\mathit{dict}_{n})=1\). The parity function is \[\mathit{count}::\mathit{Eq}\ a\Rightarrow a\ \rightarrow\ \left[\mathit{a}\right] \rightarrow\mathit{Int}\] \[\mathit{count}\ x=\mathit{length}\circ\mathit{filter}\ \left(x\ \texttt{zz}\right)\] \[\mathit{par}_{n}::\mathbb{B}^{n}\ \rightarrow\ \mathbb{B}\] \[\mathit{par}_{n}=\mathit{odd}\circ\mathit{count}\ \mathbf{1}\] In this case all bits have to be picked to determine the parity, regardless of input. For example, if we first ask one bit to determine \(\mathit{par}_{n+1}\), then we are left with two subtrees: \(t_{0}\) for \(\mathit{par}_{n}\) and \(t_{1}\) for \(\neg\mathit{par}_{n}\) as seen in Figure 4. Recursively, this gives \[\mathit{maxCost}\left(\mathit{Pick}\ i_{0}\ t_{1}\right) =1+\mathit{max}\left(\mathit{maxCost}\ t_{0}\right)\left( \mathit{maxCost}\ t_{1}\right)=1+\mathit{max}\ n\ n=1+\mathit{n}\] \[\mathit{expCost}\ \left(\mathit{Pick}\ i_{0}\ t_{1}\right) =\mathit{one}+\left(\mathit{one}-xP\right)*\left(\mathit{expCost} \ t_{0}\right)+xP*\left(\mathit{expCost}\ t_{1}\right)\] \[=\mathit{one}+\left(\mathit{one}-xP\right)*n+xP*n=1+n\] Thus, \(D(\mathit{par}_{n})=D_{p}(\mathit{par}_{n})=n\). This can also be seen if you compare Figure 2 with Figure 4, the minimum depth of the dictator tree is \(1\), while the minimum depth of the parity tree is \(n\). We now introduce the Boolean function _same_ which checks if all bits are equal: Figure 4: The recursive structure of the parity function \((\mathit{par}_{n})\). The pattern repeats all the way down to \(\mathit{par}_{0}=\mathit{const}\ \mathbf{0}\). _same_::_B_^n_-_B_ _same_bs_=_and_bs_v_-(_or_bs_) Using _same_ we construct a very specific function of 5 bits where we first split the bits into two groups, one with the first three bits and the second with the last two bits. On the first group, called _as_, we check if the bits are not the same, and on the second group, called _cs_ we check if the bits are the same. _sim_5_::_B_^5_-_B_ _sim_5_bs_=_(_same_as_)_v_-_sc_ **where**(_as_,_cs_)=_splitAt_3_bs_ The point of this function is to illustrate a special case where the best decision tree depends on \(p\) so that the level-_p_-complexity consists of several different polynomials. This computation is shown in Section 4.1. One of the major goals of this paper was to calculate the level-_p_-complexity of 9 bit iterated majority called _majj\({}_{3}^{2}\)_. When extending the majority function to _maj\({}_{3}^{2}\)_, we use _maj\({}_{3}\)_ inside _maj\({}_{3}\)_. 
_maj\({}_{3}^{2}\)_:_B_^9_-_B_ _maj\({}_{3}^{2}\)_bs_=_maj\({}_{3}\)_[maj\({}_{3}\)_bs_1_,_maj\({}_{3}\)_bs_2_,_maj\({}_{3}\)_bs_3_] **where**(_bs_1_,_rest_)=_splitAt_3_bs_ _(_bs_2_,_bs_3_)=_splitAt_3_rest_ _maj\({}_{n}\)_:_B_^n_-_B_ _maj\({}_{n}\)_bs_=_count_1_bs_>_count_0_0_bs_ It is hard to calculate \(D_{p}(\textit{maj}_{3}^{2})\) by hand because there are very many different decision trees, and this motivated our Haskell implementation of the calculations explained in Section 3. ## 3 Computing the level-_p_-complexity The process of generating decision trees, memoization, thinning and comparing polynomials is explained more in detail. ### Generating decision trees The decision trees of a function \(f\) can be described in terms of the decision trees for the immediate subfunctions (\(f_{b}^{i}\!=\!\textit{setBit}\,i\,\textit{b}\,f\)) for different _i_::_Index_ and _b_::_B_. Given the Boolean function \(f\), if \(f\) is constant (returning _b_) we return the singleton set \(\{\,\textit{res}\,\textit{b}\}\). Otherwise, \(f\)'s decision trees are generated recursively by asking each bit \(i\), and generating the decision trees for the subfunctions \(f_{\textbf{0}}^{i}\) and \(f_{\textbf{1}}^{i}\). The recursive step is shown in the formula below: \[\textit{genAlg}_{n+1}\,f\!=\!\{\,\textit{pic}\,i\,\textit{t}_{0}\,\textit{t}_ {1}\,|\,\textit{i}\leftarrow\{\,\textit{1}\,\textit{..}\,\textit{n}\},\textit{ t}_{0}\leftarrow\textit{genAlg}_{n}\,f_{\textbf{0}}^{i},\textit{t}_{1} \leftarrow\textit{genAlg}_{n}\,f_{\textbf{1}}^{i}\}\] The complexity computation starts from a Boolean function \(f:\textit{BoolFun}\,\textit{n}\), and generates many decision trees for it. There are two top level cases: either the function \(f\) is constant (and returns _b_:B), in which case there is only one decision tree: _res_\(b\); or the function \(f\) still depends on some of the input bits. In the latter case, for each index \(i\!:\!\mbox{\it Fin}\;n\) we can generate two "subfunctions" \(f_{0}\;i\!=\!\mbox{\it setBit}\;i\;\mbox{\bf 0}\,f\) and \(f_{1}\;i\!=\!\mbox{\it setBit}\;i\;\mbox{\bf 1}\,f\). Now, if we recursively generate a decision tree \(t_{0}\) for \(f_{0}\;i\) and \(t_{1}\) for \(f_{1}\;i\) we can combine them to a bigger decision tree using \(\mbox{\it pic}\;i\;t_{0}\;t_{1}\). Now we only need to do this for all combinations of \(i\), \(t_{0}\), and \(t_{1}\). We would like to enumerate the cost polynomials of all the decision trees of a particular Boolean function (\(n\!=\!9\), \(f\!=\!maj_{3}^{2}\) is our main goal). Without taking symmetries into account there are \(2*n\) immediate subfunctions \(f_{b}^{i}\) and if \(T_{g}\) is the cardinality of the enumeration for subfunction \(g\) we have that \[T_{\!f}\!=\!\sum_{i=1}^{n}T_{f_{0}^{i}}*T_{f_{1}^{i}}\] These numbers can be really big if we count all decision trees, but if we only care about their cost polynomials, many decision trees will collapse to the same polynomial, making the counts more manageable (but still possibly really big). Even the total number of subfunctions encountered (the number of recursive calls) can be quite big. If all the \(2*n\) immediate subfunctions are different, and if all of them would generate \(2*(n-1)\) different subfunctions in turn, the number of subfunctions would be \(2^{n}*n!\). But in practice many subfunctions will be the same. When computing the polynomials for the 9-bit function \(\mbox{\it maj}_{3}^{2}\), for example, only 215 distinct subfunctions are encountered. 
As a smaller example, for the 3-bit majority function \(\mbox{\it maj}_{3}\), choosing \(i\!=\!1,2\), or 3 gives exactly the same subfunctions. Fig. 3 illustrates a simplified call graph of \(\mbox{\it genAlg}_{3}\;\mbox{\it maj}_{3}\) and the results (the expected cost polynomials) for the different subfunctions. In this case all the sets are singletons, but that is very unusual for more realistic Boolean functions. It would take too long to compute all polynomials for the 9-bit function \(\mbox{\it maj}_{3}^{2}\) but there are 21 distinct 7-bit sub-functions, and the first one of them already has 18021 polynomials. Thus we can expect billions of polynomials for \(\mbox{\it maj}_{3}^{2}\) and this means we need to look at ways to keep only the most promising candidates at each level. This leads us to the algorithmic design technique of thinning. Figure 3: A simplified computation tree of \(\mbox{\it genAlg}_{3}\;\mbox{\it maj}_{3}\). Each node shows the input and output of the local call to \(\mbox{\it genAlg}_{\cdot}\). As all the functions involved are “symmetric” in the index (_setBit_\(i\;b\,f\) _== setBit_\(j\,b\,f\) for all \(i\) and \(j\)) we only show edges for \(\bf 0\) and \(\bf 1\) from each level. ### Thinning The general shape of the specification has two phases: "generate all candidates" followed by "pick the best one(s)". The first phase is recursive and we would like to push as much as possible of "pick the best" into the recursive computation. In the extreme case of a greedy algorithm, we can thin the intermediate sets all the way down to singletons, but even if the sets are a bit bigger than that we can still reduce the computation cost significantly. A good (but abstract) reference for thinning is the Algebra of Programming book (Bird and de Moor, 1997, Chapter 8) and more concrete references are the corresponding developments in Agda (Mu et al., 2009) and Haskell (Bird and Gibbons, 2020). We are looking for a "smallest" polynomial, but we only have a preorder, not a total order, which means that we may need to keep a set of incomparable candidates (elements \(x\neq y\) for which neither \(x\prec y\) nor \(y\prec x\)). We start from a strict preorder \((\prec):a\,\rightarrow\,a\,\rightarrow\,\mathit{Prop}\) (an irreflexive and transitive relation). You can think of \(\mathit{Prop}\) as \(\mathbb{B}\) because we only work with decidable relations and finite sets in this application. As we are looking for minima, we say that \(x\)_dominates_\(y\) if \(x\prec y\). In our case we will use it for polynomials, but the theory works more generally. We lift the order relation to sets in two steps. First \(\mathit{ys}\mathrel{\dot{\prec}}x\) means that \(\mathit{ys}\)_dominates_\(x\), meaning that some element in \(\mathit{ys}\) is smaller than \(x\). If this holds, there is no need to add \(x\) to \(\mathit{ys}\) because we already have at least one better element in \(\mathit{ys}\). Then \(\mathit{ys}\mathrel{\ddot{\prec}}xs\) means that \(\mathit{ys}\) dominates all of \(\mathit{xs}\). \((\mathrel{\ddot{\prec}}):\mathit{Set}\,a\,\rightarrow\,\mathit{a}\,\rightarrow\,\mathit{Prop}\) \(\mathit{ys}\mathrel{\dot{\prec}}x=\exists\,\mathit{y}\in\mathit{ys}\). \(y\prec x\) \((\mathrel{\ddot{\prec}}):\mathit{Set}\,a\,\rightarrow\,\mathit{Set}\,a\, \rightarrow\,\mathit{Prop}\) \(\mathit{ys}\mathrel{\ddot{\prec}}xs=\forall\,x\in\mathit{xs}\). 
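To make the recursion concrete, the stand-alone sketch below mirrors the specification-level \(\mathit{genAlg}\) (no thinning, no memoization), using a naive arity-plus-truth-function representation of Boolean functions instead of BDDs; all names are ours. For \(\mathit{maj}_{3}\) it reproduces the singleton set of expected-cost polynomials shown in Figure 3, but it would be hopeless for 9-bit functions.

```
import Data.List (nub)

-- Coefficient-list polynomials, as in the earlier sketches.
type Poly = [Integer]

addP, mulP :: Poly -> Poly -> Poly
addP xs []         = xs
addP [] ys         = ys
addP (x:xs) (y:ys) = (x + y) : addP xs ys
mulP [] _      = []
mulP (x:xs) ys = addP (map (x *) ys) (0 : mulP xs ys)

pickPoly :: Poly -> Poly -> Poly
pickPoly p0 p1 = addP [1] (addP (mulP [1, -1] p0) (mulP [0, 1] p1))

-- Naive Boolean functions: arity plus truth function (the library uses BDDs).
data BoolFun = BF Int ([Bool] -> Bool)

arity :: BoolFun -> Int
arity (BF n _) = n

isConst :: BoolFun -> Maybe Bool
isConst (BF n f)
  | all (== head outs) outs = Just (head outs)
  | otherwise               = Nothing
  where outs = map f (sequence (replicate n [False, True]))

setBit :: Int -> Bool -> BoolFun -> BoolFun
setBit i b (BF n f) = BF (n - 1) (\xs -> f (take (i-1) xs ++ [b] ++ drop (i-1) xs))

-- genAlg without thinning or memoization: enumerate the expected-cost
-- polynomials of all decision trees (deduplicated with nub).
genAlg :: BoolFun -> [Poly]
genAlg f = case isConst f of
  Just _  -> [[]]                                   -- res: the zero polynomial
  Nothing -> nub [ pickPoly p0 p1
                 | i  <- [1 .. arity f]
                 , p0 <- genAlg (setBit i False f)
                 , p1 <- genAlg (setBit i True  f) ]

maj3 :: BoolFun
maj3 = BF 3 (\bs -> length (filter id bs) >= 2)

main :: IO ()
main = print (genAlg maj3)        -- [[2,2,-2]] : the single polynomial 2 + 2p - 2p^2
```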
\(\mathit{ys}\mathrel{\dot{\prec}}x\) Finally, we combine subset and domination into the thinning relation: \[\mathit{Thin}\,\mathit{ys}\,\mathit{xs}=(\mathit{ys}\subseteq\mathit{xs}) \wedge\mathit{ys}\mathrel{\ddot{\prec}}(\mathit{xs}\setminus\mathit{ys})\] We will use this relation in the specification of our efficient computation to ensure that the small set of polynomials computed, still "dominates" the big set of all the polynomials generated by \(\mathit{genAlg}_{n}\,f\). But first we introduce the helper function \(\mathit{thin}:\mathit{Set}\,a\rightarrow\mathit{Set}\,a\) which aims at removing some elements, while still keeping the minima in the set. It has to refine the relation \(\mathit{Thin}\) which means that if \(\mathit{ys}=\mathit{thin}\,\mathit{xs}\) then \(\mathit{ys}\) must be a subset of \(\mathit{xs}\) (\(\mathit{ys}\subseteq\mathit{xs}\)) and \(\mathit{ys}\) must dominate the rest of \(\mathit{xs}\) (\(\mathit{ys}\mathrel{\ddot{\prec}}(\mathit{xs}\setminus\mathit{ys})\)). A trivial (but useless) implementation would be \(\mathit{thin}=\mathit{id}\), and any implementation which removes some "dominated" elements could be helpful. The best we can hope for is that \(\mathit{thin}\) gives us a set of only incomparable elements. If \(\mathit{thin}\) compares all pairs of elements, it can compute a smallest thinning. In general that may not be needed (and a linear time greedy approximation is good enough), but in some settings almost any algorithmic cost which can reduce the intermediate sets will pay off. We use the following greedy version (inspired by Bird and Gibbons (2020)) as one of the methods of the class \(\mathit{Thinnable}\): _thin \(::\)Thinnable \(a\Rightarrow\)Set \(a\)\(\rightarrow\)Set \(a\) thin = fold thinStep \(\emptyset\) thinStep \(::\)Thinnable \(a\Rightarrow\)Set \(a\)\(\rightarrow\)\(a\)\(\rightarrow\)Set \(a\) thinStep \(ys\)\(x\) = \(\mathbf{if}\)\(ys\)\(\dot{\ }x\)\(\mathbf{then}\)\(ys\)\(\mathbf{else}\)insert \(x\)\(ys\)_ It starts from an empty set and considers one element \(x\) at a time. If the set \(ys\) collected thus far already dominates \(x\), it is kept unchanged, otherwise \(x\) is inserted. (The optimal version also removes from \(ys\) all elements dominated by \(x\).) It is easy to prove that _thin_ implements the specification _Thin_. Now we have what we need to specify when an efficient \(\mathit{genAlg}T_{n}f\) computation is correct. Our specification (\(\mathit{spec}\ n\ f\)) states a relation between a (very big) set \(\mathit{xs}=\mathit{genAlg}_{n}f\) and a smaller set \(\mathit{ys}=\mathit{genAlg}T_{n}f\) we get by applying thinning at each recursive step. We want to prove that \(\mathit{ys}\subseteq\mathit{xs}\) and \(\mathit{ys}\)\(\ddot{\ }(\mathit{xs}\setminus\mathit{ys})\) because then we know we have not lost any of the candidates for minimality. \[\begin{array}{l}\mathit{spec}\ n\ f=\mathbf{let}\ \mathit{xs}=\mathit{genAlg}_{n}\ \ f\\ \mathit{ys}=\mathit{genAlg}T_{n}\ f\\ \mathbf{in}\ \ (\mathit{ys}\subseteq\mathit{xs})\land(\mathit{ys}\ \ddot{\ }(\mathit{xs}\setminus\mathit{ys}))\end{array}\] We can first take care of the simplest case (for any \(n\)). If the function \(f\) is constant (returning some \(\mathit{b}:\mathbb{B}\)), both \(\mathit{xs}\) and \(\mathit{ys}\) will be the singleton set containing \(\mathit{res}\ b\). Thus both properties trivially hold. We then proceed by induction on \(n\) to prove \(S_{n}=\forall\,f:\mathit{BoolFun}\ n\). \(\mathit{spec}\ n\ f\). 
In the base case \(\mathit{n}=0\) the function is necessarily constant, and we have already covered that above. In the inductive step case, assume the induction hypothesis \(\mathit{IH}=S_{n}\) and prove \(S_{n+1}\) for a function \(f:\mathit{BoolFun}\ (n+1)\). We have already covered the constant function case, so we focus on the main recursive clause of the definitions of \(\mathit{genAlg}_{n}f\) and \(\mathit{genAlg}T_{n}f\): \[\begin{array}{l}\mathit{genAlg}_{n+1}\ \ f=\ \ \ \ \ \ \ \{\mathit{pic}\ i\ \mathit{x}_{0}\ \mathit{x}_{1}\mid i\leftarrow[1\,..\,n],\mathit{x}_{0}\leftarrow\mathit{ genAlg}_{n}\ \ f^{i}_{\mathbf{0}},\mathit{x}_{1}\leftarrow\mathit{genAlg}_{n}\ \ f^{i}_{\mathbf{1}}\}\\ \mathit{genAlg}T_{n+1}\,f=\mathit{thin}\ \{\mathit{pic}\ i\ \mathit{y}_{0}\ y_{1}\mid i \leftarrow[1\,..\,n],\mathit{y}_{0}\leftarrow\mathit{genAlg}T_{n}\,f^{i}_{ \mathbf{0}},\mathit{y}_{1}\leftarrow\mathit{genAlg}T_{n}\,f^{i}_{\mathbf{1}} \}\end{array}\] All subfunctions \(f^{i}_{b}:\mathit{BoolFun}\ n\) used in the recursive calls satisfy the induction hypothesis: \(\mathit{spec}\ n\ f^{i}_{b}\). If we name the sets involved in these hypotheses \(\mathit{xs}^{i}_{b}\) and \(\mathit{ys}^{i}_{b}\) we can thus assume \(\mathit{ys}^{i}_{b}\subseteq\mathit{xs}^{i}_{b}\) and \(\mathit{ys}^{i}_{b}\ \dot{\ }(\mathit{xs}^{i}_{b}\setminus\mathit{ys}^{i}_{b})\). First, the subset property: we want to prove that \(\mathit{genAlg}T_{n+1}\,f\subseteq\mathit{genAlg}_{n+1}\,f\), or equivalently, \(\forall\ y\). \((y\in\mathit{genAlg}T_{n+1}\,f)\Rightarrow(y\in\mathit{genAlg}_{n+1}\,f)\). Let \(y\in\mathit{genAlg}T_{n+1}\,f\). We know from the specification of _thin_ and the definition of \(\mathit{genAlg}T_{n+1}\,f\) that \(y=\mathit{pic}\ i\ y_{0}\ y_{1}\) for some \(y_{0}\in\mathit{ys}^{i}_{0}\) and \(y_{1}\in\mathit{ys}^{i}_{1}\). The subset part of the induction hypothesis gives us that \(y_{0}\in\mathit{xs}^{i}_{0}\) and \(y_{1}\in\mathit{xs}^{i}_{1}\). Thus we can see from the definition of \(\mathit{genAlg}_{n+1}\,f\) that \(y\in\mathit{genAlg}_{n+1}\,f\). Now for the "domination" property we need to show that \(\forall\,x\in\mathit{xs}\setminus\mathit{ys}\). \(\mathit{ys}\ \dot{\ }x\) where \(\mathit{xs}=\mathit{genAlg}_{n+1}\,f\) and \(\mathit{ys}=\mathit{genAlg}T_{n+1}\,f\). Let \(x\in\mathit{xs}\setminus\mathit{ys}\). Given the definition of \(\mathit{xs}\) it must be of the form \(x=\mathit{pic}\ i\ \mathit{x}_{0}\ \mathit{x}_{1}\) where \(\mathit{x}_{0}\in\mathit{xs}^{i}_{\mathbf{0}}\) and \(\mathit{x}_{1}\in\mathit{xs}^{i}_{\mathbf{1}}\). The (second part of the) induction hypothesis provides the existence of \(y_{b}\in\mathit{ys}^{i}_{b}\) such that \(y_{b}\prec x_{b}\). From these \(y_{b}\) we can build \(y^{\prime}=\mathit{pic}\ i\ y_{0}\ y_{1}\) as a candidate element to "dominate" \(\mathit{xs}\). We can now show that \(y^{\prime}\prec x\) by polynomial algebra: true_ \(\Longrightarrow\) \(-\) Follows from the induction hypothesis \((y_{0}\prec z_{0})\wedge(y_{1}\prec z_{1})\) \(\Longrightarrow\) \(-\) In the interval \((0,1)\) both \(1-xP\) and \(xP\) are positive \(1+(1-xP)*y_{0}+xP*y_{1}\prec 1+(1-xP)*x_{0}+xP*x_{1}\) \(\Leftrightarrow\) \(-\) Def. of _pic_ for polynomials _pic_\(i\)\(y_{0}\)\(y_{1}\prec\)_pic_\(i\)\(x_{0}\)\(x_{1}\) \(\Leftrightarrow\) \(-\) Def. of \(y^{\prime}\) and \(x\) \(y^{\prime}\prec x\) We are not quite done, because \(y^{\prime}\) may not be in _ys_. 
It is clear from the definition of _genAlgT_\({}_{n+1}\,f\) that \(y^{\prime}\) is in the set \(ys^{\prime}\) sent to _thin_, but it may be "thinned away". But, either \(y^{\prime}\in ys=\)_thin_\(ys^{\prime}\) in which case we take the final \(y=y^{\prime}\), or there exists another \(y\in ys\) such that \(y\prec y^{\prime}\) and then we get \(y\prec x\) by transitivity. To sum up, we have now proved that we can push a powerful _thin_ step into the recursive enumeration of all cost polynomials in such a way that any minimum is guaranteed to reside in the much smaller set of polynomials thus computed. ### Memoization The call graph of _genAlgT_\({}_{n}\,f\) is the same as the call graph of _genAlg_\({}_{n}\,f\) and, as mentioned above, it can be exponentially big. Thus, even though thinning helps in making the intermediate sets exponentially smaller, we still have one source of exponential computational complexity to tackle. Fortunately, the same subfunctions often appear in many different nodes and this means we can save a significant amount of computation time using memoization. The classical example of memoization is the Fibonacci function. Naively computing \(\mbox{\it fib}\,(n+2)=\mbox{\it fib}\,(n+1)+\mbox{\it fib}\,n\) leads to exponential growth in the number of function calls. But if we fill in a table indexed by \(n\) with already computed results we can compute \(\mbox{\it fib}\,n\) in linear time. Similarly, here we "just" need to tabulate the result of the calls to _genAlg_\({}_{n}\,f\) so as to avoid recomputation. The challenge is that the input we need to tabulate is now a Boolean function which is not as nicely structured as a natural number index. Fortunately, thanks to Hinze (2000), Elliott, and others we have generic Trie-based memo functions only a Hackage library away5. The _MemoTrie_ library provides the _Memoizable_ class and suitable instances and helper functions for most types. We only need to provide a _Memoizable_ instance for _BDD_s, and we do this using _inSig_ and _outSig_ from the _BDD_ package (decision-diagrams). They expose the top-level structure of a _BDD_: _Sig bf_ is isomorphic to _Either_\(\mathbb{B}\) (_Index_, _bf_, _bf_) where \(\mbox{\it bf}=\mbox{\it BDDFun}\). We define our top-level function _genAlgThinMemo_ by applying memoization to _genAlgT_. ### Comparing polynomials As argued above, the key to an efficient computation of the best cost polynomials is to compare polynomials as soon as possible and throw away those which are "uniformly worse". The specification of \(p\!\prec\!q\) is \(p\,x\!\leqslant\!q\,x\) for all \(0\!\leqslant\!x\!\leqslant\!1\) and \(p\,x\!<\!q\,x\) for some \(0\!<\!x\!<\!1\). Note that (\(\prec\)) is a strict pre-order -- if the polynomials cross, neither is "uniformly worse" and we keep both. If we have two polynomials \(p\) and \(q\), we want to know if \(p\!\leqslant\!q\) for all inputs in the interval \([0,1]\). Equivalently, we need to check if \(0\!\leqslant\!q-p\) in that interval. As the difference is also a polynomial, we can focus our attention on locating polynomial roots in the unit interval. If there are no roots (Fig. 2a) in the unit interval, the polynomial stays on "one side of zero" and we just need to check the sign of the polynomial at any point. If there is at least one single-root (Fig. 2b), the original polynomials cross and we return _Nothing_. Similarly for triple-roots or roots of any odd order. 
Finally, if the polynomial only has roots of even order (some double-roots, or quadruple-roots, etc. as in Fig. 2c) the polynomial stays on one side of zero, and we can check a few points to see what side that is. (If the number of distinct roots is \(r\) we check up to \(r\!+\!1\) points to make sure at least one will be non-zero and thus tell us on which side of zero the polynomial lies.) Thus, the top-level of the polynomial partial order implementation is as follows: \[\begin{array}{l}\mbox{\it cmpPoly}::(\mbox{\it Ord }a,\mbox{\it Field }a)\!\Rightarrow\mbox{\it Poly }a\,\rightarrow\,\mbox{\it Poly }a\,\rightarrow\,\mbox{\it Maybe Ordering}\\ \mbox{\it cmpPoly}\,p\,q=\mbox{\it cmpZero}\,(q-p)\\ \mbox{\it cmpZero}::(\mbox{\it Ord }a,\mbox{\it Field }a)\!\Rightarrow\mbox{\it Poly }a\,\rightarrow\,\mbox{\it Maybe Ordering}\\ \mbox{\it cmpZero}\,p\,|\,\mbox{\it isZero }p\,=\mbox{\it Just }EQ\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\mid\mbox{\it all even }(\mbox{\it numRoots}^{\prime}\,p)=\mbox{\it if}\qquad\mbox{\it any }(0\!<\!)\mbox{\it vals }\mbox{\bf then }\mbox{\it Just }LT\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \ To make this work, we "just" need to implement the root-counting functions \(\mathit{numRoots}\) and \(\mathit{numRoots}^{\prime}\): \[\begin{array}{l}\mathit{numRoots}::\left(\mathit{Ord}\ a,\mathit{Field}\ a \right)\Rightarrow\mathit{Poly}\ a\ \rightarrow\mathit{Int}\\ \mathit{numRoots}=\mathit{sum}\circ\mathit{numRoots}^{\prime}\\ \mathit{numRoots}^{\prime}::\left(\mathit{Ord}\ a,\mathit{Field}\ a \right)\Rightarrow\mathit{Poly}\ a\ \rightarrow\ \left[\mathit{Int}\right]\end{array}\] The second function computes real root multiplicities: \(\mathit{numRoots}^{\prime}\ p=[\,1,3\,]\) means \(\mathit{p}\) has one single and one triple root in the open interval \((0,1)\). From this we get that \(\mathit{p}\) has \(2=\mathit{length}\left[\,1,3\,\right]\) distinct real roots and \(4=\mathit{sum}\left[\,1,3\,\right]\) real roots if we count multiplicities. We will not provide all the code here, because that would take us too far from the main topic of the paper, but we will illustrate the main algorithms and concepts. ### Isolating real roots and Descartes rule of signs First out is Yun's algorithm (Yun, 1976) for square-free factorisation: given a polynomial \(\mathit{p}\) it computes a list of polynomial factors \(p_{i}\), each of which only has single-roots, and such that \(p=C\prod_{i}p_{i}\). Note the exponent \(i\): the factor \(p_{2}\), for example, appears squared in \(\mathit{p}\). If \(\mathit{p}\) only has single-roots, the list from Yun's algorithm has just one element, \(p_{1}\), but in any case we get a finite list of polynomials, each of which is "square-free".6 Footnote 6: Yun’s algorithm is built around repeated computation of the polynomial greatest common divisor of \(\mathit{p}\) and its derivative, \(\mathit{p}^{\prime}\). See the associated code for the details. Second in line is Descartes rule of signs which can be used to determine the number of real zeros of a polynomial function. 
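As a quick numerical cross-check (a sketch independent of the library; the sample points are arbitrary), one can evaluate these four polynomials, given as coefficient lists, at a few probabilities and take the pointwise minimum:

```
-- Sketch: evaluate the four sim5 polynomials numerically and compare them.
evalPolyL :: [Double] -> Double -> Double
evalPolyL cs p = foldr (\c acc -> c + p * acc) 0 cs   -- Horner's rule

sim5Polys :: [[Double]]
sim5Polys = [ [2,  6, -10, 8, -4]   -- P1
            , [4, -2,  -3, 8, -2]   -- P2
            , [5, -8,   9, 0, -2]   -- P3
            , [5, -8,   8]          -- P4
            ]

main :: IO ()
main = mapM_ report [0.1, 0.3, 0.4, 0.5, 0.6, 0.7, 0.9]
  where
    report p = putStrLn (show p ++ ": " ++ show vals ++ "  min = " ++ show (minimum vals))
      where vals = map (`evalPolyL` p) sim5Polys
```

At the sampled points the minimum is attained by \(P_{1}\) near the endpoints and by \(P_{4}\) around \(p=1/2\), in line with the piecewise description that follows.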
It tells us that the number of positive real zeros in a polynomial function \(\mathit{f}\) is the same, or less than by an even number, as the number of changes in the sign of the coefficients. Together with some polynomial transformations, this is used to count the zeroes in the interval \([0,1)\). If the rule gives zero or one, we are done: we have isolated an interval \([0,1)\) with either no root or exactly one root. For our use case we don't need to know the actual root, just if it exists in the interval or not. If the rule gives more than one, we don't quite know the exact number of roots yet (only an upper bound). In that case we subdivide the interval into the lower \([0,1/2)\) and upper \([1/2,1)\) halves. Fortunately the polynomial coefficients can be transformed to make the domain the unit interval again so that we can call ourselves recursively. After a finite number of steps, this bisection terminates and we get a list of disjoint isolating intervals where we know there is exactly one root in each. Combining Yun and Descartes, we implement our "root counter", and thus our partial order on polynomials. ## 4 Results Using the method from the previous section we can now calculate the level-\(p\)-complexity of Boolean functions with our function \(\mathit{genAlgThinMemo}\). First we return to our example from the beginning (\(\mathit{sim}_{5}\)), where we get several polynomials which are optimal in different intervals. Then, we calculate the level-\(p\)-complexity for \(\mathit{maj}_{3}^{2}\) which is lower than the proposed result in (Jansson, 2022), which means that our current method is better. ### Level-\(p\)-complexity for \(\mathit{sim}_{5}\) When we run \(\mathit{genAlgThinMemo}\,5\,\mathit{sim}_{5}\) it returns a set of four polynomials: \[\{P_{1}(p) = 2+6p-10p^{2}+8p^{3}-4p^{4}, P_{2}(p) = 4-2p-3p^{2}+8p^{3}-2p^{4},\] \[P_{3}(p) = 5-8p+9p^{2}-2p^{4}, P_{4}(p) = 5-8p+8p^{2}\}\] We don't compute their intersection points, just that they do intersect in the unit interval. The four polynomials were shown already in Fig. 1. The level-\(p\)-complexity for \(\mathit{sim}_{5}\) is the piecewise polynomial, pointwise minimum, of these four, with two different polynomials in different intervals: \(D_{p}(\mathit{sim}_{5})=P_{4}(p)\) for \(p\in[\approx 0.356,\approx 0.644]\) and \(D_{p}(\mathit{sim}_{5})=P_{1}(p)\) in the rest of the unit interval. As seen in Figure 1, the level-\(p\)-complexity has two maxima. ### Level-\(p\)-complexity for \(\mathit{maj}_{3}^{2}\) Running \(\mathit{genAlgThinMemo}\,9\,\mathit{maj}_{3}^{2}\) we get \(\{\,P\,[4,4,6,9,-61,23,67,-64,16]\,\}\), which means that the expected cost (\(P_{*}\)) of the best decision tree (\(\mathit{T}_{*}\)) is \[P_{*}(p) = 4+4p+6p^{2}+9p^{3}-61p^{4}+23p^{5}+67p^{6}-64p^{7}+16p^{8}\,.\] This can be compared to the decision tree (that we call \(\mathit{T}_{t}\)) conjectured in (Jansson, 2022) to be the best. Its expected cost is slightly higher (thus worse): \[P_{t}(p) = 4+4p+7p^{2}+6p^{3}-57p^{4}+20p^{5}+68p^{6}-64p^{7}+16p^{8}\,.\] Figure 1: Level-\(p\)-complexity of \(\mathit{sim}_{5}\), where the dots show the intersections of the costs of the decision trees. The expected costs for decision trees \(\,T_{*}\) and \(\,T_{t}\) can be seen in Figure 4.2. Comparing the two polynomials using _cmpPoly_\(P_{*}\)\(P_{t}\) shows that the new one has strictly lower expected cost than the one from the thesis. The difference, which factors to exactly \(p^{2}(1-p)^{2}(1-p+p^{2})\), is illustrated in Fig. 
4.3, and we note that it is non-negative in the whole interval. The value of the polynomials at the endpoints is 4 and the maximum of \(P_{*}\) is \(\approx 6.14\) compared to the maximum of \(P_{t}\) which is \(\approx 6.19\). The conjecture in (Jansson, 2022) is thus false and the correct formula for the level-\(p\)-complexity of _maj\({}_{3}^{2}\)_ is \(P_{*}\). At the time of publication of (Jansson, 2022) it was believed that sifting through all the possible decision trees would be intractable. Fortunately, using a combination of thinning, memoization, and exact comparison of polynomials, it is now possible to compute the correct complexity in less than a second on the author's laptop. Figure 4.3: Difference between the expected costs of \(\,T_{t}\) and \(\,T_{*}\). Figure 4.2: Expected costs of the two different decision trees. Because they are very close we also show their difference in Fig. 4.3. ## 5 Conclusions This paper describes a Haskell library for computing level-\(p\)-complexity of Boolean functions, and applies it to two-level iterated majority (_maj\({}_{3}^{2}\)_). The problem specification is straightforward: generate all possible decision trees, compute their expected cost polynomials, and select the best ones. The implementation is more of a challenge because of two sources of exponential computational cost: an exponential growth in the set of decision trees and an exponential growth in the size of the recursive call graph (the collection of subfunctions). The library uses thinning to tackle the first and memoization to handle the second source of inefficiency. In combination with efficient data structures (binary decision diagrams for the Boolean function input, sets of polynomials for the output) this enables computing the level-\(p\)-complexity for our target example _maj\({}_{3}^{2}\)_ in less than a second. From the mathematics point of view the strength of the methods used in this paper to compute the level-\(p\)-complexity is that we can get a correct result to something which is very hard to calculate by hand. From a computer science point of view the paper is an instructive example of how a combination of algorithmic and symbolic tools can tame a doubly exponential computational cost. The library uses type-classes for separation of concerns: the actual implementation type for Boolean functions (the input) is abstracted over by the _BoFun_ class; and the corresponding type for the output is modelled by the _TreeAlg_ class. We also use our own class _Thinnable_ for thinning (and pre-orders), and the _Memoizable_ class from hackage. This means that our main function has the following type: \[\begin{array}{l}\mbox{\it genAlgThinMemo}::\mbox{\it(BoFun\,bf},\mbox{ \it Memoizable\,bf},\mbox{\it TreeAlg\,a},\mbox{\it Thinnable\,a})\Rightarrow\\ \mbox{\rule{0.0pt}{12.9pt}}\mbox{\rule{0.0pt}{12.9pt}}\mbox{\rule{0.0pt}{12. 9pt}}\mbox{\rule{0.0pt}{12.9pt}}\mbox{\rule{0.0pt}{12. ## Acknowledgments The authors would like to extend their gratitude to Jeffrey Steif for the idea of exploring level-p-complexity and for supervising the preceding work, reported in Jansson (2022), and to Tim Richter and Jeremy Gibbons for taking their time to give valuable feedback on the first draft of this paper. The work presented in this paper heavily relies on free software, among others on GHC, Agda, Haskell, git, Emacs, LaTeX and on the Ubuntu operating system, Mathematica, and Visual Studio Code. It is our pleasure to thank all developers of these excellent products. 
### Conflicts of Interest None.
2310.02662
Dynamics and Probability in the Toss of a Coin with Symmetric Inhomogeneous Density
Under investigation in this paper is the dynamics and probability of heads in the toss of a coin with symmetric inhomogeneous density. Such coins are assumed to have diagonal inertia matrix. The rotational motion of the coin is determined by the initial angular momentum and initial position of the coin. We described the dynamic behavior of the unit normal vector and calculated the limiting probability of heads as time goes to infinity with respect to the fixed initial parameters. Our probability formula extends the formula for homogeneous coins by Keller and Diaconis et al.
Shilun Li
2023-10-04T08:51:08Z
http://arxiv.org/abs/2310.02662v1
# Dynamics and Probability in the Toss of a Coin ###### Abstract Under investigation in this paper is the dynamics and probability of heads in the toss of a coin with symmetric inhomogeneous density. Such coins are assumed to have diagonal inertia matrix. The rotational motion of the coin is determined by the initial angular momentum and initial position of the coin. We described the dynamic behavior of the unit normal vector and calculated the limiting probability of heads as time goes to infinity with respect to the fixed initial parameters. Our probability formula extends the formula for homogeneous coins by Keller and Diaconis et al. keywords: coin toss, rigid body, limiting probability, dynamic equations ## 1 Introduction The motion of a coin toss can be modeled with a dynamical system governed by the laws of mechanics, determined entirely by the initial configuration. The outcomes can be random due to the variations in the initial parameters. Several physical mechanisms for randomness in coin tossing have been reported, see [1]. Keller considered a specific uniform coin with initial velocity and angular velocity imparted at the instant of tossing [2]. The uniform coin has inertia matrix given by \(\mathrm{diag}(I_{x},I_{y},I_{z})\) with \(I_{x}=I_{y}<I_{z}\), spins without air resistance and lands without bouncing. Assuming that the coin rotates about a horizontal axis that lies along a diameter of the coin, Keller proved that the limiting probability of heads is \(50\%\). Building upon Keller's work, Diaconis et al found dynamical bias in the toss of a uniform coin which depends on the angle \(\theta\) between the initial angular momentum \(\mathbf{L}\) and the normal of heads \(\mathbf{n}\) [3]. The probability of heads is \(50\%\) if and only if \(\theta=\frac{\pi}{2}\). If a coin starts out heads, it ends up heads more often. Diaconis et al also measured empirical distributions of \(\theta\) from real coin flip experiments and estimated that the probability of heads is \(50.83\%\) given the coin starts out heads. While Keller and Diaconis et al neglect air resistance and bouncing of the coin, Vulovic and Prange[4] analysed the effect of bouncing on the probability. They found that bouncing adds randomness to the toss which results in an increase in fairness. Yue and Zhang[5] take into account both bouncing and air resistance. The non-linearity of air resistance and bouncing causes acute sensitivity to initial conditions, adding randomness to the coin toss. On the other hand, Lindley[6] followed by Gelman et al[7] considered non-uniform coins with mass inhomogeneously distributed. They gave informal arguments without rigorous proofs suggesting that the inhomogeneity of the coin will not affect the probability if the coin is caught in hand. In this paper, we will investigate the dynamical bias of coins with symmetric inhomogeneous density, which are also referred to as non-uniform coins. Non-uniform coins are coins with inertia matrix given by \(\text{diag}(I_{x},I_{y},I_{z})\) where \(I_{x}<I_{y}<I_{z}\). We will neglect the influence of air resistance and bouncing, assuming that the coin rotates freely in the air. ## 2 Preliminaries We will first introduce three coordinate systems centered at the centroid of the coin with orthonormal basis: * Reference frame \(\{\mathbf{i},\mathbf{j},\mathbf{k}\}\) where \(-\mathbf{k}\) is the direction of gravity and \(\mathbf{i},\mathbf{j}\) are independent of time. 
* Body fixed frame \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{n}\}\) where \(\mathbf{n}\) is the normal to the heads of the coin. * Intermediate frame \(\{\mathbf{\varepsilon_{1}},\mathbf{\varepsilon_{2}},\mathbf{l}\}\) where \(\mathbf{\varepsilon_{1}}=\frac{\mathbf{k}-\mathbf{<k,l>}}{||\mathbf{k}-\mathbf{<k,l>}||}\), \(\mathbf{\varepsilon_{2}}=\mathbf{n}\times\mathbf{\varepsilon_{1}}\), as shown in Figure 1. We use superscript r refer to the coordinates of vectors in the reference frame, b for the body fixed frame, and no superscript for basis independent situations. The intermediate frame is only introduced for calculating the rotational matrix between the body fixed frame and the reference frame and its existence will be suppressed in section 3. Angular momentum theorem applies in the reference frame [8; 9]. It tells us the angular momentum \(\mathbf{L^{r}}\) is conserved in the reference frame since the coin is torque free if we ignore air resistance. Then the coordinates of \(\mathbf{L^{r}}\) and \(\mathbf{l^{r}}=\frac{\mathbf{L^{r}}}{||\mathbf{L^{r}}||}\) are time independent in the reference frame. But the coordinates in body frame are time dependent. Using spherical coordinates, we can write \[\mathbf{l^{r}}=(\cos\alpha\sin\beta,\sin\alpha\sin\beta,\cos\beta), \tag{1}\] and \[\mathbf{l^{b}}=(\cos\varphi_{t}\sin\theta_{t},\sin\varphi_{t}\sin\theta_{t},\cos \theta_{t}), \tag{2}\] where \(\alpha\), \(\beta\) and \(\theta_{t}\) are shown in Figure 1. Any orthonormal basis can be rotated to another orthonormal basis by a sequence of three Euler angles [10], precession pr, nutation nu, and rotation rt, as shown in the right panel of Figure 1. The rotation matrix (acting by left multiplication) in terms of Euler angles is \[A=\begin{pmatrix}C_{pr}C_{rt}-S_{pr}C_{nu}S_{rt}&-C_{pr}S_{rt}-S_{pr}C_{nu}C_{rt}&S_ {pr}S_{nu}\\ S_{pr}C_{rt}+C_{pr}C_{nu}S_{rt}&-S_{pr}S_{rt}+C_{pr}C_{nu}C_{rt}&-C_{pr}S_{nu}\\ S_{nu}S_{rt}&S_{nu}C_{rt}&C_{nu}\end{pmatrix}, \tag{3}\] where \(S\) and \(C\) denote the trigonometric functions \(\sin\) and \(\cos\), e.g. \(S_{nu},\ C_{nu}\) denote \(\sin(nu),\ \cos(nu)\), respectively. Intermediate frame acts as a bridge between the body frame and the reference frame. Let \(A_{1}\) be the rotation matrix described by Euler angles \(\{pr_{1},nu_{1},rt_{1}\}\) from \(\{\mathbf{i},\mathbf{j},\mathbf{k}\}\) to \(\{\mathbf{\varepsilon_{1}},\mathbf{\varepsilon_{2}},\mathbf{l}\}\), \(A_{2}\) be the rotation matrix described by Euler angles \(\{pr_{2},nu_{2},rt_{2}\}\) from \(\{\mathbf{\varepsilon_{1}},\mathbf{\varepsilon_{2}},\mathbf{l}\}\) to \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{n}\}\). We have, \[(\mathbf{i},\mathbf{j},\mathbf{k})\xrightarrow{(pr_{1},nu_{1},rt_{1})}(\mathbf{\varepsilon_{1 }},\mathbf{\varepsilon_{2}},\mathbf{l})\xrightarrow{(pr_{2},nu_{2},rt_{2})}(\mathbf{e_{1} },\mathbf{e_{2}},\mathbf{n}) \tag{4}\] From the definition of the basis \(\{\mathbf{\varepsilon_{1}},\mathbf{\varepsilon_{2}},\mathbf{l}\}\), we have \(\mathbf{k}=(\sin\beta)\mathbf{\varepsilon_{1}}+(\cos\beta)\mathbf{l}\). So the coordinates of \(\mathbf{k}\) in the system \(\{\mathbf{\varepsilon_{1}},\mathbf{\varepsilon_{2}},\mathbf{l}\}\) and \(\{\mathbf{i},\mathbf{j},\mathbf{k}\}\) are \(\big{(}\sin\beta,0,\cos\beta\big{)}\) and \((0,0,1)\), respectively. 
In addition, the coordinates of \(\mathbf{l}\) in the system \(\{\mathbf{i},\mathbf{j},\mathbf{k}\}\) and \(\{\mathbf{\varepsilon_{1}},\mathbf{\varepsilon_{2}},\mathbf{l}\}\) are \(\big{(}\cos\alpha\sin\beta,\sin\alpha\sin\beta,\cos\beta\big{)}\) and \((0,0,1)\), respectively. Then the rotation matrix \(A_{1}\) satisfies \[\begin{cases}(\sin\beta,0,\cos\beta)^{T}=A_{1}^{T}(0,0,1)^{T},\\ \\ (\cos\alpha\sin\beta,\sin\alpha\sin\beta,\cos\beta)^{T}=A_{1}(0,0,1)^{T}.\end{cases} \tag{5}\] The components of \(A_{1}\) have the form (3). These equations imply \((pr_{1},\ nu_{1},\ rt_{1})=(\alpha+\frac{\pi}{2},\beta,\frac{\pi}{2})\) or \((\alpha-\frac{\pi}{2},-\beta,-\frac{\pi}{2})\). Then we obtain \[A_{1}=\begin{pmatrix}-C_{\alpha}C_{\beta}&S_{\alpha}&C_{\alpha}S_{\beta}\\ -S_{\alpha}C_{\beta}&-C_{\alpha}&S_{\alpha}S_{\beta}\\ S_{\beta}&0&C_{\beta}\end{pmatrix} \tag{6}\] Let \(\psi_{t}\) be the dihedral angle \(\mathbf{k}-\mathbf{l}-\mathbf{n}\), i.e. the dynamic angle of the plane spanned by \(\mathbf{n}(t)\) and \(\mathbf{l}\) rotating around the plane spanned by \(\mathbf{k}\) and \(\mathbf{l}\), as shown in Figure 1. Then \(\psi_{t}\) mod \(2\pi\) is also the longitude of \(\mathbf{n}\) in the intermediate frame. Similarly, we have \((pr_{2},\ nu_{2},\ rt_{2})=(\psi_{t}+\frac{\pi}{2},\theta_{t},\frac{\pi}{2}-\varphi_{t})\) or \((\psi_{t}-\frac{\pi}{2},-\theta_{t},-\frac{\pi}{2}-\varphi_{t})\). Therefore \[A_{2}=\begin{pmatrix}-S_{\psi_{t}}S_{\varphi_{t}}-C_{\psi_{t}}C_{\theta_{t}}C_{\varphi_{t}}&S_{\psi_{t}}C_{\varphi_{t}}-C_{\psi_{t}}C_{\theta_{t}}S_{\varphi_{t}}&C_{\psi_{t}}S_{\theta_{t}}\\ C_{\psi_{t}}S_{\varphi_{t}}-S_{\psi_{t}}C_{\theta_{t}}C_{\varphi_{t}}&-C_{\psi_{t}}C_{\varphi_{t}}-S_{\psi_{t}}C_{\theta_{t}}S_{\varphi_{t}}&S_{\psi_{t}}S_{\theta_{t}}\\ S_{\theta_{t}}C_{\varphi_{t}}&S_{\theta_{t}}S_{\varphi_{t}}&C_{\theta_{t}}\end{pmatrix} \tag{7}\] ## 3 The evolution of normal vector \(\mathbf{n^{r}}\) ### Dynamic equations of angular momentum \(\mathbf{l^{b}}\) The coin rotates freely about the fixed centroid, subject to no net torque. This is a classical Euler-Poinsot problem. The dynamic equations are given in Landau[11] by \[\begin{cases}\frac{d}{dt}\mathbf{L^{b}}_{x}=(I_{z}^{-1}-I_{y}^{-1})\mathbf{L^{b}}_{y}\mathbf{L^{b}}_{z},\\ \frac{d}{dt}\mathbf{L^{b}}_{y}=(I_{x}^{-1}-I_{z}^{-1})\mathbf{L^{b}}_{z}\mathbf{L^{b}}_{x},\\ \frac{d}{dt}\mathbf{L^{b}}_{z}=(I_{y}^{-1}-I_{x}^{-1})\mathbf{L^{b}}_{x}\mathbf{L^{b}}_{y}.\end{cases} \tag{8}\] Or in terms of Euler angles, \[\begin{cases}\frac{d\psi_{t}}{dt}=||\mathbf{L}||\left(\frac{\cos^{2}\varphi_{t}}{I_{x}}+\frac{\sin^{2}\varphi_{t}}{I_{y}}\right),\\ \frac{d\varphi_{t}}{dt}=||\mathbf{L}||\cos\theta_{t}\left(\frac{\cos^{2}\varphi_{t}}{I_{x}}+\frac{\sin^{2}\varphi_{t}}{I_{y}}-\frac{1}{I_{z}}\right),\\ \frac{d\theta_{t}}{dt}=\frac{||\mathbf{L}||}{2}\left(I_{x}^{-1}-I_{y}^{-1}\right)\sin\theta_{t}\sin(2\varphi_{t}).\end{cases} \tag{9}\] Note that for non-uniform coins, there is no explicit analytical solution for \(\mathbf{L^{b}}(t)\). The rotational kinetic energy of the coin is given by \[E=\frac{1}{2}\bigg{(}\frac{(\mathbf{L^{b}}_{x})^{2}}{I_{x}}+\frac{(\mathbf{L^{b}}_{y})^{2}}{I_{y}}+\frac{(\mathbf{L^{b}}_{z})^{2}}{I_{z}}\bigg{)}=\frac{||\mathbf{L}||^{2}}{2}\bigg{(}\frac{(\mathbf{l^{b}}_{x})^{2}}{I_{x}}+\frac{(\mathbf{l^{b}}_{y})^{2}}{I_{y}}+\frac{(\mathbf{l^{b}}_{z})^{2}}{I_{z}}\bigg{)} \tag{10}\] which is constant with respect to \(t\). 
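As a quick numerical illustration (not part of the original derivation), the torque-free equations (8) are straightforward to integrate; the short Python sketch below uses the half-dollar inertia values quoted later in the paper and an arbitrarily chosen initial angular momentum, and checks that both \(||\mathbf{L}||\) and the kinetic energy (10) stay constant along the trajectory.

```python
# Minimal sketch: integrate the torque-free Euler equations (8) and verify the
# conserved quantities ||L|| and E of Eq. (10).  Inertia values are the
# illustrative half-dollar numbers used later; L0 is an arbitrary choice.
import numpy as np
from scipy.integrate import solve_ivp

Ix, Iy, Iz = 6.68, 7.35, 13.24                      # g*cm^2, Ix < Iy < Iz

def euler_rhs(t, L):
    Lx, Ly, Lz = L
    return [(1/Iz - 1/Iy) * Ly * Lz,                # Eq. (8)
            (1/Ix - 1/Iz) * Lz * Lx,
            (1/Iy - 1/Ix) * Lx * Ly]

L0 = 100.0 * np.array([0.3, 0.4, 0.5]) / np.linalg.norm([0.3, 0.4, 0.5])
sol = solve_ivp(euler_rhs, (0.0, 20.0), L0, rtol=1e-10, atol=1e-12)

def energy(L):                                      # Eq. (10)
    return 0.5 * (L[0]**2/Ix + L[1]**2/Iy + L[2]**2/Iz)

norms = np.linalg.norm(sol.y, axis=0)
energies = np.array([energy(sol.y[:, k]) for k in range(sol.y.shape[1])])
print(np.ptp(norms), np.ptp(energies))              # both spreads are ~0
```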
Therefore, \(\mathbf{l^{b}}\) must lie on the fixed ellipsoid \(\frac{x^{2}}{I_{x}}+\frac{y^{2}}{I_{y}}+\frac{z^{2}}{I_{z}}=\frac{2E}{||\mathbf{L}||^{2}}\) and the sphere \(x^{2}+y^{2}+z^{2}=1\) in the body fixed frame for all \(t\). The intersection is a closed curve as shown in Figure 2. So \(\mathbf{l^{b}}\) is periodic. In the special case of uniform coins, the angular velocity or angular momentum rotates and traces out a circle in the body-fixed frame. ### The normal \(\mathbf{n^{r}}\) Based on the evolution of \(\mathbf{L^{b}}\) in the body frame, and the motion of \(\mathbf{L^{b}}\) relative to the normal vector \(\mathbf{n}\), we can further derive the evolution of \(\mathbf{n^{r}}\) in the reference frame. **Theorem 3.1**.: _Given an initial angular momentum_ \[\mathbf{L^{r}}=||\mathbf{L}||\left(\cos\alpha\sin\beta,\sin\alpha\sin\beta,\cos\beta\right).\] _Then at time \(t\), the unit normal vector_ \[\mathbf{n^{r}}=\begin{pmatrix}-C_{\alpha}C_{\beta}C_{\psi_{t}}S_{\theta_{t}}+S_{\alpha}S_{\psi_{t}}S_{\theta_{t}}+C_{\alpha}S_{\beta}C_{\theta_{t}}\\ -S_{\alpha}C_{\beta}C_{\psi_{t}}S_{\theta_{t}}-C_{\alpha}S_{\psi_{t}}S_{\theta_{t}}+S_{\alpha}S_{\beta}C_{\theta_{t}}\\ S_{\beta}C_{\psi_{t}}S_{\theta_{t}}+C_{\beta}C_{\theta_{t}}\end{pmatrix},\] _where \((\varphi_{t},\theta_{t},\psi_{t})\) are determined by equations (9)._ Proof.: The theorem directly follows from equations (6), (7) and \(\mathbf{n^{r}}=A_{1}A_{2}(0,0,1)^{T}\). Figure 2: Two possible paths of \(\mathbf{l^{b}}\) for non-uniform coins. In Figure 3, the coin is heads up when \(\mathbf{n}^{\mathbf{r}}\) is in the northern hemisphere and tails up otherwise. The figures show that \(\mathbf{n}\) precesses around the angular momentum \(\mathbf{l}\). For uniform coins, \(\mathbf{n}\) spins around \(\mathbf{l}\) in a circle, and the angle \(\theta_{t}\) between \(\mathbf{l}\) and \(\mathbf{n}\) stays constant. For non-uniform coins, \(\mathbf{n}\) spins with nutation around \(\mathbf{l}\) in a band between two parallel circles. From Theorem 3.1, we obtain the criterion for the coin landing heads up: **Corollary 3.1.1**.: \(\mathbf{n}^{\mathbf{r}}(t)\) _satisfies_ \[\mathbf{n}^{\mathbf{r}}(t)\cdot\mathbf{k}=\cos\beta\cos\theta_{t}+\sin\beta\sin\theta_{t}\cos\psi_{t}, \tag{11}\] _and the coin is heads up at time \(t\) if and only if_ \[\sin\beta\sin\theta_{t}\cos\psi_{t}>-\cos\beta\cos\theta_{t}.\] Equation (11) is just the law of cosines for the spherical triangle (the shaded part in the left panel of Figure 1) formed by the endpoints of the unit vectors \(\mathbf{n},\mathbf{k}\) and \(\mathbf{l}\). **Remark**.: _(property on precession \(\psi_{t}\)) For uniform coins with \(I_{x}=I_{y}\), \(\mathbf{n}\) precesses around \(\mathbf{l}\) at a constant speed \(||\mathbf{L}||I_{x}^{-1}\). However, for non-uniform coins, \(\mathbf{n}\) precesses around \(\mathbf{l}\) at a speed varying from \(||\mathbf{L}||I_{y}^{-1}\) to \(||\mathbf{L}||I_{x}^{-1}\)._ **Remark**.: _(property on nutation \(\theta_{t}\)) For uniform coins, \(\theta_{t}\) is constant. 
For non-uniform coins, \(\theta_{t}\) varies periodically from \(\theta_{m}\) to \(\theta_{M}\), which are given by:_ \[\theta_{m} =\arccos(\sqrt{c_{2}}),\ \theta_{M}=\pi-\arccos(\sqrt{c_{2}}), \text{if }c_{1}<0, \tag{12}\] \[\theta_{m} =\arccos(\sqrt{c_{2}}),\ \theta_{M}=\arccos(\sqrt{c_{1}}), \text{if }c_{1}\geq 0\text{ and }\theta_{0}\in[0,\frac{\pi}{2}], \tag{13}\] \[\theta_{m} =\pi-\arccos(\sqrt{c_{1}}),\ \theta_{M}=\pi-\arccos(\sqrt{c_{2}}), \text{if }c_{1}\geq 0\text{ and }\theta_{0}\in(\frac{\pi}{2},\pi]. \tag{14}\] _with \(c_{1},c_{2}\) given by (15), (16)._ Proof.: Since \(||\mathbf{l}^{\mathbf{b}}||=1\) and \(\mathbf{l}^{\mathbf{b}}\) satisfies equation (10), the Lagrange multiplier method shows that the extrema of \(\mathbf{l}_{z}^{\mathbf{b}}\), located on \(\mathbf{l}_{x}^{\mathbf{b}}=0\) or \(\mathbf{l}_{y}^{\mathbf{b}}=0\), are given by \[c_{1}=\cos^{2}\theta_{0}-\frac{I_{x}^{-1}-I_{y}^{-1}}{I_{y}^{-1}-I_{z}^{-1}}\cos^{2}\varphi_{0}\sin^{2}\theta_{0} \tag{15}\] \[c_{2}=\cos^{2}\theta_{0}+\frac{I_{x}^{-1}-I_{y}^{-1}}{I_{x}^{-1}-I_{z}^{-1}}\sin^{2}\varphi_{0}\sin^{2}\theta_{0} \tag{16}\] Figure 3: The path of \(\mathbf{n}^{\mathbf{r}}\) for uniform (left) and non-uniform (right) coin. Since \(\mathbf{l}_{z}^{\mathbf{b}}=\cos\theta_{t}\), in the uniform case \(c_{1}=c_{2}\), so \(\theta_{t}\) is constant. For the non-uniform case, we have the desired result by considering the three situations: \(c_{1}<0\); \(c_{1}\geq 0\) and \(\theta_{0}\in[0,\frac{\pi}{2}]\); \(c_{1}\geq 0\) and \(\theta_{0}\in(\frac{\pi}{2},\pi]\). \(\theta_{m}\) and \(\theta_{M}\) are independent of \(||\mathbf{L}||\). They are determined by \(I_{x},I_{y},I_{z}\) and the initial angles \((\varphi_{0},\theta_{0})\). For non-uniform coins, \(\theta_{m}\) and \(\theta_{M}\) are either supplementary, both acute, or both obtuse. When the initial \(\theta_{0}\) is close enough to \(\frac{\pi}{2}\), i.e. contained in \[S_{F}=\{(\varphi_{0},\theta_{0})|\cot^{2}\theta_{0}<\frac{I_{x}^{-1}-I_{y}^{-1}}{I_{y}^{-1}-I_{z}^{-1}}\cos^{2}\varphi_{0}\}, \tag{17}\] then \(\theta_{m}\) and \(\theta_{M}\) are supplementary. Under this condition, the two boundary circles perpendicular to \(\mathbf{l}\) corresponding to \(\theta_{m}\) and \(\theta_{M}\) in Figure 3 are centered symmetrically around the spherical center. Let us denote the fair region as the set of initial parameters \((\varphi_{0},\theta_{0})\) such that the proportions of the "heads" zone and the "tails" zone of \(\mathbf{n}^{\mathbf{r}}\) are equal (to 50%). \(S_{F}\) in (17) is the fair region for non-uniform coins. On the other hand, for uniform coins, the proportion of the "heads" zone is 50% if and only if \(\theta_{0}=\frac{\pi}{2}\). The fair region is shown in Figure 4. The probability of heads (which we will formulate in section 4) is approximately the proportion of the "heads" zone of \(\mathbf{n}^{\mathbf{r}}\). So we can assume the coin is fair when the initial parameters \((\varphi_{0},\theta_{0})\) are inside the fair region. ## 4 Probability of Heads As the orientation of the coin is determined by \(||\mathbf{L}||\), \((\varphi_{0},\theta_{0})\) and \(t\), we will define the probability of heads \(p\) as the limiting probability of \(\mathbf{n}_{z}^{\mathbf{r}}>0\) as \(t\to\infty\) given \((\varphi_{0},\theta_{0})\) and a distribution on \(||\mathbf{L}||\). When \((\varphi_{0},\theta_{0})\) is in the fair region, referred to as the fair case, we assume \(p\) is 50%. So let us now consider the situation where \((\varphi_{0},\theta_{0})\) is outside the fair region. 
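Before doing so, note that the quantities above are easy to evaluate numerically. The following sketch is only an illustration (the inertia values are the half-dollar numbers used later in the paper): it computes \(c_{1},c_{2}\) from (15)–(16), the nutation range \([\theta_{m},\theta_{M}]\) from (12)–(14), and the fair-region condition (17) for a given \((\varphi_{0},\theta_{0})\).

```python
# Sketch: evaluate c1, c2 (Eqs. 15-16), the nutation range (Eqs. 12-14) and the
# fair-region test (Eq. 17).  Inertia values are illustrative, not prescriptive.
import numpy as np

def nutation_range(phi0, theta0, Ix=6.68, Iy=7.35, Iz=13.24):
    a = (1/Ix - 1/Iy) / (1/Iy - 1/Iz)
    b = (1/Ix - 1/Iy) / (1/Ix - 1/Iz)
    c1 = np.cos(theta0)**2 - a * np.cos(phi0)**2 * np.sin(theta0)**2
    c2 = np.cos(theta0)**2 + b * np.sin(phi0)**2 * np.sin(theta0)**2
    fair = 1/np.tan(theta0)**2 < a * np.cos(phi0)**2          # Eq. (17)
    if c1 < 0:                                                 # Eq. (12): supplementary
        tm, tM = np.arccos(np.sqrt(c2)), np.pi - np.arccos(np.sqrt(c2))
    elif theta0 <= np.pi/2:                                    # Eq. (13): both acute
        tm, tM = np.arccos(np.sqrt(c2)), np.arccos(np.sqrt(c1))
    else:                                                      # Eq. (14): both obtuse
        tm, tM = np.pi - np.arccos(np.sqrt(c1)), np.pi - np.arccos(np.sqrt(c2))
    return c1, c2, tm, tM, fair

print(nutation_range(phi0=0.3, theta0=np.pi/4))   # outside the fair region
print(nutation_range(phi0=0.3, theta0=1.45))      # theta0 close to pi/2: fair
```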
Notice that in this situation, either \(\theta_{t}<\frac{\pi}{2}\) for all \(t\) as show in the left of Figure 2, which we refer to as the acute case, or \(\theta_{t}>\frac{\pi}{2}\) for all \(t\), which we refer to as the obtuse case. The key to obtaining \(p\) is to obtain the limiting joint distribution of \((\psi_{t},\varphi_{t})\). However, it is suffice to obtain the limiting distribution of \((\psi_{t}\ \mathrm{mod}\ 2\pi,\varphi_{t}\ \mathrm{mod}\ 2\pi)\). This will rely on the following lemma about limit distributions. **Lemma 4.1**.: _If \(X\) is a random variable with characteristic function vanishing at infinity. Let \(g_{1},...,g_{n}:\mathbb{R}\to\mathbb{R}\) be real valued continuous function such that \(\lim_{t\to\infty}|\sum_{i=1}^{n}m_{i}g_{i}(t)|=\infty\) for any \((m_{1},..,m_{n})\in\mathbb{Z}^{n}\setminus\{0\}\). Then \((g_{1}(t)X\ \mathrm{mod}\ 1,...,g_{n}(t)X\ \ \mathrm{mod}\ 1)\) converges in distribution to \(\mathcal{U}[0,1]^{n}\) as \(t\to\infty\)._ Figure 4: Fair region of initial parameter \((\varphi_{0},\theta_{0})\)Dotted line: Uniform coin Gray region: non-uniform coin Proof.: Let us consider the case where \(n=2\), cases where \(n>2\) follows similarly. Let \(Y_{1}=(g_{1}(t)X\bmod 1)\) and \(Y_{2}=(g_{2}(t)X\bmod 1)\). The cumulative distribution function and characteristic function of \(X\) are denoted by \(F(x)\) and \(\Phi_{X}\), respectively. The characteristic function \(\Phi_{Y}\) of \((Y_{1},Y_{2})\) is determined by \(\Phi_{X}\). Consider \(\Phi_{Y}(2\pi m_{1},2\pi m_{2})\) for any \((m_{1},m_{2})\in\mathbb{Z}^{2}\), which are called Fourier coefficients in Engel [12], we have \[\Phi_{Y}(2\pi m_{1},2\pi m_{2})\] \[= \mathbb{E}\left[\exp\{i(2\pi m_{1}Y_{1}+2\pi m_{2}Y_{2})\}\right]\] \[= \int_{-\infty}^{\infty}\exp\{2\pi i(m_{1}Y_{1}+m_{2}Y_{2})\}dF(x)\] \[= \sum_{k_{1},k_{2}\in\mathbb{Z}}\int_{I_{k_{1},k_{2}}}\exp\{2\pi i (m_{1}Y_{1}+m_{2}Y_{2})\}dF(x)\] \[= \sum_{k_{1},k_{2}\in\mathbb{Z}}\int_{I_{k_{1},k_{2}}}\exp\{2\pi i [m_{1}(g_{1}(t)X-k_{1})+m_{2}(g_{2}(t)X-k_{2})]\}dF(x)\] \[= \sum_{k_{1},k_{2}\in\mathbb{Z}}\int_{I_{k_{1},k_{2}}}\exp\{2\pi i (m_{1}g_{1}(t)X+m_{2}g_{2}(t)X)\}dF(x)\] \[= \int_{-\infty}^{\infty}\exp\{2\pi i(m_{1}g_{1}(t)X+m_{2}g_{2}(t) X)\}dF(x)\] \[= \Phi_{X}[m_{1}g_{1}(t)+m_{2}g_{2}(t)]\] where \(I_{k_{1},k_{2}}=[k_{1},k_{1}+1)\times[k_{2},k_{2}+1)\). When \(t\to\infty\), \(m_{1}g_{1}(t)+m_{2}g_{2}(t)\to\infty\) and \(\Phi_{X}[m_{1}g_{1}(t)+m_{2}g_{2}(t)]\to 0\). So \[\lim_{t\to\infty}\Phi_{Y}(2\pi m_{1},2\pi m_{2})=0 \tag{18}\] for all \((m_{1},m_{2})\in\mathbb{Z}^{2}\setminus\{(0,0)\}\). Since \(Y\) is supported by \([0,1]\times[0,1]\), and the Fourier coefficients of \(\mathcal{U}[0,1]^{2}\) are zero, according to page 361 of Billingsley[13], \[Y\xrightarrow{\mathcal{D}}\mathcal{U}[0,1]^{2} \tag{19}\] which completes the proof. **Lemma 4.2**.: _Suppose \(\theta_{t}<\frac{\pi}{2}\) for all \(t\) or \(\theta_{t}>\frac{\pi}{2}\) for all \(t\). For all Schwartz densities of \(||\mathbf{L}||\), all initial parameters \((\varphi_{0},\theta_{0},I_{x},I_{y},I_{z})\) excluding a measure 0 set, when \(t\to\infty\), we have \((\psi_{t}\text{ mod }2\pi,\varphi_{t}\text{ mod }2\pi)\xrightarrow{ \mathcal{D}}\mathcal{U}[0,2\pi]^{2}\) in distribution._ Proof.: Recall that \((\mathbf{l}_{x}^{\mathbf{b}},\mathbf{l}_{y}^{\mathbf{b}},\mathbf{l}_{z}^{\mathbf{b}})\) lies on a closed curve and is periodic with some period \(T\) and \((\varphi_{t},\psi_{t})\) is its spherical coordinates. So \((\varphi_{t},\psi_{t})\) has period \(T\). 
Let us denote \(c_{1}\) as the maximum of \(\mathbf{l}_{z}^{\mathbf{b}}\) on the curve and \(c_{2}\) as the minimum of \(\mathbf{l}_{z}^{\mathbf{b}}\) on the curve. Define \[h(t):=\int_{0}^{t}\cos\theta_{\tau}\left(\frac{\cos^{2}\varphi_{\tau}}{I_{x}} +\frac{\sin^{2}\varphi_{\tau}}{I_{y}}-\frac{1}{I_{z}}\right)d\tau, \tag{20}\] and \[g(t):=\int_{0}^{t}\left(\frac{\cos^{2}\varphi_{\tau}}{I_{x}}+\frac{\sin^{2} \varphi_{\tau}}{I_{y}}\right)d\tau. \tag{21}\] From (9), we have \[\varphi_{t}=\varphi_{0}+||\mathbf{L}||h(t),\quad\psi_{t}=\psi_{0}+||\mathbf{L}||g(t) \tag{22}\] Now since \(||\mathbf{L}||\) is Schwartz, the characteristic function vanishes at infinity. By Lemma 4.1, it is suffice for us to show \[\lim_{t\to\infty}|m_{1}h(t)+m_{2}g(t)|=\infty,\quad\forall m_{1},m_{2}\in\frac {1}{2\pi}\mathbb{Z}^{2}\setminus\{0\} \tag{23}\] which is equivalent to the condition \[m_{1}h(T)+m_{2}g(T)\neq 0,\quad\forall m_{1},m_{2}\in\frac{1}{2\pi}\mathbb{Z}^{2} \setminus\{0\} \tag{24}\] since \(h^{\prime}(t)\) and \(g^{\prime}(t)\) is periodic with period \(T\). By (10), we get the equality: \[\frac{\sin^{2}\theta_{t}}{\sin^{2}\theta_{0}}=\frac{\frac{2E}{||\mathbf{L}||^{2}}- I_{z}^{-1}}{I_{x}^{-1}-I_{z}^{-1}-(I_{x}^{-1}-I_{y}^{-1})\sin^{2}\varphi_{t}}= \frac{I_{x}^{-1}-I_{z}^{-1}-(I_{x}^{-1}-I_{y}^{-1})\sin^{2}\varphi_{0}}{I_{x}^{ -1}-I_{z}^{-1}-(I_{x}^{-1}-I_{y}^{-1})\sin^{2}\varphi_{t}}. \tag{25}\] Using equation (25), we can rewrite \(m_{1}h^{\prime}(t)+m_{2}g^{\prime}(t)\) as \[m_{1}h^{\prime}(t)+m_{2}g^{\prime}(t)=\frac{a_{0}(m_{1}\cos\theta_{t}+m_{2})} {\sin^{2}\theta_{t}}+\frac{m_{2}}{I_{z}}=\frac{a_{0}(m_{1}\mathbf{l}_{z}^{\mathbf{b}} +m_{2})}{1-(\mathbf{l}_{z}^{\mathbf{b}})^{2}}+\frac{m_{2}}{I_{z}} \tag{26}\] where \(a_{0}=\sin^{2}\theta_{0}\left[I_{x}^{-1}-I_{z}^{-1}-(I_{x}^{-1}-I_{y}^{-1}) \sin^{2}\varphi_{0}\right]\). To make a change of variable, we need \(\frac{dz(t)}{dt}\). From (9) and the relation \[(\mathbf{l}_{x}^{\mathbf{b}})^{2}=\frac{a_{0}-I_{y}^{-1}+(I_{y}^{-1}-I_{z}^{-1})(\mathbf{ l}_{z}^{\mathbf{b}})^{2}}{I_{x}^{-1}-I_{y}^{-1}},\quad(\mathbf{l}_{y}^{\mathbf{b}})^{2}=\frac{-a_ {0}+I_{x}^{-1}-(I_{x}^{-1}-I_{z}^{-1})(\mathbf{l}_{z}^{\mathbf{b}})^{2}}{I_{x}^{-1}-I_{ y}^{-1}} \tag{27}\] given by (10), we have \[\frac{d\mathbf{l}_{z}^{\mathbf{b}}}{dt}=\pm||\mathbf{L}||\sqrt{A(\mathbf{l}_{z}^{\mathbf{b}})^{4} +B(\mathbf{l}_{z}^{\mathbf{b}})^{2}+C} \tag{28}\] for \[A= -(I_{x}^{-1}-I_{z}^{-1})(I_{y}^{-1}-I_{z}^{-1}), \tag{29}\] \[B= \frac{2I_{z}-I_{x}-I_{y}}{I_{x}I_{y}I_{z}}-a_{0}(I_{x}^{-1}+I_{y}^ {-1}-2I_{z}^{-1}),\] (30) \[C= -(a_{0}-I_{x}^{-1})(a_{0}-I_{y}^{-1}), \tag{31}\] where \(+\) is taken when \(\mathbf{l}_{z}^{\mathbf{b}}\) is going from \(c_{1}\) to \(c_{2}\) and \(-\) is taken when \(\mathbf{l}_{z}^{\mathbf{b}}\) is going from \(c_{2}\) to \(c_{1}\). So with a change of variables from \(t\) to \(\mathbf{l}_{z}^{\mathbf{b}}\), we can finally express the condition in (24) as \[m_{1}h(T)+m_{2}g(T)=\frac{2}{||\mathbf{L}||}\int_{c_{1}}^{c_{2}}\frac{\frac{a_{0} (m_{1}z+m_{2})}{1-z^{2}}+\frac{m_{2}}{I_{z}}}{\sqrt{Az^{4}+Bz^{2}+C}}dz\neq 0, \quad\forall(m_{1},m_{2})\in\frac{1}{2\pi}\mathbb{Z}^{2}\setminus\{0\} \tag{32}\] It is very rare that one of the countable the integral equations \[\int_{c_{1}}^{c_{2}}\frac{\frac{a_{0}(m_{1}z+m_{2})}{1-z^{2}}+\frac{m_{2}}{I_ {z}}}{\sqrt{Az^{4}+Bz^{2}+C}}dz=0 \tag{33}\] has a solution. 
We will assume that the set of \((\varphi_{0},\theta_{0},I_{x},I_{y},I_{z})\) such that (33) has a solution for some \((m_{1},m_{2})\in\frac{1}{2\pi}\mathbb{Z}^{2}\setminus\{0\}\) has Lebesgue measure \(0\) in \(\mathbb{R}^{5}\). The proof will be left as an open problem. So for all \((\varphi_{0},\theta_{0},I_{x},I_{y},I_{z})\) excluding a measure \(0\) set, the condition of Lemma 4.1 is satisfied. According to the lemma, \[(\psi_{t}\ {\rm mod}\ 2\pi,\varphi_{t}\ {\rm mod}\ 2\pi)\xrightarrow{\mathcal{D}}\mathcal{U}[0,2\pi]^{2} \tag{34}\] as \(t\to\infty\), which completes the proof. Let us assume from now on that the parameters \((\varphi_{0},\theta_{0},I_{x},I_{y},I_{z})\) are not in the measure \(0\) set of Lemma 4.2. Now it remains for us to find the distribution of \(\theta_{t}\) before we can calculate the probability of heads. The following lemma obtains the distribution of \(\theta_{t}\) via its relation with \(\varphi_{t}\): **Lemma 4.3**.: _Suppose \(\theta_{t}<\frac{\pi}{2}\) for all \(t\) or \(\theta_{t}>\frac{\pi}{2}\) for all \(t\). Then \(\theta_{t}\) is a function of \(\cos(2\varphi_{t})\) independent of \((\psi_{t}\text{ mod }2\pi)\). When \(t\to\infty\), we have \(\csc^{2}\theta_{t}\xrightarrow{\mathcal{D}}\text{Arcsine}(a,b)\) for some \(a,b\), where \(\text{Arcsine}(a,b)\) denotes the Arcsine distribution on \((a,b)\). Moreover, the limiting pdf of \(\theta_{t}\) as \(t\to\infty\) is given by_ \[f_{\theta}(y)=\frac{2|\cot y|}{\pi\sqrt{|(1-\csc^{2}\theta_{m}\sin^{2}y)(\csc^{2}\theta_{M}\sin^{2}y-1)|}}. \tag{35}\] Proof.: By (25) we get \[\sin^{2}\theta_{t}=\frac{\sin^{2}\theta_{0}\left[I_{x}^{-1}-I_{z}^{-1}-(I_{x}^{-1}-I_{y}^{-1})\sin^{2}\varphi_{0}\right]}{I_{x}^{-1}-I_{z}^{-1}-(I_{x}^{-1}-I_{y}^{-1})\sin^{2}\varphi_{t}}=\frac{1}{k_{1}\cos(2\varphi_{t})+k_{2}} \tag{36}\] for some \(k_{1},k_{2}\) determined by \(I_{x},I_{y},I_{z}\), \(\varphi_{0}\) and \(\theta_{0}\). By Lemma 4.2, we know that \(\varphi_{t}\xrightarrow{\mathcal{D}}\mathcal{U}[0,2\pi]\) as \(t\to\infty\). So we have \[\csc^{2}\theta_{t}\xrightarrow{\mathcal{D}}\text{Arcsine}(k_{2}-k_{1},k_{2}+k_{1}) \tag{37}\] as \(t\to\infty\). Comparing the expression of \(k_{1},k_{2}\) with the formula of \(\theta_{m},\theta_{M}\), we conclude that in the acute case: \[\csc^{2}\theta_{t}\xrightarrow{\mathcal{D}}\text{Arcsine}(\csc^{2}\theta_{M},\csc^{2}\theta_{m}), \tag{38}\] and in the obtuse case: \[\csc^{2}\theta_{t}\xrightarrow{\mathcal{D}}\text{Arcsine}(\csc^{2}\theta_{m},\csc^{2}\theta_{M}), \tag{39}\] as \(t\to\infty\). So in both acute and obtuse cases, the limiting pdf of \(\csc^{2}\theta_{t}\) is given by: \[f(x)=\frac{1}{\pi\sqrt{|(x-\csc^{2}\theta_{m})(\csc^{2}\theta_{M}-x)|}}. \tag{40}\] So the limiting pdf of \(\theta_{t}\) is given by: \[f_{\theta}(y)=\frac{2|\cot y|}{\pi\sqrt{|(1-\csc^{2}\theta_{M}\sin^{2}y)(\csc^{2}\theta_{m}\sin^{2}y-1)|}}. \tag{41}\] which completes the proof. Now with the results of Lemma 4.2 and Lemma 4.3, we can start calculating the probability of heads. **Theorem 4.4**.: _For all Schwartz densities of \(||\mathbf{L}||\), the limiting probability of heads as \(t\to\infty\) with \((\varphi_{0},\theta_{0})\) fixed, is given by_ \[p(\beta,\varphi_{0},\theta_{0})=\frac{1}{2}+\frac{1}{\pi}\int_{\theta_{m}(\varphi_{0},\theta_{0})}^{\theta_{M}(\varphi_{0},\theta_{0})}\arcsin(\min\{1,\cot\beta\cot y\})f_{\theta}(y)dy, \tag{42}\] _where \(f_{\theta}(y)\) is given in Lemma 4.3. 
In the special case when \(I_{x}=I_{y}<I_{z}\) (uniform coins),_ \[p(\beta,\varphi_{0},\theta_{0})=p(\beta,\theta_{0})=\frac{1}{2}+\frac{1}{\pi}\arcsin(\min\{1,\cot\beta\cot\theta_{0}\}). \tag{43}\] Proof.: Let us first consider the case where \(\theta_{t}<\frac{\pi}{2}\) for all \(t\) or \(\theta_{t}>\frac{\pi}{2}\) for all \(t\). By Lemma 4.3, the limiting pdf of \(\theta_{t}\) is \(f_{\theta}(y)\), given in (41). By Lemma 4.2, the limiting pdf of \(\psi_{t}\) mod \(2\pi\) is \(f_{\psi}(x)=\frac{1}{2\pi}\). Let us define \[D=\{x\in(0,2\pi),y\in(\theta_{m},\theta_{M})|\cos(x)>-\cot(\beta)\cot(y)\},\] \[D_{1}=\{x\in(0,\pi),y\in(\theta_{m},\theta_{M})|\cos(x)>-\cot(\beta)\cot(y)\}.\] By Corollary 3.1.1, \(D\) is the region where the coin is heads up. Thus the limiting probability of heads is given by \[p(\beta,\varphi_{0},\theta_{0})= \iint_{D}f_{\psi}(x)f_{\theta}(y)dxdy\] \[= 2\iint_{D_{1}}f_{\psi}(x)f_{\theta}(y)dxdy\] \[= \frac{1}{\pi}\iint_{D_{1}}f_{\theta}(y)dxdy\] \[= \frac{1}{\pi}\int_{\theta_{m}}^{\theta_{M}}\arccos(-\min\{1,\cot\beta\cot y\})f_{\theta}(y)dy\] \[= \frac{1}{2}+\frac{1}{\pi}\int_{\theta_{m}}^{\theta_{M}}\arcsin(\min\{1,\cot\beta\cot y\})f_{\theta}(y)dy\] as desired. When \((\varphi_{0},\theta_{0})\) lies in the fair region, \(\theta_{m}\) and \(\theta_{M}\) are supplementary and \(p=\frac{1}{2}\). Notice that (42) still holds, as \[\int_{\theta_{m}}^{\theta_{M}}\arcsin(\min\{1,\cot\beta\cot y\})f_{\theta}(y)dy=0 \tag{44}\] since \[\arcsin(\min\{1,\cot\beta\cot y\})f_{\theta}(y)=-\arcsin(\min\{1,\cot\beta\cot(\pi-y)\})f_{\theta}(\pi-y). \tag{45}\] So (42) holds for all situations of \(\theta_{t}\). In the special case when \(I_{x}=I_{y}<I_{z}\), \(\theta_{m}=\theta_{M}\) and the integral is taken against a Dirac delta distribution at \(\theta_{0}\), which gives us \[p(\beta,\varphi_{0},\theta_{0})=p(\beta,\theta_{0})=\frac{1}{2}+\frac{1}{\pi}\arcsin(\min\{1,\cot\beta\cot\theta_{0}\}) \tag{46}\] as desired. We can also see from Equation (45) that it is natural to assume the probability is \(50\%\) in the fair case. Note that \(\mathbf{l^{b}}\) traces out a curve symmetric about the x-y plane, as shown on the right of Figure 2. If the limiting distribution of \(\mathbf{l^{b}_{z}}\) as \(t\to\infty\) exists, then since \(\mathbf{l^{b}_{z}}(t\mod T)=-\mathbf{l^{b}_{z}}(-t\mod T)\), the limiting distribution of \(\mathbf{l^{b}_{z}}\) will be symmetric about the x-y plane. Thus by Equation (45) and \(\theta_{t}=\arccos(\mathbf{l^{b}_{z}})\), the integral in (44) is \(0\), giving us a probability of heads of \(50\%\). Usually, coin flips tend to start with \(\beta=\theta_{0}\), i.e., with the face of the coin pointing straight up. An immediate corollary to Theorem 4.4 is **Corollary 4.4.1**.: _With the assumptions of Theorem 4.4 and further assuming that heads is facing straight up at the initial position, the limiting probability of heads as \(t\to\infty\) with \((\varphi_{0},\theta_{0})\) fixed, is given by_ \[p(\varphi_{0},\theta_{0})=\frac{1}{2}+\frac{1}{\pi}\int_{\theta_{m}(\varphi_{0},\theta_{0})}^{\theta_{M}(\varphi_{0},\theta_{0})}\arcsin(\min\{1,\cot\theta_{0}\cot y\})f_{\theta}(y)dy, \tag{47}\] _In the special case when \(I_{x}=I_{y}<I_{z}\) (uniform coins),_ \[p(\varphi_{0},\theta_{0})=p(\theta_{0})=\frac{1}{2}+\frac{1}{\pi}\arcsin(\min\{1,\cot^{2}\theta_{0}\}). \tag{48}\] Formula (43) for uniform coins is also shown in Theorem 2 of Diaconis et al.[3] Furthermore, if the flip is a Keller flip (\(\theta_{0}=\frac{\pi}{2}\)), then \(p\) is just \(\frac{1}{2}\). 
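For concreteness, the limiting probability can also be evaluated numerically. The sketch below is only an illustration: it samples \(\varphi\) uniformly on \([0,2\pi]\) (Lemma 4.2), maps it to \(\theta\) through relation (25), and averages the integrand of Corollary 4.4.1. The inertia values are the half-dollar numbers used in the next section, and the Monte-Carlo routine is meant for initial conditions outside the fair region (inside it, \(p=1/2\) by the symmetry argument above).

```python
# Monte-Carlo sketch of the heads probability of Corollary 4.4.1 (beta = theta0),
# valid outside the fair region (17).  Inertia values are illustrative.
import numpy as np

def p_heads(theta0, phi0, Ix=6.68, Iy=7.35, Iz=13.24, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2*np.pi, n)                     # Lemma 4.2
    num = 1/Ix - 1/Iz - (1/Ix - 1/Iy) * np.sin(phi0)**2
    den = 1/Ix - 1/Iz - (1/Ix - 1/Iy) * np.sin(phi)**2
    sin2 = np.sin(theta0)**2 * num / den                   # relation (25)/(36)
    theta = np.arcsin(np.sqrt(sin2))                       # acute branch
    if theta0 > np.pi/2:
        theta = np.pi - theta                              # obtuse branch
    g = np.arcsin(np.clip(1/(np.tan(theta0)*np.tan(theta)), -1.0, 1.0))
    return 0.5 + g.mean() / np.pi

def p_heads_uniform(theta0):                               # Eq. (48)
    return 0.5 + np.arcsin(min(1.0, 1/np.tan(theta0)**2)) / np.pi

print(p_heads_uniform(np.pi/4))           # uniform coin, theta0 = pi/4 -> 1.0
print(p_heads(theta0=np.pi/4, phi0=0.3))  # non-uniform coin: strictly below 1
```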
We can assume that a normal coin toss starts with heads facing straight up. So it remains for us to find the distribution of the initial parameters \((\varphi_{0},\theta_{0})\). We use the \(\theta_{0}\) values from the \(27\) real flip experiments by Diaconis et al[3] as the empirical distribution of \(\theta_{0}\). We will assume that \(\varphi_{0}\) is uniformly distributed in \([0,2\pi)\). Then, using the probability formula in Corollary 4.4.1, we can calculate the probability of heads of a normal non-uniform coin toss numerically. Like Diaconis et al[3], we use an American half dollar, which has \(I_{x}=6.68g\cdot cm^{2}\) and \(I_{z}=13.24g\cdot cm^{2}\), and assume that \(I_{y}=7.35g\cdot cm^{2}\). As a result, we obtain the probability of heads of a non-uniform coin \(P=50.45\%\). This is closer to \(50\%\) compared to the probability of the uniform coin calculated by Diaconis et al [3], which is \(50.83\%\). This shows that non-uniform coins are fairer than uniform coins. ## 5 Conclusions While coin-tossing is often used to make a decision between two options, the tossed coins are usually not absolutely uniform in our daily life. In this work, we investigated the dynamic behavior of non-uniform coins whose inertia matrix is given by \(\text{diag}(I_{x},I_{y},I_{z})\) where \(I_{x}<I_{y}<I_{z}\). These include homogeneous coins with axially symmetric convex parts on the surface, such as elliptical, rectangular, or oblong shapes, as well as symmetric inhomogeneous coins. We expressed the status, heads or tails, in terms of the initial direction of the angular momentum and the precession and nutation of the normal vector. We provided a calculation of the limiting probability of heads as \(t\to\infty\), with a fixed initial direction of the angular momentum and a distribution on its magnitude. The results from Keller[2] and Diaconis[3] are special cases of our study. In Figure 4, the fair region of initial parameters \((\varphi_{0},\theta_{0})\) of a non-uniform coin has positive area, while the fair region of uniform coins is only a line in \(\mathbb{R}^{2}\). The area of the fair region for a non-uniform coin depends on \(I_{x},I_{y},I_{z}\). So there are many more initial conditions under which the non-uniform coin is fair while the uniform coin is not. In addition, Equation (48) implies that the probability of heads for a uniform coin started with heads straight up is \(100\%\) if \(\theta_{0}\leq\frac{\pi}{4}\). Figure 5 shows a situation (\(\beta=\theta_{0}=\pi/4\)) where the non-uniform coin is clearly fairer than the uniform coin. The two panels show the possible regions of the normal vector, with the same angular momentum, for the uniform and non-uniform coin, respectively. Note that in the left panel the possible region of the unit normal vector of the uniform coin lies inside the northern hemisphere. The coin never turns over and therefore the probability of heads is \(100\%\). But note that in the right panel, there is a small region inside the southern hemisphere due to nutation. Intuitively, we see there should be a small probability of the coin landing tails. Corollary 4.4.1 proves that in this situation, the probability of heads is strictly less than \(100\%\) for non-uniform coins. Figure 5: The path of \(\mathbf{n}\) for uniform (left) and non-uniform (right) coin when \(\beta=\theta_{0}=\pi/4\). ## Acknowledgements I would like to express my deepest thanks to Prof. Persi Diaconis from Stanford University for his guidance and mentorship on this research. 
His lectures on Mathematics and Statistics of Gambling greatly inspired me and gave me valuable insights on this topic.
2302.12660
Optimizing Population Accumulation in Quantum States Using Microwave Spectroscopy
We present an all-optical method for efficiently preparing cold atoms in a desired Zeeman state, either on the magnetically insensitive clock state (m_F=0) or a particular state suitable for processing or storing quantum information. By applying the theoretical fitting model to a single microwave spectrum, we can individually determine the population distribution, microwave polarization ratio, and microwave Rabi frequency. We can dynamically track the population distribution during the optical pumping process using this real-time microwave spectrum. In a steady-state condition, a simplified model, which considers resonant and off-resonant transitions, indicates that there is an upper limit to the purity under a weak optical pumping field. The population purity up to 96(2)% or 98(1)% on the desired quantum state has been achieved after optimizing the intensity and polarization of the optical pumping field. Our study provides valuable information and potential applications in precision measurement and quantum computation research.
Jia-You Liou, Chi-En Wu, Hsuan-Jui Su, Yi-Hsin Chen
2023-02-24T14:26:10Z
http://arxiv.org/abs/2302.12660v1
# Optimizing Population Accumulation in Quantum States Using Microwave Spectroscopy ###### Abstract We present an all-optical method for efficiently preparing cold atoms in a desired Zeeman state, either on the magnetically insensitive clock state (\(m_{F}=0\)) or a particular state suitable for processing or storing quantum information. By applying the theoretical fitting model to a single microwave spectrum, we can individually determine the population distribution, microwave polarization ratio, and microwave Rabi frequency. We can dynamically track the population distribution during the optical pumping process using this real-time microwave spectrum. In a steady-state condition, a simplified model, which considers resonant and off-resonant transitions, indicates that there is an upper limit to the purity under a weak optical pumping field. The population purity up to 96(2)% or 98(1)% on the desired quantum state has been achieved after optimizing the intensity and polarization of the optical pumping field. Our study provides valuable information and potential applications in precision measurement and quantum computation research. + Footnote †: preprint: APS/123-QED ## I Introduction Preparing atomic populations into a specific Zeeman state is essential for quantum information science and precision measurements, such as quantum memory [1; 2; 3; 4], quantum manipulation [5; 6], single-photon generation [7], atomic magnetometry [8; 9; 10], and atomic clock [11; 12; 13; 14]. Accumulating the population in the desired Zeeman sublevel can increase the optical density (OD) of the atomic medium according to the Clebsch-Gordan coefficient of the selected atomic transition, and can avoid energy loss in the manipulation of the light retrieval [3; 4; 15; 16]. To make two light fields strongly interact, the researchers prepared the population in the two specific ground Zeeman states, forming two motionless light pulses via the quantum interference [5]. A phase shift of \(\pi\) by a single photon has been proposed in such an ultrahigh OD system [17]. In addition, increasing the population in the clock state (\(m_{F}=0\)), which is a magnetically insensitive state, leads to an apparent reduction in the noise for the atomic clock [13]. Optical pumping (OP) is a standard method to pump all atoms into an uncoupled dark state. By applying the stimulated Raman adiabatic passage (STIRAP), one can coherently transfer the population between two quantum states via two coherent light pulses [18]. Moreover, through combining optical pumping with microwave pumping, the researchers prepared the population in a clock state with a purity of 83% [13]. With appropriate polarization configuration and laser power, state purities of more than 96% [19] or 97% [20] were achieved. At present, the methods to prepare and detect the population distribution include measuring (1) the electromagnetically-induced-transparency-based transmission [19], (2) the transition between different Zeeman states with the same quantum number \(F\)[21; 9], (3) the hyperfine state transition (\(\Delta F=\pm 1\)) driving by a microwave [22; 23; 24; 25; 26; 11], and (4) the Ramsey atom interferometer [27; 13]. In addition, the atomic population distribution can be detected via the reflected microwave power spectrum using a real-time nondestructive imaging system [28]. In this work, we applied microwave spectroscopy to measure the population distribution and provided an analytical solution for atomic state detection. 
Figure 1: (a) Schematic of the experimental setup. The Qpole field used for capturing the cold atoms is not shown in the figure. WP: waveplate. (b) Relevant energy levels for \({}^{87}\)Rb atoms and laser excitations. Optical pumping fields OP\({}_{1}\) and OP\({}_{2}\) were applied to pump the population away from hyperfine states \(F=1\) and \(F=2\), respectively. The trapping field, which captures atoms, also serves as an image field. A mono-color CCD camera was employed to detect the fluorescence signals. A microwave field was frequency scanned around 6.835 GHz to obtain the microwave spectrum. (c) Timing sequence for the cooling process, population preparation, and population detection. (d) A typical microwave spectrum displays seven major peaks under a magnetic field in an arbitrary direction. (e) Microwave transitions between the Zeeman sublevels. The energy splittings for \(F=2\) and \(F=1\) are \(m_{F}\)\(\times\)0.7 MHz/G and \(m_{F}\)\(\times\)(-0.7) MHz/G, respectively. The Rabi frequency of the applied microwave was individually determined from the detection of the population cycling transition. Microwave pulses with high accuracy would be helpful in the atomic ground-state manipulation and for the Ramsey interference study [29; 30; 31; 27; 32; 33]. The environment magnetic field can be compensated with a precision of below two mG from microwave spectroscopy. After knowing the population distribution, the optical pumping method was employed to pump the population to the desired quantum state with high purity up to 96(2)% or 98(1)%. An estimation based on the far-off-resonant transitions, which involves pumping the atoms away from the dark state, shows an upper limit of 99.8%. Experimentally, the impure polarization of light and the non-uniform magnetic field affect the state purity. The main feature and time evolution during the OP of microwave spectroscopy will be discussed and compared with our theoretical predictions. ## II Scheme and experimental setup We perform the experiment using \({}^{87}\)Rb atoms in a typical magneto-optical-trap (MOT) system, including three pairs of trapping beams and one repumping beam. The magnetic field gradients in the X, Y, and Z directions of 3.06, 1.44, and 4.97 Gauss/cm were produced by a pair of anti-Helmholtz coils (Qpole) with a current of 2.0 Amp. The trapped atom cloud has a dimension of \(7.5\times 5.7\times 3.7\) mm\({}^{3}\) containing \(2\times 10^{8}\) atoms. The schematic of the setup, relevant energy levels, and laser excitations are shown in Figs. 1(a) and 1(b). The population can be prepared by trapping, repumping, and optical pumping fields. The pumping fields OP\({}_{1}\) and OP\({}_{2}\) drove the population away from \(F=1\) and \(F=2\), respectively. Figure 1(c) shows the timing sequence. The repetition time was 436 ms, including 430 ms for the cooling process and 6 ms for preparing and detecting the population. Denote the time of switching on the microwave as \(t=0\). We typically turned off the Qpole field, repumping, and trapping beams of the MOT at \(t=-6\) ms, \(-1.12\) ms, and \(-0.52\) ms in sequence. In order to characterize the state-preparation efficiency, we use microwave spectroscopy to measure the population distribution on the Zeeman sublevels of the ground state \(F=1\), as shown in Fig. 1(d). The microwave lasted for 80 \(\mu\)s, but the period was varied in other measurements. When turning off the microwave, the trapping beams were turned on again to serve as the image beams. 
The fluorescence signals were detected by a mono-color CCD camera (The Imaging Source: DMK 22BUC03) with an exposure time of 122 \(\mu\)s. After the cooling process, the trapping fields were applied to pump the cold atoms into the three Zeeman states of \(F=1\). Under a magnetic field, the energy splittings of the Zeeman sublevels for the \(F=2\) and \(F=1\) states are \(m_{F}\times\)0.7 MHz/G and \(m_{F}\times\)(-0.7) MHz/G, respectively. We choose the direction of the magnetic field as the quantization axis. For an arbitrarily oriented magnetic field, the microwave polarization contains all three components. As shown in Fig. 1(e), the microwave can induce \(\alpha_{i}\), \(\beta_{i}\), and \(\gamma_{i}\) transitions involving populations in states \(\alpha\), \(\beta\), and \(\gamma\), respectively, where \(i=1,2,3\) corresponds to \(\sigma^{+}\), \(\pi\), \(\sigma^{-}\) transitions. Therefore, a typical microwave spectrum displays seven major peaks, among which \(\alpha_{3}\) and \(\beta_{1}\) (as well as \(\beta_{3}\) and \(\gamma_{1}\)) exhibit the same resonance frequency. The 6.835 GHz microwave was generated by a synthesizer (Rigol BSG830) combined with a frequency multiplier (Minicircuits: ZX90-3-812-S+), power amplifier (Minicircuits: ZVE-3W-183+), and waveguide adapter (Woken: 0060WA2KOBB01X). The microwave was scanned at a speed of 0.6 kHz per repetition time. ## III Theoretical model We theoretically explain the main features of the microwave (MW) spectrum and individually determine the population distribution, polarization, and MW Rabi frequency. First, we measured the MW strength. Consider a two-level system with states \(|1\rangle\) and \(|2\rangle\), and assume the population is initially in \(|1\rangle\). The population time evolution is a function of the MW detuning \(\Delta\), MW period \(t\), and MW Rabi frequency \(\Omega\). The population in state \(|2\rangle\), \(P_{2}\), is given by \[P_{2}(t,\Delta)=\frac{\Omega^{2}}{2\Omega^{\prime 2}}\left(1-\cos(\Omega^{\prime}t)\right), \tag{1}\] where \[\Omega^{\prime}=\sqrt{\Omega^{2}+\Delta^{2}}.\] The MW Rabi frequency can be estimated by varying the MW transition period and tracking the population oscillation. The MW frequency was fixed on the \(\alpha_{1}\) transition, and the fluorescence signals are shown in Fig. 2(b). In this measurement, states \(|1\rangle\) and \(|2\rangle\) represent \(|F=1,m_{F}=1\rangle\) and \(|F=2,m_{F}=2\rangle\), respectively. Figure 2: (a) Energy structure of a two-level system. The microwave field is tuned to the frequency of the transition at approximately 6.835 GHz with a certain detuning \(\Delta\). (b) Direct determination of the Rabi frequency from a specific transition, e.g., the \(\alpha_{1}\) transition \(|F=1,m_{F}=1\rangle\rightarrow|F=2,m_{F}=2\rangle\), which gives a Rabi frequency of \(2\pi\times\)6.0 kHz. Taking the population decay into account, the measured oscillation gives a Rabi frequency for the \(\alpha_{1}\) transition of \(2\pi\times 6.0\) kHz. Below we show the procedure to extract all of the unknown parameters from a seven-peak microwave spectrum (e.g., Fig. 1(d)), including the population distribution \(P_{\alpha}\), \(P_{\beta}\), \(P_{\gamma}\), the polarization components of the microwave, and the Rabi frequency for each transition. Here we define \(\Omega_{1}\) as the MW Rabi frequency normalized to a dipole matrix element of one between the two states. 
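On resonance (\(\Delta=0\)), Eq. (1) reduces to \(P_{2}(t)=\sin^{2}(\Omega t/2)\), so fitting the oscillating fluorescence in Fig. 2(b) directly yields \(\Omega\). A minimal sketch of this relation is given below; the exponential damping time is not taken from the paper and is only an assumed placeholder for the population decay mentioned in the text.

```python
# Sketch of the resonant Rabi oscillation behind Fig. 2(b).  The damping time
# is an assumption for illustration, not a measured value.
import numpy as np

omega = 2 * np.pi * 6.0e3                 # rad/s, alpha_1 Rabi frequency
t = np.linspace(0.0, 500e-6, 501)         # s

p2 = np.sin(omega * t / 2)**2                                    # Eq. (1), Delta = 0
p2_damped = 0.5 * (1 - np.exp(-t / 300e-6) * np.cos(omega * t))  # with decay (assumed)

print(2 * np.pi / omega)                  # oscillation period ~167 us
print(np.pi / omega)                      # pi-pulse duration ~83 us
```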
The dipole matrix elements of \(\sigma^{+}\) (\(\sigma^{-}\)) transitions are (\(C_{\gamma_{1}(\alpha_{3})}\), \(C_{\beta_{1}(\beta_{3})}\), \(C_{\alpha_{1}(\gamma_{3})}\)) = (\(\sqrt{1/12}\), \(\sqrt{1/4}\), \(\sqrt{1/2}\)). The ones for \(\pi\) transitions are (\(C_{\gamma_{2}},C_{\beta_{2}},C_{\alpha_{2}}\)) = (\(\sqrt{1/4},\sqrt{1/3},\sqrt{1/4}\)) for \(\gamma_{2}\), \(\beta_{2}\), and \(\alpha_{2}\) transitions, respectively. We can derive the bias magnetic field from the peak splittings. The MW resonant frequencies were shifted due to the Zeeman effect, and therefore, the detuning \(\Delta\) in Eq. (1) needs to be replaced by \(\Delta-\Delta_{0}\), where \(\Delta_{0}\) is the MW resonant frequency. For the spectrum shown in Fig. 1(d), \(\Delta_{0}\) was \(\pm\)\(j\times\) 532.5 kHz for \(j=1,2,3\). To determine the population distribution, (\(P_{\gamma},P_{\beta},P_{\alpha}\)), we first focus on the \(\pi\) transition peaks, i.e., the second, fourth, and sixth peaks. The fluorescence signals are proportional to the total number of atoms and the sensitivity of the CCD camera, which have an equal impact on each peak. The factors (\(P_{\gamma},P_{\beta},P_{\alpha}\)), where \(P_{\gamma}+P_{\beta}+P_{\alpha}=1\), can vary the fluorescence peak height and need to be taken into account in Eq. (1). Here, \(\Omega\) in Eq. (1) is replaced by \(\Omega_{1}\times\)(\(C_{\gamma_{2}}\), \(C_{\beta_{2}}\), \(C_{\alpha_{2}}\)). The population distribution was derived from these three peaks of seven, showing that the population was almost equally distributed. We next determine the polarization component of the microwave from the most-left (\(\sigma^{-}\)-polarization) and the most-right (\(\sigma^{+}\)- polarization) peaks. The quantization axis is aligned with the applied magnetic field, allowing the MW to drive \(\sigma^{+}\), \(\sigma^{-}\), and \(\pi\) transitions. From the comparison of the determined Rabi frequencies that fit the data and the derived ones from the dipole matrix element, we obtain the ratios of polarization components are (\(14\%,80\%,5.4\%\)) for (\(\sigma^{-}\), \(\pi\), \(\sigma^{+}\)) polarizations. In order to verify the accuracy of the above-determined parameters, the third and the fifth peaks were simulated with the above-given parameters. The feature of the spectrum fitted quite well, including the relevant peak height and the oscillation structure. ## IV Results We now discuss how to improve the purity of the population accumulation by applying optical pumping. First, if the laser polarization \(\tilde{Z}_{L}\) differs from the magnetic field direction \(\tilde{Z}_{B}\), the laser field can drive \(\Delta m_{F}=\pm 1\) or \(\Delta m_{F}=0\) transitions, causing the pumping process to be more complicated. The effective magnetic field \(\mathbf{B_{eff}}\) = \(\mathbf{B_{coil}}+\mathbf{B_{0}}\), where \(\mathbf{B_{0}}\) is the stray field from the environment and \(\mathbf{B_{coil}}\) can be adjusted by the three pairs of the Helmholtz coils. Since the Zeeman splitting is proportional to the strength of \(\mathbf{B_{eff}}\), we minimized the stray magnetic field on the three orthogonal directions by measuring the frequency shift for various \(\mathbf{B_{eff}}\). Then, we can define the quantization axis on the \(\tilde{Z}_{B}\) direction and purify the optical pumping field transition. The frequency splitting for two nearby peaks was obtained by extracting the detuning \(\Delta_{0}\) for several magnetic field strengths along a fixed direction. 
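Putting the pieces together, the fitting model described above can be written compactly: the signal at MW detuning \(\Delta\) is a population-weighted sum of two-level transfer probabilities, Eq. (1), one term per allowed transition, with the effective Rabi frequency scaled by the dipole factor and the MW polarization component. The sketch below is an illustration only; the populations and polarization fractions are the values quoted in the text, but the way the polarization fraction enters (as an amplitude factor \(\sqrt{\text{fraction}}\)), the value of \(\Omega_{1}\), and the pulse duration are assumptions for the example rather than fitted numbers.

```python
# Illustrative model of the seven-peak MW spectrum: sum over transitions of
# population x two-level transfer probability (Eq. 1).  The sqrt(fraction)
# polarization weighting and the numerical values of omega1 and t are assumed.
import numpy as np

def P2(t, delta, omega):                                  # Eq. (1)
    op = np.sqrt(omega**2 + delta**2)
    return (omega / op)**2 * np.sin(op * t / 2)**2

# (initial state, MW polarization, Zeeman order j, dipole factor C_ij)
transitions = [
    ("gamma", "sm", -3, np.sqrt(1/2)),  ("gamma", "pi", -2, np.sqrt(1/4)),
    ("gamma", "sp", -1, np.sqrt(1/12)), ("beta",  "sm", -1, np.sqrt(1/4)),
    ("beta",  "pi",  0, np.sqrt(1/3)),  ("beta",  "sp",  1, np.sqrt(1/4)),
    ("alpha", "sm",  1, np.sqrt(1/12)), ("alpha", "pi",  2, np.sqrt(1/4)),
    ("alpha", "sp",  3, np.sqrt(1/2)),
]

def spectrum(delta, pop, pol_frac, omega1, t, splitting=2*np.pi*532.5e3):
    s = np.zeros_like(delta)
    for state, q, j, c in transitions:
        omega = omega1 * c * np.sqrt(pol_frac[q])
        s += pop[state] * P2(t, delta - j * splitting, omega)
    return s

delta = 2 * np.pi * np.linspace(-2.0e6, 2.0e6, 4001)      # rad/s
signal = spectrum(delta,
                  pop={"alpha": 1/3, "beta": 1/3, "gamma": 1/3},
                  pol_frac={"sm": 0.14, "pi": 0.80, "sp": 0.054},
                  omega1=2*np.pi*8.5e3, t=80e-6)
```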
The spectrum's peak resolution is better than 5 kHz, determined by the full width at half maximum of the peak. Figures 3(a), 3(b), and 3(c) show the extracted frequency splittings as a function of the magnetic field on X, Y, and Z axes, respectively, when the stray magnetic field was almost canceled. The red fitting lines (either a linear or a quadratic function) show that the DC magnetic fields, \(B_{coil}\), were 0.80, 0.83, 1.11 Gauss/A on the X, Y, and Z axes. In addition, the intercept points or the minimum points of the fitting lines give the best setting current to compensate the \(\mathbf{B_{0}}\) and the lowest achievable magnetic field with a resolution of below two mG for each axis. Once the stray magnetic field had been compensated, we applied an additional magnetic field as the quantization axis. The polarization components of the optical pumping and microwave fields will be varied according to the direction of the quantization axis. The magnetic field of 65 mG was set on the Y axis, and another minor field on the X axis was varied to determine polarization purity. Based on theoretical fits to the MW spectra, we can determine the purity of the MW polarization. A purity of 97% for the \(\pi\) transition has been achieved, as demonstrated in the corresponding microwave spectrum shown in Fig. 4(b). When we set the magnetic field of 65 mG on the X axis, the microwave polarization was perpendicular to this quantization axis, and it drove the \(\sigma^{+}\) and \(\sigma^{-}\) transitions. The four peaks in Fig. 4(c) correspond to \(\gamma_{3}\), \(\gamma_{1}+\beta_{3}\), \(\beta_{1}+\alpha_{3}\), and \(\alpha_{1}\) transitions, from the left to the right peaks. According to the best fits to the data, the red lines show that the population was equally distributed in the present measurements. The polarization purity, period, and microwave power are crucial for Figure 3: The magnetic field strength derived from the Zeeman splittings as a function of the applied coil current in X (Fig. a), Y (b), and Z (c) axes after minimizing the stray field. The red linear fitting lines give that the DC magnetic fields, \(B_{coil}\), were 0.80, 0.83, 1.1 Gauss/Amp. By extrapolating both linear fits to the magnetic field, we derive the resolution to minimize the magnetic field below two mG. manipulating \(\pi\)-pulse transition, such as the Ramsey interference study. We then applied optical pumping to accumulate the populations to a Zeeman state and improve state purity. The optical pumping fields, OP\({}_{1}\) and OP\({}_{2}\), were set at the resonant frequencies of \(F=1\to F^{\prime}=1\) and \(F=2\to F^{\prime}=2\) transitions, respectively. First, the target state is the clock state \(|F=1,m_{F}=0\rangle\). We applied the external magnetic field on the Z axis as the quantization axis. The polarizations of the MW, OP\({}_{1}\), and OP\({}_{2}\) fields were \(\sigma^{\pm}\), \(\pi\), and \(\sigma^{\pm}\), respectively. Initially, the population was almost equally distributed among the three Zeeman states. When the trapping beams were turned off at \(t=-330\)\(\mu s\), OP\({}_{1}\) and OP\({}_{2}\) were turned on simultaneously and lasted for \(300\)\(\mu\)s. As the shown energy levels and laser excitations in Fig. 5(a), the dark state is \(|F=1,m_{F}=0\rangle\). After optimizing the laser polarization and intensity, the MW spectrum displayed only two major peaks corresponding to \(\beta_{1}\) and \(\beta_{3}\) transitions. 
The best fit shows that the fraction of atoms on this clock state increases to \(98(1)\%\). Note that the small peak at the center, with \(\Delta=0\), corresponds to a \(\pi\)-polarized MW transition, with only \(3\%\) polarization impurity. Similarly, we can pump the population in another clock state \(|F=2,m_{F}=0\rangle\) by changing the polarization of the pumping fields, e.g., \(\pi\) polarization for OP\({}_{1}\) and \(\sigma^{\pm}\) polarization for OP\({}_{2}\). Here we observed the dips instead of the peaks. The baseline signals of the fluorescence showed the number of atoms in \(F=2\) states. The fluorescence signals were taken using an image beam (\(F=2\to F=3^{\prime}\) transition frequency with a specified detuning) after the population was initially prepared in the \(F=2\) state. However, the resonant microwave drove some of the population to the \(F=1\) state, thereby reducing the strength of the resonant fluorescence. Finally, we accumulated the population in the state \(|F=1,m_{F}=1\rangle\) as it exhibits the strongest coupling strength between two hyperfine states, resulting in the highest optical density. The magnetic field was set on the X axis, while OP\({}_{1}\) with \(\sigma^{+}\) polarization and OP\({}_{2}\) with \(\sigma^{\pm}\) polarization were utilized. The MW duration was adjusted based on the coupling strength to achieve a \(\pi\)-pulse transition for the \(\alpha_{1}\) peak. Note that the linewidth of the \(\alpha_{1}\) peak was broader than the fitting curve because of the non-uniform magnetic field across the center and the edge of the atomic cloud [32]. This broadening effect was observed only when the magnetic field was aligned along the X axis. A longer atomic cloud was observed Figure 5: Timing sequence, the relevant energy levels, and laser excitations. (a), (b), and (c) are the microwave spectra for atomic preparation in states \(|F=1,m_{F}=0\rangle\), \(|2,0\rangle\), and \(|1,1\rangle\), respectively. The best purity of population in these states were \(98(1)\%\), \(98(2)\%\), and \(96(2)\%\), from (a) to (c). The red, black, and blue circles in the inset represent the populations \(P_{\alpha}\), \(P_{\beta}\), and \(P_{\gamma}\) as a function of the orientation angle of a zero-order \(\lambda/4\) plate for OP\({}_{1}\) beam. (d) A simplified set of atomic energy levels involving one dark state \(|Dg\rangle\), one bright state \(|Bg\rangle\), and two excited states, \(|e_{1(2)}\rangle\). These states are coupled by a resonant \(R(0)\) and a far-off resonant \(R(\delta)\) driving fields. Figure 4: (a) The microwave polarization measurements as a function of the magnetic field strength on the X axis under an additional magnetic field of \(65\) mG on the Y axis, which had the same direction as the MW polarization. A \(97\%\) polarization purity for \(\pi\) transition has been achieved. (b) and (c) are the spectral measurements when the external magnetic field was applied on the Y and Z axes, respectively. The solid lines are the best fits by applying Eq. (1) and considering all possible MW transitions. The best fits show that the population was equally distributed in the present measurements. on the X axis relative to the Z axis, making the magnetic field's homogeneity crucial. To optimize the purity of each desired Zeeman state, we well adjusted the orientation of a zero-order \(\lambda/4\) plate for the OP\({}_{1}\) (or OP\({}_{2}\)) beam. Only the measurements for state \(|1,1\rangle\) are shown in the inset of Fig. 5(c). 
The polarization impurity causes population to be pumped to the bright states, resulting in clear peaks on the spectrum which are attributed to the bright-state atoms. The purity has been optimized up to \(96(2)\%\). Compared to the purity of states \(|1,0\rangle\) and \(|2,0\rangle\), the lower purity and larger error in state \(|1,1\rangle\) are mainly due to the wider transition peaks caused by the non-uniform magnetic field. Off-resonant transitions hinder the population purity despite having only one dark state in the optical pumping process. We consider a simplified set of atomic energy levels, as depicted in Fig. 5(d). The populations in the dark state (\(|Dg\rangle\)) and bright state (\(|Bg\rangle\)) are \(N_{0}(t)\) and \(N_{1}(t)\), respectively. We assume the branching ratios of the radiative decay from the excited state to the ground states are equal, i.e., \(\Gamma/2\) (\(\Gamma\) is the total nature decay rate of the excited state). The absorption rate \(R\) depends on the detuning and laser intensity [33], \[R(\delta)=C_{ij}\frac{\Gamma}{2}\frac{I/I_{sat}}{1+4(\delta/\Gamma)^{2}+(I/I_{ sat})}, \tag{2}\] where \(I\) and \(I_{sat}\) are the laser intensity and saturation intensity, and \(\delta\) is the detuning of the laser field from the atomic resonance, \(C_{ij}\) denotes the Clebsch-Gordan coefficient involving the transition. The population transfer between the dark and bright states involves two steps: (1) atoms are excited either from the dark state \(|Dg\rangle\) to the off-resonant excited state \(|e_{1}\rangle\) or from the bright state \(|Bg\rangle\) to the resonant excited state \(|e_{2}\rangle\), and (2) they then spontaneously decay either from \(|e_{1}\rangle\) to \(|Bg\rangle\) or from \(|e_{2}\rangle\) to \(|Dg\rangle\). The time evolution of the population can be described as \[\frac{dN_{0}(t)}{dt}=\frac{1}{2}N_{1}(t)R(0)-\frac{1}{2}N_{0}(t)R(\delta). \tag{3}\] In the steady-state condition, the population purity in the dark state is \[\frac{N_{0}}{N_{1}+N_{0}}=\frac{R(0)}{R(0)+R(\delta)}. \tag{4}\] Therefore, consider the detunings of the near hyperfine levels of the \({}^{87}\)Rb \(D_{2}\)-line transitions, \(\delta=12\Gamma\) and \(24\Gamma\). We estimate the upper limit of the purity approach \(99.8\%\) at low intensities, where \(I\ll I_{sat}\). The limitation can be further improved by selecting the \({}^{87}\)Rb \(D_{1}\) transitions, which have a more significant splitting between the two hyperfine levels. As found in Eqs. (2)-(4), the pumping field intensity is another factor influencing purity. The duration of the optical pumping process plays a similar role in the population accumulation that we did not discuss here. Figure 6 (a) illustrates the fluorescence signals for identifying the optimal population accumulation in state \(|1,0\rangle\). The signal in the absence of an optimized OP field is represented by the black line, while the red line represents the signal with an optimized OP field and the green line represents the signal with the strongest OP field. A stronger fluorescence signal indicates a larger population. By increasing the intensity of the optical pumping, more population is transferred to the hyperfine state with \(F=2\) through off-resonant transitions. As a result, the baseline level of the MW spectrum increases. We then extract the peak heights (\(\beta_{1}\) transitions) and baseline levels from the spectrum as a function of the OP power, as shown in Fig. 6(c). The black dotted line is the peak height without the OP field. 
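The weak-field limit quoted above is easy to reproduce from Eqs. (2) and (4). The sketch below assumes equal Clebsch-Gordan factors and simply adds the two off-resonant channels at \(12\Gamma\) and \(24\Gamma\); both simplifications are made for illustration.

```python
# Back-of-the-envelope purity limit from Eqs. (2) and (4).  Equal dipole factors
# are assumed and the two off-resonant hyperfine channels are summed.
def rate(delta_over_gamma, s):          # Eq. (2), up to a common prefactor
    return 0.5 * s / (1.0 + 4.0 * delta_over_gamma**2 + s)

def dark_state_purity(s):               # Eq. (4), steady state
    r_on = rate(0.0, s)
    r_off = rate(12.0, s) + rate(24.0, s)
    return r_on / (r_on + r_off)

for s in (0.01, 0.1, 1.0, 10.0):        # s = I / I_sat
    print(s, round(dark_state_purity(s), 4))   # ~0.998 in the weak-field limit
```

Within this simplified model, the same expression also shows why pushing the pumping intensity too high lowers the purity: the resonant rate saturates while the off-resonant rates keep growing, so their relative weight increases with intensity.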
Population (black circles) can be concentrated in a specific state with proper pumping power. The purity of \(98(1)\%\) has been achieved from the systematic measurements. Similarly, results for state \(|1,1\rangle\) are shown in Figs. 6(b) and 6(d). Both figures demonstrate that the population \(P_{\beta}\) in (c) or \(P_{\alpha}\) in (d) increases with higher OP power. However, increasing the power beyond a certain point results in a decrease in population. The fluorescence signal of the \(\alpha_{1}\) transition has been improved by a factor of three, which corresponds to a three-fold increase in population. In this case, population accumulation was optimized to \(98(1)\%\) in the \(|1,0\rangle\) state and Figure 6: (a) and (b) illustrate the fluorescence signals for identifying the optimal population accumulation in states \(|1,0\rangle\) and \(|1,1\rangle\), respectively. The black line represents signals without the OP field, the red one with an optimized OP field, and the green one with the strongest OP field. We extract the peak heights (\(\beta_{1}\) and \(\alpha_{1}\)) and baseline levels in (c) and (d). The dashed lines are the guide to the eyes. The dotted lines are the extracted peak heights without the OP field. Vertical axes are displayed in arbitrary units. Both measurements show that the population \(P_{\beta}\) in (c) or \(P_{\alpha}\) in (d) was enhanced with increasing pumping power; however, further increasing the power reduced the population. Meanwhile, the baseline level increased with increasing pumping power, so more populations were pumped to another hyperfine state \(F=2\) via off-resonant transitions. The fluorescence signal for the \(\alpha_{1}\) transition has improved by a factor of 3, corresponding to the 3-fold population increment. Therefore, the population \(P_{\beta}\) in (c) or \(P_{\alpha}\) in (d) can be optimized by applying a proper OP power (or intensity). \(96(2)\%\) in the \(|1,1\rangle\) state, with limitations imposed by the inhomogeneous magnetic field and far-off-resonant transitions. Once the population is accumulated in a single Zeeman state, people can apply the stimulated-Raman adiabatic passage (STIRAP) procedure, which is a two-photon adiabatic transition in a three-level system, to pump the population to any other desired state [18; 34]. ## V Conclusions We present a comprehensive approach to population accumulation on any desired Zeeman states. Using a theoretical fitting model, we can individually determine the population distribution, the MW polarization ratio, and the MW Rabi frequency by applying it to a single real-time MW spectrum under an arbitrary magnetic field. The environment's stray magnetic field can be compensated below two mG from the MW spectroscopy. The state purities were optimized by adjusting OP polarization, period, and intensity. In a steady-state condition, a simplified model, which considers resonant and off-resonant transitions, indicates that there is an upper limit to the purity under a weak optical pumping field. Experimentally, the impure polarization of light, non-uniform magnetic field, and significant off-resonant transitions affect the state purity. The purities reached up to \(98(1)\%\) in \(|1,0\rangle\) and \(98(2)\%\) in \(|2,0\rangle\) states after the optimization. The population accumulated in \(|1,1\rangle\) state was \(96(2)\%\) mainly due to the inhomogeneous magnetic field. 
Using this real-time microwave spectrum, we can dynamically track the population distribution in the optical pumping process. This technique can significantly impact quantum-information processing in multi-level atomic systems, and our investigation will advance future applications in precision measurement, such as atomic clocks. ###### Acknowledgements. This work was supported by Grants Nos. 109-2112-M-110-008-MY3 and 111-2123-M-006-001 of the National Science and Technology Council, Taiwan. The authors thank Prof. Cheng Chin for the valuable comments on the study.
2308.07550
High-Order Topological Phase Diagram Revealed by Anomalous Nernst Effect in Janus ScClI Monolayer
Higher-order topological properties of two-dimensional(2D) magnetic materials have recently been proposed. In 2D ferromagnetic Janus materials, we find that ScClI is a second-order topological insulator (SOTI). By means of a multi-orbital tight-binding model, we analyze the orbital contributions of higher-order topologies. Further, we give the complete high-order topological phase diagram of ScClI, based on the external field modulation of the magneto-valley coupling and energy levels. 2D ScClI has a pronounced valley polarization, which causes different insulating phases to exhibit completely different anomalous Nernst conductance. As a result, we use the matched anomalous Nernst effect to reveal the topological phase transition process of ScClI. We utilize the characteristics of valley electronics to link higher-order topological materials with the anomalous Nernst effect, which has potential implications for high-order topological insulators and valley electronics.
Ning-Jing Yang, Jian-Min Zhang
2023-08-15T03:42:13Z
http://arxiv.org/abs/2308.07550v3
# High-Order Topological Phase Diagram Revealed by Anomalous Nernst Effect in Janus ScClI Monolayer ###### Abstract Higher-order topological properties of two-dimensional (2D) magnetic materials have recently been proposed. In 2D ferromagnetic Janus materials, we find that ScClI is a second-order topological insulator (SOTI). By means of a multi-orbital tight-binding model, we analyze the orbital contributions of higher-order topologies. Further, we give the complete high-order topological phase diagram of ScClI, based on the external field modulation of the magneto-valley coupling and energy levels. 2D ScClI has a pronounced valley polarization, which causes different insulating phases to exhibit completely different anomalous Nernst conductance. As a result, we use the matched anomalous Nernst effect to reveal the topological phase transition process of ScClI. We utilize the characteristics of valley electronics to link higher-order topological materials with the anomalous Nernst effect, which has potential implications for high-order topological insulators and valley electronics. ## I Introduction Recently, two-dimensional high-order topological insulators (HOTIs) have received great attention [1; 2; 3; 4; 5; 6]. Prior to this, higher-order topological states in three-dimensional Bi, MnBi\({}_{2n}\)Te\({}_{3n+1}\), and EuIn\({}_{2}\)As\({}_{2}\) had received more focus [7; 8; 9; 10; 11]. However, higher-order topological corner states hidden in the 2D transition metal \(2\mathrm{H-MoS_{2}}\) family have only recently been discovered [1; 2; 3]. They are all protected by space inversion symmetry and have non-zero topological corner charge [12; 13; 14]. The analogous higher-order topological properties can also be extended to 2D ferromagnetic and ferroelectric materials [15; 16]. However, there is still a need for more detailed and comprehensive analysis of high-order topological multi-orbital systems and phase transition processes, as well as a stronger connection to valleytronics. Compared to photonic and phononic crystals, which can be engineered with great flexibility [17; 18], there are still relatively few electronic materials with higher-order topologies. Therefore, finding new 2D HOTI materials is of considerable value and significance. Two-dimensional materials with a honeycomb structure have strong energy valley properties [19; 20; 21]. For 2D ferromagnetic materials, the intrinsic ferromagnetic order couples with the energy valleys to produce giant valley polarization [4; 22] and leads to large differences in the Berry curvature of the K/K' valleys. Since the Berry curvature is analogous to an equivalent magnetic field [23], the anomalous velocity imparted to the electrons leads to valley currents [21; 22; 23; 24]. Applications on this basis include the valley Hall effect [24] and the anomalous Nernst effect [22]. In particular, the anomalous Nernst effect has been widely measured and applied experimentally [25; 26; 27; 28; 29; 30; 31]. The anomalous Nernst effect in \(2\mathrm{H-MoS_{2}}\) family materials was discussed early on by Xiao-Qin Yu \(et~al.\) [31]. However, a connection between topological phase transitions and the anomalous Nernst effect is currently lacking. In this work, we discover a second-order topological corner state in a 2D triangular ScClI quantum dot, i.e., it is a new SOTI. Using the multi-orbital tight-binding approximation, we find that the p-orbitals of the halogen elements are indispensable for higher-order topology. 
It is not possible to show this by only considering the d-orbitals. Next, by applying an external field to change the magneto-valley coupling strength and energy level difference, we provide the complete topological phase diagram of 2D Janus ScClI. ScClI mainly undergoes a topological phase transition sequence among the SOTI, quantum anomalous valley Hall insulator (QAVHI), and normal insulator (NI) phases. Due to the strong valley electronic properties of ScClI, the SOTI, QAVHI, and NI phases exhibit distinct characteristics in the thermally induced anomalous Nernst effect. On this basis, we provide anomalous Nernst conductance maps that correspond to the higher-order topological phase diagram. Our results effectively link high-order topology with the anomalous Nernst effect of thermal excitation, which is of great significance for the measurement and application of electronic devices. ## II Calculation Methods First-principles calculations based on density functional theory (DFT) are conducted using the Vienna Ab initio Simulation Package (VASP) [32; 33]. The electronic exchange-correlation interactions are treated using the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) [34; 35]. For two-dimensional materials, a vacuum layer of at least 15 Å is included. The energy cutoff for the plane-wave basis is set to 500 eV, and a 10×10×1 k-mesh is employed for Brillouin zone sampling. The convergence threshold for the maximum force during structural optimization is set to be less than 0.01 eV/Å, and the energy convergence criterion is set to \(10^{-6}\) eV. We apply the GGA + U method to correct the Coulomb repulsion interactions of the Sc atom's d orbitals, with a typical value of 3 eV [36; 37]. The parity of the occupied states is calculated using the irrep program [38]. Additionally, a tight-binding model based on maximally localized Wannier functions (MLWFs) is constructed using the wannier90 and WANNIERTOOLS packages [39; 40]. ## III Results and discussion The structure of the two-dimensional Janus honeycomb material ScClI is shown in Fig. 1(a). Its orbital-projected band structure indicates that the states near the Fermi level are contributed primarily by the d orbitals of Sc. In the out-of-plane ferromagnetic state, the magneto-valley coupling induces valley polarization at K and K', as illustrated in Fig. 1(b, c). The measured energy level difference between valence electrons in the K and K' valleys is 73.1 meV, and the band gap in the spin-up energy band of the K valley is determined to be 502.1 meV. Although the system is an indirect bandgap insulator in its fully relaxed state, it can be transformed into a direct bandgap insulator by applying strain (further details in the Supplementary Material). To verify the high-order topological properties of ScClI, we first calculate the higher-order topological index \(Q_{c}^{(3)}\) of the system, which is protected by the \(C_{3}\) rotational symmetry. For the calculation of the rotation eigenvalues of all occupied states at the high-symmetry points in the Brillouin zone, one can take \([\text{K}_{n}^{(3)}]=\#\text{K}_{n}^{(3)}-\#\Gamma_{n}^{(3)}\), where \(\#\) denotes the counting of the symmetry eigenvalues at the points K and \(\Gamma\). The eigenvalues of the \(C_{3}\) rotations are defined as \(e^{2\pi i(n-1)/3}\left(\text{for }n=1,2,3\right)\). 
The topological indices [12; 13] of the HOTI are \[\chi^{(3)}=\left([K_{1}^{(3)}],[K_{2}^{(3)}]\right),\quad Q_{c}^{(3)}=\frac{e}{3}[K_{2}^{(3)}]\ \mathrm{mod}\ e, \tag{1}\] where \(e\) is the charge of the free electron. By calculating the rotation-operator eigenvalues of the DFT occupied bands, the corner charge \(Q_{c}^{(3)}=2e/3\) is obtained. Then, to comprehensively capture the essential physical characteristics of the higher-order topology, we develop a simplified multi-orbital tight-binding (TB) model. This TB model is designed for a 2D ferromagnetic material possessing \(C_{3}\) symmetry, where the intrinsic magnetism induces magneto-valley coupling, leading to a giant valley polarization. To provide a complete representation of higher-order topological corner states, our TB model incorporates the entire set of five d-orbitals and six p-orbitals (from Sc, Cl and I, respectively), with a focus on the nearest-neighbor lattice point orbitals. Furthermore, we consider spin-orbit coupling (SOC) and magnetic interactions, leading to the following expression for the Hamiltonian: \[H=\sum_{(i,j)\alpha,\beta}(t_{ij}^{\alpha,\beta}c_{i\alpha}^{\dagger}c_{j\beta}+h.c.)+t_{soc}\mathbf{L}\cdot\mathbf{S}+m_{z}\mathbf{M}\cdot\mathbf{S}, \tag{2}\] where \(c_{i\alpha}^{\dagger}(c_{j\beta})\) represents the electron creation (annihilation) operator for the orbital \(\alpha(\beta)\) located at site \(i\) (\(j\)). \(t_{soc}\) denotes the strength of the spin-orbit coupling, and \(M=(M_{x},M_{y},M_{z})\) represents the magnetic moment direction with strength \(m_{z}\). Figure 1: (a) Top and side views of ScClI. (b) Orbital-projected electronic band structure in the out-of-plane ferromagnetic (FM) case. (c) Spin-polarized energy bands near the Fermi level. The red and blue lines represent the spin-up and spin-down states, respectively. (d) Energy spectrum of a triangular armchair nanosheet. (e) Real-space distribution of the total wavefunction of the corner states within the band gap of the energy spectrum in (d). (f) The complete phase diagram as a function of the magnetic moment direction \(\theta_{z}\) and the energy level difference \(\Delta_{e}\). Under this TB model, we have successfully identified higher-order topological corner states in the fully relaxed structure of ScClI. The electronic band structure of the triangular quantum dot is depicted in Fig. 1(d), where the red states in the middle of the band gap are the corner states. In Fig. 1(e), the wave functions of the corner states are localized on the corners of the quantum dots. The higher-order topological corner states emerge only when both the p and d orbitals are fully taken into account. Conversely, when only the d orbitals near the Fermi level are considered, the topological corner state in the band gap disappears, as demonstrated in the supplementary materials. The reason is that considering only the d orbitals results in a single valence band below the Fermi level, which fails to capture the correct fractional corner charge in this scenario. Next, we can achieve higher-order topological phase transitions by tuning the angle \(\theta_{z}\) of the magnetic moment and the energy level difference \(\Delta_{e}\) of partial orbitals. The angle \(\theta_{z}\) primarily influences the magnitude of the valley polarization, while \(\Delta_{e}\) induces band inversions, leading to a transition from a SOTI to a QAVHI. Due to the magneto-valley coupling, after the first band inversion, the system becomes a QAVHI with Chern number \(C=1\). 
Following the second band inversion, the system transforms into a trivial insulator. It is worth noting that at the critical points of the topological phase transition, the system is in a valley-half-semimetal (VHSM) phase, at the K valley during the first inversion and at the K' valley during the second inversion. Fig. 1(f) displays the complete phase diagram, showing the progression SOTI-VHSM-QAVHI-VHSM-NI. The above topological phase transition process can be realized by applying tensile strain. After experiencing the first energy band inversion in the K valley, the system becomes a QAVHI. At this point, the energy spectrum of the triangular quantum dots undergoes a transformation, wherein the band gap disappears and is supplanted by a continuum of edge states in the vicinity of the Fermi energy level, as shown in Fig. 2(a). With increasing strain, the second energy band inversion occurs in the K' valley. The system then behaves as a normal insulator. The corresponding energy spectrum exhibits a band gap with no corner states or edge-state platforms present within it. Additionally, the states on both sides of the bandgap are localized within the bulk, as shown in Fig. 2(b). At tensile strain magnitudes of 2.05% and 2.34%, the system reaches the phase boundary, transitioning into a valley-half semimetal, as shown in Fig. 2(c). In between these strains, the Chern number of the system is 1, and Fig. 2(d) illustrates the Wannier charge center (WCC) of the system. The surface states and the quantum anomalous Hall conductance platform of the QAVHI are shown in Fig. 2(e, f), respectively. Figure 2: (a) The quantum dot energy spectrum of the QAVHI is displayed, with blue dots marking the boundary states near the Fermi energy. The inset shows the wave function distribution. (b) The energy spectrum of the NI, illustrating the wave function distribution of bulk states. (c) Projected energy band evolution in the \(1.5-3\%\) strain range. The WCC and surface band of the QAVHI at 2.2% strain are shown in (d) and (e), respectively. (f) Plot of the quantum anomalous Hall conductance versus energy. For this system, the magneto-valley coupling gives rise to a giant valley polarization. This depends on the magnitude of the out-of-plane magnetic moment, and the valley polarization disappears when an in-plane magnetic moment is applied, as demonstrated in the Supplementary Material. The giant valley polarization, which arises from the intrinsic magnetism of the system, has attracted our attention because it produces a difference in the Berry curvature between the K and K' valleys. The Berry curvature acts like an equivalent magnetic field added to the system, which causes the electrons to acquire anomalous velocities [23]. We can exploit this property to obtain tunable valley currents. Since the Berry-curvature-induced valley current depends on the states in the neighborhood of the Fermi energy level and is tied to the valley degree of freedom, we can use a two-band k\(\cdot\)p model to characterize the physics of valley locking. The two-band Hamiltonian can be written as: \[\mathcal{H}=at(k_{x}\sigma_{x}+\eta k_{y}\sigma_{y})+\frac{\Delta}{2}\sigma_{z}+\eta m_{z}\sigma_{z}, \tag{3}\] where \(\eta\) is the valley index, \(a\) is the lattice constant, \(t\) is the hopping integral, \(\Delta\) denotes the bandgap size, and \(m_{z}\) denotes the strength of the magneto-valley coupling. \(\sigma_{x,y,z}\) are the Pauli matrices. The spin index is omitted because the bands in the vicinity of the Fermi energy level are spin polarized. 
The effective mass is \(\Delta_{m}=\Delta/2+\eta m_{z}\). The energy eigenvalues are \[E_{\eta}^{n}=n\sqrt{(kat)^{2}+{\Delta_{m}}^{2}}, \tag{4}\] where \(n=\pm 1\) denotes the conduction and valence bands, respectively. According to the definition of the Berry curvature, \(\Omega(\mathbf{k})=\nabla_{\mathbf{k}}\times\langle u(\mathbf{k})|i\nabla_{\mathbf{k}}|u(\mathbf{k})\rangle\), where \(u(\mathbf{k})\) is the periodic part of the Bloch wave function, we can obtain the Berry curvature associated with the valley \[\Omega_{\eta}^{n}(\mathbf{k})=-\eta n\frac{a^{2}t^{2}\Delta_{m}}{2\left[\left(kat\right)^{2}+\Delta_{m}^{2}\right]^{3/2}}. \tag{5}\] By tuning the bandgap and the magneto-valley coupling strength, we obtain the Berry curvature for the three phases SOTI, QAVHI, and NI. They are shown in Fig. 3(a-c), respectively, matching the results of the DFT calculations. It is worth noting that the valley polarization is reflected by the difference in the Berry curvature. During the phase transition, the clear change of the Berry curvature of the K/K' valleys from opposite signs to identical signs is one of the distinctive features of the QAVHI. To characterize the nature of the valleys in a way that matches the phase diagram, we can reveal it through the anomalous Nernst effect of thermal excitation. The valley-dependent anomalous Nernst conductivity is defined as: \[\mathcal{N}_{\eta}=\frac{ek_{\mathrm{B}}}{\hbar}\sum_{n}\int\frac{d^{2}k}{\left(2\pi\right)^{2}}\Omega_{\eta}^{n}\mathcal{S}_{\eta}^{n}(\mathbf{k}), \tag{6}\] where \(\mathcal{S}_{\eta}^{n}(\mathbf{k})=-f_{\mathbf{k}}\ln f_{\mathbf{k}}-(1-f_{\mathbf{k}})\ln(1-f_{\mathbf{k}})\) is the entropy density and \(f_{\mathbf{k}}\) denotes the Fermi distribution function. At a temperature of 300 K, we can obtain the anomalous valley Nernst conductivity (ANV) and the anomalous charge Nernst conductivity (ANC): \[\text{ANV}=\mathcal{N}_{k}-\mathcal{N}_{k^{\prime}},\text{ ANC}=\mathcal{N}_{k}+\mathcal{N}_{k^{\prime}}, \tag{7}\] Figure 3: (a), (b), and (c) present the Berry curvature, ANC, and ANV of the three insulating phases, SOTI, QAVHI, and NI, respectively, resolved for the K/K' valleys. In (d), the ANV is plotted as a function of energy and gap. Meanwhile, (e) illustrates the relationship between the ANV, energy, and the direction of the magnetic moment \(\theta_{z}\). Phase boundaries are marked by gray dashed lines. Figure 3(a-c) shows the anomalous Nernst conductivities for each of the three phases. In the case of the QAVHI, the ANC intersects the zero energy level just once, whereas for the SOTI and NI, the ANC crosses the zero point three times. Therefore, the ANC does not perfectly distinguish between the individual insulating phases. On the other hand, the ANVs of the three phases differ from each other. The ANV of the QAVHI has three intersections with the Fermi energy level. The ANVs of the SOTI and NI have only one intersection with the Fermi energy level, and they have opposite signs. Therefore, the ANV is an ideal quantity for distinguishing the insulating phases. When the magnetic moment is perpendicular to the plane, the magneto-valley coupling strength reaches its maximum. By adjusting the gap size, the contour plot of the ANV with respect to energy and gap is obtained, as shown in Fig. 3(d), which follows the path of the white dashed line in the phase diagram. Fig. 3(e) corresponds to the yellow dashed line in the phase diagram: with the gap fixed, adjusting the direction of the magnetic moment drives the system through a QAVHI-SOTI-QAVHI phase transition, which matches the phase diagram exactly. 
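To make the valley-resolved Nernst response described by equations (3)-(7) concrete, the short sketch below evaluates the Berry curvature of equation (5) and the 300 K Nernst conductivities of equations (6)-(7) on a radial \(k\) grid. All numerical parameters (\(a\), \(t\), \(\Delta\), \(m_{z}\)) are illustrative placeholders rather than values fitted to ScClI; the point is only to show how the sign structure of the ANV/ANC curves follows from the two-band model.

```python
import numpy as np

# Illustrative two-band k.p parameters (placeholders, not values fitted to ScClI)
a, t = 3.8e-10, 1.2          # lattice constant [m], hopping integral [eV]
Delta, m_z = 0.30, 0.10      # band gap and magneto-valley coupling [eV]
kB_T = 8.617e-5 * 300.0      # k_B T at 300 K [eV]

k = np.linspace(1e5, 2e9, 4000)   # radial |k| grid around a valley [1/m]
dk = k[1] - k[0]

def berry_curvature(eta, n):
    """Equation (5): Omega_eta^n(k) for valley eta = +/-1 and band n = +/-1."""
    Dm = Delta / 2 + eta * m_z
    return -eta * n * (a * t) ** 2 * Dm / (2 * ((k * a * t) ** 2 + Dm ** 2) ** 1.5)

def nernst(eta, mu):
    """Equation (6): valley-resolved Nernst conductivity (in units of e*k_B/hbar)."""
    total = 0.0
    for n in (+1, -1):
        Dm = Delta / 2 + eta * m_z
        E = n * np.sqrt((k * a * t) ** 2 + Dm ** 2)       # equation (4)
        f = 1.0 / (np.exp((E - mu) / kB_T) + 1.0)         # Fermi distribution
        S = -(f * np.log(f + 1e-300) + (1 - f) * np.log(1 - f + 1e-300))
        # d^2k/(2*pi)^2 = k dk/(2*pi) for an azimuthally symmetric integrand
        total += np.sum(berry_curvature(eta, n) * S * k) * dk / (2 * np.pi)
    return total

mu_grid = np.linspace(-0.4, 0.4, 161)                     # chemical potential [eV]
N_K = np.array([nernst(+1, mu) for mu in mu_grid])
N_Kp = np.array([nernst(-1, mu) for mu in mu_grid])
ANV, ANC = N_K - N_Kp, N_K + N_Kp                          # equation (7)
print("ANV sign changes across the gap:", int(np.sum(np.diff(np.sign(ANV)) != 0)))
```

Scanning \(\Delta\) and \(m_{z}\) through values where \(\Delta_{m}\) changes sign at one valley should qualitatively reproduce the change in the ANV zero-crossing pattern that the text uses to tell the SOTI, QAVHI, and NI phases apart.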
## IV Conclusions In summary, we employ the multi-orbital tight-binding method to analyze the high-order topological properties of the 2D Janus ScClI. We give the complete phase diagram of ScClI's high-order topology, which is controlled by the magnetic moment direction \(\theta_{z}\) and the orbital energy level difference \(\Delta_{e}\). The theoretically predicted phase transition sequence SOTI-VHSM-QAVHI-VHSM-NI has been verified and realized in the DFT calculations under strain engineering. Meanwhile, due to the valley polarization characteristics of the system, the three types of insulators, SOTI, QAVHI, and NI, exhibit different anomalous Nernst conductivities. By exploiting the anomalous Nernst effect, we construct an ANV map that corresponds to the topological phase diagram. Our findings can be extended to other two-dimensional materials of the same type, paving the way for future measurements and characterizations of higher-order topological phases in a broader range of materials. ###### Acknowledgements. We acknowledge financial support from the National Natural Science Foundation of China (No. 11874113) and the Natural Science Foundation of Fujian Province of China (No. 2020J02018).
2305.08847
Probing bursty star formation by cross-correlating extragalactic background light and galaxy surveys
Understanding the star formation rate (SFR) variability and how it depends on physical properties of galaxies is important for developing and testing the theory of galaxy formation. We investigate how statistical measurements of the extragalactic background light (EBL) can shed light on this topic and complement traditional methods based on observations of individual galaxies. Using semi-empirical models of galaxy evolution and SFR indicators sensitive to different star formation timescales (e.g., H$\alpha$ and UV continuum luminosities), we show that the SFR variability, quantified by the joint probability distribution of the SFR indicators (i.e., the bivariate conditional luminosity function), can be characterized as a function of galaxy mass and redshift through the cross-correlation between deep, near-infrared maps of the EBL and galaxy distributions. As an example, we consider combining upcoming SPHEREx maps of the EBL with galaxy samples from Rubin/LSST. We demonstrate that their cross-correlation over a sky fraction of $f_\mathrm{sky}\sim0.5$ can constrain the joint SFR indicator distribution at high significance up to $z\sim2.5$ for mass-complete samples of galaxies down to $M_{*}\sim10^9\,M_{\odot}$. These constraints not only allow models of different SFR variability to be distinguished, but also provide unique opportunities to investigate physical mechanisms that require large number statistics such as environmental effects. The cross-correlations investigated illustrate the power of combining cosmological surveys to extract information inaccessible from each data set alone, while the large galaxy populations probed capture ensemble-averaged properties beyond the reach of targeted observations towards individual galaxies.
Guochao Sun, Adam Lidz, Andreas L. Faisst, Claude-André Faucher-Giguère
2023-05-15T17:58:26Z
http://arxiv.org/abs/2305.08847v2
Probing bursty star formation by cross-correlating extragalactic background light and galaxy surveys ###### Abstract Understanding the star formation rate (SFR) variability and how it depends on physical properties of galaxies is important for developing and testing the theory of galaxy formation. We investigate how statistical measurements of the extragalactic background light (EBL) can shed light on this topic and complement traditional methods based on observations of individual galaxies. Using semi-empirical models of galaxy evolution and SFR indicators sensitive to different star formation timescales (e.g., H\(\alpha\) and UV continuum luminosities), we show that the SFR variability, quantified by the joint probability distribution of the SFR indicators (i.e., the bivariate conditional luminosity function), can be characterized as a function of galaxy mass and redshift through the cross-correlation between deep, near-infrared maps of the EBL and galaxy distributions. As an example, we consider combining upcoming SPHEREx maps of the EBL with galaxy samples from Rubin/LSST. We demonstrate that their cross-correlation over a sky fraction of \(f_{\rm sky}\sim 0.5\) can constrain the joint SFR indicator distribution at high significance up to \(z\sim 2.5\) for mass-complete samples of galaxies down to \(M_{*}\sim 10^{9}\,M_{\odot}\). These constraints not only allow models of different SFR variability to be distinguished, but also provide unique opportunities to investigate physical mechanisms that require large number statistics such as environmental effects. The cross-correlations investigated illustrate the power of combining cosmological surveys to extract information inaccessible from each data set alone, while the large galaxy populations probed capture ensemble-averaged properties beyond the reach of targeted observations towards individual galaxies. keywords: galaxies: star formation - cosmology: cosmic background radiation - infrared: diffuse background ## 1 Introduction Both observations of star-forming galaxies at different cosmic epochs (Weisz et al., 2012; Emami et al., 2019; Faisst et al., 2019) and galaxy simulations resolving the gravitational collapse of star-forming gas and stellar feedback (Dominguez et al., 2015; Sparre et al., 2017; Gurvich et al., 2023; Hopkins et al., 2023) have led to an emerging picture where the star formation rate (SFR) of galaxies in certain regimes is highly time-variable -- a situation often referred to as bursty star formation. Elucidating the physical origin of bursty star formation and the transition to time-steady star formation is a key task for galaxy formation theory (Faucher-Giguere, 2018; Caplar and Tacchella, 2019; Iyer et al., 2020; Furlanetto and Mirocha, 2022; Orr et al., 2022; Hopkins et al., 2023). To this end, a crucial way to connect observations with theory is to investigate the variety of SFR indicators sensitive to different timescales of star formation. Among the large number of SFR indicators proposed in the literature, the H\(\alpha\)\(\,\lambda\)6563 nebular line emission and the UV continuum emission are most commonly considered (e.g., Emami et al., 2019; Flores Velazquez et al., 2021). Because H\(\alpha\) emission is predominantly produced by recombinations in H ii regions ionized by young, massive stars, it is expected to be sensitive to recent SFR variations on timescales as short as a few Myr. 
On the other hand, the UV continuum emission has substantial contributions from the non-ionizing radiation of older stellar populations and therefore is sensitive to significantly longer star formation timescales (\(\sim\)10 Myr when the SFR is time-steady and \(\sim\)100 Myr following extreme starbursts; see e.g., Flores Velazquez et al., 2021). The exact value depends on various factors, such as the wavelength of emission, the star formation history (SFH), and the stellar population synthesis (SPS) model assumed. Traditional methods relying on these SFR indicators usually require measuring the H\(\alpha\) and UV luminosities of individual galaxies simultaneously from flux limited surveys. Such measurements are expensive and likely susceptible to issues like selection bias that preferentially selects galaxies experiencing an ongoing burst of star formation (Dominguez et al., 2015; Faisst et al., 2019; Sun et al., 2023). Meanwhile, measuring the mean ratio \(L_{\rm H\alpha}/L_{\rm UV}\) (where \(L_{\rm UV}=\nu L_{\nu}\) is the UV luminosity per logarithmic frequency) alone for a limited sample of galaxies is insufficient to probe the SFR variability because it can be very sensitive to complications such as dust attenuation, whereas characterizing the joint distribution of \(L_{\rm H\alpha}\) and \(L_{\rm UV}\), especially its width, with a large galaxy sample can be a lot more informative (Sparre et al., 2017; Emami et al., 2019). These limitations together make an extensive, mass-complete study of bursty star formation in galaxies of different properties at different cosmic times challenging. Composed of the accumulated radiation from the all the sources in the universe outside the Milky Way, the extragalactic background light (EBL) offers a wealth of information about the galaxy and star formation physics across cosmic time (Finke et al., 2010, 2022). At near-infrared wavelengths (corresponding to rest-frame optical/UV at high redshifts), its potential to constrain the star formation process in high-redshift galaxies have attracted increasing interest in recent years (see e.g., Sun et al., 2021; Sun, 2022; Mirocha et al., 2022; Scott et al., 2022). Therefore, as an alternative approach to probe bursty star formation, we investigate in this work the possibility of statistically constraining the joint distribution of \(L_{\rm H\alpha}\) and \(L_{\rm UV}\) by cross-correlating cosmological surveys of the near-infrared EBL and galaxy distributions. Thanks to its unprecedented survey depth and sky coverage, the SPHEREx mission (Dore et al., 2014; Korngut et al., 2018; Crill et al., 2020) promises to accurately quantify sources of the EBL out to the epoch of reionization and thereby probe galaxy formation and evolution across a wide range of cosmic times. In synergy with wide-field galaxy surveys to be conducted by e.g., the Rubin Observatory Legacy Survey of Space and Time (Rubin/LSST; LSST Science Collaboration et al., 2009) or the Nancy Grace Roman Space Telescope (Spergel et al., 2015), it has been demonstrated that the EBL-galaxy cross-correlation can be detected at high significance in each spectral channel of SPHEREx, thereby allowing the mean rest-frame optical/UV emission spectrum of galaxies to be accurately measured (Cheng & Chang, 2022). It is therefore interesting to explore whether the EBL-galaxy cross-correlation can help constrain bursty star formation in galaxies, including its mass and redshift dependence, and provide a test of galaxy formation theory. 
In this paper, we conduct a proof-of-principle study of using the (near-infrared) EBL-galaxy cross-correlation to probe bursty star formation. In particular, we focus on the cross-correlation between intensity maps of H\(\alpha\) and UV continuum emission and the distribution of galaxies selected by their stellar mass. More specifically, we aim to constrain the joint distribution of \(L_{\rm H\alpha}\) and \(L_{\rm UV}\) as a probe of the SFR variability by measuring the zero-lag cross-correlation of the distribution of mass-selected galaxy samples and intensity maps of H\(\alpha\) and UV emission. As illustrated in Fig. 1, such a measurement can probe the decorrelation effect on the zero-lag cross-correlation caused by the scatter in the \(L_{\rm H\alpha}\)\(-\)\(L_{\rm UV}\) joint distribution, which is linked to the SFR variability (though complications due to e.g., dust attenuation exist; see Section 4). To measure the zero-lag cross-correlation, we calculate the Poisson-noise cross-bispectrum in Fourier space, which is the optimal way to separate the signal of interest from other sources of confusion, including large-scale clustering, instrument noise, and observational systematics. We forecast the prospects for measuring this cross-correlation using SPHEREx and Rubin/LSST and demonstrate its utility for probing bursty star formation in galaxies in different mass and redshift ranges. The remainder of this paper is organized as follows. In Section 2, we first introduce a simple, semi-empirical model for the \(L_{\rm H\alpha}\)\(-\)\(L_{\rm UV}\) joint distribution of galaxies conditioned on stellar mass. We then show, in the limit where Poisson fluctuations dominate over clustering, how the zero-lag cross-correlation in real space is equivalent to a measurement of the cross-bispectrum in Fourier space. Finally, we describe the full framework for constraining the \(L_{\rm H\alpha}\)\(-\)\(L_{\rm UV}\) joint distribution with a set of correlation coefficients defined by cross-bispectra. In Section 3, we present the main results of our analysis, including forecasts for the various cross-correlation signals and the implied constraints on the toy models considered in our case study for SPHEREx and Rubin/LSST. We discuss some limitations and caveats of the presented analysis in Section 4, before concluding in Section 5. A flat \(\Lambda\)CDM cosmology consistent with the measurements from Planck Collaboration et al. (2016) is adopted throughout this paper. ## 2 Methods ### Modeling the \(L_{\rm H\alpha}\)\(-\)\(L_{\rm UV}\) joint distribution #### 2.1.1 Overview While the modeling and analysis frameworks to be presented are generally applicable, for our proof-of-principle study in this paper, we investigate specifically the prospects for cross-correlating near-infrared EBL maps measured by SPHEREx with distributions of galaxies from the Rubin/LSST photometric redshift survey, which is expected to measure the mean rest-UV/optical spectrum of galaxies at high significance up to \(z\sim 4\) (Cheng & Chang, 2022). Given the wavelength coverage of SPHEREx (0.75-5\(\mu\)m) and the redshift range over which high-quality photo-\(z\) measurements can be achieved by Rubin/LSST, we aim to optimize the chance of detecting the decorrelation between H\(\alpha\) and UV luminosities due to bursty star formation, which is expected to be more pronounced in low-mass galaxies that are abundant but faint. 
For the longer-timescale SFR indicator, we choose the \(U\)-band (3500 A) luminosity1 rather than the more commonly used FUV (1500 A) luminosity because the former reaches lower redshifts (\(z\simeq 1.2\)) and maximizes the contrast in star formation timescales compared to H\(\alpha\)(Emami et al., 2021). Footnote 1: Throughout this paper, we use UV and \(U\)-band interchangeably when referring to the continuum emission to be studied together with H\(\alpha\). For simplicity, we refer to it with the subscript \(U\) hereafter. Performing the analysis at \(z\sim 1\) rather than \(z\sim 4\), is also motivated by the completeness limit of the Rubin/LSST photometric redshift survey, below which issues like selection bias due to incompleteness introduce significant systematics. Following Leauthaud et al. (2020), we can estimate the stellar mass range accessible by scaling from the 90% mass completeness limit of the COSMOS2015 catalog (Laigle et al., 2016). For Rubin/LSST with \(i\)-band limiting magnitude of \(i=26.8\), the 90% mass completeness limits are \(\log(M_{\rm H}^{\rm lim}/M_{\odot})=8.55,8.95,9.25,9.4\) at \(z=1,1.5,2,2.5\), respectively, well below stellar masses at which simulations predict galaxies at these redshifts to exhibit a considerable level of scatter in \(L_{\rm H\alpha}\)\(-\)\(L_{\rm UV}\) due to bursty star formation (Dominguez et al., 2015; Sparre et al., 2017). We analytically derive a conditional luminosity function (CLF)-based description of the different moments of \(L_{\rm H\alpha}\) and \(L_{U}\) necessary for the cross-correlation. Since galaxies in different stellar mass bins will be analyzed separately, the luminosity distributions are conditioned on stellar mass \(M_{*}\). The exact parameterization is based on semi-empirical models of galaxy evolution and H\(\alpha\) and UV emission, which are verified against the matching between the observed \(U\)-band luminosity functions (e.g., Moutard et al., 2020) and stellar mass functions (e.g., Shuntov et al., 2022) at redshifts of interest. #### 2.1.2 H\(\alpha\)\(-\)\(U\) bivariate conditional luminosity function (BCLF) Taking \(\Phi(L)\) to be the probability distribution function (PDF) of the luminosity \(L\) such that \(\int\Phi(L)dL=1\), we can write the joint PDF of \(L_{\rm H\alpha}\) and \(L_{U}\) conditioned on \(M_{*}\) as \[\Phi(L_{\rm H\alpha},L_{U}|M_{*})=\Phi(L_{\rm H\alpha}|L_{U},M_{*})\Phi(L_{U}| M_{*}), \tag{1}\] where on the right-hand side the first term \(\Phi(L_{\rm H\alpha}|L_{U},M_{*})\) is given by a log-normal distribution around the mean H\(\alpha\) luminosity \(\bar{L}_{\rm H\alpha}=L_{U,0}(L_{U}/L_{U,0})^{\beta}\), following the functional form from Mehta et al. (2015), with a logarithmic scatter of \(\sigma_{\alpha U}(M_{*})\). The second term, \(\Phi(L_{U}|M_{*})\), often referred to as the _conditional luminosity function_ (see e.g., Yang et al., 2003), is the distribution of \(L_{U}\) conditioned on \(M_{*}\) that can be determined by matching the observed stellar mass function and UV luminosity function. For \(\Phi(L_{U}|M_{*})\), we also consider a log-normal distribution specified by some mean relation \(\bar{L}_{U}(M_{*})\) and a logarithmic scatter \(\sigma_{\rm LM}\). 
Putting these ingredients together, we define a _bivariate conditional luminosity function_ (BCLF) of \(L_{\rm H\alpha}\) and \(L_{U}\), \(\Phi(L_{\rm H\alpha},L_{U}|M_{*})\), that is the product of \[\Phi(L_{\rm H\alpha}|L_{U},M_{*})=\frac{\exp\left\{\frac{-[\ln L_{\rm H\alpha}-\ln\bar{L}_{\rm H\alpha}(L_{U})]^{2}}{2\sigma_{\alpha U}^{2}(M_{*})}\right\}}{\sqrt{2\pi}\sigma_{\alpha U}(M_{*})L_{\rm H\alpha}} \tag{2}\] and \[\Phi(L_{U}|M_{*})=\frac{1}{\sqrt{2\pi}\sigma_{\rm LM}L_{U}}\exp\left\{\frac{-[\ln L_{U}-\ln\bar{L}_{U}(M_{*})]^{2}}{2\sigma_{\rm LM}^{2}}\right\}, \tag{3}\] which satisfies \[\bar{L}_{U}(M_{*})=e^{-\sigma_{\rm LM}^{2}/2}\int_{0}^{\infty}dL_{U}\Phi(L_{U}|M_{*})L_{U}. \tag{4}\] By the definition of the CLF, equation (3) can in principle be determined by finding the appropriate functional form of \(\bar{L}_{U}(M_{*})\) and the value of \(\sigma_{\rm LM}\) that best matches the observed \(U\)-band luminosity function \(\phi(L_{U})=dn/dL_{U}\) and stellar mass function \(\psi(M_{*})=dn/dM_{*}\), where \(n\) is the number density of galaxies. In this work, however, we construct a simple, parametric model of \(\bar{L}_{U}(M_{*})\) and \(\sigma_{\rm LM}\) based on the specific SFR-stellar mass relation from semi-empirical models of galaxy formation given by the UniverseMachine code (Behroozi et al., 2019) and the observed \(U\)-band luminosities of galaxies from Zhou et al. (2017). As a sanity check, we have verified our simple model by comparing its predicted \(U\)-band luminosity function against the observed ones at redshifts where measurements are available (Moutard et al., 2020). Figure 1: A graphical representation of the EBL–galaxy cross-correlation analysis investigated in this work for probing effects of bursty star formation on the joint distribution of SFR indicators \(L_{\rm H\alpha}\) and \(L_{\rm UV}\). Distributions of galaxies, H\(\alpha\) line intensity, and UV continuum intensity are cross-correlated in Fourier space to measure the cross-spectrum. This constrains the joint H\(\alpha\)–UV luminosity distribution, especially its width, which reflects the scatter in \(L_{\rm H\alpha}/L_{\rm UV}\) around the equilibrium value when star formation is time-steady. The Fourier-space cross-bispectrum analysis in the Poisson-noise dominated limit is formally equivalent to a zero-lag cross-correlation (i.e., stacking) on galaxy positions in real space, as demonstrated in Section 2.2, but allows foregrounds and observational systematics to be more easily separated (Section 4). To describe H\(\alpha\) and \(U\)-band continuum emission, we take \(L_{\rm H\alpha}=2.1\times 10^{41}\,{\rm erg\,s^{-1}}\,({\rm SFR}/M_{\odot}\,{\rm yr^{-1}})\), valid for the Chabrier IMF assumed in this work, and adopt the attenuation-corrected, empirical relation between \(U\)-band and H\(\alpha\) luminosities from Zhou et al. (2017), who provide a calibration of the \(U\)-band luminosity as an SFR indicator. Because both these luminosities and the stellar masses they are anchored to are dust-corrected, to properly model their observed strengths in our cross-correlation analysis, we must reapply dust attenuation. To do this self-consistently, we assume the \(A_{\rm FUV}(M_{*})\) relation from McLure et al. (2018) that is derived for star-forming galaxies at \(z\sim 2\)-3, \[A_{\rm FUV}=2.293+1.16\mathcal{M}_{10}+0.256\mathcal{M}_{10}^{2}+0.209\mathcal{M}_{10}^{3}, \tag{5}\] where \(\mathcal{M}_{10}=\log(M_{*}/10^{10}\,M_{\odot})\), and the Calzetti et al. (2000) 
dust attenuation curve, which implies \(A_{\rm H\alpha}=0.44A_{\rm FUV}\) and \(A_{U}=0.62A_{\rm FUV}\), respectively2. Footnote 2: Following McLure et al. (2018), we assume that \(E(B-V)_{\rm star}=0.76E(B-V)_{\rm neb}\) and derive \(A_{\lambda}=k_{\lambda}E(B-V)\) from \(k_{\lambda}=2.659\times(-2.156+1.509/\lambda-0.198/\lambda^{2}+0.011/\lambda^{ 3})+4.05\) for \(0.12\,\mu{\rm m}<\lambda<0.63\,\mu{\rm m}\) (rest-frame) or \(k_{\lambda}=2.659\times(-1.857+1.040/\lambda)+4.05\) for \(0.63\,\mu{\rm m}<\lambda<2.2\,\mu{\rm m}\), as in Calzetti et al. (2000). With the BCLF of \(L_{\rm H\alpha}\) and \(L_{U}\), the ensemble averages that enter our cross-correlation analysis can then be written as \[(L_{U}L_{\rm H\alpha}) \propto \int dM_{*}\Phi(L_{\rm H\alpha},L_{U}|M_{*})\psi(M_{*})\times \tag{6}\] \[\iint dL_{U}dL_{\rm H\alpha}10^{-0.4(A_{\rm H\alpha}+A_{U})}L_{U }L_{\rm H\alpha},\] \[(L_{\rm H\alpha}^{2}) \propto \int dM_{*}\Phi(L_{\rm H\alpha},L_{U}|M_{*})\psi(M_{*})\times \tag{7}\] \[\iint dL_{U}dL_{\rm H\alpha}10^{-0.8A_{\rm H\alpha}}L_{\rm H \alpha}^{2},\] and \[(L_{U}^{2}) \propto \int dM_{*}\Phi(L_{U}|M_{*})\psi(M_{*})\int dL_{U}10^{-0.8A_{U}} L_{U}^{2}, \tag{8}\] where \(\psi(M_{*})\) is the stellar mass function that we self-consistently obtain from UniverseMachine, and \(\langle...\rangle\) implicitly assumes that the ensemble average is taken for the sample of stellar-mass-selected galaxies over the mass bin \([M_{*},M_{*}+AM_{*}]\). We have also confirmed that using the latest observed stellar mass functions (e.g., Shmutov et al., 2022) has little impact on our results. As illustrated in Fig. 2 and summarized in Table 1, two toy models of the BCLF of \(L_{\rm H\alpha}\) and \(L_{U}\) are considered for our subsequent analysis. The fiducial model, Model I, assumes that the scatter, \(\sigma_{\alpha U}\), increases with decreasing stellar mass, whereas the contrasting model, Model II, assumes a constant \(\sigma_{\alpha U}=0.1\,{\rm dex}\) across all stellar mass bins. For both models, we further assume a constant \(\sigma_{\rm LM}=0.2\,{\rm dex}\), consistent with the scatter in the light-to-mass ratio observed and commonly assumed in semi-empirical models of high-\(z\) galaxies (More et al., 2009; Sun and Furlanetto, 2016), whereas \(\beta=1.25\) and \(L_{U,0}=3.55\times 10^{51}\,{\rm erg\,s^{-1}}\) are suggested by the best-fit relation to the observed correlation between H\(\alpha\) and \(U\)-band luminosities of galaxies (Zhou et al., 2017). We note that even though more accurately modeling the H\(\alpha\)-UV BCLF is beyond the scope of this study, our simple parameterization of the mean relations is grounded on empirical models that reliably describe galaxy evolution and the production of H\(\alpha\) and \(U\)-band emission at the redshifts of interest. The two contrasting cases for \(\sigma_{\alpha U}\) are chosen to roughly bracket the range of possible mass dependence of the width of \(L_{\rm H\alpha}\)-\(L_{U}\) distribution as a proxy for star formation burstiness, motivated by observations and numerical simulations (Weisz et al., 2012; Dominguez et al., 2015; Sparre et al., 2017; Emami et al., 2019; Faisst et al., 2019). #### 2.1.3 Connection to the EBL-galaxy cross-correlation From the ensemble averages defined above and their dependence on our BCLF model parameters, we can obtain a few simple and useful expressions that connect cross-correlation observables to these model parameters. 
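Before turning to those expressions, a small Monte Carlo sketch may help make the ensemble averages in equations (6)-(8) concrete: it draws \((L_{U},L_{\rm H\alpha})\) pairs from the log-normal forms of equations (2)-(3) for one stellar mass bin, reapplies the attenuation of equation (5), and forms the attenuated moments. The mean relation \(\bar{L}_{U}(M_{*})\) used below is a rough placeholder rather than the UniverseMachine-calibrated relation adopted in the text, and the scatters are quoted in dex as in Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# One stellar mass bin with Model I-like scatters (dex); illustrative values only
logM, sigma_aU, sigma_LM, beta = 9.0, 0.4, 0.2, 1.25
L_U0 = 3.55e51                                     # erg/s, from Table 1

def mean_LU(logM):
    # Placeholder mean U-band luminosity-mass relation (NOT the UniverseMachine one)
    return 10 ** (43.0 + (logM - 10.0))            # erg/s

def A_FUV(logM):
    # Equation (5): McLure et al. (2018) mass-attenuation relation
    m10 = logM - 10.0
    return 2.293 + 1.16 * m10 + 0.256 * m10 ** 2 + 0.209 * m10 ** 3

n = 200_000
L_U = mean_LU(logM) * 10 ** (sigma_LM * rng.standard_normal(n))                  # eq. (3)
L_Ha = L_U0 * (L_U / L_U0) ** beta * 10 ** (sigma_aU * rng.standard_normal(n))   # eq. (2)

# Reapply dust attenuation; A_Ha = 0.44 A_FUV and A_U = 0.62 A_FUV as in the text
L_Ha_obs = L_Ha * 10 ** (-0.4 * 0.44 * A_FUV(logM))
L_U_obs = L_U * 10 ** (-0.4 * 0.62 * A_FUV(logM))

# Attenuated moments entering eqs. (6)-(8), and the resulting decorrelation
m_cross = np.mean(L_U_obs * L_Ha_obs)
m_U2, m_Ha2 = np.mean(L_U_obs ** 2), np.mean(L_Ha_obs ** 2)
print("r =", m_cross / np.sqrt(m_U2 * m_Ha2))
# For a single narrow mass bin the attenuation factors are constants and cancel in r,
# but they do change the absolute amplitudes that set the detectability.
```

Weighting such samples by the stellar mass function \(\psi(M_{*})\) and summing over bins reproduces the mass integrals written out in equations (6)-(8).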
The observable most directly related to the cross-correlation analysis is the cross-correlation coefficient, \(r_{\rm\times}^{\rm g}(\ell)\), which characterizes how correlated the two SFR tracer fields are for the galaxy population \(g\) of interest. As will be shown in Section 2.3, when measured in the Poisson-noise limit in Fourier space, the cross-correlation coefficient \(r_{\rm\times,P}^{\rm g}\equiv r_{\rm\times}^{\rm g}(\ell\gg\ell_{\rm c})\) takes the simple form \[r_{\rm\times,P}^{\rm g}=\frac{B_{\ell,\rm P}^{U,{\rm H}\alpha,{\rm g}}}{\sqrt{B_{\ell,\rm P}^{U,U,{\rm g}}B_{\ell,\rm P}^{{\rm H}\alpha,{\rm H}\alpha,{\rm g}}}}\propto\frac{\langle L_{U}L_{\rm H\alpha}\rangle}{\sqrt{\langle L_{U}^{2}\rangle\langle L_{\rm H\alpha}^{2}\rangle}}. \tag{9}\] Here, the multipole moment \(\ell_{\rm c}\) denotes some characteristic scale (to be estimated from a power spectrum analysis) at which non-linear clustering is comparable to the Poisson noise, and \(B_{\ell,\rm P}^{i,j,k}\) denotes the Poisson-noise-limit cross-bispectrum of fields \(i\), \(j\), and \(k\). In Section 2.2, we will first motivate the understanding of the cross-correlation of interest in both real and Fourier spaces. We will then detail how to arrive at the proportionality, and derive the components of \(r_{\rm\times,P}^{\rm g}\) and their uncertainties, in Section 2.3. Combining equations (1) through (9), we can show that \(r_{\rm\times,P}^{\rm g}\) is in fact insensitive to the \(\bar{L}_{U}(M_{*})\) parameterization or the value of \(L_{U,0}\), and obtain \[\ln\left[r_{\rm\times,P}^{\rm g}\right]=-\left[\frac{\sigma_{\alpha U}^{2}}{2}+\frac{\sigma_{\rm LM}^{2}(\beta-1)^{2}}{2}\right]. \tag{10}\] It is easy to see that \(r_{\rm\times,P}^{\rm g}\) drops below unity if either \(\sigma_{\alpha U}\) or \(\sigma_{\rm LM}\) (as long as \(\beta\) is not strictly 1) is non-zero. While the latter characterizes the intrinsic scatter in the mass-to-light ratio of galaxies due to stochasticity in e.g., mass accretion rates (McBride et al., 2009; Fakhouri et al., 2010; van den Bosch et al., 2014), the former may be largely attributed to the time variability of the SFR. \begin{table} \begin{tabular}{c c c c c} \hline Model & \(\sigma_{\alpha U}\) & \(\sigma_{\rm LM}\) & \(\beta\) & \(L_{U,0}\)\({}^{+}\) \\ & (dex) & (dex) & & (erg s\({}^{-1}\)) \\ \hline I & 0.4, 0.3, 0.2, 0.15, 0.1, 0.05 & 0.2 & 1.25 & \(3.55\times 10^{51}\) \\ II & 0.1, 0.1, 0.1, 0.1, 0.1, 0.1 & 0.2 & 1.25 & \(3.55\times 10^{51}\) \\ \hline \multicolumn{5}{l}{\({}^{+}\) The exact value of \(L_{U,0}\) does not impact the cross-correlation coefficients} \\ \multicolumn{5}{l}{(Section 2.1.3) but affects the expected detectability of cross-correlation.} \\ \end{tabular} \end{table} Table 1: Specifications of the toy models considered in this work. The scatter \(\sigma_{\alpha U}\) is allowed to vary across the 6 stellar mass bins uniformly distributed over \(8.5<\log(M_{*}/M_{\odot})<11.5\). Figure 2: Illustration of the average \(\log\left(L_{\rm H\alpha}/L_{U}\right)\) and the scatter around it (specified in brackets in units of dex) as a function of stellar mass described by the baseline model (Model I) and its variant (Model II) considered in this work. The scatters are overplotted on the mean relation in the 6 stellar mass bins uniformly distributed over \(8.5<\log(M_{*}/M_{\odot})<11.5\). The growth of scatter with decreasing stellar mass as in Model I is often considered as an indication of an increasing level of bursty star formation. 
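Equation (10) can be evaluated directly for the two toy models in Table 1. A minimal sketch, assuming the scatters in equation (10) are expressed in natural-log units so that the dex values of Table 1 must be multiplied by \(\ln 10\):

```python
import numpy as np

ln10 = np.log(10.0)
beta, sigma_LM_dex = 1.25, 0.2
sigma_aU_by_model = {"Model I": [0.4, 0.3, 0.2, 0.15, 0.1, 0.05],
                     "Model II": [0.1] * 6}

def r_cross(sigma_aU_dex, sigma_LM_dex, beta):
    """Equation (10): Poisson-limit cross-correlation coefficient of Halpha and U."""
    s_aU, s_LM = sigma_aU_dex * ln10, sigma_LM_dex * ln10
    return np.exp(-(s_aU ** 2 / 2 + s_LM ** 2 * (beta - 1) ** 2 / 2))

for model, scatters in sigma_aU_by_model.items():
    values = ", ".join(f"{r_cross(s, sigma_LM_dex, beta):.2f}" for s in scatters)
    print(f"{model} (low- to high-mass bins): {values}")
```

Under this reading, Model I spans roughly \(r\approx 0.65\)-\(0.99\) from the lowest to the highest mass bin while Model II stays near 0.97 throughout, which is the mass-dependent contrast the forecast aims to detect.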
Because constraints on bursty star formation mainly come from the comparison of \(r_{\rm x,P}^{\rm g}\) in different stellar mass bins instead of its exact values, factors that are generally mass-independent will not significantly complicate the interpretation. For reference, assuming \(\sigma_{\rm LM}=0\), we have \(r_{\rm x,P}^{\rm g}=0.97,0.79\), and \(0.52\) for \(\sigma_{\alpha U}=0.1,0.3,\) and \(0.5\) dex, respectively. By analogy to the cross-correlation coefficient, \(r_{\rm x,P}^{\rm g}\), we can also define and derive the following auto-correlation coefficients for H\(\alpha\) and UV emission \[\ln\left(r_{\rm H\alpha,P}^{\rm g}\right)=\ln\left(\frac{C_{\ell,P}^{\rm H\alpha,g}}{\sqrt{B_{\ell,P}^{\rm H\alpha,H\alpha,g}\,C_{\ell,P}^{\rm g}}}\right)=-\frac{\sigma_{\alpha U}^{2}+\sigma_{\rm LM}^{2}\beta^{2}}{2} \tag{11}\] and \[\ln\left(r_{U,P}^{\rm g}\right)=\ln\left(\frac{C_{\ell,P}^{U,\rm g}}{\sqrt{B_{\ell,P}^{U,U,\rm g}\,C_{\ell,P}^{\rm g}}}\right)=-\frac{\sigma_{\rm LM}^{2}}{2}, \tag{12}\] where \(C_{\ell,P}^{\rm H\alpha,g}\) and \(C_{\ell,P}^{U,\rm g}\) are the Poisson-noise terms of the angular cross-power spectra of H\(\alpha\) and UV emission with galaxies, and \(C_{\ell,P}^{\rm g}\) is the Poisson-noise term of the galaxy auto-power spectrum, all to be defined in Section 2.3. Equations (10) through (12) therefore connect correlation coefficients directly measurable from the EBL-galaxy cross-correlation to parameters of our BCLF model, which can be individually constrained by solving these equations. Although we will focus on the analysis of the BCLF hereafter, for completeness, in Appendix A we also derive the mean and variance of the luminosity ratio, \(L_{\rm H\alpha}/L_{U}\), as two examples of other potentially useful measures of the BCLF and thus the star formation burstiness. ### Relationship between the real-space zero-lag cross-correlation and the Fourier-space cross-bispectrum Here, before presenting the full cross-correlation analysis framework in Fourier space, we start with a demonstration of how the Poisson-noise cross-bispectrum to be analyzed relates to the zero-lag cross-correlation (i.e., stacking) in real space, which might be more intuitive to understand as a well-established method to probe astrophysics beyond the reach of individually targeted observations (see e.g., Viero et al., 2022, for a recent stacking analysis of the dust-obscured star formation in high-\(z\) galaxies). By showing that they are essentially equivalent, we aim to build up the physical intuition to comprehend details of the full, Fourier-space treatment to be described in Section 2.3. To demonstrate the equivalence of cross-correlation analyses performed in Fourier and real spaces, it is sufficient to compare the signal-to-noise ratios (S/N) derived in both cases as a measure of the information available. For a zero-lag cross-correlation of intensity maps \(j\) and \(k\) with galaxies in real space, in the Poisson-noise dominated limit, the S/N scales as \[\left(\frac{\rm S}{\rm N}\right)_{\rm FS}\sim\left(N_{\rm gal}\right)^{1/2}\left(\frac{\langle\nu I_{\nu}^{j}\rangle}{\sigma_{\rm pix,N}^{j}}\right)\left(\frac{\langle\nu I_{\nu}^{k}\rangle}{\sigma_{\rm pix,N}^{k}}\right)\frac{\langle L_{j}L_{k}\rangle}{\langle L_{j}\rangle\langle L_{k}\rangle}, \tag{13}\] which is a product of the cross-correlation coefficient, the S/N per pixel of the intensity maps, and a scaling factor for the noise reduction when "stacking" on \(N_{\rm gal}\) galaxies. 
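As a rough sense of scale for equation (13), the snippet below simply multiplies out its three factors for a single galaxy sample; every input (galaxy count, per-pixel significance) is a hypothetical placeholder rather than a SPHEREx or Rubin/LSST specification. For the log-normal BCLF above, the moment ratio \(\langle L_{U}L_{\rm H\alpha}\rangle/(\langle L_{U}\rangle\langle L_{\rm H\alpha}\rangle)\) works out to \(\exp(\beta\sigma_{\rm LM}^{2})\) in natural-log units, which is used here.

```python
import numpy as np

ln10 = np.log(10.0)

# Hypothetical survey-like inputs (placeholders, not SPHEREx/LSST specifications)
N_gal = 1.0e6                 # galaxies in one mass/redshift bin
snr_pix_Ha = 0.02             # <nu I_nu>/sigma_pix for the Halpha channel
snr_pix_U = 0.05              # <nu I_nu>/sigma_pix for the U-band channel

# <L_U L_Ha>/(<L_U><L_Ha>) = exp(beta * sigma_LM^2) for the log-normal BCLF
beta, sigma_LM_dex = 1.25, 0.2
moment_ratio = np.exp(beta * (sigma_LM_dex * ln10) ** 2)

snr_stack = np.sqrt(N_gal) * snr_pix_Ha * snr_pix_U * moment_ratio   # equation (13)
print(f"moment ratio = {moment_ratio:.2f}, stacking-like S/N ~ {snr_stack:.1f}")
```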
Using definitions of cross-bispectrum and its uncertainty to be introduced in Section 2.3, we can show that the S/N of cross-bispectrum \(B_{\ell}^{ijk}\) defined in Fourier space _resembles equation (13) in the Poisson-noise limit_. Specifically, we have (see Section 2.3.2 for details) \[\left(\frac{\rm S}{\rm N}\right)_{\rm X}^{2} =\left(\frac{B_{\ell,P}^{ijk}}{\delta B_{\ell,P}^{ijk}}\right)^{2 }\approx\sum_{\ell_{1},\ell_{2},\ell_{3}}\frac{\left[B_{\ell}^{ijk}(\ell_{1}, \ell_{2},\ell_{3})\right]^{2}}{C_{\ell}^{i}(\ell_{1})C_{\ell}^{j}(\ell_{2})C_{ \ell}^{k}(\ell_{3})}\Omega_{k}\ell_{\rm max}\Delta\ell_{1}\Delta\ell_{2} \Delta\ell_{3}\] \[\approx\ell_{\rm max}^{4}\Omega_{8}\frac{\left(B_{\ell,P}^{ijk} \right)^{2}}{C_{\ell,P}^{i}C_{\ell,P}^{j}C_{\ell,P}^{k}}, \tag{14}\] where the approximation \(N_{\rm trip}\approx\ell_{\rm max}\Omega_{8}^{2}\Delta\ell_{1}\Delta\ell_{2} \Delta\ell_{3}\approx\ell_{\rm max}^{4}\Omega_{8}^{2}\) is applied. Note that here \(\ell_{\rm max}\approx\theta_{\rm pix}^{-1}\), where \(\theta_{\rm pix}\) is the pixel size in steradian, and \(\Omega_{8}\) is the survey size. As will be shown in Section 2.3.2, we can write the angular power spectra as \(C_{\ell,P}^{i}=\Omega_{8}N_{\rm gal}^{-1}\), \(C_{\ell,P}^{j}=\left(\sigma_{\rm pix,N}^{j}\right)^{2}\theta_{\rm pix}^{2}\), and \(C_{\ell,P}^{k}=\left(\sigma_{\rm pix,N}^{k}\right)^{2}\theta_{\rm pix}^{2}\), whereas the cross-bispectrum scales as \[B_{\ell,P}^{ijk}\propto\langle\nu I_{\nu}^{j}\rangle\langle\nu I_{\nu}^{k} \rangle\frac{\langle L_{j}L_{k}\rangle}{\langle L_{j}\rangle\langle L_{k} \rangle}. \tag{15}\] Putting together, we can recover the form of equation (13) from equation (14). Therefore, we stress that, while measuring a zero-lag cross-correlation in real space is mathematically equivalent to measuring a Poisson-noise cross-bispectrum in Fourier space, we choose to work in Fourier space below given practical considerations in observational data analysis that favor it as a more robust and unbiased method. For example, the finite angular and spectral resolution of SPHERE imply that the pure zero-lag cross-correlation is not strictly observable. The separation between the clustering contributions and Poisson fluctuations is then more transparent in Fourier space, as are the treatment of the beam, spectral resolution, foreground contamination and pixel noise, while the analysis may also be more easily generalized to incorporate clustering terms. ### The EBL-galaxy cross-correlation: signals and errors #### 2.3.1 Cross-power spectra and cross-bispectra Following Cheng and Chang (2022), we can write the cross-power spectra between H\(\alpha\)/UV emission and galaxies in the Poisson-noise limit as \[C_{\ell,P}^{\rm H\alpha,g}=\frac{1}{\sigma_{\rm g}}\Delta z_{ \rm g}\frac{d\nu I_{\nu}}{dz}\Big{|}_{\rm H\alpha} \tag{16}\] and \[C_{\ell,P}^{U,\rm g}=\frac{1}{\sigma_{\rm g}}\Delta z_{\rm g} \frac{d\nu I_{\nu}}{dz}\Big{|}_{U}. \tag{17}\] By analogy to the definition of cross-power spectra, three fields (two factors of intensity map and one factor of galaxy distribution) are required to calculate ensemble averages involving the second moment of luminosity, \(\langle\mathcal{O}(L^{2})\rangle\). 
We therefore define the cross-bispectrum as an integral of the differential flux densities \(d\langle\nu I_{\nu}\rangle/dz\) of H\(\alpha\) and UV emission (which themselves are mass integrals over the galaxy population described by the stellar mass function \(\psi(M_{*})\)) over redshift, conditioned on the subgroup of galaxies selected by stellar mass. When a narrow redshift range \(\Delta z_{\rm g}\ll 1\) is considered, the redshift integral \(\int_{\Delta z_{\rm g}}F(z)dz\) can be approximated as \(F(z_{\rm g})\Delta z_{\rm g}\), which simplifies the calculations. For H\(\alpha\) (line) and UV (continuum) emission, we can write their differential flux densities as3 Footnote 3: Note that we omit the convolution with the conditional PDFs of the luminosities in the two expressions below for brevity, but include them in the full expressions for \(B_{\ell,\mathrm{P}}\) below. \[\frac{d\nu I_{\nu}}{dz}\Big{|}_{\mathrm{H}\alpha} =\frac{1}{\Delta z_{\mathrm{H}\alpha}}\int_{M_{*,\mathrm{min}}}^{M _{*,\mathrm{max}}}dM_{*}\psi(M_{*})\frac{\nu L_{\mathrm{H}\alpha}}{4\pi D_{L}^{ 2}}\frac{d\chi}{d\nu}D_{A,\mathrm{com}}^{2}\] \[\simeq\frac{c/H(z_{\mathrm{g}})}{\Delta z_{\mathrm{g}}}\frac{ \int dM_{*}\psi(M_{*})L_{\mathrm{H}\alpha}}{16\pi^{2}\sigma_{\mathrm{g}}(1+z_{ \mathrm{g}})^{4}\chi^{2}(z_{\mathrm{g}})}\, \tag{18}\] \[\frac{d\nu I_{\nu}}{dz}\Big{|}_{U} =\int_{M_{*,\mathrm{min}}}^{M_{*,\mathrm{max}}}dM_{*}\psi(M_{*}) \frac{L_{U}}{4\pi D_{L}^{2}}\frac{d\chi}{dz}D_{A,\mathrm{com}}^{2}\] \[=\frac{c/H(z_{\mathrm{g}})}{4\pi(1+z_{\mathrm{g}})^{2}}\int dM_{* }\psi(M_{*})L_{U}\, \tag{19}\] and the density of galaxies (per unit solid angle) is \[\sigma_{\mathrm{g}}\simeq\Delta z_{\mathrm{g}}\frac{dN_{\mathrm{g}}}{dzd \Omega}=\frac{\chi^{2}(z_{\mathrm{g}})c\Delta z_{\mathrm{g}}}{H(z_{\mathrm{g}} )}\int\ dM_{*}\psi(M_{*})\, \tag{20}\] where \(H(z)\), \(\chi\), \(D_{L}\), and \(D_{A,\mathrm{com}}=\chi\) are the Hubble parameter, the comoving radial distance, the luminosity distance, and the comoving angular diameter distance, respectively. The \(\chi\) gradients are given by \(d\chi/d\nu=c(1+z)/[\nu H(z)]\) for the observed frequency \(\nu\), and \(d\chi/dz=c/H(z)\). We assume \(\Delta z_{\mathrm{g}}\approx\Delta z_{\mathrm{H}\alpha}=(1+z)/R\) with \(R\) being the spectral resolving power. Note that both \(L_{U}\) and \(L_{\mathrm{H}\alpha}\) are defined to be non-specific luminosities in units of \(\mathrm{erg\,s^{-1}}\) that, to the first order, scale with the SFR and thus \(M_{*}\). Unless otherwise specified when the mass integral spans the full range of stellar mass from \(M_{*,\mathrm{min}}=10^{7.5}\,M_{\odot}\) to \(M_{*,\mathrm{max}}=10^{11.5}\,M_{\odot}\) (as in equations (18) and (19)), the stellar mass integral is by default over \(AM_{*}\), which selects the subgroup of galaxies in the stellar mass bin of interest. 
With equations (18) and (19), the Poisson-noise-limit cross-bispectrum of the H\(\alpha\) line, \(U\)-band continuum, and galaxy fields can be written as \[B_{\ell,\mathrm{P}}^{\mathrm{\chi}} \equiv B_{\ell,\mathrm{P}}^{\mathrm{H}\alpha,U,\mathrm{g}} \tag{21}\] \[=\frac{1}{\sigma_{\mathrm{g}}}\int dz\frac{H(z)}{c\chi^{2}(z) \Delta z_{\mathrm{H}\alpha}|_{\mathrm{g}}}\int\ dM_{*}\psi(M_{*})\times\] \[\left[\frac{\nu L_{\mathrm{H}\alpha}}{4\pi D_{L}^{2}}\frac{d\chi }{d\nu}D_{A,\mathrm{com}}^{2}\right]\left[\frac{L_{U}}{4\pi D_{L}^{2}}\frac{ d\chi}{dz}D_{A,\mathrm{com}}^{2}\right]\Phi(L_{\mathrm{H}\alpha},L_{U}|M_{*})\] \[=\frac{c\int dM_{*}\Phi(L_{\mathrm{H}\alpha},L_{U}|M_{*})\psi(M_ {*})L_{\mathrm{H}\alpha}L_{U}}{16\pi^{2}\sigma_{\mathrm{g}}H(z_{\mathrm{g}} )(1+z_{\mathrm{g}})^{3}\chi^{2}(z_{\mathrm{g}})}\,\] where \(\Delta z_{\mathrm{H}\alpha|_{\mathrm{g}}}\approx\Delta z_{\mathrm{H}\alpha} \approx\Delta z_{\mathrm{g}}\) denotes the redshift range over which galaxy and emission intensity fields overlap. Similarly, for the \(\langle L_{U}^{2}\rangle\) and \(\langle L_{\mathrm{H}\alpha}^{2}\rangle\) (auto-correlation) terms in the denominator of equation (9), we have \[B_{\ell,\mathrm{P}}^{U,U,\mathrm{g}} =\frac{1}{\sigma_{\mathrm{g}}}\int\ dz\frac{H(z)}{c\chi^{2}(z)} \int\ dM_{*}\psi(M_{*},z)\] \[\times\left[\frac{L_{U}}{4\pi D_{L}^{2}}\frac{d\chi}{dz}D_{A, \mathrm{com}}^{2}\right]^{2}\Phi(L_{U}|M_{*})\] \[=\frac{c\Delta z_{\mathrm{g}}}{H(z_{\mathrm{g}})}\frac{\int dM_{*} \Phi(L_{U}|M_{*})\psi(M_{*},z_{\mathrm{g}})L_{U}^{2}}{16\pi^{2}\sigma_{ \mathrm{g}}(1+z_{\mathrm{g}})^{4}\chi^{2}(z_{\mathrm{g}})}\, \tag{22}\] and \[B_{\ell,\mathrm{P}}^{\mathrm{H}\alpha,\mathrm{H}\alpha,\mathrm{ g}} =\frac{1}{\sigma_{\mathrm{g}}}\int\ dz\frac{H(z)}{c\chi^{2}(z)} \frac{1}{\Delta z_{\mathrm{H}\alpha}^{2}}\int\ dM_{*}\psi(M_{*},z)\] \[\times\left[\frac{\nu L_{\mathrm{H}\alpha}}{4\pi D_{L}^{2}}\frac{ d\chi}{d\nu}D_{A,\mathrm{com}}^{2}\right]^{2}\Phi(L_{\mathrm{H}\alpha},L_{U}|M_{*})\] \[=\frac{1}{\sigma_{\mathrm{g}}}\frac{c}{H(z_{\mathrm{g}})\Delta z_{ \mathrm{g}}}\frac{\int dM_{*}\psi(M_{*},z_{\mathrm{g}})L_{\mathrm{H}\alpha}^{ 2}}{16\pi^{2}(1+z_{\mathrm{g}})^{2}\chi^{2}(z_{\mathrm{g}})}. \tag{23}\] #### 2.3.2 Uncertainties on cross-power spectra and cross-bispectra For the cross-power spectrum between an intensity map \(i\) (H\(\alpha\) or \(U\)-band intensity map here) and galaxies, the uncertainty for a given multipole moment \(\ell\) binned in a width of \(\Delta\ell\) can be expressed as \[\left(\delta c_{\ell,\mathrm{P}}^{i,\mathrm{g}}\right)^{2}=\frac{1}{f_{\mathrm{ sky}}(2\ell+1)\Delta\ell}\left[\left(c_{\ell,\mathrm{P}}^{i,\mathrm{g}}\right)^{2}+C_{\ell, \mathrm{N}}^{i}C_{\ell,\mathrm{P}}^{\mathrm{g}}\right]\, \tag{24}\] where \(f_{\mathrm{sky}}\) is the sky covering fraction and we assume here that auto-correlations of the intensity map and galaxies are dominated by the instrument noise and the Poisson noise, respectively, on the small scales considered in our analysis. In practice, to obtain the net effective uncertainty of the cross-power spectrum, we further scale down equation (24) by a factor of 300 to approximate the gain in sensitivity from binning together modes over \(10^{4}<\ell<10^{5}\). 
This renders the S/N of \(C_{\ell,\mathrm{P}}^{\mathrm{H}\alpha,\mathrm{g}}\) (or \(C_{\ell,\mathrm{P}}^{U,\mathrm{g}}\)) substantially higher than that of \(B^{\mathrm{H}\alpha,\mathrm{H}\alpha,\mathrm{g}}_{\ell,\mathrm{P}}\) (or \(B^{U,U,\mathrm{g}}_{\ell,\mathrm{P}}\)), as will be detailed below, and therefore the S/N of the auto-correlation coefficient \(r^{\mathrm{g}}_{\mathrm{H}\alpha,\mathrm{P}}\) (or \(r^{\mathrm{g}}_{U,\mathrm{P}}\)) can be simply approximated as twice that of \(B^{\mathrm{H}\alpha,\mathrm{H}\alpha,\mathrm{g}}_{\ell,\mathrm{P}}\) (or \(B^{U,U,\mathrm{g}}_{\ell,\mathrm{P}}\)). Figure 3: A comparison of the error budget for \(C_{\ell}\) of H\(\alpha\) and UV emission (top panel), as well as the galaxy distribution (bottom panel) at \(z\approx 1.5\). At high multipoles \(\ell\geq 10^{4}\), uncertainties of the intensity (galaxy) power spectra are strongly dominated by the instrument noise \(C_{\ell,N}\) (Poisson noise \(C_{\ell,\mathrm{P}}^{\mathrm{g}}\)). Note that, unlike in the bottom panel, the sample variances in the top panel are evaluated by integrating over the full range of stellar mass [\(M_{*,\mathrm{min}}\), \(M_{*,\mathrm{max}}\)]. Following Kayo et al. (2013), we can write the bispectrum variance in the Gaussian approximation as \[{\rm Var}\left[B^{ijk}_{\ell}(\ell_{1},\ell_{2},\ell_{3})\right]= \frac{\Omega_{\rm s}C^{i}_{\ell}(\ell_{1})C^{j}_{\ell}(\ell_{2})C^{k}_{\ell}( \ell_{3})}{N_{\rm trip}(\ell_{1},\ell_{2},\ell_{3})}, \tag{25}\] where \(\Omega_{\rm s}\) is the total survey area over which EBL and galaxy surveys overlap (\(\Omega_{\rm s}\approx 5.5\,\rm sr\) for SPHEREx and Rubin/LSST), and \(N_{\rm trip}(\ell_{1},\ell_{2},\ell_{3})\) is the number of multipole triplets that form closed triangles in Fourier space, which can be approximated in the limit of large multipole bins as \[N_{\rm trip}\simeq\frac{\Omega_{\rm s}^{2}\ell_{1}\ell_{2}\ell_{3}\Delta\ell_ {1}\Delta\ell_{2}\Delta\ell_{3}/2\pi^{3}}{\sqrt{2\ell_{1}^{2}\ell_{2}^{2}+2 \ell_{1}^{2}\ell_{3}^{2}+2\ell_{2}^{2}\ell_{3}^{2}-\ell_{1}^{4}-\ell_{2}^{4}-\ell_{3}^{4}}}. \tag{26}\] Each of the three angular auto-power spectra in the numerator of equation (25) has contributions from clustering4, Poisson noise, and instrument noise (for intensity maps of H\(\alpha\) and UV emission) whose relative importance varies across \(\ell\). Specifically, assuming the Limber approximation and a narrow redshift range \(\Delta z_{\rm g}\ll 1\), we have (Cheng & Chang, 2022) Footnote 4: For simplicity, we ignore the nonlinear clustering whose impact on scales smaller than \(\ell\sim 10^{4}\) is expected to be subdominant to that of the Poisson noise (see e.g., Cheng & Bock, 2022). \[C^{\rm g}_{\ell,\rm cl}(\ell)=\frac{H(z_{\rm g})\langle b\rangle_{\rm g}^{2}(z_{\rm g })}{\Delta z_{\rm g}c\chi^{2}(z_{\rm g})}P_{\delta\delta}\left[k=\frac{\ell}{ \chi(z_{\rm g})},z_{\rm g}\right] \tag{27}\] and \[C^{\rm g}_{\ell,\rm P}=\left(\frac{dN_{\rm g}}{d\Omega}\right)^{-1}=\sigma_{ \rm g}^{-1} \tag{28}\] for the auto-power spectrum of galaxies, where \(\langle b\rangle_{\rm g}\) is the galaxy bias averaged over the ensemble of galaxies in the stellar mass bin of width \(\Delta M_{*}\) (see Appendix B for a more detailed description of the various bias factors involved) and \(P_{\delta\delta}\) is the dark matter power spectrum. Similarly, for H\(\alpha\) and UV emission, the auto-power spectra are \[C^{\rm H\alpha}_{\ell,\rm cl}(\ell)= \int dz\frac{H(z)}{c\chi^{2}(z)}b_{\rm H\alpha}^{2}(z)\left[\left.
\frac{d\nu I_{\nu}}{dz}\right|_{\rm H\alpha}(z)\right]^{2}\] \[\times P_{\delta\delta}\left[k=\frac{\ell}{\chi(z)},z\right], \tag{29}\] \[C^{\rm H\alpha}_{\ell,\rm P}= \int dz\frac{H(z)}{c\chi^{2}(z)}\frac{1}{\Delta z_{\rm H\alpha}^{ 2}}\int_{M_{*,\rm min}}^{M_{*,\rm max}}dM_{*}\psi(M_{*},z)\] \[\times\left[\frac{\nu L_{\rm H\alpha}}{4\pi D_{L}^{2}}\frac{d\chi }{d\nu}D_{A,\rm com}^{2}\right]^{2}\Phi(L_{\rm H\alpha},L_{U}|M_{*}), \tag{30}\] \[C^{U}_{\ell,\rm cl}(\ell)= \int dz\frac{H(z)}{c\chi^{2}(z)}b_{U}^{2}(z)\left[\left.\frac{d \nu I_{\nu}}{dz}\right|_{U}(z)\right]^{2}\] \[\times P_{\delta\delta}\left[k=\frac{\ell}{\chi(z)},z\right], \tag{31}\] and \[C^{U}_{\ell,\rm P}= \int dz\frac{H(z)}{c\chi^{2}(z)}\int_{M_{*,\rm min}}^{M_{*,\rm max }}dM_{*}\psi(M_{*},z)\] \[\times\left[\frac{L_{U}}{4\pi D_{L}^{2}}\frac{d\chi}{dz}D_{A,\rm com }^{2}\right]^{2}\Phi(L_{U}|M_{*}). \tag{32}\] As shown by Fig. 3, on small scales the Poisson noise and instrument noise dominate the angular power spectra of galaxies and emission fields, respectively. Therefore, we take \(C^{i}_{\ell}=C^{\rm g}_{\ell,\rm P}=\sigma_{\rm g}^{-1}\), \(C^{j}_{\ell}=C^{\rm H\alpha}_{\ell,\rm N}=\sigma_{\rm pix,N}^{2}\big|_{\lambda_{\rm H\alpha}(1+z)}\,\Omega_{\rm pix}\), and \(C^{k}_{\ell}=C^{U}_{\ell,\rm N}=\sigma_{\rm pix,N}^{2}\big|_{\lambda_{U}(1+z)}\,\Omega_{\rm pix}\), where \(\sigma_{\rm pix,N}\) is the projected surface brightness sensitivity of the SPHEREx all-sky survey5 evaluated at the observed wavelength, and \(\Omega_{\rm pix}\) is the solid angle of a SPHEREx pixel. To estimate the detectability of the bispectrum in terms of its total signal-to-noise ratio (S/N), we adopt a universal bin size of \(\Delta\ell=1000\) and sum the S/N of individual \(\ell\) bins over \(\ell_{\rm min}=10^{4}\) to \(\ell_{\rm max}=10^{5}\) where the angular power spectra are well within the Poisson-noise-dominated regime, namely Footnote 5: See the public data product of surface brightness sensitivity available at [https://github.com/SPHEREx/Public-products/blob/master/Surface_Brightness_V28_base_cbe.txt](https://github.com/SPHEREx/Public-products/blob/master/Surface_Brightness_V28_base_cbe.txt) \[\left(\frac{\rm S}{\rm N}\right)^{2}_{\times}=\sum_{\{\ell_{1},\ell_{2},\ell_{3} \}=\ell_{\rm min}}^{\ell_{\rm max}}\frac{\left(B^{ijk}_{\ell,\rm P}\right)^{2}}{{\rm Var}\left[B^{ijk}_{\ell}(\ell_{1}, \ell_{2},\ell_{3})\right]}. \tag{33}\] Finally, from the definition of \(r^{\rm g}_{\chi,\rm P}\), we have \[\left(\frac{\rm S}{\rm N}\right)^{-2}_{r_{\times}}=\left(\frac{\rm S}{\rm N} \right)^{-2}_{\times}+\frac{1}{4}\left[\left(\frac{\rm S}{\rm N}\right)^{-2}_{\rm H \alpha}+\left(\frac{\rm S}{\rm N}\right)^{-2}_{U}\right]. \tag{34}\] ## 3 Results In this section, we first present the detectability of the various cross-bispectra related to our case study, where we cross-correlate EBL maps of rest-frame H\(\alpha\) and UV (\(U\)-band) emission and photometric galaxies to be observed with SPHEREx and Rubin/LSST, respectively (Section 3.1). Then, we show the constraints on BCLF model parameters derived from the predicted sensitivity to the correlation coefficients, \(r^{\rm g}_{\chi,\rm P},r^{\rm g}_{\rm H\alpha,\rm P}\), and \(r^{\rm g}_{U,\rm P}\) (Section 3.2). The toy models considered here suffice to forecast the potential for EBL-galaxy cross-correlations to distinguish these limiting cases and thereby shed light on bursty star formation. ### Detectability of cross-correlation signals In the left panel of Fig.
4, we show the predicted detectability of the cross-bispectrum, \(B_{\ell,\mathrm{P}}^{\mathrm{H}\alpha,U,\mathrm{g}}\), of H\(\alpha\) and \(U\)-band intensity maps measured by the all-sky survey with SPHEREx and photo-\(z\) galaxies surveyed by Rubin/LSST in each of the stellar mass bins. The S/N numbers quoted here are evaluated for _a single pair_ of spectral channels corresponding to a narrow redshift range of \(\Delta z=(1+z)/R\) around \(z=1.5\), where \(R=41\) is the spectral resolving power of SPHEREx in bands relevant to this study. We note that at the redshifts of interest for this study (\(z\sim 1.5\)-\(2.5\)), the adopted \(\Delta z\) happens to be comparable to the level of photometric redshift uncertainty expected for the nominal 10-year Rubin/LSST survey, which may be further improved over the course of the survey by the addition of near-IR and UV photometry from other existing/concurrent surveys, such as Roman, Euclid, and SPHEREx (Graham et al., 2018, 2020). Due to the trade-off between the brightness of sources and the number density of galaxies contributing to the intensity fields and available for cross-correlation, the expected S/N of \(B_{\ell,\mathrm{P}}^{\mathrm{H}\alpha,U,\mathrm{g}}\) peaks at intermediate mass scales \(M_{*}\sim 10^{10.5}\ M_{\odot}\), although a high-significance detection can be achieved in all but the lowest mass bins. Meanwhile, from the comparison between cases with and without dust attenuation, it is clear that the expected detectability of the EBL-galaxy cross-correlation is highly sensitive to the treatment of dust attenuation (especially for massive galaxies that are more dust-rich), which has sometimes been neglected for simplicity in previous work even though it will likely reduce the S/N of EBL observations with SPHEREx (e.g., Gong et al., 2017). In the right panel of Fig. 4, we show how the S/N of each bispectrum involved in the definition of \(r_{\mathrm{\chi,P}}^{\mathrm{g}}\) can be propagated to obtain the S/N of \(r_{\mathrm{\chi,P}}^{\mathrm{g}}\) (see equations (9) and (34)). As shown by the comparison, the detectability of \(r_{\mathrm{\chi,P}}^{\mathrm{g}}\), from which constraints on \(\sigma_{\alpha U}\) (and other BCLF model parameters) are drawn, evolves across the mass bins in a similar way to the bispectra and is mainly set by how well \(B_{\ell,\mathrm{P}}^{\mathrm{H}\alpha,U,\mathrm{g}}\) can be measured. In contrast to the predicted constraints on the bispectra presented in Fig. 4, which are evaluated for a single redshift interval using one pair of SPHEREx spectral channels, we consider broader redshift bins when measuring the BCLF of \(L_{\mathrm{H}\alpha}\) and \(L_{U}\) from the correlation coefficients, in order to optimize the parameter constraints. Specifically, we define 3 redshift bins with bin centers \(z_{c}=1.5\), \(z_{c}=2.0\), and \(z_{c}=2.5\), and bin edges \([1.25,1.75]\), \([1.75,2.25]\), and \([2.25,2.75]\), respectively. We further divide each redshift bin into \(\mathcal{N}=0.5R/(1+z_{c})\) redshift intervals with \(R=41\), which yields \(\mathcal{N}=8,7,6\), respectively. The uncertainties in the correlation coefficients evaluated for \(z_{c}\) and \(\Delta z=(1+z_{c})/R\) are consequently scaled by a factor of \(1/\sqrt{\mathcal{N}}\) to approximate the effect of binning together \(\mathcal{N}\) redshift intervals. Fig.
5 shows the constraints on the cross-correlation coefficient, \(r_{\mathrm{\chi,P}}^{\mathrm{g}}\), in each stellar mass bin predicted by Models I and II in three broad redshift bins as labeled on the vertical axis. With the help of the additional statistical power from redshift binning, we expect cross-correlating EBL maps from SPHEREx with photo-\(z\) galaxies from Rubin/LSST to distinguish Model II from Model I by detecting the decrease of \(r_{\mathrm{\chi,P}}^{\mathrm{g}}\) towards lower stellar masses at high significance up to \(z\sim 3\). It is noteworthy that even though the difference between the two toy models is modest in intermediate-mass bins, strong evidence for decorrelation may still be obtained thanks to the expected high sensitivity to the bispectra at these mass scales. Detecting such a decorrelation between H\(\alpha\) and UV luminosities in low-mass galaxies and characterizing its mass dependence via the EBL-galaxy cross-correlation described here can be a smoking gun for an elevated level of bursty star formation, although alternative explanations may exist (see discussion in Section 4). We note that, for simplicity, instead of estimating the actual galaxy counts taking into account the mass incompleteness, we show the 90% mass completeness limit in Fig. 5 and note that the constraining power in lower mass bins should therefore be taken as an upper limit due to incompleteness. ### Constraints on BCLF model parameters From the expected constraints on \(r_{\mathrm{\chi,P}}^{\mathrm{g}}\) shown in Fig. 5, together with the similarly derived constraints on the auto-correlation coefficients \(r_{\mathrm{H}\alpha,\mathrm{P}}^{\mathrm{g}}\) and \(r_{U,\mathrm{P}}^{\mathrm{g}}\) (see equations (11) and (12)), we can directly constrain the H\(\alpha\)-UV BCLF model assumed. To estimate the parameter constraints, we employ a Fisher matrix formalism, which is based on a quadratic expansion of the log-likelihood of the data vector \(\hat{\mathbf{r}}\), namely \[F_{ij}=\sum_{k}\frac{1}{\mathrm{var}(r_{k})}\frac{\partial r_{k}}{\partial \theta_{i}}\frac{\partial r_{k}}{\partial\theta_{j}}\,, \tag{35}\] with \(\mathbf{r}(\mathbf{\theta})=\left(r_{\mathrm{\chi,P}}^{\mathrm{g}}(\mathbf{\theta}),r_{ \mathrm{H}\alpha,\mathrm{P}}^{\mathrm{g}}(\mathbf{\theta}),r_{U,\mathrm{P}}^{ \mathrm{g}}(\mathbf{\theta})\right)\) being the model vector for \(\mathbf{\theta}=(\sigma_{\alpha U},\sigma_{\mathrm{LM}},\beta)\). We neglect the covariance between the correlation coefficients, which is likely a reasonable approximation in the instrument-noise-dominated regime relevant to this work (Section 2.3.2), and no priors are assumed on the parameters. The resulting constraints on the BCLF model parameters in Model I are shown in Fig. 6 for two example stellar mass bins where strong evidence for a decorrelation between H\(\alpha\) and \(U\)-band luminosities may exist. As shown by the ellipses, the cross-correlation between SPHEREx and Rubin/LSST surveys can place useful constraints on our main proxy for bursty star formation, \(\sigma_{\alpha U}\), up to Figure 4: Left: S/N of the Poisson-noise cross-bispectra of H\(\alpha\), UV, and galaxies at \(z=1.25\) in different stellar mass bins, before and after including dust attenuation. The fiducial model (Model I) is assumed and the total S/N is quoted for the sum over all stellar mass bins.
Right: a comparison of the detectability of the cross-correlation coefficient, \(r_{\mathrm{\chi,P}}^{\mathrm{g}}\) (black), as well as its 3 components, namely \(B_{\ell,\mathrm{P}}^{\mathrm{H}\alpha,U,\mathrm{g}}\) (red), \(B_{\ell,\mathrm{P}}^{\mathrm{H}\alpha,\mathrm{H}\alpha,\mathrm{g}}\) (blue), and \(B_{\ell,\mathrm{P}}^{U,U,\mathrm{g}}\) (yellow). The fiducial model (Model I) is assumed, after including dust attenuation. All the data displayed here are evaluated for a single redshift interval, without redshift binning (see Section 3.1). \(z\sim 2.5\), despite the degeneracy between \(\sigma_{\alpha U}\) and \(\beta\). At redshifts where the constraints are tight enough (e.g., \(z\sim 1.5\)), it is also possible to quantify by how much \(\sigma_{\alpha U}\) differs between different mass bins, which serves as another way to probe the strength of star formation burstiness (see Section 4 for further discussion). Table 2 summarizes the constraints on the three BCLF model parameters in terms of the fractional uncertainties derived from the diagonal of the inverse of the Fisher matrix in each redshift and mass bin. Applying the Fisher matrix formalism to all mass and redshift bins and extracting the variance on \(\sigma_{\alpha U}\), we ultimately derive the constraints on the \(\log(L_{\rm H\alpha}/L_{U})\)-\(M_{*}\) relation available from the EBL-galaxy cross-correlation using SPHEREx and Rubin/LSST data, which can be readily compared with observations of individual galaxies. The resulting constraints are illustrated in Fig. 7 in terms of the upper (dashed curves and filled triangles) and lower bounds (dotted curves and empty triangles) on the width of the \(\log(L_{\rm H\alpha}/L_{U})\)-\(M_{*}\) distribution. From these constraints, it can be seen that any stellar mass dependence of \(\sigma_{\alpha U}\) resulting from changes in the SFR variability may be tested by the cross-correlation analysis up to \(z\sim 2\), beyond which data from SPHEREx and Rubin/LSST cannot provide sufficient constraining power. ## 4 Discussion By cross-correlating EBL and galaxy surveys to be conducted by SPHEREx and Rubin/LSST as an example, we have so far demonstrated that statistical constraints on the BCLF of \(L_{\rm H\alpha}\) and \(L_{\rm UV}\) may be obtained at high significance and used to probe bursty star formation across a wide range of galaxy mass and redshift. Next, we supplement the presented analysis with a semi-quantitative discussion of the caveats, limitations, and implications of the method explored in this work. \begin{table} \begin{tabular}{c c c c c} \hline Redshift & \(\log(M_{*}/M_{\odot})\) & \(\sigma_{\alpha U}/\sigma_{\alpha U}^{\rm fid}\) & \(\beta/\beta^{\rm fid}\) & \(\sigma_{\rm LM}/\sigma_{\rm LM}^{\rm fid}\) \\ \hline \(1.25<z<1.75\) & \(9-9.5\) & 0.133 & 0.265 & 0.153 \\ \(1.25<z<1.75\) & \(9.5-10\) & 0.082 & 0.076 & 0.046 \\ \(1.75<z<2.25\) & \(9-9.5\) & 0.242 & 0.482 & 0.280 \\ \(1.75<z<2.25\) & \(9.5-10\) & 0.154 & 0.145 & 0.090 \\ \(2.25<z<2.75\) & \(9-9.5\) & 0.532 & 1.046 & 0.595 \\ \(2.25<z<2.75\) & \(9.5-10\) & 0.375 & 0.352 & 0.216 \\ \hline \end{tabular} \end{table} Table 2: Fractional uncertainties in the BCLF model parameters in different redshift and mass bins derived from the diagonal of the inverse of the Fisher matrix. Figure 5: Capability of distinguishing Model I (blue) and Model II (orange) implied by the constraints on \(r_{\chi,\rm P}^{\rm g}\) in individual stellar mass bins, after binning in redshift. From top to bottom, the three panels show the expected constraints evaluated in the three broad redshift bins, respectively.
The vertical dotted lines indicate the 90% mass completeness limits of the Rubin/LSST photometric galaxy redshift survey expected at these redshifts (Section 2.1.1). Figure 6: Constraints on the BCLF model parameters \(\sigma_{\alpha U}\), \(\sigma_{\rm LM}\), and \(\beta\) in Model I drawn from the cross-correlation analysis for two example stellar mass bins in different colours and in the three redshift bins from top to bottom. The dark and light shaded ellipses represent the 1-\(\sigma\) and 2-\(\sigma\) confidence intervals, respectively. In particular, we focus on ways to identify and reduce the potential ambiguity from dust attenuation, and compare the statistical approach presented in this paper with the characterization of SFR indicators like \(L_{\rm H\alpha}\) and \(L_{\rm UV}\) for samples of individual galaxies. ### Ambiguity associated with dust attenuation For individual galaxies, both \(L_{\rm H\alpha}\) and \(L_{U}\) are subject to non-negligible dust attenuation, but the amount of attenuation can vary substantially and with different time dependence for H\(\alpha\) and UV continuum emission from galaxy to galaxy, as a result of the different sites and mechanisms by which these photons are created in galaxies (the UV continuum can be much more extended than H\(\alpha\) emission produced in star-forming regions). Consequently, part of the observed scatter \(\sigma_{\alpha U}\) may actually be associated with variations of the level of dust attenuation rather than star formation burstiness for a given galaxy sample (Reddy et al., 2015). For the analysis presented, we do not consider the effect of dust on the BCLF. We do, however, take into account dust attenuation in estimating the detectability of various cross-correlation signals. While methods have been proposed to apply appropriate dust corrections for accurate comparison of H\(\alpha\) and UV SFR indicators (see e.g., Weisz et al., 2012, for an example method based on energy balance), they do not directly apply to the statistical approach considered in this paper. Here, through a similar cross-correlation analysis to estimate the Balmer decrement (\(L_{\rm H\alpha}/L_{\rm H\beta}\)) variations, we discuss a possible way to reduce the ambiguity associated with unknown dust attenuation variations in the interpretation of results like those shown in Section 3. The attenuation \(A_{\lambda}=k_{\lambda}E(B-V)\) and the Balmer decrement are related by (Dominguez et al., 2013) \[A_{\rm FUV}=C\log\left(\frac{L_{\rm H\alpha}/L_{\rm H\beta}}{2.86}\right), \tag{36}\] where the coefficient \(C=2.5k_{\rm FUV}/(k_{\rm H\beta}-k_{\rm H\alpha})=19.6\) and \(L_{\rm H\alpha}/L_{\rm H\beta}=2.86\) is the intrinsic Balmer decrement that remains roughly constant for typical star-forming galaxies. Assuming perfectly correlated scatters in dust-attenuated \(L_{\rm H\alpha}\) and \(L_{U}\) induced by a scatter in \(A_{\rm FUV}\) as defined in equation (5), we find that a scatter of about 4 magnitudes in \(A_{\rm FUV}\), corresponding to a 0.2 dex scatter in the Balmer decrement \(\log(L_{\rm H\alpha}/L_{\rm H\beta})\), results in a 0.3 dex scatter in \(\log(L_{\rm H\alpha}/L_{U})\), comparable to what one might expect from a strongly time-variable SFH.
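A quick arithmetic check of equation (36) and of the decorrelation level quoted just below is given in the sketch; the 0.2 dex Balmer-decrement scatter is simply the example value used in the text, and the conversion of \(\sigma_{\rm BD}\) from dex to natural log in the last step is our assumption about how the relation \(\ln(r)=-\sigma_{\rm BD}^{2}/2\) is applied.

```python
import numpy as np

C = 19.6                    # C = 2.5 k_FUV / (k_Hbeta - k_Halpha), equation (36)
sigma_BD_dex = 0.2          # example scatter in log10(L_Ha / L_Hb)

# Equation (36): a 0.2 dex Balmer-decrement scatter corresponds to an A_FUV
# scatter of roughly C * 0.2 ~ 4 magnitudes, as stated above.
print(f"sigma(A_FUV) ~ {C * sigma_BD_dex:.1f} mag")

# ln(r) = -sigma_BD^2 / 2, with sigma_BD converted from dex to natural log:
sigma_ln = sigma_BD_dex * np.log(10.0)
r = np.exp(-sigma_ln**2 / 2.0)
print(f"r(Ha x Hb) ~ {r:.2f}, i.e. a more than 10% decorrelation")
```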
Therefore, to see whether or not an observed scatter in \(\log(L_{\rm H\alpha}/L_{U})\) can be explained entirely by variations in the dust attenuation, we can use the cross-correlation between H\(\alpha\) and H\(\beta\) to constrain the scatter \(\sigma_{\rm BD}\) in \(\log(L_{\rm H\alpha}/L_{\rm H\beta})\). Since \(L_{\rm H\alpha}\) and \(L_{\rm H\beta}\) are almost strictly proportional to each other, their cross-correlation coefficient is simply related to \(\sigma_{\rm BD}\) as \(\ln(r_{\rm H\alpha\times H\beta,P}^{\rm s})=-\sigma_{\rm BD}^{2}/2\), which implies a more than 10% decorrelation for a Balmer decrement scatter of \(\sigma_{\rm BD}=0.2\) dex. At \(z\sim 1.5\), for example, by performing a detectability analysis for the H\(\alpha\)-H\(\beta\) cross-correlation similar to that shown in Fig. 4 for the case of H\(\alpha\) and \(U\)-band luminosities, we expect \(r_{\rm H\alpha\times H\beta,P}^{\rm g}\) to be detected at S/N \(\ga 40\) (after redshift binning, see Section 3.1) by cross-correlating SPHEREx and Rubin/LSST surveys in all stellar mass bins except the least massive one, which is somewhat below the expected mass completeness limit of the Rubin/LSST galaxy survey anyway. Such a high S/N should allow us to reliably test whether or not a notable decorrelation, e.g., \(r_{\rm H\alpha\times H\beta,P}^{\rm g}<0.9\), between \(L_{\rm H\alpha}\) and \(L_{\rm H\beta}\) exists as a sign of large variations in the Balmer decrement. This can be compared in turn with level of dust attenuation variations required to fully account for the measured scatter \(\sigma_{\alpha U}\) in the H\(\alpha\)-UV BCLF. Finally, it is also noteworthy that the scatter from dust attenuation variations will likely increase with stellar mass, since the massive galaxies tend to be more dust-rich. Therefore, the expected trend with stellar mass is opposite to that of the burstiness, which may also help clarify the ambiguity associated with dust attenuation. ### Limitations and implications of the presented method Despite its great potential for constraining bursty star formation using forthcoming cosmological survey data sets, the presented framework based on the EBL-galaxy cross-correlation has a few noteworthy limitations due to either simplified assumptions or the methodology itself. First, while being motivated by observations, a rather simplistic description of the BCLF is adopted in this proof-of-concept study. Potentially more self-consistent and physically-grounded models can be constructed from the combination of analytic arguments and results from detailed galaxy simulations, in order to better connect burstiness observables such as \(\log(L_{\rm H\alpha}/L_{U})\) to realistic representations of the time-variable SFHs. Meanwhile, in the presented analysis we have focused almost entirely on constraining the scatter in the H\(\alpha\)-UV BCLF, whereas any trend between the mean value and \(M_{*}\) may be an additional way to probe bursty star formation. Figure 7: Constraints on \(\log(L_{\rm H\alpha}/L_{U})\) as a function of \(M_{*}\) in the three redshift bins expected from Model I (shaded band) and the Fisher matrix analysis. Marginalized \(\pm 1\sigma\) bounds on the _width_ of the \(\log(L_{\rm H\alpha}/L_{U})\) distribution are indicated by the outer, dashed curves with filled triangles (\(1\sigma\) upper bound) and the inner, dotted curves with empty triangles (\(1\sigma\) lower bound), respectively. 
Note that the dotted (lower-bound) curves cross at the high-mass end as a result of increased uncertainties.
2308.08663
On the $2$-Selmer group of Jacobians of hyperelliptic curves
Let $\mathcal{C}$ be a hyperelliptic curve $y^2 = p(x)$ defined over a number field $K$ with $p(x)$ integral of odd degree. The purpose of the present article is to prove lower and upper bounds for the $2$-Selmer group of the Jacobian of $\mathcal{C}$ in terms of the class group of the $K$-algebra $K[x]/(p(x))$. Our main result is a formula relating these two quantities under some mild hypothesis. We provide some examples that prove that our lower and upper bounds are as sharp as possible. As a first application, we study the rank distribution of the $2$-Selmer group in families of quadratic twists. Under some extra hypothesis we prove that among prime quadratic twists, a positive proportion has fixed $2$-Selmer group. As a second application, we study the family of octic twists of the genus $2$ curve $y^2 = x^5 + x$.
Daniel Barrera Salazar, Ariel Pacetti, Gonzalo Tornaría
2023-08-16T20:28:42Z
http://arxiv.org/abs/2308.08663v1
# On the \(2\)-Selmer group of Jacobians of hyperelliptic curves. ###### Abstract. Let \(\mathcal{C}\) be a hyperelliptic curve \(y^{2}=p(x)\) defined over a number field \(K\) with \(p(x)\) integral of odd degree. The purpose of the present article is to prove lower and upper bounds for the \(2\)-Selmer group of the Jacobian of \(\mathcal{C}\) in terms of the class group of the \(K\)-algebra \(K[x]/(p(x))\). Our main result is a formula relating these two quantities under some mild hypothesis. We provide some examples that prove that our lower and upper bounds are as sharp as possible. As a first application, we study the rank distribution of the \(2\)-Selmer group in families of quadratic twists. Under some extra hypothesis we prove that among prime quadratic twists, a positive proportion has fixed \(2\)-Selmer group. As a second application, we study the family of octic twists of the genus \(2\) curve \(y^{2}=x^{5}+x\). Key words and phrases: 2-Selmer group, quadratic twists 2010 Mathematics Subject Classification: Primary: 11G05, Secondary: 11G40 DBS was supported by the MathAMSUD 2020018, FONDECYT 11201025 and PAI 77180007 AP was partially supported by FonCyT BID-PICT 2018-02073 and by the Portuguese Foundation for Science and Technology (FCT) within project UIDB/04106/2020 (CIDMA) GT was partially supported by CSIC-I+D 2020/651. in terms of the \(2\)-rank of the class group of \(\mathbb{Q}[x]/(f(x))\) when \(f(x)\) is irreducible (see also [10]). One is led to expect that a similar phenomenon should hold in general, namely the order of \(\mathrm{Sel}_{2}(J)\) should be related to a ray class group of the \(K\)-algebra \(K[x]/(p(x))\). In [11] Chao Li gave not only an upper bound, but also a lower bound of the \(2\)-Selmer group of a rational elliptic curve in terms of the class group of \(K[x]/(p(x))\) (under some hypotheses). Li's result was generalized to general number fields under less restrictive hypotheses in [1]. Moreover, in [1] we provided a general framework which could be applied to more general situations, like the case of hyperelliptic curves \(\mathcal{C}\). In the present article we pursue this goal, obtaining a similar result. More precisely, in this work: 1. We obtain general bounds for \(\dim_{\mathbb{F}_{2}}(\mathrm{Sel}_{2}(J))\). 2. We obtain applications related to quadratic twists of hyperelliptic curves and certain families of hyperelliptic curves. ### Bounding the \(2\)-Selmer group Attached to the curve \(\mathcal{C}\) with equation \(\mathcal{C}:y^{2}=p(x)\) we consider the etale \(K\)-algebra \(A_{K}=K[x]/(p(x))\). Our main result gives a lower and upper bound for \(\mathrm{Sel}_{2}(J)\) in terms of a \(2\)-class group similar to the one obtained in [1]. We denote by \(\mathrm{Cl}(A_{K})\) the class group of \(A_{K}\) as defined in §5 and we consider a group \(\mathrm{Cl}_{*}(A_{K},\mathcal{C})\) between the classical class group \(\mathrm{Cl}(A_{K})\) and the narrow class group (see definition 5.6). Our main result is the following. **Theorem 5.15**.: _Let \(K\) be a number field and \(\mathcal{C}/K\) be a hyperelliptic curve. Suppose that hypotheses 5.2 hold. Then_ \[\dim_{\mathbb{F}_{2}}\mathrm{Cl}_{*}(A_{K},\mathcal{C})[2]-\sum_{ v\mid 2}\bigl{(}r_{v}-1-\dim_{\mathbb{F}_{2}}(\mathbb{V}_{v})\bigr{)}\\ \leq\quad\dim_{\mathbb{F}_{2}}\mathrm{Sel}_{2}(J)\quad\leq\quad \dim_{\mathbb{F}_{2}}\mathrm{Cl}_{*}(A_{K},\mathcal{C})[2]+g\left[K:\mathbb{Q }\right].\] The lower bound includes local correction terms (possibly zero) at places over \(2\) defined as follows.
For \(v\mid 2\) let \(K_{v}\) be the completion of \(K\) at \(v\) and \(k\) its residue field. Over \(K_{v}\) the polynomial \(p(x)\) factors as \(p(x)=p_{v,1}(x)\cdots p_{v,r_{v}}(x)\) so that \(K_{v,i}=K_{v}[x]/p_{v,i}(x)\) is a field extension of \(K_{v}\). We denote \(k_{i}\) the residue field of \(K_{v,i}\) and let \(\overline{T}_{i}\) be the image of \(x\) in \(k_{i}\). The space \(\mathbb{V}_{v}\) is defined as follows: \[\mathbb{V}_{v}=\langle\mathrm{Tr}_{k_{i}/k}(\overline{T}_{i})\ :\ i=1,...,r_{v}\rangle\subset k.\] Under our assumption on \(a_{d-1}\) we have \(\dim_{\mathbb{F}_{2}}\mathbb{V}_{v}\leq r_{v}-1\) (see Lemma 3.7) so the last terms at the left hand side are all non-negative. The best lower bound occurs when \(\mathbb{V}_{v}\) has dimension equal to \(r_{v}-1\) for all \(v\mid 2\), in which case the difference between the upper and the lower bound equals \(g\left[K:\mathbb{Q}\right]\), exactly as in the case of elliptic curves studied in [1, Theorem 2.16]. For example, if \(p(x)\) is irreducible over \(K_{v}\) then \(r_{v}=1\) and \(\mathbb{V}_{v}=\{0\}\). Assuming the parity conjecture we can then deduce from our bounds the precise \(2\)-Selmer rank when \(g\left[K:\mathbb{Q}\right]=1\), and also when \(g\left[K:\mathbb{Q}\right]=2\) and the root number has the right parity. To our knowledge, the only previous general result to bound the \(2\)-Selmer group of a hyperelliptic curve is due to Stoll. In [12] Stoll developed a very nice algorithm to compute the \(2\)-Selmer group of an hyperelliptic curve, and as a Corollary of his results ([12, Lemma 4.10]), he obtained an upper bound for the \(2\)-Selmer group similar to the one obtained by Brumer and Kramer (see also [10]). A similar upper bound was obtained in [13] in the particular situation where \(p(x)\) is the minimal polynomial of \(\zeta_{p}+\zeta_{p}^{-1}\) (where \(\zeta_{p}\) denotes a \(p\)-th root of unity) under the assumption that \((p-1)/2\) is a prime number. Note that our result improves theirs in the sense that we get the same upper bound when \(p\equiv 3\pmod{4}\) (although we do not need to impose the condition \((p-1)/2\) to be a prime number), but we also get a lower bound. The proof of our main result has two key ingredients. We start considering the long exact sequence in cohomology (1.2) By a result of Cassels, the cohomology group \(\mathrm{H}^{1}(\mathrm{Gal}_{K},J[2])\) is isomorphic to \((A_{K}^{\times}/(A_{K}^{\times})^{2})_{\square}\). To get the upper bound, we need to assure that any cocycle in \(\mathrm{H}^{1}(\mathrm{Gal}_{K_{v}},J[2])\) coming from \(J(K_{v})\) is unramified at any odd place of \(A_{K}\). Under Cassels isomorphism, this is equivalent to proving that the image of any point \(P\) in \(J(K_{v})\), a priori in \((A_{K_{v}}^{\times}/(A_{K_{v}}^{\times})^{2})_{\square}\), actually belongs to the class of integral elements \((A_{0}^{\times}/(A_{0}^{\times})^{2})_{\square}\) (a purely local computation, see Theorem 3.4). The hypotheses imposed are the ones needed for this statement to be true. A second key ingredient is needed to get the lower bound: we need to construct points on \(J(K_{v})\). Luckily enough (by dimension reasons) this last hard problem only needs to be done at primes dividing \(2\). The spaces \(\mathbb{V}_{v}\) appearing in Theorem 5.15 play a crucial role in this construction. ### Applications The present article contains two different applications of our main result. The first one concerns the study of quadratic twists of hyperelliptic curves. 
If \(a\in K^{\times}\), then the quadratic twist of our hyperelliptic curve \(\mathcal{C}\) by \(a\) is the curve \[\mathcal{C}(a):ay^{2}=p(x).\] If the polynomial \(p(x)\) is irreducible, and the number \(a\) is divisible only by prime ideals which are inert or ramified in \(A_{K}/K\), then the curve \(\mathcal{C}(a)\) also satisfies the hypothesis of our main theorem as proved in Lemma 6.2. In particular, the rank of the \(2\)-Selmer group of \(\mathcal{C}(a)\) also satisfies the same bounds as \(\mathcal{C}\) does. This allows us to prove: **Theorem 6.3**.: _Let \(\mathcal{C}\) be an hyperelliptic curve satisfying hypotheses 5.2 over a number field \(K\) with odd narrow class number. Suppose that \(p(x)\) is irreducible, and suppose furthermore that there is a principal prime ideal of \(K\) which is inert in \(A_{K}/K\). Then among all quadratic twists by principal prime ideals, there exists a subset of positive density \(\mathscr{S}\) such that the abelian varieties \(\mathrm{Jac}(\mathcal{C}(a))\) have the same \(2\)-Selmer group for all \(a\in\mathscr{S}\)._ To our knowledge this is the first general result regarding \(2\)-Selmer group distributions in quadratic twists of hyperelliptic curves. A second application comes from a particular family of hyperelliptic curves considered in [1]. Let \(a\) be a non-zero integer, and consider the genus \(2\) hyperelliptic curve over \(K=\mathbb{Q}\) \[\mathcal{C}(a):y^{2}=x^{5}+ax.\] Note that in this case, the polynomial on the right hand side is reducible. The surface \(\mathrm{Jac}(\mathcal{C}(a))\) has some very interesting properties. For example, it has complex multiplication by \(\mathbb{Z}[\zeta_{8}]\) (over the extension \(\mathbb{Q}(\zeta_{8})\)), but it is also isogenous to the product of two elliptic curves over the field \(\mathbb{Q}(\sqrt[4]{a})\) (see Corollary 6.6). In particular, they are all octic twists of the curve \(\mathcal{C}(1)\). What can be said of the rank of the surface \(\mathrm{Jac}(\mathcal{C}(a))\)? The point \((0,0)\) has order \(2\), giving a point in its \(2\)-Selmer group. It follows from Lemma 6.8 that if \(a\) is square-free and \(a\equiv 1\pmod{4}\), then \(\mathcal{C}(a)\) satisfies the hypothesis of Theorem 5.15 at all primes except at the primes \(p\) dividing \(a\). Still, one can provide an upper bound of the form \[\dim_{\mathbb{F}_{2}}\operatorname{Sel}_{2}(\operatorname{Jac}(\mathcal{C}(a)) \leq\dim_{\mathbb{F}_{2}}\operatorname{Cl}_{*}(A_{\mathbb{Q}},\mathcal{C}(a))[ 2]+2+\#\{p\::\:p\mid a\}.\] To give a complete description of the \(2\)-Selmer rank of \(\operatorname{Jac}(\mathcal{C}(a))\), we need to understand the class group of \(\mathbb{Q}(\sqrt[4]{a})\). Although one expects that such a class group should be well understood, we could not find any reference in this direction. **Theorem 6.12**.: _If \(p\) is an odd prime, \(p\equiv 3\pmod{8}\) then \(\operatorname{Cl}(\mathbb{Q}[\sqrt{-1},\sqrt[4]{p}])\) has odd cardinality._ Then the class group \(\operatorname{Cl}_{*}(A_{\mathbb{Q}},\mathcal{C}(a))[2]\) is trivial, providing the bound, for \(p\) a prime congruent to \(3\) modulo \(8\), \[1\leq\dim_{\mathbb{F}_{2}}\operatorname{Sel}_{2}(\operatorname{Jac}(\mathcal{ C}(-p))\leq 3,\] where the lower bound \(1\) comes from the existence of a \(2\)-torsion point. 
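The following small sympy check (an illustration only, not part of the argument) records the elementary facts about the family \(\mathcal{C}(a):y^{2}=x^{5}+ax\) used here: the right-hand side factors as \(x(x^{4}+a)\), so \((0,0)\) is always a rational \(2\)-torsion point, and the discriminant of the quintic is a nonzero multiple of \(a^{5}\), so the curve is nonsingular of genus \(2\) for every \(a\neq 0\).

```python
from sympy import symbols, factor, discriminant

x, a = symbols('x a')
pol = x**5 + a*x

# x^5 + a*x = x*(x^4 + a): the root x = 0 gives the 2-torsion point (0, 0).
print(factor(pol))

# Discriminant in x: a nonzero multiple of a^5, hence nonzero for a != 0,
# so C(a) is a nonsingular hyperelliptic curve of genus (5 - 1)/2 = 2.
print(discriminant(pol, x))

# Specialize to a = -3, i.e. C(-p) for the prime p = 3, which is 3 mod 8:
print(discriminant(pol.subs(a, -3), x))
```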
Since the family does not have other \(2\)-torsion points under our assumption (\(a\) prime) and the Tate-Shafarevich group of \(\operatorname{Jac}(\mathcal{C}(-p))\) has order a square (by a result of Poonen-Stoll), the rank of \(\operatorname{Jac}(\mathcal{C}(-p))\) belongs to the set \(\{0,1,2\}\). In Theorem 6.9 we prove that the root number of \(\operatorname{Jac}(\mathcal{C}(-p))\) is \(-1\) (assuming \(p\) prime and \(p\equiv 3\pmod{8}\)) so the parity conjecture implies that the rank of \(\operatorname{Jac}(\mathcal{C}(-p))\) is always \(1\). The article is organized as follows: Section 2 contains the basic definitions as well as some preliminary results used throughout this work. Section 3 contains the main local results needed to understand the \(2\)-Selmer group of hyperelliptic curves defined over non-archimedean fields, including the definition of the (\(\dagger\)) hypothesis (it is also part of hypotheses 5.2). Section 4 contains the needed results for archimedean places. Section 5 is devoted to prove Theorem 5.15. Section 6 contains the two main applications stated before, namely the study of quadratic twists and the particular family \[\mathcal{C}(a):y^{2}=x^{5}+ax.\] At last, Section 7 contains different examples showing that both our upper and our lower bounds are attained. **Acknowledgments.** We would like to thank Professor John Cremona for some fruitful discussions regarding the splitting of the surface considered in Section 6.2. We also would like to thank Davide Lombardo for explaining how to use his algorithm ([19]) used to deduce such a splitting. Special thanks go to Alvaro Lozano-Robledo, who draw our attention that our previous result could be generalized to the case of hyperelliptic curves. ## 2. Preliminaries Let \(K\) be a number field or a local field of characteristic \(0\), and let \(\mathcal{O}\) be its ring of integers. By \(\mathfrak{p}\subset\mathcal{O}\) we denote a maximal ideal (the unique one when \(K\) is local). Let \(\mathcal{C}\) be the hyperelliptic curve over the field \(K\) given by the equation \[\mathcal{C}:y^{2}=p(x),\] where \(p(x)\in\mathcal{O}[x]\) is a _monic_ polynomial of odd degree \(d\geq 3\) and (without loss of generality) non-zero discriminant \(\Delta(p)\). Furthermore, if \(K\) is a number field (or a local field of residual characteristic \(2\)), we also assume that the coefficient of \(x^{d-1}\) is even, i.e. is divisible by all maximal primes of residual characteristic \(2\) (which always occurs after an integral translation). The hypothesis that \(p(x)\) has non-zero discriminant implies that the curve \(\mathcal{C}\) is a non-singular curve of genus \[g=\operatorname{genus}(\mathcal{C})=\frac{d-1}{2}. \tag{2.1}\] Let \(J\) denote its Jacobian. Let us clarify a subtlety (for readers who never studied this problem before) on what we mean by a rational point on \(J\). Let \(\overline{K}\) denote an algebraic closure of \(K\) and \(\operatorname{Gal}_{K}:=\operatorname{Gal}(\overline{K}/K)\) its Galois group. There is a natural action of \(\operatorname{Gal}_{K}\) on the group of divisors \(\operatorname{Div}(\mathcal{C}_{\overline{K}})\), on \(\operatorname{Princ}(\mathcal{C}_{\overline{K}})\) (the principal divisors) and on \(\operatorname{Pic}(\mathcal{C}_{\overline{K}})\) (the quotient of the two previous ones). 
The group \(\operatorname{Pic}(\mathcal{C}):=\operatorname{Div}(\mathcal{C}_{\overline{K} })^{\operatorname{Gal}_{K}}/\operatorname{Princ}(\mathcal{C}_{\overline{K}}) ^{\operatorname{Gal}_{K}}\hookrightarrow\operatorname{Pic}(\mathcal{C}_{ \overline{K}})^{\operatorname{Gal}_{K}}\). Although the curve \(\mathcal{C}\) is singular at the infinity point \((0:1:0)\), the hypothesis on the polynomial \(p(x)\) having odd degree implies that the desingularization of \(\mathcal{C}\) at \((0:1:0)\) has a unique rational point that we denote by \(\infty\). In particular, we have a rational map \(\mathcal{C}\to J\) defined over \(K\) given by \(P\to P-\infty\). Hence \(\operatorname{Pic}(\mathcal{C})=\operatorname{Pic}(\mathcal{C}_{\overline{K} })^{\operatorname{Gal}_{K}}\) (see for example [13, Proposition 3.1]) so the two possible definitions of a rational point on \(J\) coincide. Decompose \(p(x)\) into its irreducible factors \[p(x)=p_{1}(x)\cdots p_{r}(x),\] where \(p_{i}(x)\in\mathcal{O}[x]\) are all distinct (due to our assumption \(\Delta(p)\neq 0\)). For \(i\in\{1,...r\}\), let \(d_{i}=\deg(p_{i}(x))\). Then the \(K\)-algebra \(A_{K}=K[x]/(p(x))\) is etale, i.e., it decomposes as a product of fields \[A_{K}\simeq K[x]/(p_{1}(x))\times\cdots\times K[x]/(p_{r}(x)), \tag{2.2}\] where each \(K_{i}:=K[x]/(p_{i}(x))\) is a finite field extension of \(K\). By \(T\) we will denote the class of the variable \(x\) in \(A_{K}\) and by \((T_{1},\ldots,T_{r})\) its image under the isomorphism (2.2). Let \(A_{\mathcal{O}}\) be the ring of integers of \(A_{K}\), which is isomorphic to the product \(\mathcal{O}_{1}\times\cdots\times\mathcal{O}_{r}\) where \(\mathcal{O}_{i}\) is the ring of integers of \(K_{i}\). Let \(\mathcal{N}:A_{K}\to K\) denote the usual norm map (i.e. \(\mathcal{N}(x)\) equals the determinant of the \(K\)-linear map given by multiplication by \(x\)), which gives a well defined map \(\mathcal{N}:A_{K}^{\times}/(A_{K}^{\times})^{2}\to K^{\times}/(K^{\times})^{2}\). Let \((A_{K}^{\times}/(A_{K}^{\times})^{2})_{\square}\) denote its kernel. **Theorem 2.1**.: _The group \(\operatorname{H}^{1}(\operatorname{Gal}_{K},J[2])\) is isomorphic to \((A_{K}^{\times}/(A_{K}^{\times})^{2})_{\square}\)._ Proof.: See [12, Theorem 1.1], which generalizes [12, p. 240]. The exact sequence (1.2), with \(m=2\), then gives an injective morphism \[\delta_{K}:J(K)/2J(K)\hookrightarrow(A_{K}^{\times}/(A_{K}^{\times})^{2})_{ \square}.\] One can give an explicit description of such a map for points on \(J\) which are not of order \(2\). Recall that \(J(K)\) consists of degree zero divisors of \(\mathcal{C}\) defined over \(K\), hence it is spanned by divisors of the form \[D=\sum_{\sigma}(\sigma(P)-\infty), \tag{2.3}\] where \(P\in\mathcal{C}(\overline{K})\) and the sum is over the different conjugates of \(P\). By [13, Lemma 4.1], if \(y(P)\neq 0\) then we have \[\delta_{K}(D)=\prod_{\sigma}(\sigma(x(P))-T), \tag{2.4}\] where \(x(P)\) denotes the \(x\)-coordinate of the point \(P\). When \(K\) is a number field, for each place \(v\) we have a similar injective morphism \[\delta_{v}:J(K_{v})/2J(K_{v})\hookrightarrow(A_{K_{v}}^{\times}/(A_{K_{v}}^{ \times})^{2})_{\square}\] which can be explicitly described as done in (2.4). **Definition 2.2**.: The 2-Selmer group of \(J\) consists of the cohomology classes in \(H^{1}(\operatorname{Gal}_{K},J[2])\) whose restriction to \(\operatorname{Gal}_{K_{v}}\) lies in the image of \(\delta_{v}\) for all places \(v\). 
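To illustrate the explicit description (2.4), the toy computation below (the curve and the point are chosen purely for illustration and do not appear in this article) verifies that for a rational point \(P=(a,b)\) with \(b\neq 0\) on \(y^{2}=p(x)\), the image \(\delta_{K}(P-\infty)=(a-T)\) has norm \(p(a)=b^{2}\) down to \(K\), a square, consistent with \(\delta_{K}\) landing in \((A_{K}^{\times}/(A_{K}^{\times})^{2})_{\square}\).

```python
from sympy import symbols, factor, discriminant, Poly

x = symbols('x')
p = Poly(x**5 + x + 2, x)     # toy monic quintic; Delta(p) != 0 is checked below
a, b = 2, 6                   # P = (2, 6): 2^5 + 2 + 2 = 36 = 6^2
assert p.eval(a) == b**2 and discriminant(p) != 0

# Decomposition (2.2): over Q, p(x) = (x + 1)(x^4 - x^3 + x^2 - x + 2)
print(factor(p.as_expr()))
p1 = Poly(x + 1, x)
p2 = Poly(x**4 - x**3 + x**2 - x + 2, x)
assert p1 * p2 == p

# Each p_i is monic, so N_{K_i/Q}(a - T_i) = p_i(a); the total norm of
# (a - T) is therefore p(a) = b^2, a square.
print(p1.eval(a), p2.eval(a), p1.eval(a) * p2.eval(a), b**2)
```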
Under the isomorphism of Theorem 2.1, the 2-Selmer group of \(J\) corresponds to \[\operatorname{Sel}_{2}(J)=\{[\alpha]\in(A_{K}^{\times}/(A_{K}^{\times})^{2})_{ \square}:\ \operatorname{loc}_{v}([\alpha])\in\operatorname{Im}(\delta_{K_{v}}) \text{ for each place $v$ of $K$}\},\] where \(\operatorname{loc}_{v}:(A_{K}^{\times}/(A_{K}^{\times})^{2})_{\square}\to(A_{K_ {v}}^{\times}/(A_{K_{v}}^{\times})^{2})_{\square}\) is the natural map. This description of the 2-Selmer group coincides with the one given in [13, Proposition 4.2] for \(K=\mathbb{Q}\). ## 3. Some local non-Archimedean computations The main reference for the first part of this section is the article [13]. Let \(p\geq 2\) be a prime number and let \(K\) be a finite extension of \(\mathbb{Q}_{p}\), with ring of integers \(\mathcal{O}\). Let \(v:\overline{K}^{\times}\to\mathbb{Q}\) be the valuation normalized so that \(v(K^{\times})=\mathbb{Z}\). Set \(d_{2}=[K:\mathbb{Q}_{2}]\) if \(p=2\), and \(d_{2}=0\) otherwise, so in any case \([\mathcal{O}:2\mathcal{O}]=2^{d_{2}}\). Recall the factorization \[p(x)=p_{1}(x)\cdots p_{r}(x),\] of \(p(x)\) into irreducible polynomials. **Lemma 3.1**.: _Under the previous hypothesis and notations_ 1. \(\dim_{\mathbb{F}_{2}}J(K)/2J(K)=r-1+d_{2}\cdot g=\dim_{\mathbb{F}_{2}}J(K)[2]+ d_{2}\cdot g\)_._ 2. \(\dim_{\mathbb{F}_{2}}(A_{K}^{\times}/(A_{K}^{\times})^{2})_{\square}=2\dim_{ \mathbb{F}_{2}}J(K)/2J(K)\)_._ 3. \(\dim_{\mathbb{F}_{2}}(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2} )_{\square}=\dim_{\mathbb{F}_{2}}J(K)/2J(K)+d_{2}\cdot g\)_._ Proof.: The first two statements follow from [13, Lemma 4.4]. The proof of the last statement follows the lines of the proof of [1, Lemma 1.5]. The decomposition \(A_{\mathcal{O}}\simeq\mathcal{O}_{1}\times\cdots\times\mathcal{O}_{r}\) implies that \[[A_{\mathcal{O}}^{\times}:(A_{\mathcal{O}}^{\times})^{2}]=\prod_{i=1}^{r}[ \mathcal{O}_{i}^{\times}:(\mathcal{O}_{i}^{\times})^{2}]=2^{r}\prod_{i=1}^{r }[\mathcal{O}_{i}:2\mathcal{O}_{i}]=2^{r}[\mathcal{O}:2\mathcal{O}]^{d}.\] Since \(\mathcal{N}:A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2}\to \mathcal{O}^{\times}/(\mathcal{O}^{\times})^{2}\) is surjective, its kernel \((A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2})_{\square}\) has order \([A_{\mathcal{O}}^{\times}:(A_{\mathcal{O}}^{\times})^{2}]/[\mathcal{O}^{ \times}:(\mathcal{O}^{\times})^{2}]=2^{r-1}\left[\mathcal{O}:2\mathcal{O} \right]^{d-1}=2^{r-1}2^{d_{2}\cdot 2g}\). The result then follows from the first statement. Recall that \(J\) has a Neron model \(\mathcal{J}\) over \(\mathcal{O}\), which provides a reduction map on \(J(K)\). Following the standard notation, let \(J^{0}(K)\) denote the set of points mapping into the identity component of \(\mathcal{J}_{k}\) (where \(k\) denotes the residual field of \(K\)). Then there is an exact sequence ### The condition (\(\dagger\)) **Definition 3.2**.: The polynomial \(p(x)\) satisfies condition (\(\dagger\)) if either one of the following two conditions holds: 1. The ring \(\mathcal{O}[x]/(p(x))\) is isomorphic to the product \(\prod_{i=1}^{r}\mathcal{O}[x]/(p_{i}(x))\). 2. The residual characteristic of \(K\) is odd and the order of the component group \([J(K):J^{0}(K)]\) is odd. **Remark 3.3**.: Since the polynomials \(p_{1}(x),\ldots,p_{r}(x)\) are prime to each other, there exists an injective map \[\pi:\mathcal{O}[x]/(p(x))\to\prod_{i=1}^{r}\mathcal{O}[x]/(p_{i}(x)). 
\tag{3.1}\] The Chinese Remainder Theorem (CRT) states that if \(\mathcal{O}[x]\) were a principal ideal domain, then the map \(\pi\) would be an isomorphism. Over the ring \(\mathcal{O}[x]\) this might not be true, for example if \(\mathcal{O}=\mathbb{Z}_{2}\) and \(p(x)=x(x+2)\), the image of the map \(\pi\) consists of pairs of elements \((a,b)\) in \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) such that \(a\equiv b\pmod{2}\). Let \(\overline{\mathcal{O}}\) denote the ring of integers of an algebraic closure of \(K\) and let \(\mathfrak{p}\) denote its maximal ideal. The hypothesis \((\dagger.\mathrm{i})\) (surjectivity of \(\pi\)) is equivalent to impose the condition that if \(\alpha\) is a root of a polynomial \(p_{i}(x)\) and \(\beta\) is a root of other polynomial \(p_{j}(x)\), then \(\mathfrak{p}\nmid\alpha-\beta\). Indeed, if \(p(x)\) and \(q(x)\) are two monic polynomials in \(\mathcal{O}[x]\) without common roots, then \[\mathcal{O}[x]/(p(x)q(x))\simeq\mathcal{O}[x]/(p(x))\times\mathcal{O}[x]/(q(x))\] if and only if there exist polynomials \(a,b\in\mathcal{O}[x]\) such that \(1=ap+bq\) (the usual CRT hypothesis). The proof that the condition is sufficient mimics the proof of the CRT. To prove the other implication, suppose that \(\pi\) is an isomorphism. Then the element \((1,0)\) lies in its image so there exists \(f\in\mathcal{O}[x]\) such that \(\pi(f)=(1,0)\). In particular \(q\mid f\), so \(f=bq\) for some \(b\in\mathcal{O}[x]\) (here we use the fact that \(q(x)\) is monic). Similarly, there exists \(a\in\mathcal{O}[x]\) such that \(\pi(ap)=(0,1)\). But then \(\pi(ap+bq)=\pi(1)\), so \[1=ap+bq+cpq.\] A standard argument using resultants proves that \(1=ap+bq\) if and only if \(\mathfrak{p}\nmid\alpha-\beta\) for any root \(\alpha\) of \(p\) and any root \(\beta\) of \(q\). Here are two easy instances where the condition \((\dagger.\mathrm{i})\) is satisfied: * The polynomial \(p(x)\) is irreducible. * \(A_{\mathcal{O}}\) (the ring of integers of \(A_{K}\)) equals \(\mathcal{O}[x]/(p(x))\). These two cases correspond to the first two hypothesis of [3] (Definition 1.6) while studying the case of elliptic curves. Our assumption \((\dagger.\mathrm{i})\) is less restrictive (improving the results of loc. cit.). **Theorem 3.4**.: _If the polynomial \(p(x)\) satisfies \((\dagger)\) then \(\mathrm{Im}(\delta_{K})\subset(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{ \times})^{2})_{\square}\)._ Before giving the proof, let us state a particular (and easy to prove) instance of the result. **Lemma 3.5**.: _Let \(P=(a,b)\in\mathcal{C}(\overline{K})\) and suppose that \(v(a)<0\). Consider the divisor \(D=\sum_{\sigma}(\sigma(p)-\infty)\in J(K)\) where the sum is over the different conjugates of \(P\). Then \(\delta_{K}(D)\in(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2})_{ \square}\). Moreover, if \(p>2\) then \(\delta_{K}(D)=1\)._ Proof.: Suppose first that \(P=(a,b)\in\mathcal{C}(K)\) and \(v(a)<0\). Equation (1.1), with the assumption that \(p(x)\) has integral coefficients, implies that \(b\neq 0\) and \(2\,v(b)=d\,v(a)\). Since \(b\in K^{\times}\) we have \(v(b)\in\mathbb{Z}\) and so in particular \(v(a)\) is even. Since \(T_{i}\in\mathcal{O}_{i}\) for all \(i=1,\dots,r\), it follows that \(v(a-T_{i})=v(a)\) is even as well, hence, up to a square in \(K_{i}^{\times}\), it can be taken to be a unit. Thus \(\delta_{K}(P-\infty)=(a-T)\in(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{ \times})^{2})_{\square}\) as claimed. In general, let \(L=K(a,b)\) with ramification index \(e\). 
The same argument as above, now using \(v(b)\in\frac{1}{e}\mathbb{Z}\), shows that \(e\,v(\sigma(a)-T_{i})\) is even. Since \(e\mid[L:K]\) it follows that \(\prod_{\sigma}(\sigma(a)-T_{i})\in K_{i}^{\times}\) has even valuation and the argument goes through to show \[\delta_{K}(D)=\prod_{\sigma}(\sigma(a)-T)\in(A_{\mathcal{O}}^{\times}/(A_{ \mathcal{O}}^{\times})^{2})_{\square}.\] To prove the second claim, note that since \(v(b)<v(a)<0\) the divisor \(D\) lies in \(J^{0}(K)\), the kernel of the reduction map. But \(J^{0}(K)\) has a formal group structure, hence it is a pro-\(p\)-group and so \(\delta_{K}(D)=1\) if \(p\neq 2\) (as it is an element of order at most \(2\)). Proof of Theorem 3.4.: Start supposing that (\(\dagger\).i) holds. The proof is similar to that due to Michael Stoll given in [1, Proposition 8.5]. Let us just recall the main ingredients: let \(D=\sum_{i=1}^{m}P_{i}-m\cdot\infty\) be a degree zero divisor, which satisfies the following hypothesis (which can always be assumed): * The value \(x(P_{i})\) is not a root of \(p(x)\) (see [1, Lemma 2.2]). * The degree of \(D^{+}\) (equal to \(m\)) is at most \(\frac{d-1}{2}\) (by Riemann-Roch's theorem). * The values \(\{x(P_{i})\}\) are all distinct. * Each point \(P_{i}\) has integral coordinates (otherwise the result follows from Lemma 3.5). To ease the notation, let \(P_{i}=(a_{i},b_{i})\). Then \[\delta_{K}(D)=\prod_{i=1}^{m}(a_{i}-T)=(-1)^{m}q(T),\] where \(q(x)=(x-a_{1})\cdots(x-a_{m})\in\mathcal{O}[x]\). There exists a unique \(R(x)\in K[x]\) of degree \(\leq m-1\) with \(R(a_{i})=b_{i}\). Observe that \(R(x)^{2}-p(x)\) vanishes at \(\{a_{1},\ldots,a_{m}\}\) so it is divisible by \(q(x)\). Consider the following two cases: _Case 1:_\(R(x)\) has integral coefficients. Let \(I_{D}\subset A_{\mathcal{O}}\) be the \(\mathcal{O}[T]\)-ideal generated by \((q(T),R(T))\). **Claim:**\(I_{D}^{2}=(q(T))\) as \(\mathcal{O}[T]\)-ideals. Indeed, since \(p(T)=0\), \(q(T)\mid R(T)^{2}\). From this observation it is clear that \(I_{D}^{2}\subset(q(T))\). Then the proof of the claim follows from the fact that both ideals have the same norm (as proved in loc. cit.). Clearly \(\mathcal{O}[T]\subseteq\operatorname{End}(I_{D})\subseteq\operatorname{End}( I_{D}^{2})\subseteq\prod\mathcal{O}[T_{i}]\), where the last inequality follows from the claim. Then (\(\dagger\).i) implies they are all equalities, so \(\operatorname{End}(I_{D})=\mathcal{O}[T]\) (i.e. \(I_{D}\) is a proper ideal). As explained in [1], the ring \(\mathcal{O}[T]\) is generated by a single element over \(\mathcal{O}\), hence it is Gorenstein of dimension one. In particular, an \(\mathcal{O}[T]\)-ideal is principal if and only if it is proper, so \(I_{D}\) is indeed a principal \(\mathcal{O}[T]\)-ideal. It follows that \(q(T)\) is a square up to a unit, thus \(\delta_{K}(D)\in(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2})_{\square}\). Note that when \(m=1\), \(R(x)\) does have integral coefficients, so we have proven the statement (in both cases) for \(m\leq 1\). _Case 2:_\(R(x)\) is not integral. Then \(p(x)-R(x)^{2}\) is not integral, but it has at most \(2m-2\) integral roots. Since \(a_{1},\ldots,a_{m}\) are integral roots, there are other integral roots \(\alpha_{1},\ldots,\alpha_{t}\) (with \(t\leq m-2\)). Let \(\beta_{i}=R(\alpha_{i})\). Then the divisor of \(y-R(x)\) on \(\mathcal{C}\) equals \[D+D^{\prime}+D^{\prime\prime}\] where \(D^{\prime}=\sum_{i=1}^{t}[(\alpha_{i},\beta_{i})-\infty]\) and \(D^{\prime\prime}\) is a sum of non-integral points. 
From Lemma 3.5 we know that \(\delta_{K}(D^{\prime\prime})\in(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{ \times})^{2})_{\square}\), hence \(\delta_{K}(D)\in(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2})_{\square}\) is equivalent to \(\delta_{K}(D^{\prime})\in(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times}) ^{2})_{\square}\). Since the positive part of \(D^{\prime}\) has degree at most \(m-2\) the claim follows by an inductive argument on \(m\). Suppose at last that (\(\dagger\).ii) holds. By [1, Lemma 4.5], the valuation of the image of \(J(K)\) under \(\delta\) is isomorphic to the 2-group of connected components, which is trivial by hypothesis. **Corollary 3.6**.: _Suppose that \(p(x)\) satisfies (\(\dagger\)). Then \(\operatorname{Im}(\delta_{K})\subset(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^ {\times})^{2})_{\square}\) with index \(2^{d_{2}\cdot g}\)._ Proof.: By Lemma 3.1, if \(p\neq 2\) then both sets have the same cardinality, while when \(p=2\), the index equals \(2^{d_{2}\cdot g}\) as claimed. ### The case \(p=2\) Suppose for the rest of the section that \(K\) is a finite extension of \(\mathbb{Q}_{2}\). The problem at even characteristic is that the image of the map \(\delta_{K}\) is not the whole group \((A_{\mathbb{O}}^{\times}/(A_{\mathbb{O}}^{\times})^{2})_{\square}\) (as stated in Corollary 3.6), so we need to give a "lower bound" for the group \(\operatorname{Im}(\delta_{K})\). Ideally, the lower bound would be related to unramified extensions of \(A_{K}\) (justifying the class group formula in our main theorem), but this is not always the case. As before, let \((T_{1},\ldots,T_{r})\) denote the image of \(T\) under the isomorphism \(A_{K}\simeq K_{1}\times\cdots\times K_{r}\). Let \(k_{i}\) denote the residue field of \(K_{i}\) for each \(i=1,\ldots,r\), and let \(k\) denote the residue field of \(K\). Let \(\overline{T_{i}}\) denote the image of \(T_{i}\) under the reduction map \(\mathcal{O}_{i}\to k_{i}\). Let \(e_{i}\) denote the ramification degree of the extension \(K_{i}/K\). Note that at least one of the \(e_{i}\) must be odd, since \(d=\sum_{i=1}^{r}e_{i}[k_{i}:k]\) is odd. Recall our assumption that the coefficient of \(x^{d-1}\) in \(p(x)\) is "even" (i.e. divisible by any local uniformizer at places dividing \(2\)). **Lemma 3.7**.: _Keeping the previous notation,_ \[\sum_{i=1}^{r}e_{i}\mathrm{Tr}_{k_{i}/k}(\overline{T}_{i})=0.\] Proof.: The coefficient of \(x^{d-1}\) in \(p(x)\) equals the trace \(\mathrm{Tr}_{A_{K}/K}(T)\) (an element of \(\mathcal{O}_{K}\)) which under the isomorphism \(A_{K}\simeq K_{1}\times\cdots\times K_{r}\) equals \(\sum_{i=1}^{r}\mathrm{Tr}_{K_{i}/K}(T_{i})\). The assumption on the coefficient of \(x^{d-1}\) implies that \(\mathrm{Tr}_{A_{K}/K}(T)\) is congruent to zero modulo the maximal ideal \(\mathfrak{p}\) of \(\mathcal{O}_{K}\). Thus, the result follows from the well known equality \[\mathrm{Tr}_{K_{i}/K}(T_{i})\equiv e_{i}\,\mathrm{Tr}_{k_{i}/k}(\overline{T}_ {i})\pmod{\mathfrak{p}}.\] Let \(\mathbb{V}\) denote the \(\mathbb{F}_{2}\)-vector space \[\mathbb{V}=\langle\mathrm{Tr}_{k_{i}/k}(\overline{T}_{i})\ :\ i=1,...,r\rangle\subset k. \tag{3.2}\] Lemma 3.7 implies that \(\dim_{\mathbb{F}_{2}}\mathbb{V}\leq r-1\). **Definition 3.8**.: We say that \(p(x)\) satisfies \((*)\) if \(\dim_{\mathbb{F}_{2}}\mathbb{V}=r-1\). 
**Theorem 3.9**.: _If \(p(x)\) satisfies \((*)\) then it satisfies \((\dagger.\mathrm{i})\)._ Proof.: Suppose that \((\dagger.\mathrm{i})\) does not hold, so by Remark 3.3 there exist a root (over \(\overline{\mathcal{O}}\)) \(\alpha\) say of \(p_{1}(x)\) and \(\beta\) of \(p_{2}(x)\) such that \(\mathfrak{p}\mid\alpha-\beta\). Let \(\mathfrak{l}=k_{1}\cap k_{2}\), so \(\overline{T_{1}}\) and \(\overline{T_{2}}\) are the same as elements of \(\mathfrak{l}\) and satisfy the relation \[[k_{1}:\mathfrak{l}]\,\mathrm{Tr}_{k_{2}/k}(\overline{T_{2}})=[k_{2}:\mathfrak{ l}]\,\mathrm{Tr}_{k_{1}/k}(\overline{T_{1}}).\] In particular, the values \(\{\mathrm{Tr}_{k_{1}/k}(\overline{T_{1}}),\ldots,\mathrm{Tr}_{k_{r}/k}( \overline{T_{r}})\}\) in the \(\mathbb{F}_{2}\)-vector space \(k\) satisfy the following two relations: * \(e_{1}\,\mathrm{Tr}_{k_{1}/k}(\overline{T_{1}})+\cdots+e_{r}\,\mathrm{Tr}_{k_{ r}/k}(\overline{T_{r}})=0\), * \([k_{2}:k]\,\mathrm{Tr}_{k_{1}/k}(\overline{T_{1}})+[k_{1}:k]\,\mathrm{Tr}_{k_{ 2}/k}(\overline{T_{2}})=0\). But they also span an \(r-1\)-dimensional subspace, so both equations must generate a \(1\)-dimensional relations space. Then either \([k_{1}:k]\equiv[k_{2}:k]\equiv 0\pmod{2}\) (in which case \(\mathrm{Tr}_{k_{1}:k}(\overline{T_{1}})\equiv\mathrm{Tr}_{k_{2}:k}(\overline{T_ {2}})\equiv 0\), contradicting \((*)\) ) or \(e_{i}\equiv 0\pmod{2}\) for all \(i=3,\ldots,r\) and the vectors \(([k_{2}:k],[k_{1}:k])\) and \((e_{1},e_{2})\) are linearly dependent in \(\mathbb{F}_{2}^{2}\). The hypothesis \(d\) odd implies that \(\sum_{i=1}^{r}e_{i}[k_{i}:k]\) is odd, hence \(e_{1}[k_{1}:k]+e_{2}[k_{2}:k]\) is also odd, so the determinant \[\det\begin{pmatrix}e_{1}&e_{2}\\ [k_{2}:k]&[k_{1}:k]\end{pmatrix}\equiv 1\pmod{2},\] contradicting the fact that \(([k_{2}:k],[k_{1}:k])\) and \((e_{1},e_{2})\) are linearly dependent. **Corollary 3.10**.: _If \(p(x)\) satisfies \((*)\) then \(\operatorname{Im}(\delta_{K})\subset(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{ \times})^{2})_{\square}\)._ Proof.: Follows from the last theorem and Theorem 3.4. **Remark 3.11**.: The condition \((*)\) is not equivalent to \((\dagger.\mathrm{i})\) as the following example shows. Let \(K=\mathbb{Q}_{8}=\mathbb{Q}_{2}[t]/(t^{3}-t-1)\), be the unramified cubic extension of \(\mathbb{Q}_{2}\). Consider the hyperelliptic curve \[\mathcal{C}:y^{2}=x(x-1)(x-t)(x-t^{2})(x-(1+t+t^{2})).\] Since all roots belong to \(K\) and are not congruent modulo its maximal ideal, the curve \(\mathcal{C}\) satisfies \((\dagger.\mathrm{i})\). However, condition \((*)\) cannot be satisfied, since \(\dim_{\mathbb{F}_{2}}(k)=3<5-1\). Consider the following subgroup of \(A_{\mathcal{O}}^{\times}\) (corresponding to quadratic extensions unramified at places dividing \(2\)). **Definition 3.12**.: Let \(U_{4}\) denote the subgroup of \(A_{\mathcal{O}}^{\times}\) defined by \[U_{4}=\{u\in A_{\mathcal{O}}^{\times}\ :\ u\equiv\square\pmod{4A_{\mathcal{O}}} \text{ and }\mathcal{N}(u)=\square\}.\] Note that \((A_{\mathcal{O}}^{\times})^{2}\subset U_{4}\) and furthermore each class in \(U_{4}/(A_{\mathcal{O}}^{\times})^{2}\) has a representative of the form \(1+4\beta\) for some \(\beta\in A_{\mathcal{O}}\). Define the set \[\mathcal{S}:=\left\{(s_{1},\dots,s_{r})\in\mathbb{F}_{2}^{r}\ :\ \sum_{i=1}^{r}e_{i}s_{i}=0\right\}. \tag{3.3}\] Note that some \(e_{i}\) must be odd, hence the subspace \(\mathcal{S}\) has dimension \(r-1\). 
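As a quick sanity check of the count behind (3.3), the snippet below enumerates \(\mathcal{S}\) for a few made-up tuples of ramification degrees \(e_{i}\) (each containing at least one odd \(e_{i}\)) and confirms that exactly \(2^{r-1}\) vectors satisfy the single linear condition, i.e. \(\dim_{\mathbb{F}_{2}}\mathcal{S}=r-1\).

```python
from itertools import product

def S(e):
    """Enumerate S from (3.3) for ramification degrees e = (e_1, ..., e_r)."""
    return [s for s in product((0, 1), repeat=len(e))
            if sum(ei * si for ei, si in zip(e, s)) % 2 == 0]

# Made-up ramification degrees, each tuple containing at least one odd e_i:
for e in [(1, 2, 1), (3, 1, 2, 2), (1,)]:
    print(e, len(S(e)), 2 ** (len(e) - 1))
```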
**Lemma 3.13**.: _The map \(\phi:U_{4}/(A_{\mathcal{O}}^{\times})^{2}\to\mathcal{S}\) induced by the map_

\[1+4\beta\mapsto(\operatorname{Tr}_{k_{1}/\mathbb{F}_{2}}(\overline{\beta}_{1}),\dots,\operatorname{Tr}_{k_{r}/\mathbb{F}_{2}}(\overline{\beta}_{r}))\]

_for \(\beta\in A_{\mathcal{O}}\), is a group isomorphism. In particular \(\dim_{\mathbb{F}_{2}}U_{4}/(A_{\mathcal{O}}^{\times})^{2}=r-1\)._

Proof.: Consider the equality

\[(1+4\alpha)(1+4\beta)=1+4(\alpha+\beta)+16\alpha\beta=(1+4(\alpha+\beta))(1+16z),\]

where \(z=\frac{\alpha\beta}{1+4(\alpha+\beta)}\in A_{\mathcal{O}}\). By [1, Lemma 1.10], \(1+16z\in(A_{\mathcal{O}}^{\times})^{2}\), so \((1+4\alpha)(1+4\beta)\equiv 1+4(\alpha+\beta)\pmod{(A_{\mathcal{O}}^{\times})^{2}}\). This implies that the map is a morphism. The facts that the map is well defined on equivalence classes and that it is injective follow from Lemma 1.10 (1) of [1]. Finally, note that by Lemma 1.10 (5) (and its natural generalization) of [1] both sets \(U_{4}/(A_{\mathcal{O}}^{\times})^{2}\) and \(\mathcal{S}\) have the same cardinality \(2^{r-1}\), hence the statement follows.

Let \(W\) be the subgroup of \(U_{4}\) given by

\[W=\{u=(1-4T_{1}w^{2},\dots,1-4T_{r}w^{2}):w\in\mathcal{O}\ \text{ and }\mathcal{N}(u)=\square\}(A_{\mathcal{O}}^{\times})^{2}\subset U_{4}. \tag{3.4}\]

**Theorem 3.14**.: _With the previous notations, \(W\subset\operatorname{Im}(\delta_{K})\) and the dimension \(\dim_{\mathbb{F}_{2}}W/(A_{\mathcal{O}}^{\times})^{2}=\dim_{\mathbb{F}_{2}}\mathbb{V}\). Moreover, the index of \(W\) in \(U_{4}\) equals_

\[[U_{4}:W]=2^{r-1-\dim_{\mathbb{F}_{2}}\mathbb{V}}.\]

Proof.: To prove the first statement, we need to construct points in \(J(K)\) hitting each element of \(W\). Actually, the points we construct lie on \(\mathcal{C}(K)\). The expansion around the infinity point of the curve \(\mathcal{C}\) in terms of the local uniformizer \(z=\frac{y}{x^{\frac{d+1}{2}}}\) is given by

\[\begin{cases}x(z)=z^{-2}+zO_{1}(z),\\ y(z)=z^{-d}+z^{-d+3}O_{2}(z),\end{cases}\]

where \(O_{1}(z),O_{2}(z)\in\mathcal{O}[[z]]\). If \(w\in\mathcal{O}\), then \(2w\) lies in the maximal ideal, and since \(O_{1}(z),O_{2}(z)\in\mathcal{O}[[z]]\), we get a well-defined point \(P=(x(2w),y(2w))\in\mathcal{C}(\mathcal{O})\) (i.e. the series converge). Then we have

\[\delta_{K}(P-\infty)=[((2w)^{-2}+2wO_{1}(2w)-T_{1},\cdots,(2w)^{-2}+2wO_{1}(2w)-T_{r})].\]

Multiplying by \((2w)^{2}\) (a square), we get

\[\left[\left((1-4T_{1}w^{2})\left(1+\frac{8w^{3}O_{1}(2w)}{1-4T_{1}w^{2}}\right),\cdots,(1-4T_{r}w^{2})\left(1+\frac{8w^{3}O_{1}(2w)}{1-4T_{r}w^{2}}\right)\right)\right].\]

Note that the second factors are squares (by [1, Theorem 63:1]), so

\[\delta_{K}(P-\infty)=[(1-4T_{1}w^{2},\cdots,1-4T_{r}w^{2})].\]

Varying \(w\) over the elements of \(\mathcal{O}\) proves the first statement. To compute the dimension, we look at the image of \(W\) under \(\phi\). Indeed, given \(w\in\mathcal{O}\) we have

\[\phi((1-4T_{1}w^{2},\cdots,1-4T_{r}w^{2}))=(\operatorname{Tr}_{k_{1}/\mathbb{F}_{2}}(\overline{T_{1}}\overline{w}^{2}),\ldots,\operatorname{Tr}_{k_{r}/\mathbb{F}_{2}}(\overline{T_{r}}\overline{w}^{2})).\]

Note that over a perfect field of characteristic two squaring is a bijection, so it is enough to determine for which elements \(s=(s_{1},\ldots,s_{r})\in\mathcal{S}\) there exists \(v\in k\) such that \(\operatorname{Tr}_{k_{i}/\mathbb{F}_{2}}(\overline{T_{i}}v)=s_{i}\) for all \(1\leq i\leq r\).
Let \(\sigma_{i}=\operatorname{Tr}_{k_{i}/k}(\overline{T}_{i})\in k\), so by the property of traces in towers

\[\phi(W)=\{(\operatorname{Tr}_{k/\mathbb{F}_{2}}(\sigma_{1}v),\ldots,\operatorname{Tr}_{k/\mathbb{F}_{2}}(\sigma_{r}v))\mid v\in k\}.\]

Recall that the bilinear mapping \(k\times k\to\mathbb{F}_{2}\) given by \((x,y)\mapsto\operatorname{Tr}_{k/\mathbb{F}_{2}}(xy)\) is perfect. By definition, the set \(\{\sigma_{1},\ldots,\sigma_{r}\}\subset k\) generates an \(\mathbb{F}_{2}\)-vector space of dimension \(\dim_{\mathbb{F}_{2}}\mathbb{V}\) in \(k\); hence the same holds for the set of linear functions \((\operatorname{Tr}_{k/\mathbb{F}_{2}}(\sigma_{1}v),\ldots,\operatorname{Tr}_{k/\mathbb{F}_{2}}(\sigma_{r}v))\), and therefore \(\dim_{\mathbb{F}_{2}}W/(A_{\mathcal{O}}^{\times})^{2}=\dim_{\mathbb{F}_{2}}\phi(W)=\dim_{\mathbb{F}_{2}}\mathbb{V}\).

**Remark 3.15**.: When \(p(x)\) satisfies \((*)\), the last statement proves that \(U_{4}=W\), hence \(U_{4}/(A_{\mathcal{O}}^{\times})^{2}\) is contained in the image of the elements of \(\mathcal{C}(K)\) under \(\delta_{K}\).

## 4. Archimedean places

Let \(K\) be an archimedean local field, namely \(K=\mathbb{R}\) or \(K=\mathbb{C}\). If \(K=\mathbb{C}\), then \(A_{K}=A_{\mathbb{C}}\simeq\mathbb{C}^{d}\), and the map \(\delta_{K}:J(\mathbb{C})/2J(\mathbb{C})\to(A_{\mathbb{C}}^{\times}/(A_{\mathbb{C}}^{\times})^{2})_{\square}=\{1\}\) is the trivial map. Thus suppose that \(K=\mathbb{R}\). Let \(2t\) denote the number of complex roots of \(p(x)\) and \(2s+1\) the number of real ones (so \(d=2s+1+2t\)). Then

\[A_{\mathbb{R}}\simeq\mathbb{R}^{2s+1}\times\mathbb{C}^{t}\,. \tag{4.1}\]

Order the real roots in the form \(\tilde{v}<v_{1}<v_{1}^{\prime}<v_{2}<v_{2}^{\prime}<\ldots<v_{s}<v_{s}^{\prime}\) (as in Figure 1) and let \(w_{1},\overline{w_{1}},\ldots,w_{t},\overline{w_{t}}\) denote the complex ones. Let \(P\in\mathcal{C}(\mathbb{R})\) be a real point. Then

\[\delta_{\mathbb{R}}(P-\infty)=(x(P)-\tilde{v},x(P)-v_{1},x(P)-v_{1}^{\prime},\ldots,x(P)-v_{s}^{\prime},x(P)-w_{1},\ldots,x(P)-w_{t}).\]

**Lemma 4.1**.: _We have \(\operatorname{Im}(\delta_{\mathbb{R}})\subset\{\pm 1\}^{2s+1}\times\{1\}^{t}\) and moreover_

\[\operatorname{Im}(\delta_{\mathbb{R}})=\{(1,\epsilon_{1},\epsilon_{1},\ldots,\epsilon_{s},\epsilon_{s},1,\ldots,1)\in\{\pm 1\}^{2s+1}\times\{1\}^{t}\mid\epsilon_{i}\in\{\pm 1\},i=1,\ldots,s\}.\]

Proof.: The fact that \(\mathrm{Im}(\delta_{\mathbb{R}})\subset\{\pm 1\}^{2s+1}\times\{1\}^{t}\) is clear from (4.1). A point between \(\tilde{v}\) and \(v_{1}\) has image \((1,-1,\ldots,-1)\times(1)^{t}\), as Figure 1 shows. In general, a real point between \(v_{i}^{\prime}\) and \(v_{i+1}\) maps to a vector with \(2i+1\) plus signs and \(2s+1-(2i+1)\) minus signs (and trivial entries at the complex places). Since \(\mathrm{Im}(\delta_{\mathbb{R}})\) is a subgroup, this proves the containment

\[\mathrm{Im}(\delta_{\mathbb{R}})\supset\{(1,\epsilon_{1},\epsilon_{1},\ldots,\epsilon_{s},\epsilon_{s},1,\ldots,1)\in\{\pm 1\}^{2s+1}\times\{1\}^{t}\mid\epsilon_{i}\in\{\pm 1\},i=1,\ldots,s\}.\]

The opposite inclusion is clear for real points \(P\in\mathcal{C}(\mathbb{R})\). If \(P\in\mathcal{C}(\mathbb{C})-\mathcal{C}(\mathbb{R})\) then

\[\delta_{\mathbb{R}}(P-\infty)\delta_{\mathbb{R}}(\overline{P}-\infty)=(\left|x(P)-\tilde{v}\right|^{2},\left|x(P)-v_{1}\right|^{2},\ldots,\left|x(P)-v_{s}^{\prime}\right|^{2})\times(1)^{t},\]

a vector whose components are all positive (and hence trivial in the quotient).

## 5. 2-Selmer groups and Class groups
In this section \(K\) denotes a number field and \(\mathcal{C}\) a hyperelliptic curve defined over \(K\). Keeping the previous notation, if \(p(x)\) factors like

\[p(x)=p_{1}(x)\cdots p_{r}(x),\]

then the \(K\)-algebra \(A_{K}\) is isomorphic to \(K_{1}\times\cdots\times K_{r}\), where \(K_{i}\) is the number field \(K[x]/(p_{i}(x))\). We will denote by \(\mathrm{Cl}(A_{K})\) the finite abelian group

\[\mathrm{Cl}(A_{K}):=\mathrm{Cl}(K_{1})\times\cdots\times\mathrm{Cl}(K_{r}),\]

where \(\mathrm{Cl}(K_{i})\) is the class group of the number field \(K_{i}\). A similar notation will be used for the set of ideals, fractional ideals, principal ideals and the ring of integers of \(A_{K}\). If \(\alpha\in A_{K}\) corresponds to \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\) under the isomorphism (2.2), we denote by \(A_{K}(\sqrt{\alpha})\) the \(K\)-algebra \(K_{1}(\sqrt{\alpha_{1}})\times\cdots\times K_{r}(\sqrt{\alpha_{r}})\), and we call the extension \(A_{K}(\sqrt{\alpha})/A_{K}\) unramified if each extension in the previous product is unramified.

For \(v\) a real place of \(K\) we follow the notations of §4, i.e. we denote by \(\tilde{v},v_{1},v_{1}^{\prime},\ldots,v_{s_{v}},v_{s_{v}}^{\prime}\) the real roots of \(p(x)\) in \(K_{v}\), where \(s_{v}\in\mathbb{Z}_{\geq 0}\) depends on \(v\).

**Remark 5.1**.: A real root \(v\) of the polynomial \(p_{i}(x)\) determines an embedding of \(K_{i}\) into \(\mathbb{R}\). Abusing notation, we will use the same symbol to denote either a real root of \(p_{i}(x)\) or the embedding it determines.

From now on we assume the following hypotheses:

**Hypotheses 5.2**.: The hyperelliptic curve \(\mathcal{C}\) and the field \(K\) satisfy the following conditions:

1. The degree of \(p(x)\) is odd.
2. The narrow class group of \(K\) has odd order.
3. For all finite places \(v\) of \(K\), \(\mathcal{C}/K_{v}\) satisfies \((\dagger)\).

**Remark 5.3**.: The first two conditions together with \((\dagger.i)\) are very easy to verify with most computational programs (like [10]). The hypothesis \((\dagger)\) implies that for all finite places \(v\) of \(K\) the image of the connecting morphism \(\delta_{K_{v}}\) belongs to the subgroup \((A_{\mathcal{O}_{v}}^{\times}/(A_{\mathcal{O}_{v}}^{\times})^{2})_{\square}\subset(A_{K_{v}}^{\times}/(A_{K_{v}}^{\times})^{2})_{\square}\).

**Definition 5.4**.: Let \(C_{*}(\mathcal{C})\subset A_{K}^{\times}/(A_{K}^{\times})^{2}\) be the subgroup made of elements \([\alpha]\) satisfying the following properties:

* \(A_{K}(\sqrt{\alpha})\) is unramified at all finite places of \(A_{K}\),
* if \(v\) is a real place of \(K\) then \(A_{K}(\sqrt{\alpha})\) is unramified at \(\tilde{v}\) (equivalently \(\tilde{v}(\alpha)>0\)),
* if \(v\) is a real place of \(K\) then \(A_{K}(\sqrt{\alpha})\) ramifies at \(v_{i}\) if and only if it ramifies at \(v_{i}^{\prime}\) for each \(i=1,\ldots,s_{v}\).

The group \(C_{*}(\mathcal{C})\) plays a crucial role in our bounds, as it is deeply connected to the \(2\)-class group of \(A_{K}\). Let \(\operatorname{Frac}(A_{K})\) denote the group of fractional ideals of \(A_{K}\). Consider the following subgroup of the group of principal ideals:

\[P_{*}(\mathcal{C})=\{(\alpha)\in\operatorname{Frac}(A_{K}):v_{i}(\alpha)\,v_{i}^{\prime}(\alpha)>0\ \text{for each real place }v\text{ of }K,\ i=1,\ldots,s_{v}\}.\]

**Remark 5.5**.: If \(A_{K}=K_{1}\times\ldots\times K_{r}\), the places \(v_{i}\) and \(v_{i}^{\prime}\) need not be places of the same field \(K_{j}\).
A priori the condition \(v_{i}(\alpha)v_{i}^{\prime}(\alpha)>0\) might imply a relation between embeddings of different fields. **Definition 5.6**.: Let \(\operatorname{Cl}_{*}(A_{K},\mathcal{C})\) be the class group attached to \(P_{*}(\mathcal{C})\), i.e. \[\operatorname{Cl}_{*}(A_{K},\mathcal{C})=\operatorname{Frac}(A_{K})/P_{*}( \mathcal{C})\] **Proposition 5.7**.: _The group \(C_{*}(\mathcal{C})\) is isomorphic to the torsion \(2\)-subgroup of \(\operatorname{Cl}_{*}(A_{K},\mathcal{C})\), i.e. \(C_{*}(\mathcal{C})\simeq\operatorname{Cl}_{*}(A_{K},\mathcal{C})[2]\)._ Proof.: The proof mimics the one given in [1, Proposition 2.10]. If \(\alpha\in C_{*}(\mathcal{C})\) (say \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\)) then the extension \(F=A_{K}(\sqrt{\alpha})\) (i.e., \(K_{1}(\sqrt{\alpha_{1}})\times\cdots\times K_{r}(\sqrt{\alpha_{r}})\)) is an extension of \(A_{K}\) that is abelian and unramified at all finite places (meaning that each \(L_{i}/K_{i}\) is abelian and unramified at all finite places). Furthermore, the extension \(F/A_{K}\) is unramified at the Archimedean place \(\tilde{v}\) above \(v\), and satisfies that it ramifies at a place \(v_{i}\) if and only if it ramifies at the place \(v_{i}^{\prime}\). Let \(L=L_{1}\times\cdots\times L_{r}\) denote the maximal abelian extension of \(A_{K}\) which is unramified at all finite places and satisfies the same property at the Archimedean places. Clearly \(F\subset L\) and \(C_{*}(\mathcal{C})\simeq\operatorname{Hom}(\operatorname{Gal}(L/A_{K}),\mu_{2})\) (the extension \(F\) corresponds to the morphism whose kernel equals \(\operatorname{Gal}(L/F)\)). The Artin reciprocity map \(\operatorname{Frac}(A_{K})\to\operatorname{Gal}(L/A_{K})\) has kernel \(P_{*}(\mathcal{C})\), so \(\operatorname{Cl}_{*}(A_{K},\mathcal{C})\simeq\operatorname{Gal}(L/A_{K})\) and \(C_{*}(\mathcal{C})\simeq\operatorname{Cl}_{*}(A_{K},\mathcal{C})[2]\). The hypotheses 5.2 are needed to bound the Selmer group \(\operatorname{Sel}_{2}(J)\) in terms of \(\operatorname{Cl}_{*}(A_{K},\mathcal{C})[2]\). For that purpose, we need to introduce two subgroups of \(A_{K}^{\times}/(A_{K}^{\times})^{2}\). If \(v\) is a finite place of \(K\) dividing \(2\), we denote by \(U_{4,v}\subset A_{\mathcal{O}_{v}}^{\times}\) the subgroup introduced in definition 3.12 and by \(W_{v}\subset U_{4,v}\) the subgroup defined in (3.4). **Definition 5.8**.: Let \(C_{W}(\mathcal{C})\subset A_{K}^{\times}/(A_{K}^{\times})^{2}\) be the subgroup of the classes \([\alpha]\in A_{K}^{\times}/(A_{K}^{\times})^{2}\) satisfying the following properties: * for each place \(v\) of \(K\) over \(2\) the class \([\alpha]\) belong to the image of \(W_{v}\) in \(A_{K_{v}}^{\times}/(A_{K_{v}}^{\times})^{2}\), * \(A_{K}(\sqrt{\alpha})\) is unramified at all finite places of \(A_{K}\), * if \(v\) is a real place of \(K\) then \(A_{K}(\sqrt{\alpha})\) is unramified at \(\tilde{v}\), * if \(v\) is a real place of \(K\) then \(A_{K}(\sqrt{\alpha})\) ramifies at \(v_{i}\) if and only if it ramifies at \(v_{i}^{\prime}\) for each \(i=1,\ldots,s_{v}\). **Proposition 5.9**.: _If hypotheses 5.2 are satisfied then \(C_{W}(\mathcal{C})\subset\operatorname{Sel}_{2}(J)\)._ Proof.: Let \(\alpha\in A_{K}^{\times}\) such that \([\alpha]\in C_{W}(\mathcal{C})\). 
We need to verify \(\operatorname{loc}_{v}([\alpha])\in\operatorname{Im}(\delta_{v})\) for each place \(v\) of \(K\), where \(\operatorname{loc}_{v}:(A_{K}^{\times}/(A_{K}^{\times})^{2})_{\square}\to(A_{K_{v}}^{\times}/(A_{K_{v}}^{\times})^{2})_{\square}\) is the natural map. At Archimedean places, the result follows from Lemma 4.1. If \(v\) corresponds to a prime not dividing \(2\), then the condition \(A_{K}(\sqrt{\alpha})/A_{K}\) unramified implies that \(\alpha\) (up to squares) is a unit in \(A_{\mathcal{O}_{v}}\), so the result follows from Corollary 3.6. The result for places dividing \(2\) follows from Theorem 3.14.

To obtain an upper bound we use the following auxiliary group.

**Definition 5.10**.: Let \(\tilde{C}(\mathcal{C})\subset A_{K}^{\times}/(A_{K}^{\times})^{2}\) be the subgroup of the \([\alpha]\in A_{K}^{\times}/(A_{K}^{\times})^{2}\) such that

* for each finite place \(w\) of \(A_{K}\) the \(w\)-adic valuation of \(\alpha\) is even,
* \(\mathcal{N}(\alpha)\) is a square in \(K\),
* if \(v\) is a real place of \(K\) then \(A_{K}(\sqrt{\alpha})\) is unramified at \(\tilde{v}\) (i.e. \(\tilde{v}(\alpha)>0\)),
* if \(v\) is a real place of \(K\) then \(A_{K}(\sqrt{\alpha})\) ramifies at \(v_{i}\) if and only if it ramifies at \(v_{i}^{\prime}\) for each \(i=1,\ldots,s_{v}\).

**Proposition 5.11**.: _If hypotheses 5.2 are satisfied, then \(\operatorname{Sel}_{2}(J)\subset\tilde{C}(\mathcal{C})\)._

Proof.: The condition at the Archimedean places is clear (by Lemma 4.1). The result for finite primes follows from Theorem 3.4.

Note that \(C_{W}(\mathcal{C})\subset C_{*}(\mathcal{C})\subset\tilde{C}(\mathcal{C})\), so it is enough to bound the indices \([\tilde{C}(\mathcal{C}):C_{*}(\mathcal{C})]\) and \([C_{*}(\mathcal{C}):C_{W}(\mathcal{C})]\) in order to get explicit bounds for \(\operatorname{Sel}_{2}(J)\) in terms of the \(2\)-class group \(\operatorname{Cl}_{*}(A_{K},\mathcal{C})[2]\).

**Theorem 5.12**.: _Following the previous notations,_

* \([\tilde{C}(\mathcal{C}):C_{*}(\mathcal{C})]\leq 2^{g[K:\mathbb{Q}]}\).
* \([C_{*}(\mathcal{C}):C_{W}(\mathcal{C})]\leq 2^{\sum_{v\mid 2}(r_{v}-1-\dim_{\mathbb{F}_{2}}(\mathbb{V}_{v}))}\).

Proof.: The second claim is clear, since by definition \(C_{W}(\mathcal{C})\) consists of the elements of \(C_{*}(\mathcal{C})\) that locally at places \(v\) dividing \(2\) lie in \(W_{v}\), hence the index is bounded by the product of the local indices, which were computed in Theorem 3.14.

The proof of the first claim follows the same idea used in the proof of [1, Theorem 2.11]. There is a natural well-defined map \(\phi:\tilde{C}(\mathcal{C})\to\operatorname{Cl}(A_{K})[2]\) given as follows: if \(\alpha\in A_{K}^{\times}\) is such that \([\alpha]\in\tilde{C}(\mathcal{C})\) then the even valuation condition implies the existence of an ideal \(I\) such that \(I^{2}=(\alpha)\). Define \(\phi([\alpha])=[I]\); the map is well defined by [1, Lemma 2.13]. Equation (2.1) of [1] implies that

\[[\tilde{C}(\mathcal{C}):C_{*}(\mathcal{C})]\leq\frac{\#\ker\phi}{\#(P/P_{*}(\mathcal{C}))}.\]

For each odd value \(1\leq i\leq d\), let

\[\mathcal{A}_{i}=\{v\ \text{real Archimedean place of}\ K\ :\ p(x)\ \text{has}\ i\ \text{real roots in}\ K_{v}\}.\]

Let \(a_{i}=\#\mathcal{A}_{i}\) and \(c\) denote the number of complex places of \(K\), so \([K:\mathbb{Q}]=a_{1}+a_{3}+a_{5}+\cdots+a_{d}+2c\).
The sign map

\[\operatorname{sign}:A_{K}^{\times}\to\prod_{v\in\mathcal{A}_{1}}\{\pm 1\}\times\cdots\times\prod_{v\in\mathcal{A}_{d}}\{\pm 1\}^{d},\]

induces a well-defined map on \(A_{K}^{\times}/(A_{K}^{\times})^{2}\). Let \(W_{i}\subset\{\pm 1\}^{i}\) be the subset of elements whose product equals \(1\) (a subgroup of index two) and let

\[\widetilde{W}=\prod_{v\in\mathcal{A}_{1}}W_{1}\times\ldots\times\prod_{v\in\mathcal{A}_{d}}W_{d}.\]

Let \(V_{i}\) be the subset of \(W_{i}\) given by

\[V_{i}=\left\{\left(1,\epsilon_{1},\epsilon_{1},\ldots,\epsilon_{\frac{i-1}{2}},\epsilon_{\frac{i-1}{2}}\right)\ :\ \epsilon_{j}=\pm 1\right\},\]

and

\[\widetilde{V}=\prod_{v\in\mathcal{A}_{1}}V_{1}\times\ldots\times\prod_{v\in\mathcal{A}_{d}}V_{d}.\]

Clearly \(\operatorname{sign}((A_{K}^{\times}/(A_{K}^{\times})^{2})_{\square})\subset\widetilde{W}\) and \(\operatorname{sign}(\tilde{C}(\mathcal{C}))\subset\widetilde{V}\). The rest of the argument given in the proof of [1, Theorem 2.11] works mutatis mutandis with these definitions, and we obtain that

\[[\tilde{C}(\mathcal{C}):C_{*}(\mathcal{C})]\leq\frac{\#\widetilde{V}\#\operatorname{sign}(\mathcal{O}^{\times})\#(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2})_{\square}}{\#\operatorname{sign}(A_{K}^{\times})}. \tag{5.1}\]

The values appearing in the previous formula are the following:

* \(\#\widetilde{V}=2^{a_{3}+2a_{5}+3a_{7}+\cdots+(\frac{d-1}{2})a_{d}}\),
* \(\#\operatorname{sign}(\mathcal{O}^{\times})=2^{a_{1}+a_{3}+a_{5}+\cdots+a_{d}}\),
* \(\#\operatorname{sign}(A_{K}^{\times})=2^{a_{1}+3a_{3}+5a_{5}+\cdots+da_{d}}\),
* \(\#(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2})_{\square}=2^{g[K:\mathbb{Q}]}\cdot 2^{a_{3}+2a_{5}+3a_{7}+\cdots+(\frac{d-1}{2})a_{d}}\) by Lemma 5.14.

But

\[(a_{3}+2a_{5}+\cdots+(\tfrac{d-1}{2})a_{d})+(a_{1}+a_{3}+a_{5}+\cdots+a_{d})+(a_{3}+2a_{5}+\cdots+(\tfrac{d-1}{2})a_{d})=a_{1}+3a_{3}+5a_{5}+\cdots+da_{d},\]

so the right hand side of (5.1) equals \(2^{g[K:\mathbb{Q}]}\).

**Remark 5.13**.: The oddness hypothesis on the narrow class group of \(K\) is only used in the last theorem. A more general result could be obtained by considering a more general class group, as done in [13] (for the case of elliptic curves).

**Lemma 5.14**.: _With the previous notation,_

\[\#(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2})_{\square}=2^{g[K:\mathbb{Q}]}\cdot 2^{a_{3}+2a_{5}+3a_{7}+\cdots+(\frac{d-1}{2})a_{d}}.\]

Proof.: By definition, \((A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2})_{\square}\) is the kernel of the norm map \(\mathcal{N}:A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2}\to\mathcal{O}^{\times}/(\mathcal{O}^{\times})^{2}\). The map is surjective (since \([A_{K}:K]\) is odd: if \(\epsilon\in\mathcal{O}^{\times}\), its norm equals \(\epsilon^{d}\), which is \(\epsilon\) up to a square). Thus

\[\#(A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2})_{\square}=\frac{\#A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2}}{\#\mathcal{O}^{\times}/(\mathcal{O}^{\times})^{2}}\,.\]

By Dirichlet's unit theorem we have \(\#\mathcal{O}^{\times}/(\mathcal{O}^{\times})^{2}=2^{\alpha}\) where \(\alpha\) is the number of archimedean places of \(K\), i.e., \(\alpha=a_{1}+a_{3}+\cdots+a_{d}+c\). As before, write \(A_{K}\simeq K_{1}\times\cdots\times K_{r}\). Given an archimedean place \(v\) of \(K\), let \(r_{i}(v)\) and \(s_{i}(v)\) denote the number of real and complex places of \(K_{i}\) above \(v\), so \([K_{i}:K]=r_{i}(v)+2s_{i}(v)\) for real \(v\) and \([K_{i}:K]=s_{i}(v)\) for complex \(v\).
We can apply Dirichlet's unit theorem to each \(K_{i}\) to obtain

\[\#A_{\mathcal{O}}^{\times}/(A_{\mathcal{O}}^{\times})^{2}=2^{\sum_{v\text{ real}}\sum_{i=1}^{r}(r_{i}(v)+s_{i}(v))+\sum_{v\text{ complex}}\sum_{i=1}^{r}s_{i}(v)}.\]

If \(v\) is complex we have \(\sum_{i=1}^{r}s_{i}(v)=d\), so the second term in the exponent is

\[\beta=\sum_{v\text{ complex}}\sum_{i=1}^{r}s_{i}(v)=cd=\frac{d-1}{2}(2c)+c\,.\]

For \(v\) real we have \(v\in\mathcal{A}_{j}\) for some \(j\) and in that case \(\sum_{i=1}^{r}r_{i}(v)=j\), while \(\sum_{i=1}^{r}s_{i}(v)=\frac{d-j}{2}\). Hence the first term in the exponent is

\[\gamma=\sum_{v\text{ real}}\sum_{i=1}^{r}(r_{i}(v)+s_{i}(v))=\sum_{j=1}^{d}\left(\frac{d+j}{2}\right)a_{j}=\frac{d-1}{2}\sum_{j=1}^{d}a_{j}+\left(a_{1}+2a_{3}+3a_{5}+\cdots+\tfrac{d+1}{2}a_{d}\right)\,.\]

Adding both terms, and using \(g=\frac{d-1}{2}\) and \([K:\mathbb{Q}]=a_{1}+a_{3}+\cdots+a_{d}+2c\), we obtain \(\gamma+\beta-\alpha=g[K:\mathbb{Q}]+(a_{3}+2a_{5}+\cdots+(\tfrac{d-1}{2})a_{d})\), proving the claim.

Combining all the previous results, we can now prove our main result.

**Theorem 5.15**.: _Let \(K\) be a number field and \(\mathcal{C}/K\) a hyperelliptic curve. Suppose that hypotheses 5.2 hold. Then_

\[\dim_{\mathbb{F}_{2}}\operatorname{Cl}_{*}(A_{K},\mathcal{C})[2]-\sum_{v|2}\bigl(r_{v}-1-\dim_{\mathbb{F}_{2}}(\mathbb{V}_{v})\bigr)\quad\leq\quad\dim_{\mathbb{F}_{2}}\operatorname{Sel}_{2}(J)\quad\leq\quad\dim_{\mathbb{F}_{2}}\operatorname{Cl}_{*}(A_{K},\mathcal{C})[2]+g\left[K:\mathbb{Q}\right]. \tag{5.2}\]

Proof.: By Proposition 5.11 we have \(\operatorname{Sel}_{2}(J)\subset\tilde{C}(\mathcal{C})\), hence

\[\#\operatorname{Sel}_{2}(J)\leq\#\tilde{C}(\mathcal{C})=[\tilde{C}(\mathcal{C}):C_{*}(\mathcal{C})]\cdot\#C_{*}(\mathcal{C}).\]

Theorem 5.12 gives the bound \([\tilde{C}(\mathcal{C}):C_{*}(\mathcal{C})]\leq 2^{g[K:\mathbb{Q}]}\) and Proposition 5.7 implies that \(C_{*}(\mathcal{C})\simeq\operatorname{Cl}_{*}(A_{K},\mathcal{C})[2]\), proving the upper bound

\[\dim_{\mathbb{F}_{2}}\operatorname{Sel}_{2}(J)\leq\dim_{\mathbb{F}_{2}}\operatorname{Cl}_{*}(A_{K},\mathcal{C})[2]+g\left[K:\mathbb{Q}\right].\]

Similarly, by Proposition 5.9 we have \(C_{W}(\mathcal{C})\subset\operatorname{Sel}_{2}(J)\), and the lower bound follows from Theorem 5.12.

## 6. Applications

### Quadratic twists

The goal of the present section is to study the rank variation of families of quadratic twists of a given hyperelliptic curve. For that purpose, let \(K\) be a number field whose narrow class number is odd, and let \(p(x)\in K[x]\) be an irreducible polynomial of odd degree (we cannot remove the irreducibility hypothesis, as will become clear later). Let \(\mathcal{C}\) be the hyperelliptic curve with equation

\[\mathcal{C}:y^{2}=p(x).\]

**Definition 6.1**.: The _quadratic twist_ of the curve \(\mathcal{C}\) by \(a\in K^{\times}\) is the curve defined by the equation

\[\mathcal{C}(a):ay^{2}=p(x). \tag{6.1}\]

A change of variables transforms (6.1) into the better-known equation

\[\mathcal{C}(a):y^{2}=a^{d}p(x/a).\]

Let \(J_{a}\) denote the Jacobian of \(\mathcal{C}(a)\). Note that since \(p(x)\) is irreducible, \(A_{K}\) is a number field.

**Lemma 6.2**.: _Let \(\mathcal{C}\) be a hyperelliptic curve over \(K\) whose defining polynomial has odd degree and is irreducible. Let \(a\in K^{\times}\) be an element satisfying that any prime ideal \(\mathfrak{p}\mid a\) is either inert or totally ramified in the field extension \(A_{K}/K\).
Then, if \(\mathcal{C}\) satisfies hypotheses 5.2, so does \(\mathcal{C}(a)\)._

Proof.: The first two assumptions of Hypotheses 5.2 are clearly satisfied, so we only need to verify the last one, namely that for all finite places \(v\) of \(K\), \(\mathcal{C}(a)/K_{v}\) satisfies (\(\dagger\)). Let \(v\) be a finite place of \(K\).

* If \(p(x)\) is irreducible over \(K_{v}\) then clearly \(a^{d}p(x/a)\) is also irreducible, as both polynomials define the same \(K_{v}\)-algebra \(A_{K_{v}}\), so \(\mathcal{C}(a)\) also satisfies (\(\dagger.i\)).
* Suppose that \(p(x)\) satisfies (\(\dagger.i\)) but is not irreducible over \(K_{v}\). The inclusion \(\mathcal{O}[x]/(p(x))\subset\prod_{i}\mathcal{O}[x]/(p_{i}(x))\) is an equality if and only if both rings have the same discriminant. Our hypothesis on \(a\) implies that \(v\nmid a\), so the discriminants of \(p(x)\) and \(a^{d}p(x/a)\) differ by a unit in \(\mathcal{O}_{K_{v}}\), and similarly for \(p_{i}(x)\). Hence \(\mathcal{C}(a)\) also satisfies (\(\dagger.i\)) over \(K_{v}\).
* Finally, suppose that \(p(x)\) satisfies (\(\dagger.ii\)) over \(K_{v}\) but does not satisfy (\(\dagger.i\)). Our hypotheses on \(a\) imply that \(v\nmid a\), so the extension \(K(\sqrt{a})/K\) is unramified at \(v\). Note that the curves \(\mathcal{C}\) and \(\mathcal{C}(a)\) become isomorphic over such an extension. It is a well-known fact that the component group of the Jacobian of a curve does not vary over unramified extensions.

**Theorem 6.3**.: _Let \(\mathcal{C}\) be a hyperelliptic curve satisfying hypotheses 5.2 over a number field \(K\) of odd narrow class number. Suppose that \(p(x)\) is irreducible, and suppose furthermore that there is a principal prime ideal of \(K\) which is inert in \(A_{K}/K\). Then among all quadratic twists by principal prime ideals, there exists a subset of positive density \(\mathscr{S}\) such that the abelian varieties \(\operatorname{Jac}(\mathcal{C}(a))\) have the same \(2\)-Selmer group for all \(a\in\mathscr{S}\)._

Proof.: Let \(\mathscr{P}\) denote the set of all principal prime ideals of \(K\) (they have positive density), and let \(\mathscr{S}\) be the set of those which are inert or totally ramified in \(A_{K}/K\). Our hypothesis implies that \(\mathscr{S}\) has positive density in the set of all principal prime ideals (by Chebotarev's density theorem). Lemma 6.2 implies that if \((a)\in\mathscr{S}\) then the twisted curve \(\mathcal{C}(a)\) also satisfies hypotheses 5.2, so we can apply our main result (Theorem 5.15) to \(\mathcal{C}(a)\) and deduce that for each \(a\in\mathscr{S}\) the \(2\)-Selmer group of \(\mathcal{C}(a)\) satisfies

\[C_{W}(\mathcal{C})\subset\operatorname{Sel}_{2}(\operatorname{Jac}(\mathcal{C}(a)))\subset\tilde{C}(\mathcal{C}).\]

The result follows from the fact that there are finitely many \(\mathbb{F}_{2}\)-vector spaces containing \(C_{W}(\mathcal{C})\) and contained in \(\tilde{C}(\mathcal{C})\), so at least one of them occurs as the \(2\)-Selmer group for a subset of \(\mathscr{S}\) of positive density.

### A family of octic twists

Let

\[\mathcal{C}(a):y^{2}=x(x^{4}+a). \tag{6.2}\]

The curve \(\mathcal{C}(a)\) has genus two, and it contains the non-trivial point \(P=(0,0)\) of order two in \(\operatorname{Jac}(\mathcal{C}(a))\). The \(2\)-Selmer rank of the surface \(\operatorname{Jac}(\mathcal{C}(a))\) was studied in [11]. There are some very interesting facts concerning the surface \(\operatorname{Jac}(\mathcal{C}(a))\).
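For numerical experiments with this family, Pari/GP provides genus2red, which computes the reduction data of a genus-2 curve given by an integral model \(y^{2}=P(x)\); its first component is the conductor of the Jacobian. A minimal sketch, run on the illustrative value \(a=-1\) (the value taken up in Remark 6.4 below), which should recover the conductor \(2^{12}\) quoted there:

    \\ the family C(a) : y^2 = x(x^4 + a); a = -1 is only an illustration
    a = -1;
    red = genus2red(x*(x^4 + a));
    red[1]     \\ conductor of Jac(C(a)); expected 2^12 = 4096 for a = -1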
Note that all such curves are "twists" of each other, in the sense that they all become isomorphic over the field \(\mathbb{Q}(\sqrt[8]{a})\), via the change of variables \((x,y)\mapsto(\sqrt[4]{a}\,x,\sqrt[8]{a^{5}}\,y)\). The reason for the existence of such a twist is that the curve \(\mathcal{C}(a)\) has an automorphism of order \(8\) given explicitly by \((x,y)\mapsto(\zeta_{4}x,\zeta_{8}y)\), where \(\zeta_{8}\) denotes an eighth root of unity, and \(\zeta_{4}=\zeta_{8}^{2}\). This implies in particular that its Jacobian is an abelian surface with complex multiplication. It is a natural question whether one can "find", for a fixed value of \(a\), the appropriate Hecke character.

**Remark 6.4**.: There is a nice implementation in Pari/GP [11] of algebraic Hecke characters based on the article [12]. For example, if \(a=-1\), then one can compute the conductor of \(\operatorname{Jac}(\mathcal{C}(-1))\) in Magma, and find that it equals \(2^{12}\). Let \(E/F\) be a finite extension of number fields, and let \(\chi\) be a continuous character of \(\operatorname{Gal}_{E}\). Recall the well-known formula:

\[\operatorname{cond}\left(\operatorname{Ind}_{\operatorname{Gal}_{E}}^{\operatorname{Gal}_{F}}\chi\right)=\Delta(E/F)\cdot\mathcal{N}(\operatorname{cond}(\chi)), \tag{6.3}\]

where \(\Delta(E/F)\) denotes the discriminant of the extension and \(\mathcal{N}\) denotes the norm from \(E\) to \(F\) (see for example [10], page 105 after Proposition 4). By class field theory there is a bijection between Galois characters and Hecke characters (respecting conductors). Then the previous formula, with \(E=\mathbb{Q}(\zeta_{8})\) and \(F=\mathbb{Q}\), implies that if \(\mathfrak{p}_{2}\) denotes the unique prime ideal in \(\mathbb{Q}(\zeta_{8})\) dividing \(2\), the Hecke character attached to \(\operatorname{Jac}(\mathcal{C}(-1))\) over \(\mathbb{Q}(\zeta_{8})\) must have conductor \(\mathfrak{p}_{2}^{4}=(2)\) (since \(\Delta(\mathbb{Q}(\zeta_{8})/\mathbb{Q})=2^{8}\)).

Using Pari/GP we can compute the finite set of all algebraic Hecke characters of infinity type \((1,0),(1,0)\) and conductor dividing \(2\) as follows:

    ? g = gcharinit(bnfinit(polcyclo(8)), 2);
    ? g.cyc
    % = [0, 0, 0, 0.E-57]

This shows that there are no finite order characters with this conductor, hence if one Hecke character with infinity type \((1,0),(1,0)\) exists, it must be unique. We compute a Hecke character with this infinity type with the command

    ? chi = gcharalgebraic(g, [[1,0],[1,0]])[1];

The output matches our character \(\chi\). As a sanity check, since our Hecke character has algebraic integer coefficients, we can check whether the L-function attached to our hyperelliptic curve matches the one attached to our character \(\chi\):

    ? Ls1 = lfuncreate([g, chi]);
    ? Ls2 = lfungenus2(x*(x^4-1));
    ? lfunan(Ls2, 1000) == round(lfunan(Ls1, 1000))

This verifies that the first thousand coefficients of the two L-functions do match.

The surface \(\operatorname{Jac}(\mathcal{C}(a))\) over \(\overline{\mathbb{Q}}\) is isogenous to the product of two elliptic curves. This was verified numerically for some particular values of \(a\) (using the algorithm [13]) to deduce the following result, whose proof was communicated to us by John Cremona.
**Proposition 6.5**.: _If \(a=1\) then the surface \(\operatorname{Jac}(\mathcal{C}(1))\) is isogenous to the product of the elliptic curves with label \(256.\mathrm{d}1\) and \(256.\mathrm{a}1\)._

Proof.: If we denote by \(t=\frac{1+x}{1-x}\), then \(x=\frac{t-1}{t+1}\), so we can rewrite the equation of \(\mathcal{C}\) in the form

\[y^{2}=\frac{(t-1)}{(t+1)}\frac{(t-1)^{4}+(t+1)^{4}}{(t+1)^{4}}=2\frac{(t-1)}{(t+1)}\frac{(t^{4}+6t^{2}+1)}{(t+1)^{4}}.\]

Clearing denominators, we get the equation

\[((t+1)^{3}y)^{2}=2(t^{2}-1)(t^{4}+6t^{2}+1).\]

So the map \((t,y)\to(t^{2},(t+1)^{3}y)\) sends the curve \(\mathcal{C}\) to the elliptic curve in the variables \((u,v)\) with equation

\[v^{2}=2(u-1)(u^{2}+6u+1),\]

which is isomorphic to the elliptic curve \(256.\mathrm{a}1\). Similarly, if we make the substitution \(t=\frac{1-x}{1+x}\), so \(x=\frac{1-t}{1+t}\), a similar computation as before gives a map from \(\mathcal{C}\) to the elliptic curve

\[((t+1)^{3}y)^{2}=-2(t^{2}-1)(t^{4}+6t^{2}+1),\]

the quadratic twist by \(-1\) of the previous one (but they are not isogenous over \(\mathbb{Q}\)). This proves the result.

**Corollary 6.6**.: _If \(a\) is a non-zero integer, then over the field extension \(\mathbb{Q}(\sqrt[4]{a})\) the surface \(\operatorname{Jac}(\mathcal{C}(a))\) is isogenous to the product of the quadratic twist of the elliptic curve \(256.\mathrm{a}1\) by \(\sqrt[4]{a}\) and the quadratic twist of the curve \(256.\mathrm{d}1\) by \(\sqrt[4]{a}\)._

Proof.: Over the field \(\mathbb{Q}(\sqrt[4]{a})\), the map \((x,y)\to(\sqrt[4]{a}x,\sqrt{a}y)\) gives an isomorphism between the curve \(\mathcal{C}(a)\) and the curve

\[y^{2}=\sqrt[4]{a}x(x^{4}+1).\]

Then the same proof as before gives a rational map to the curve

\[v^{2}=2\sqrt[4]{a}(u-1)(u^{2}+6u+1),\]

which is the quadratic twist of \(256.\mathrm{a}1\) by \(\sqrt[4]{a}\), and to its quadratic twist by \(-1\), which is the quadratic twist of \(256.\mathrm{d}1\) by \(\sqrt[4]{a}\).

The main result (Theorem 1) obtained in [11] is the following. For a non-zero integer \(a\), let \(\omega(a)\) denote the number of prime divisors of \(a\).

**Theorem 6.7**.: _Suppose that \(a\) is \(8\)-th power free, that the class number of \(\mathbb{Q}(\sqrt[4]{-a})\) is odd and that every prime divisor of \(2a\) has a unique prime dividing it in \(\mathbb{Q}(\sqrt[4]{-a})\). Then if \(a<0\), the \(2\)-Selmer rank of \(\operatorname{Jac}(\mathcal{C}(a))\) is bounded above by \(\omega(2a)+3\)._

Suppose that \(a<0\). A necessary condition for the class number of \(\mathbb{Q}(\sqrt[4]{-a})\) to be odd is that \(-a\) is either a prime or twice a prime. Let us restrict to the case \(a=-p\) with \(p\) an odd prime number (we write \(p\) to emphasize our hypothesis). We would like to apply our main result to the curve \(\mathcal{C}(-p)\) over \(\mathbb{Q}\) to improve the upper bound. For that purpose we need to verify whether Hypotheses 5.2 are satisfied. Our polynomial has odd degree and we are working over the rationals, hence the first two hypotheses are satisfied. The \(\mathbb{Q}\)-algebra is \(A_{\mathbb{Q}}=\mathbb{Q}\times\mathbb{Q}(\sqrt[4]{p})\), \(\Delta(x(x^{4}-p))=-2^{8}\cdot p^{5}\) and \(\Delta(x^{4}-p)=-2^{8}\cdot p^{3}\), so \((\dagger.i)\) is satisfied at all primes but \(p\). The problem is that \((\dagger.\mathrm{ii})\) is also not satisfied at \(p\), because the Néron model has two components (see [14], page 156, type VII). However, not everything is lost.
Our local map is an injective morphism

\[\delta_{p}:\operatorname{Jac}(\mathcal{C}(-p))(\mathbb{Q}_{p})/2\operatorname{Jac}(\mathcal{C}(-p))(\mathbb{Q}_{p})\hookrightarrow((\mathbb{Q}_{p}\times\mathbb{Q}_{p}(\sqrt[4]{p}))^{\times}/(A_{\mathbb{Q}_{p}}^{\times})^{2})_{\square}.\]

Any element in the image is the class of an element of the form \((\epsilon_{1}p^{r},\epsilon_{2}\sqrt[4]{p}^{r})\) for \(\epsilon_{i}\) units and \(r\in\{0,1\}\). In particular, the image has size at most twice the number of classes of elements which are units up to squares, hence we can still provide an upper bound in terms of the class group \(\operatorname{Cl}_{*}(A_{\mathbb{Q}},\mathcal{C}(-p))[2]\), namely

\[0\leq\dim_{\mathbb{F}_{2}}\operatorname{Sel}_{2}(\operatorname{Jac}(\mathcal{C}(-p)))\leq\dim_{\mathbb{F}_{2}}\operatorname{Cl}_{*}(A_{\mathbb{Q}},\mathcal{C}(-p))[2]+2+1. \tag{6.4}\]

We are led to compute \(\operatorname{Cl}_{*}(A_{\mathbb{Q}},\mathcal{C}(-p))[2]\). First we need to understand the class group of the extension \(\mathbb{Q}(\sqrt[4]{p})\).

**Lemma 6.8**.: _If \(a\) is square-free and \(a\equiv 1,2\pmod{4}\) then the ring of integers of \(\mathbb{Q}(\sqrt[4]{-a})\) is \(\mathbb{Z}[\sqrt[4]{-a}]\)._

Proof.: See [10, Theorem 1].

**Theorem 6.9**.: _Let \(p\) be a prime number congruent to \(3\) modulo \(8\). Then the class group of \(\mathbb{Q}(\sqrt[4]{p})\) has odd cardinality, while its narrow class group has twice its cardinality. Its element of order \(2\) corresponds to the quadratic extension \(\mathbb{Q}(\sqrt[4]{p},\sqrt{-1})\)._

Before proving the result, we need some auxiliary results, whose proofs are based on Gauss' results on binary quadratic forms (see [13], §6).

**Proposition 6.10**.: _If \(p\) is a prime number, \(p\equiv 3\pmod{4}\), then \(\operatorname{Cl}(\mathbb{Q}[\sqrt{-1},\sqrt{p}])\) has odd cardinality._

Proof.: Consider the tower of fields

\[\begin{array}{c}H\\ |\\ L=\mathbb{Q}(\sqrt{-1},\sqrt{p})\\ |\\ F=\mathbb{Q}(\sqrt{-1})\end{array}\]

where \(H\) is the Hilbert class field of \(L\) (a Galois extension of \(F\)). Suppose that \(L(\sqrt{\alpha})\) is a subextension of \(H\). Since \(\operatorname{Gal}(H/L)\simeq\operatorname{Cl}(L)\), we have \(L(\sqrt{\alpha})\subseteq\tilde{L}=H^{\operatorname{Cl}(L)^{2}}\).

**Claim:** \(\tilde{L}/F\) is an abelian extension.

The proof mimics the one given in [13, Theorem 6.1, page 122], replacing complex conjugation by another element of order two. Let \(\sigma\in\operatorname{Gal}(H/F)\) be any element such that \(\sigma(\sqrt{p})=-\sqrt{p}\). Note that if \(\mathfrak{a}\) is an element in \(\operatorname{Cl}(L)\) then \(\mathfrak{a}\cdot\sigma(\mathfrak{a})\) is the extension to \(\mathcal{O}_{L}\) of an ideal of \(F\). In particular, \(\mathfrak{a}\cdot\sigma(\mathfrak{a})\) is a principal ideal (since the class group of \(F\) is trivial), so as elements of the class group \(\operatorname{Cl}(L)\), \(\mathfrak{a}^{-1}=\sigma(\mathfrak{a})\). Consider the following exact sequence:

\[1\to\operatorname{Gal}(H/L)\to\operatorname{Gal}(H/F)\to\operatorname{Gal}(L/F)\to 1. \tag{6.5}\]

The map \(\sigma\) provides a splitting of it, since \(\sigma^{2}\in\operatorname{Gal}(H/L)\) and it induces the identity on the class group \(\operatorname{Cl}(L)\) (recall that \(\sigma(\mathfrak{a})=\mathfrak{a}^{-1}\)). In particular, \(\operatorname{Gal}(H/F)\simeq\mathbb{Z}/2\ltimes\operatorname{Cl}(L)\), where the non-trivial element of \(\mathbb{Z}/2\) acts by sending an ideal class \(\mathfrak{a}\) to its inverse.
Then the proof of [13, Theorem 6.1, page 122] proves that \(\operatorname{Cl}(L)^{2}=[G,G]\), where \(G=\operatorname{Gal}(H/F)\), and in particular \(\operatorname{Gal}(\tilde{L}/F)\simeq G/[G,G]\) is abelian. Since \(L(\sqrt{\alpha})\subset\tilde{L}\) and \(\tilde{L}/F\) is an abelian extension, the extension \(L(\sqrt{\alpha})/F\) is Galois. The group \(\operatorname{Gal}(L(\sqrt{\alpha})/F)\) is isomorphic to either \(\mathbb{Z}/2\), to \(\mathbb{Z}/2\times\mathbb{Z}/2\) or it is cyclic of order \(4\). The last case cannot occur, because \(\mathbb{Z}/4\) does not fit into the exact sequence (6.5). In the first two cases, we can assume (without loss of generality) that \(\alpha\in F\), so \(F(\sqrt{\alpha})\) is a quadratic extension of \(F\) unramified outside \(p\). Since the class number of \(F\) is one, \(\alpha\) can be chosen so that the ideal it generates is supported only at the prime \(p\) (which is inert in \(F\) because \(p\) is congruent to \(3\) modulo \(4\)) and hence \(\alpha\in\{i,p,ip\}\) (up to squares). But among these possibilities, the only quadratic extension which is unramified at \(2\) is \(F(\sqrt{p})=L\). **Proposition 6.11**.: _If \(p\equiv 3\pmod{8}\) then the fundamental unit of \(\mathbb{Q}(\sqrt{p},\sqrt{-1})\) equals \(\sqrt{i\cdot\varepsilon_{p}}\), where \(\varepsilon_{p}\) is a totally positive fundamental unit of \(\mathbb{Q}(\sqrt{p})\)._ Proof.: Suppose that \(p\neq 3\), since this case is true by a computer check. By [1, pp. 1.1], the units of \(F=\mathbb{Q}(\sqrt{p},\sqrt{-1})\) are generated by \(\{i,\kappa\}\), where \(\kappa=\varepsilon_{p}\) or \(\kappa=\sqrt{i\cdot\varepsilon_{p}}\) (meaning that \(\kappa\) is an element of \(F\) whose square equals \(i\cdot\varepsilon_{p}\)). To prove the claim, it is enough to prove that \(i\cdot\varepsilon_{p}\) is a square in \(F\). The extension \(\mathbb{Q}(\sqrt{p},\sqrt{-1})[\sqrt{i\cdot\varepsilon_{p}}]\) is at most quadratic, and is unramified at all finite odd primes (i.e. those not dividing \(2\)). If we can prove that the extension is also unramified at even primes, then the extension must be trivial (since the class number of \(\mathbb{Q}(\sqrt{p},\sqrt{-1})\) is odd by the previous proposition). Since \(p\equiv 3\pmod{8}\), there exists an isomorphism \(\Phi:\mathbb{Q}_{2}(\sqrt{p})\to\mathbb{Q}_{2}(\sqrt{3})\). The fact that the class number of \(\mathbb{Q}(\sqrt{p})\) is odd also implies that the quadratic extension \(\mathbb{Q}(\sqrt{p},\sqrt{\varepsilon_{p}})\) is ramified at \(2\) (since the fundamental unit has norm \(1\)). Then \(\mathbb{Q}_{2}(\sqrt{p},\sqrt{\varepsilon_{p}})/\mathbb{Q}_{2}\) is biquadratic of conductor \(2^{8}\). There are precisely two such extensions, which match \(\mathbb{Q}_{2}(\sqrt{3},\sqrt{\varepsilon_{3}})\) and \(\mathbb{Q}_{2}(\sqrt{3},\sqrt{-\varepsilon_{3}})\), where \(\varepsilon_{3}=2-\sqrt{3}\) is a fundamental unit for \(\mathbb{Q}(\sqrt{3})\) (see for example Jones-Roberts tables at [https://hobbes.la.asu.edu/courses/site/Localfields-index.html](https://hobbes.la.asu.edu/courses/site/Localfields-index.html)). Then extending \(\Phi\) to an isomorphism between \(\mathbb{Q}_{2}(\sqrt{p},\sqrt{-1})\) and \(\mathbb{Q}_{2}(\sqrt{3},\sqrt{-1})\) we can assume that \(\Phi(\varepsilon_{p})=\varepsilon_{3}\) up to squares (so it is enough to understand the case \(p=3\)). 
But actually \(i\cdot(2-\sqrt{3})\) is a square in \(\mathbb{Q}(\sqrt{3},\sqrt{-1})\) (since \(\left(\frac{1+i-\sqrt{3}-\sqrt{3}i}{2}\right)^{2}=i(2-\sqrt{3})\)), so if \(\mathbb{Q}(\sqrt{p},\sqrt{-1})[\sqrt{i\varepsilon_{p}}]\) is not equal to \(\mathbb{Q}(\sqrt{p},\sqrt{-1})\), the primes dividing \(2\) are split (not ramified). **Theorem 6.12**.: _If \(p\equiv 3\pmod{8}\) then \(\operatorname{Cl}(\mathbb{Q}[\sqrt{-1},\sqrt[4]{p}])\) has odd cardinality._ Proof.: In Proposition 6.9 replace \(F\) by \(\mathbb{Q}(\sqrt{-1},\sqrt{p})\) (whose class group has odd cardinality) and \(L\) by \(\mathbb{Q}(\sqrt{-1},\sqrt[4]{p})\). Let \(\sigma\in\operatorname{Gal}(L/F)\) be the non-trivial element, defined by \(\sigma(i)=i\), \(\sigma(\sqrt[4]{p})=-\sqrt[4]{p}\). Since we are interested in understanding quadratic extensions, instead of working with the whole extension \(H/L\), we can consider the subextension \(H^{\operatorname{Cl}(L)^{2}}/L\) (whose Galois group is an elementary \(2\)-group) and the extension \(\operatorname{Gal}(H^{\operatorname{Cl}(L)^{2}}/F)\). Since the class group of \(F\) is odd, \(\sigma(\mathfrak{a})\cdot\mathfrak{a}\) equals the square of an ideal of \(F\), so \(\sigma(\mathfrak{a})\) has the same class as \(\mathfrak{a}^{-1}\) in \(\operatorname{Cl}(L)/\operatorname{Cl}(L)^{2}\). Then the same argument as in the previous proposition proves that \(L(\sqrt{\alpha})\) is an extension of degree at most \(4\) of \(F\) unramified outside \(2p\) whose Galois group \(\operatorname{Gal}(L(\sqrt{\alpha})/F)\) is isomorphic to either \(\mathbb{Z}/2\) or to \(\mathbb{Z}/2\times\mathbb{Z}/2\) (it cannot be cyclic of order \(4\) for the same reason as before). Since the class group of \(F\) is odd, we can assume that \((\alpha)\) is only supported at the prime ideal \((\sqrt{p})\) and at an ideal dividing \(2\). The fact that \(L(\sqrt{\alpha})/L\) is unramified at primes dividing \(2\) together with the fact that the quadratic extension \(L/F\) has conductor exponent \(2\) implies that \(\alpha\) cannot be divisible by primes dividing \(2\). In particular, \(\alpha\in\{u,\sqrt{p}u\}\) for \(u\) a unit of \(F\), and since we are interested in the extension \(L(\sqrt{\alpha})/L\) (and \(\sqrt[4]{p}\in L\)), it is enough to understand the case when \(\alpha\) is a unit up to squares. Let \(\varepsilon_{p}\) denote the fundamental unit of \(\mathbb{Q}(\sqrt{p})\). Supposes that \(p\neq 3\) (as in this case the class number can be computed and prove the veracity of the statement), so that the only roots of unity in \(F\) are the fourth roots of unity. By Proposition 6.11, the units of \(F\) are generated by \(\{i,\kappa\}\), where \(\kappa=\sqrt{i\cdot\varepsilon_{p}}\). Then up to squares, we can restrict to the case \(\alpha\in\{i,\kappa,i\kappa\}\). Start assuming that \(p\equiv 3\pmod{16}\). Then since \(p/3\) is a fourth power in \(\mathbb{Q}_{2}\), there is an isomorphism \(\Phi:\mathbb{Q}_{2}(\sqrt[4]{p})\to\mathbb{Q}_{2}(\sqrt[4]{3})\) and also an isomorphism between \(\mathbb{Q}_{2}(\sqrt[4]{p},\sqrt{-1})\) and \(\mathbb{Q}_{2}(\sqrt[4]{3},\sqrt{-1})\). The extension \(\mathbb{Q}_{2}(\sqrt[4]{-1},\sqrt[4]{3})/\mathbb{Q}_{2}(\sqrt{-1},\sqrt[4]{ 3})\) is ramified, so \(\alpha\neq i\). The same proof of the previous proposition implies that \(\Phi(\varepsilon_{p})\) equals \(\varepsilon_{3}\) up to squares in \(\mathbb{Q}_{2}(\sqrt{3},\sqrt{-1})\). 
Then we proceed as follows: * Run over all elements of \(\mathcal{O}_{F}\) modulo \(4\), and keep only the elements \(\beta\) whose square equals \(i\cdot(2-\sqrt{3})\) modulo \(4\). * For each such element \(\beta\), check whether the extension \(F[\sqrt{\beta}]/F\) is ramified or not. It turns out that the first search produces sixteen values of \(\beta\), and for all of them the extension is ramified, so \(\alpha\neq\kappa\). We apply the same strategy to \(i\cdot\kappa\) and get no unramified extension either, so \(\alpha\neq i\kappa\) deducing that there is no non-trivial extension of \(L\) as claimed. We apply the same check when \(p\equiv 11\pmod{16}\), taking as fundamental unit \(\varepsilon_{11}=10-3\sqrt{11}\), obtaining no unramified extension of \(\mathbb{Q}_{2}(\sqrt{11},\sqrt{-1})\), finishing the proof. Proof of Theorem 6.9.: Let \(L/\mathbb{Q}(\sqrt[4]{p})\) be a quadratic unramified extension. Then the extension \(L\cdot\mathbb{Q}(\sqrt[4]{p},\sqrt{-1})/\mathbb{Q}(\sqrt[4]{p},\sqrt{-1})\) is unramified of degree at most \(2\), but by the previous result there is no non-trivial such an extension, so \(L=\mathbb{Q}(\sqrt[4]{p},\sqrt{-1})\), which ramifies at both real infinite places of \(\mathbb{Q}(\sqrt[4]{p})\). **Remark 6.13**.: With a little more effort, one can prove a similar result for \(a=-2p\), with \(p\equiv 3\pmod{8}\). Recall that our goal is to compute the value \(\operatorname{Cl}_{*}(A_{\mathbb{Q}},\mathcal{C}(-p))[2]\). The marked place \(\tilde{v}\) attached to the infinity place of \(\mathbb{Q}\) corresponds to the real root \(-\sqrt[4]{p}\). Following the notation of Section 4, the extension \(\mathbb{Q}(\sqrt[4]{p},\sqrt{-1})\) ramifies at \(\tilde{v}\) and at \(v_{1}^{\prime}\), so it does not correspond to an element of \(\operatorname{Cl}_{*}(A_{\mathbb{Q}},\mathcal{C}(-p))\) hence \(\operatorname{Cl}_{*}(A_{\mathbb{Q}},\mathcal{C}(-p))[2]=1\). Then (6.4) gives \[0\leq\dim_{\mathbb{F}_{2}}\operatorname{Sel}_{2}(\operatorname{Jac}(\mathcal{ C}(-p)))\leq 2+1=3.\] This already improves the upper bound of Theorem 6.7 from 5 to 3. We know that the Selmer group is non-trivial (due to the point of order two on \(\operatorname{Jac}(\mathcal{C}(-p))\)), so actually \[1\leq\dim_{\mathbb{F}_{2}}\operatorname{Sel}_{2}(\operatorname{Jac}( \mathcal{C}(-p)))\leq 3. \tag{6.6}\] This implies that the rank of the surface belongs to the set \(\{0,1,2\}\). If we can prove that actually the root number of our family is \(-1\), then the parity conjecture implies that its rank is odd hence (assuming the validity of the parity conjecture) it must be one. ### On the root number of \(\operatorname{Jac}(\mathcal{C}(-p))\) The main goal of the present section is to prove the following result. **Theorem 6.14**.: _Assuming the parity conjecture, the root number of \(\operatorname{Jac}(\mathcal{C}(-p))\) is \(-1\) for all primes \(p\equiv 3\pmod{8}\). In particular, \(\operatorname{Jac}(\mathcal{C}(-p))\) has rank \(1\) for all such primes._ Proof.: Let \(F=\mathbb{Q}(\zeta_{8})\) the field containing the eighth roots of unity. The Galois group \(\operatorname{Gal}(F/\mathbb{Q})\) is isomorphic to \(\mathbb{Z}/2\times\mathbb{Z}/2\) and consists of the maps \(\sigma_{i}:\zeta_{8}\to\zeta_{8}^{i}\), for \(i\in(\mathbb{Z}/8)^{\times}\). Note that the isomorphism between \(\operatorname{Gal}(F/\mathbb{Q})\) and \((\mathbb{Z}/8)^{\times}\) is canonical (i.e. it does does not depend on the choice of root of unity). The fixed field of each map is given in Table 1. 
Over the field \(F\) the endomorphism ring of our surface \(\operatorname{Jac}(\mathcal{C}(p))\) contains \(\mathbb{Z}[\zeta_{8}]\) (of rank \(4\) over \(\mathbb{Z}\)), hence our surface has complex multiplication over \(\mathbb{Q}(\zeta_{8})\) (the whole endomorphism algebra equals \(M_{2}(\mathbb{Z}[\sqrt{-2}])\) as proven in Corollary 6.6). As explained in Remark 6.4, there exists an explicit Hecke character \(\chi\) of infinity type \((1,0),(1,0)\) such that \[L(\mathcal{C}(-1),s)=L(\chi,s).\] \begin{table} \begin{tabular}{|c|c||c|c|} \hline Map & Fixed Field & Map & Fixed Field \\ \hline \hline \(\sigma_{1}\) & \(\mathbb{Q}\) & \(\sigma_{3}\) & \(\mathbb{Q}(\sqrt{-2})\) \\ \hline \(\sigma_{5}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(\sigma_{7}\) & \(\mathbb{Q}(\sqrt{2})\) \\ \hline \end{tabular} \end{table} Table 1. Fixed fields of the maps \(\sigma_{i}\) Let \(\theta_{a}\) denote the order eight Hecke character corresponding to the extension \(L=F(\sqrt[8]{a})/F\). Then the surface \(\operatorname{Jac}(\mathcal{C}(a))\) is the "twist" of \(\operatorname{Jac}(\mathcal{C}(-1))\) by \(\theta_{a}\) (since our surface contains the eighth roots of unity in its endomorphism ring, it makes sense to twist by an order \(8\) character). Then \[L(\mathcal{C}(a),s)=L(\chi\theta_{a},s),\] so it is enough to compute the root number of \(\chi\theta_{a}\) for each prime number \(p\) dividing \(2a\). The problem is that the local computation at primes dividing \(2\) is very delicate, so we avoid this issue with the following trick: restrict to the case \(a=-p\) is an odd prime number (our case of interest). * For each residue class of \(p\) modulo \(32\), compute the root number of \(\mathcal{C}(q)\) for a particular representative \(q\) of the congruence class via computing the rank of \(\mathcal{C}(q)\) (using Magma) and assuming the parity conjecture. * If \(p\) is another prime congruent to \(q\) modulo \(32\), the extension \(F(\sqrt[8]{p/q})/F\) is unramified at \(2\), so the surfaces \(\mathcal{C}(p)\) and \(\mathcal{C}(q)\) differ by an octic twist whose conductor is odd (only ramified at primes dividing \(p\) and \(q\)). Compute how the root number varies under such a twist. * Apply the previous steps to the primes \(q=3,11,19\) and \(59\) (since we assumed \(a\equiv 5\pmod{8}\)). Start with the case \(q=3\). Using Magma we compute the \(2\)-Selmer group of \(\operatorname{Jac}(\mathcal{C}(-3))\) and verify that it is isomorphic to \(\mathbb{Z}/2\times\mathbb{Z}/2\). Furthermore, since our curve has a rational point, its set of deficient primes (as in [13, Corollary 12]) is empty, so the order of its Tate-Shafarevich group is a square, hence trivial. Since the \(2\)-torsion of \(\operatorname{Jac}(\mathcal{C}(-3))\) is \(\mathbb{Z}/2\), this implies that \(\operatorname{Jac}(\mathcal{C}(-3))\) has rank \(1\), hence (assuming the parity conjecture) the sign of the functional equation of \(\operatorname{Jac}(\mathcal{C}(-3))\) is \(-1\). Let \(p\) be a prime congruent to \(3\) modulo \(32\). Let \(a=p/3\) and let \(\theta_{a}\) be the order \(8\) character of \(F\) corresponding to the extension \(F(\sqrt[8]{a})/F\) (so its conductor is only divisible by primes dividing \(3\) and \(p\)). 
Let \(\chi\) be the Hecke character attached to \(\mathcal{C}(-3)\), so that \[L(\mathcal{C}(-p),s)=L(\chi\theta_{a},s).\] _The local root number variation at primes dividing \(p\)._ The prime \(p\) factors as a product of two primes \(\mathfrak{p}\mathfrak{p}^{\prime}\) in \(F\) (each of them with inertial degree \(2\)). Let \(\mathcal{O}_{\mathfrak{p}}\) denote the completion of \(\mathcal{O}_{F}\) at \(\mathfrak{p}\). Locally at the prime \(\mathfrak{p}\), \(\theta_{a}\) has conductor \(\mathfrak{p}\) and order \(8\), so it factors through a character \[\theta_{\mathfrak{p}}:\mathcal{O}_{\mathfrak{p}}^{\times}\to\mathbb{F}_{p^{2 }}^{\times}\to\mathbb{C}^{\times},\] sending a generator to an eighth root of unity. Let \(\psi\) be an additive unramified character of \(F_{\mathfrak{p}}\) (i.e. \(\psi\) restricted to \(\mathcal{O}_{\mathfrak{p}}\) is trivial, but its restriction to \(\frac{\mathcal{O}_{\mathfrak{p}}}{p}\) is not). Let \(dx\) be a Haar measure on \(F_{\mathfrak{p}}\) such that \(\int_{\mathcal{O}_{\mathfrak{p}}}dx=1\) (so the measure is self dual with respect to the additive character \(\psi\)). Then the local root number of \(\chi\) at \(\mathfrak{p}\) equals \(1\) (by [10, (3.4.3.1)], since the character is unramified at \(\mathfrak{p}\)), while the local root number of \(\theta_{a}\chi\) equals \[\varepsilon(\chi_{\mathfrak{p}}\theta_{\mathfrak{p}},\psi,dx)=\chi_{ \mathfrak{p}}(p)\int_{p^{-1}\mathcal{O}_{\mathfrak{p}}^{\times}}\theta_{ \mathfrak{p}}^{-1}(x)\psi(x)dx. \tag{6.7}\] Since \(\theta_{\mathfrak{p}}\) has conductor exponent \(1\), the later equals \[\chi_{\mathfrak{p}}(p)\theta_{\mathfrak{p}}(p)\sum_{b\in\mathbb{F}_{p^{2}}} \theta_{\mathfrak{p}}(b)\psi\left(\frac{b}{p}\right). \tag{6.8}\] This Gauss sum has very nice properties, namely (see [1, Chapter 1]): * Its absolute value equals \(p\). * \(\overline{\sum_{b\in\mathbb{F}_{p^{2}}}\theta_{\mathfrak{p}}(b)\psi\left(\frac{b}{p }\right)}=\theta_{\mathfrak{p}}(-1)\sum_{b\in\mathbb{F}_{p^{2}}}\overline{ \theta_{\mathfrak{p}}(b)}\psi\left(\frac{a}{p}\right)\). The same computations applies to the prime \(\mathfrak{p}^{\prime}\). The map \(\sigma_{7}\) sends the ideal \(\mathfrak{p}\) to \(\mathfrak{p}^{\prime}\) and vice-versa. In particular, it induces an isomorphism \(\widetilde{\sigma_{7}}:\mathcal{O}_{\mathfrak{p}}\to\mathcal{O}_{\mathfrak{p}^ {\prime}}\). Via \(\widetilde{\sigma_{7}}\) we define an additive character and a Haar measure on \(\mathit{F}_{\mathfrak{p}^{\prime}}\) (by composing the ones for \(\mathcal{O}_{\mathfrak{p}}\) with the isomorphism \(\sigma_{7}\)). We claim that under the isomorphism \(\widetilde{\sigma_{7}}\) the following relation holds: \[\theta_{\mathfrak{p}^{\prime}}(b)=\theta_{\mathfrak{p}}(\widetilde{\sigma_{7} }(b))^{-1}=\overline{\theta_{\mathfrak{p}}(\widetilde{\sigma_{7}}(b))}. \tag{6.9}\] Recall that in general, if \(L/F/M\) is a tower of Galois field extensions, there is an action of \(\mathrm{Gal}(L/M)\) on \(\mathrm{Gal}(L/F)\) (by conjugation) coming from the fact that the subgroup is normal. If furthermore, \(\mathrm{Gal}(L/F)\) is abelian, then we get an action of the quotient \(\mathrm{Gal}(F/M)\) on \(\mathrm{Gal}(L/F)\). In our particular case, \(L=\mathbb{Q}(\sqrt[8]{p/3})\), \(F=\mathbb{Q}(\zeta_{8})\) and \(M=\mathbb{Q}\). 
Since \(L/F\) is abelian, there is a well defined Artin map \(\mathrm{Art}:\mathrm{Frac}(F)\to\mathrm{Gal}(L/F)\) (where \(\mathrm{Frac}(F)\) corresponds to the group of fractional ideals of \(\mathcal{O}_{F}\)), and the Artin map is compatible with the action of \(\mathrm{Gal}(F/\mathbb{Q})\) on \(\mathrm{Frac}(F)\) in the sense that for any \(\sigma\in\mathrm{Gal}(F/\mathbb{Q})\), \[\mathrm{Art}(\sigma(\mathfrak{p}))=\sigma^{-1}\,\mathrm{Art}(\mathfrak{p})\sigma.\] It is easy to verify that for any ideal \(\mathfrak{a}\), the Artin map satisfies \[\mathrm{Art}(\sigma_{7}(\mathfrak{a}))=\mathrm{Art}(\mathfrak{a})^{7}. \tag{6.10}\] Let \(\alpha\in\mathcal{O}_{F}\) be such that: * \(\alpha\equiv 1\pmod{\mathfrak{p}}\), * \(\alpha\equiv g\pmod{\mathfrak{p}^{\prime}}\), for \(g\) a generator of \((\mathcal{O}_{\mathfrak{p}^{\prime}}/\mathfrak{p}^{\prime})^{\times}\), * \(\alpha\equiv 1\pmod{3}\). Since \(\theta_{a}\) only ramifies at primes dividing \(3p\) (with conductor exponent \(1\)), then \[\theta_{a}((\alpha))=\theta_{\mathfrak{p}^{\prime}}(\alpha)=\theta_{\mathfrak{ p}^{\prime}}(g).\] On the other hand, \[\sigma_{7}\,\theta_{a}((\alpha))=\theta_{a}(\sigma_{7}(\alpha))=\theta_{ \mathfrak{p}}(\widetilde{\sigma_{7}}(\alpha)).\] But equation (6.10) implies that \(\sigma_{7}\,\theta_{a}=\theta_{a}^{7}\), so the claim follows (because \(\theta_{a}^{-1}=\theta_{a}^{7}\)). The \(p\)-th epsilon factor of \(\chi\theta_{a}\) at \(p\) equals the product of the two epsilon factors at \(\mathfrak{p}\) and \(\mathfrak{p}^{\prime}\), namely \[\chi_{\mathfrak{p}}(p)\theta_{\mathfrak{p}}(p)\chi_{\mathfrak{p}^{\prime}}(p) \theta_{\mathfrak{p}^{\prime}}(p)\left(\sum_{b\in\mathbb{F}_{p^{2}}}\theta_{ \mathfrak{p}}(b)\psi\left(\frac{b}{p}\right)\right)\left(\sum_{b\in\mathbb{F}_ {p^{2}}}\theta_{\mathfrak{p}^{\prime}}(b)\psi\left(\frac{b}{p}\right)\right).\] Formula (6.9) together with the two properties of our Gauss sum and the fact that \(\theta_{\mathfrak{p}}(-1)=-1\) (since \(p\equiv 3\pmod{8}\), \(v_{2}(p^{2}-1)=3\)) imply that the root number at \(p\) equals \[-\chi_{\mathfrak{p}}(p)\chi_{\mathfrak{p}^{\prime}}(p)\theta_{\mathfrak{p}}(p) \theta_{\mathfrak{p}^{\prime}}(p)p^{2}. \tag{6.11}\] Since the character \(\theta_{a}\) is unramified at \(2\), the product formula implies that \[1=\theta_{\mathfrak{p}}(p)\theta_{\mathfrak{p}^{\prime}}(p)\theta_{\mathfrak{q} _{3}}(p)\theta_{\mathfrak{q}_{3}^{\prime}}(p),\] where \(3=\mathfrak{q}_{3}\mathfrak{q}_{3}^{\prime}\) over \(F\). The same proof as before gives that \(\theta_{\mathfrak{q}_{3}^{\prime}}(b)=\overline{\theta_{\mathfrak{q}_{3}}( \widetilde{\sigma_{7}}(b))}\) for any \(b\in\mathcal{O}_{\mathfrak{q}_{3}}\). Since \(\widetilde{\sigma_{7}}(p)=p\), \(\theta_{\mathfrak{q}_{3}}(p)\theta_{\mathfrak{q}_{3}^{\prime}}(p)=1\). Then the local root number variation at \(p\) equals \[\varepsilon_{p}:=-\chi_{\mathfrak{p}}(p)\chi_{\mathfrak{p}^{\prime}}(p)p^{2}. \tag{6.12}\] _The local root number variation at primes dividing \(3\)_. The situation is similar to the previous one, but now if \(\mathfrak{p}_{3}\) and \(\mathfrak{p}_{3}^{\prime}\) are the two primes dividing \(3\), then \(\chi_{\mathfrak{p}_{3}}\) is a ramified character, while \(\theta_{\mathfrak{p}_{3}}\chi_{\mathfrak{p}_{3}}\) is not. Hence the root number variation equals \[\varepsilon_{3}:=-\chi_{\mathfrak{p}_{3}}(3)^{-1}\chi_{\mathfrak{p}_{3}^{ \prime}}(3)^{-1}3^{-2}. \tag{6.13}\] _The local root number variation at primes dividing \(2\)_. 
The prime \(2\) ramifies completely in the extension \(F/\mathbb{Q}\). Let \(\mathfrak{q}_{2}\) denote the unique prime ideal of \(F\) dividing it. Our hypothesis \(p\equiv 3\pmod{32}\) implies that \(\mathbb{Q}_{2}(\zeta_{8},\sqrt[8]{p})\simeq\mathbb{Q}_{2}(\zeta_{8},\sqrt[8]{3})\), so the root number at \(2\) is the same for both varieties. To finish the proof, we need to verify the equality \[\chi_{\mathfrak{p}}(p)\chi_{\mathfrak{p}^{\prime}}(p)p^{2}\chi_{\mathfrak{p}_{3}}(3)^{-1}\chi_{\mathfrak{p}_{3}^{\prime}}(3)^{-1}3^{-2}=1.\] The surface \(\operatorname{Jac}(\mathcal{C}(-3))\) has conductor \(2^{11}\cdot 3^{4}\) (this can be computed using Magma). Since \(\delta(F/\mathbb{Q})=8\), formula (6.3) implies that the conductor of \(\chi_{\mathfrak{q}_{2}}\) equals \(3\), so it is trivial at \(a=p/3\). Then the product formula for the character \(\chi\) at the element \(a\) implies that \[1=\chi_{\mathfrak{q}_{3}}(p)\chi_{\mathfrak{q}_{3}^{\prime}}(p)\chi_{\mathfrak{p}}(p)\chi_{\mathfrak{p}^{\prime}}(p)p^{2}/(\chi_{\mathfrak{q}_{3}}(3)\chi_{\mathfrak{q}_{3}^{\prime}}(3)\chi_{\mathfrak{p}}(3)\chi_{\mathfrak{p}^{\prime}}(3)3^{2}).\] The equality \(\chi_{\mathfrak{q}_{3}}(p)\chi_{\mathfrak{q}_{3}^{\prime}}(p)=1=\chi_{\mathfrak{p}}(3)\chi_{\mathfrak{p}^{\prime}}(3)\) (which follows from an argument similar to the one applied to \(\theta_{a}\)) proves that the root number of \(\operatorname{Jac}(\mathcal{C}(-3))\) equals that of \(\operatorname{Jac}(\mathcal{C}(-p))\). Then the sign of the functional equation of \(\operatorname{Jac}(\mathcal{C}(-p))\) also equals \(-1\) and hence (assuming once again the parity conjecture) its rank is odd. But it belongs to the set \(\{0,1,2\}\), so it equals \(1\). Similarly, we compute the rank of \(\operatorname{Jac}(\mathcal{C}(q))\), for \(q\in\{11,19,59\}\). In all cases the \(2\)-Selmer group has rank \(2\), and the conductors equal \(2^{11}\cdot q^{4}\). The same proof applies to these cases mutatis mutandis. ## 7. Examples The following examples have been computed using Magma [1]. ### The genus 2 curve of conductor 277 Consider the hyperelliptic curve \[\mathcal{C}:y^{2}+(x^{3}+x^{2}+x+1)y=-x^{2}-x\] with LMFDB label 277.a. This corresponds to the semistable abelian surface of smallest conductor. Its modularity was proven in [1]. Via a standard change of variables, it can be written in the form \[\mathcal{C}:y^{2}=x^{6}+2x^{5}+3x^{4}+4x^{3}-x^{2}-2x+1.\] The polynomial \(x^{6}+2x^{5}+3x^{4}+4x^{3}-x^{2}-2x+1\) has a rational root (namely \(x=-1\)) so a change of variables sending \(-1\) to infinity transforms the equation into the quintic \[\mathcal{C}:y^{2}=x^{5}+10x^{4}+8x^{3}+16x^{2}-48x+32.\] Over \(\mathbb{Q}_{2}\) the polynomial is irreducible (since the prime \(2\) is completely ramified in the degree \(5\) extension \(A_{\mathbb{Q}}=\mathbb{Q}[x]/(x^{5}+10x^{4}+8x^{3}+16x^{2}-48x+32)\)), so (\(\dagger\).i) holds at the prime \(2\). The quotient of the polynomial discriminant by the field discriminant equals \(2^{28}\), so (\(\dagger\).i) holds for all odd primes and we are in the hypothesis of our main theorem. The set of ramified primes of \(A_{\mathbb{Q}}\) over \(\mathbb{Q}\) is \(\{2,277\}\). For all primes \(p\) which are inert in \(A_{\mathbb{Q}}\), Lemma 6.2 implies that the quadratic twist \(\mathcal{C}(p)\) also satisfies the hypothesis of Theorem 5.15. 
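A quick way to screen for such inert primes is to test whether the defining quintic stays irreducible modulo \(p\): for \(p\) unramified in \(A_{\mathbb{Q}}\) (here \(p\neq 2,277\)), \(p\) is inert exactly when \(x^{5}+10x^{4}+8x^{3}+16x^{2}-48x+32\) remains irreducible over \(\mathbb{F}_{p}\). The minimal sketch below uses sympy as an assumed tool; the computations reported in this paper are done in Magma.

```python
# Sketch (assumes sympy): list small primes that are inert in
# A_Q = Q[x]/(x^5 + 10x^4 + 8x^3 + 16x^2 - 48x + 32).
# For p unramified (p != 2, 277), p is inert iff the quintic is irreducible mod p.
from sympy import Poly, isprime, symbols

x = symbols("x")
f = x**5 + 10*x**4 + 8*x**3 + 16*x**2 - 48*x + 32

def is_inert(p):
    # irreducibility over GF(p) detects a single prime above p with residue degree 5
    return Poly(f, x, modulus=p).is_irreducible

inert = [p for p in range(3, 100, 2) if isprime(p) and is_inert(p)]
print(inert)   # expected to reproduce the list {3, 7, 13, 29, 41, 59} quoted in the text
```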
Since the narrow class group of \(A_{\mathbb{Q}}\) is trivial, we get that for all primes inert in \(A_{\mathbb{Q}}/\mathbb{Q}\), \[0\leq\dim_{\mathbb{F}_{2}}\operatorname{Sel}_{2}(\operatorname{Jac}(\mathcal{C}(p)))\leq 2.\] The Galois group \(\operatorname{Gal}(A_{K}/\mathbb{Q})\simeq S_{5}\), so the density of inert primes equals \(1/5\). The first inert primes (up to \(100\)) are \(\{3,7,13,29,41,59\}\). We computed the \(2\)-Selmer rank of all quadratic twists by inert primes up to \(100{,}000\), and in all cases it equals \(0\). ### Examples with \(K=\mathbb{Q}\) **Example 1**.: Consider the hyperelliptic curve \[\mathcal{C}:y^{2}=x^{5}+x^{2}+1.\] The extension \(L=\mathbb{Q}[x]/(x^{5}+x^{2}+1)\) is monogenic, and the class of \(x\) generates the ring of integers, hence (\(\dagger\).i) is satisfied at all primes. The narrow class number of \(L\) is one, hence Theorem 5.15 implies that \[0\leq\dim_{\mathbb{F}_{2}}\operatorname{Sel}_{2}(J)\leq 2.\] One can check (in Magma) that the \(2\)-Selmer group is actually isomorphic to \(\mathbb{Z}/2\times\mathbb{Z}/2\), so the upper bound is attained. The prime \(31\) is inert in the extension \(L/\mathbb{Q}\), so we are in the hypothesis of Lemma 6.2. In particular the same bound applies to the quadratic twist of \(\mathcal{C}\) by \(31\), corresponding to the curve with equation \[\mathcal{C}(31):y^{2}=x^{5}+31^{3}x^{2}+31^{5}.\] It is easy to verify (in Magma) that the Jacobian of such a curve has trivial \(2\)-Selmer group. In particular, the lower bound is also attained. The twist by the prime \(101\) corresponds to a curve whose \(2\)-Selmer group has rank one. In particular, all intermediate values are also obtained. **Example 2**.: Let us study the case of some genus \(5\) curves. Most of the examples were obtained by choosing a random degree \(11\) polynomial (with coefficients in \([-5,5]\)) and studying the hyperelliptic curves they define. Consider first the hyperelliptic curve \[\mathcal{C}:y^{2}=x^{11}-3x^{9}-3x^{8}+x^{7}-x^{5}-x^{4}-2x^{3}+x^{2}-5x-1.\] Let \(L/\mathbb{Q}\) denote the degree \(11\) extension given by the polynomial \(x^{11}-3x^{9}-3x^{8}+x^{7}-x^{5}-x^{4}-2x^{3}+x^{2}-5x-1\). Once again, the class of \(x\) generates the ring of integers of \(L\), so the hypothesis (\(\dagger\).i) is always satisfied. The narrow class group of \(L\) is trivial, hence Theorem 5.15 gives the bounds \(0\leq\dim_{\mathbb{F}_{2}}\operatorname{Sel}_{2}(J)\leq 5\). The Jacobian of \(\mathcal{C}\) has \(2\)-Selmer group of rank \(0\) (as can be verified with Magma), hence the lower bound is attained. The prime \(2\) is inert in the extension \(L/\mathbb{Q}\), hence one can study twists of \(\mathcal{C}\) by any odd prime inert in \(L/\mathbb{Q}\) (such primes always exist). An interesting phenomenon is that we computed all quadratic twists by such primes up to \(2000\), and in all cases the twisted curve has trivial \(2\)-Selmer group. Consider now the hyperelliptic curve with equation \[\mathcal{C}:y^{2}=x^{11}+x^{4}+x^{2}+x+1,\] and let \(L/\mathbb{Q}\) denote the degree \(11\) extension given by the (irreducible) polynomial \(x^{11}+x^{4}+x^{2}+x+1\). The ring of integers is generated by the class of \(x\), so we are in the hypothesis of Theorem 5.15. The narrow class group of \(L\) is trivial, so once again we obtain the bound \(0\leq\dim_{\mathbb{F}_{2}}\operatorname{Sel}_{2}(J)\leq 5\). The Jacobian of \(\mathcal{C}\) has \(2\)-Selmer group isomorphic to \((\mathbb{Z}/2)^{5}\), so the upper bound is attained. 
The quadratic twist by \(\sqrt{13}\) has \(2\)-Selmer group of rank \(3\), while the quadratic twist by \(\sqrt{149}\) has \(2\)-Selmer group of rank \(4\). All quadratic twists up to \(2000\) have \(2\)-Selmer rank in \(\{3,4,5\}\). Finally, consider the hyperelliptic curve \[\mathcal{C}:y^{2}=x^{11}+4x^{10}+4x^{9}-4x^{8}-2x^{7}-2x^{6}-3x^{5}+4x^{4}-3x^{3}-3x^{2}+2x-3.\] It satisfies exactly the same properties as the previous ones. The Jacobian of the curve has \(2\)-Selmer group isomorphic to \(\mathbb{Z}/2\), and its quadratic twist by \(\sqrt{23}\) (an inert prime in \(L/\mathbb{Q}\)) has \(2\)-Selmer group of rank \(2\). All quadratic twists by prime numbers up to \(2000\) have \(2\)-Selmer group of rank \(1\) or \(2\). In particular, these three examples (and some twists) correspond to genus five hyperelliptic curves whose Jacobians realize all the possible \(2\)-Selmer groups predicted by our main result. ### Examples with \(K=\mathbb{Q}(\sqrt{5})\) **Example 3**.: Let \(K=\mathbb{Q}(\sqrt{5})\) and consider the hyperelliptic curve \[\mathcal{C}:y^{2}=x^{5}+x^{4}+\sqrt{5}x^{2}+x+1.\] Let \(L\) denote the extension \(A_{K}\), a degree \(10\) extension over \(\mathbb{Q}\). The narrow class group of \(L\) is trivial. The prime \(2\) is inert in \(L/\mathbb{Q}\) so \((\dagger.\mathrm{i})\) is satisfied at \(2\). The discriminant of the degree \(10\) extension differs from the discriminant of \((x^{5}+x^{4}+\sqrt{5}x^{2}+x+1)(x^{5}+x^{4}-\sqrt{5}x^{2}+x+1)\) by a power of \(2\); in particular, \((\dagger.\mathrm{i})\) is also satisfied for all odd primes of \(K\). We are in the hypothesis of our main result, i.e. \(0\leq\dim_{\mathbb{F}_{2}}\mathrm{Sel}_{2}(J)\leq 4\). The \(2\)-Selmer group of \(\mathcal{C}\) has rank \(2\), the quadratic twist by \(\sqrt{23}\) has \(2\)-Selmer group of rank \(1\), while the quadratic twist by \(\sqrt{673}\) has \(2\)-Selmer group of rank \(3\). On the other hand, the hyperelliptic curve \[\mathcal{C}:y^{2}=x^{5}+7x^{4}+\sqrt{5}x^{2}+3x+1\] satisfies the same properties as the previous one, but has \(2\)-Selmer group of rank \(4\) (so once again the bound is sharp).
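The examples above repeatedly use that the class of \(x\) generates the full ring of integers. A cheap sufficient (though not necessary) screen, which does not replace the exact Magma verification used in the text, is that \(\mathbb{Z}[x]/(f)\) is automatically maximal at every prime \(p\) with \(p^{2}\nmid\operatorname{disc}(f)\); in particular a squarefree discriminant forces monogenicity. A sketch assuming sympy:

```python
# Sketch (assumes sympy): primes at which Z[x]/(f) could fail to be maximal are
# among those whose square divides disc(f); a squarefree discriminant settles
# monogenicity outright.  This is only a screen, not the exact computation.
from sympy import discriminant, factorint, symbols

x = symbols("x")
f = x**5 + x**2 + 1                      # Example 1 above
disc = discriminant(f, x)
suspect = [p for p, e in factorint(abs(disc)).items() if e >= 2]
print(disc, suspect)                     # maximality needs checking only at primes in `suspect`
```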
2301.12330
Nonlocal Kondo effect and two-fluid picture revealed in an exactly solvable model
Understanding the nature of local-itinerant transition of strongly correlated electrons is one of the central problems in condensed matter physics. Heavy fermion systems describe the f-electron delocalization through Kondo interactions with conduction electrons. Tremendous efforts have been devoted to the so-called Kondo-destruction scenario, which predicts a dramatic local-to-itinerant quantum phase transition of f-electrons at zero temperature. On the other hand, two-fluid behaviors have been observed in many materials, suggesting coexistence of local and itinerant f-electrons over a broad temperature range but lacking a microscopic theoretical description. To elucidate this fundamental issue, here we propose an exactly solvable Kondo-Heisenberg model in which the spins are defined in the momentum space and the k-space Kondo interaction corresponds to a highly nonlocal spin scattering in the coordinate space. Its solution reveals a continuous evolution of the Fermi surfaces with Kondo interaction and two-fluid behaviors similar to those observed in real materials. The electron density violates the usual Luttinger's theorem, but follows a generalized one allowing for partially enlarged Fermi surfaces due to partial Kondo screening in the momentum space. Our results highlight the consequence of nonlocal Kondo interaction relevant for strong quantum fluctuation regions, and provide important insight into the microscopic description of two-fluid phenomenology in heavy fermion systems.
Jiangfan Wang, Yi-feng Yang
2023-01-29T02:47:15Z
http://arxiv.org/abs/2301.12330v1
# Nonlocal Kondo effect and two-fluid picture revealed in an exactly solvable model ###### Abstract Understanding the nature of local-itinerant transition of strongly correlated electrons is one of the central problems in condensed matter physics. Heavy fermion systems describe the \(f\)-electron delocalization through Kondo interactions with conduction electrons. Tremendous efforts have been devoted to the so-called Kondo-destruction scenario, which predicts a dramatic local-to-itinerant quantum phase transition of \(f\)-electrons at zero temperature. On the other hand, two-fluid behaviors have been observed in many materials, suggesting coexistence of local and itinerant \(f\)-electrons over a broad temperature range but lacking a microscopic theoretical description. To elucidate this fundamental issue, here we propose an exactly solvable Kondo-Heisenberg model in which the spins are defined in the momentum space and the \(\mathbf{k}\)-space Kondo interaction corresponds to a highly nonlocal spin scattering in the coordinate space. Its solution reveals a continuous evolution of the Fermi surfaces with Kondo interaction and two-fluid behaviors similar to those observed in real materials. The electron density violates the usual Luttinger's theorem, but follows a generalized one allowing for partially enlarged Fermi surfaces due to partial Kondo screening in the momentum space. Our results highlight the consequence of nonlocal Kondo interaction relevant for strong quantum fluctuation regions, and provide important insight into the microscopic description of two-fluid phenomenology in heavy fermion systems. ## I Introduction Underlying the rich emergent quantum phenomena of heavy fermion systems [1; 2] is the local-to-itinerant transition of \(f\)-electrons controlled by the interplay of Kondo and Ruderman-Kittel-Kasuya-Yosida (RKKY) interactions [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. Below the so-called coherence temperature \(T^{*}\), a large amount of experimental observations have pointed to the coexistence of local and itinerant characters of \(f\)-electrons as captured phenomenologically by the two-fluid model [16; 17; 18; 19; 20; 21; 22], which assumes the coexistence of an itinerant heavy electron fluid formed by hybridized (screened) \(f\)-moments and a (classical) spin liquid of residual unhybridized \(f\)-moments. The two-fluid behavior exists over a broad temperature range, from the normal state below the coherence temperature down to inside the quantum critical superconducting phase [23; 24; 25], and explains a variety of anomalous properties observed in heavy fermion materials [22]. But a microscopic description of the two-fluid phenomenology is still lacking, and no consensus has been reached on how exactly the \(f\)-electrons become delocalized [26]. Tremendous theoretical and experimental efforts in past decades have been focused on the so-called Kondo-destruction scenario, in which the local-itinerant transition was predicted to occur abruptly through a quantum critical point (QCP) at zero temperature [4; 5; 6]. While it seems to be supported experimentally by the Hall coefficient jump under magnetic field extrapolated to zero temperature in YbRh\({}_{2}\)Si [27] and the de Haas-van Alphen experiment under pressure in CeRhIn\({}_{5}\)[28], it was lately challenged by a number of angle-resolved photoemission spectroscopy measurements showing signatures of large Fermi surfaces [29] or band hybridization above the magnetically ordered state [30]. 
In theory, the Kondo-destruction scenario could be derived under certain local or mean-field approximations, such as the dynamical large-\(N\) approaches assuming independent electron baths coupled to individual impurities [9; 10; 14] and the extended dynamical mean-field theory by mapping the Kondo lattice to a single impurity Bose-Fermi Kondo model [4]. Since the corresponding spin-\(\frac{1}{2}\) single- or two-impurity problems only allow for two stable fixed points in the strong-coupling limit and the decoupling limit [31; 32; 33], these approaches unavoidably predicted a single QCP associated with Kondo destruction. However, there is no a priori reason to assume such a local impurity mapping to be always valid for Kondo lattice systems in which all spins are spatially correlated and coupled to a common shared bath. For example, in CePdAl [34], geometric frustration may promote quantum fluctuations of local spins so that the single QCP is replaced by an intermediate quantum critical phase at zero temperature [11; 12]. Numerically, density-matrix renormalization group (DMRG) calculations of the one-dimensional (1D) Kondo lattice have predicted an intermediate phase with neither large nor small Fermi surfaces [35]. For the 2D Kondo lattice, both quantum Monte Carlo (QMC) simulations [36; 37] and the dynamical cluster approach [38] have suggested continuous existence of Kondo screening inside the magnetic phase. In particular, an effective nonlocal Kondo interaction has recently been proposed using an improved Schwinger boson approach with full momentum-dependent self-energies, yielding intermediate ground states with partially enlarged electron Fermi surfaces [11; 12; 39]. It is therefore necessary to go beyond the local or mean-field approximations and explore in a more rigorous manner how \(f\)-electrons may evolve once nonlocal interaction effects are taken into account. In this work, we extend the concept of Kondo interaction to an extreme case where the nonlocal scattering between conduction electrons and spins has an infinite interaction range such that it becomes local in the momentum space. We further include a Heisenberg-like term in the momentum space to mimic the Kondo-RKKY competition in heavy fermion materials. Similar to the Hatsugai-Kohmoto model with a \(\mathbf{k}\)-space Hubbard-\(U\) interaction [40; 41; 42; 43; 44], our proposed \(\mathbf{k}\)-space Kondo-Heisenberg model is exactly solvable. This allows us to overcome uncertainties in previous studies introduced by either analytical approximations or numerical ambiguities and extract decisive information on potential physical effects of nonlocal correlations. We find many interesting features such as spin-charge separated excitations, coexistence of Kondo singlets and spin singlets, and continuous evolution of the Fermi surfaces. Our results yield useful insight into the microscopic description of two-fluid behaviors, highlight the rich consequences of nonlocal Kondo scattering, and provide an unambiguous counterexample to the local Kondo-destruction scenario. 
## II Results ### The \(\mathbf{k}\)-space Kondo-Heisenberg model We begin by constructing the following Hamiltonian, \[H = \frac{1}{2}\sum_{\mathbf{k}}H_{\mathbf{k}},\] \[H_{\mathbf{k}} = (\epsilon_{\mathbf{k}}-\mu)(n_{\mathbf{k}}+n_{-\mathbf{k}})+J_{ K}(\mathbf{s}_{\mathbf{k}}\cdot\mathbf{S}_{\mathbf{k}}+\mathbf{s}_{-\mathbf{k}} \cdot\mathbf{S}_{-\mathbf{k}}) \tag{1}\] \[+J_{H}\mathbf{S}_{\mathbf{k}}\cdot\mathbf{S}_{-\mathbf{k}},\] where \(n_{\mathbf{k}}=\sum_{\alpha}c^{\dagger}_{\mathbf{k}\alpha}c_{\mathbf{k}\alpha}\) is the electron occupation number at momentum \(\mathbf{k}\), \(\mu\) is the chemical potential, and \(\epsilon_{\mathbf{k}}=\epsilon_{-\mathbf{k}}\) is the electron dispersion relation. The electron spin \(\mathbf{s}_{\mathbf{k}}=\frac{1}{2}\sum_{\alpha\beta}c^{\dagger}_{\mathbf{k} \alpha}\mathbf{\sigma}_{\alpha\beta}c_{\mathbf{k}\beta}\) and the local spin \(\mathbf{S}_{\mathbf{k}}\) are both defined in the momentum space. Note that \(\mathbf{S}_{\mathbf{k}}\) is not the Fourier transform of the spin operator in the coordinate space, but should rather be viewed as that of an "\(f\)-electron" localized in the momentum space. In the pseudofermion representation, this corresponds to \(\mathbf{S}_{\mathbf{k}}=\frac{1}{2}\sum_{\alpha\beta}f^{\dagger}_{\mathbf{k} \alpha}\mathbf{\sigma}_{\alpha\beta}f_{\mathbf{k}\beta}\) under the constraint \(\sum_{\alpha}f^{\dagger}_{\mathbf{k}\alpha}f_{\mathbf{k}\alpha}=1\). It is immediately seen that the Kondo interaction is highly nonlocal by Fourier transform to the coordinate space, \(\frac{J_{K}}{2}\sum_{iji^{\prime}j^{\prime}}c^{\dagger}_{i\alpha}c_{j\beta}f^{ \dagger}_{i^{\prime}\beta}f_{j^{\prime}\alpha}\delta_{\mathbf{r}_{i}-\mathbf{ r}_{j},\mathbf{r}_{j^{\prime}}-\mathbf{r}_{i^{\prime}}}\). A similar form of nonlocal Kondo interaction has been suggested to emerge in the quantum critical regime and play an important role in strongly frustrated Kondo systems [11; 12; 39]. The above model is exactly solvable, since the total Hilbert space can be divided into many small and independent subspaces by each conserved \(H_{\mathbf{k}}\). The local Hilbert space at each momentum point contains 8 states constructed by 4 electron states (\(\ket{0}\), \(\ket{\uparrow}\), \(\ket{\downarrow}\), \(\ket{2}\)) and 2 spin states (\(\ket{\Uparrow}\), \(\ket{\Downarrow}\)), so \(H_{\mathbf{k}}\) has a total number of 64 eigenstates and can be exactly diagonalized. These states are further classified into different sectors by the electron numbers \((n_{\mathbf{k}},n_{-\mathbf{k}})\). Depending on the relative magnitudes of \(\epsilon_{\mathbf{k}}-\mu\) and \(\zeta\equiv(J_{K}-J_{H}+\tilde{J})/4\), where \(\tilde{J}=\sqrt{J_{H}^{2}-2J_{H}J_{K}+4J_{K}^{2}}\), we may find the ground state of \(H_{\mathbf{k}}\) among three possibilities: 1) for \(\epsilon_{\mathbf{k}}-\mu>\zeta\), one has \((n_{\mathbf{k}},n_{-\mathbf{k}})=(0,0)\), and \(\mathbf{S}_{\mathbf{k}}\), \(\mathbf{S}_{-\mathbf{k}}\) form a spin singlet; 2) for \(\ket{\epsilon_{\mathbf{k}}-\mu}<\zeta\), \((n_{\mathbf{k}},n_{-\mathbf{k}})=(1,1)\), and the ground state is a superposition between Kondo singlets and spin singlets, as shown in Table 1 and Fig. 1(c); 3) for \(\epsilon_{\mathbf{k}}-\mu<-\zeta\), one has \((n_{\mathbf{k}},n_{-\mathbf{k}})=(2,2)\), and the two \(\mathbf{k}\)-local spins form a singlet. Other sectors like \((n_{\mathbf{k}},n_{-\mathbf{k}})=(0,1)\) and \((1,2)\) only contribute to excited states (see _Appendix A_). 
The momentum space is therefore separated into three different regions, \(\Omega_{0}\), \(\Omega_{1}\) and \(\Omega_{2}\), corresponding to \(n_{\mathbf{k}}=0,1,2\), as illustrated in Figs. 1(a) and 1(b). The ground state of \(H\) is simply a direct product of the above three states at different \(\mathbf{k}\). Many interesting properties arise from the existence of the singly occupied region \(\Omega_{1}\), which seems to be a general feature of models with \(\mathbf{k}\)-space local interactions [40; 45; 46; 47]. The volume of \(\Omega_{1}\), defined as \(V_{\Omega_{1}}=\frac{1}{\mathcal{N}}\sum_{\mathbf{k}}\theta(\zeta-|\epsilon_{\mathbf{k}}-\mu|)\) where \(\mathcal{N}\) is the total number of \(\mathbf{k}\) points, is shown in Fig. 1(d), which maps out the phase diagram on the \(J_{H}\)-\(J_{K}\) plane. For simplicity, we have assumed \(\epsilon_{\mathbf{k}}=k^{2}/2\pi-1\), \(\mu=0\), and \(\epsilon_{\mathbf{k}}-\mu\in[-1,1]\). The momentum average is then \(\frac{1}{\mathcal{N}}\sum_{\mathbf{k}}\equiv\int_{|\mathbf{k}|<k_{\Lambda}}d^{2}\mathbf{k}/(2\pi)^{2}\), where \(k_{\Lambda}=2\sqrt{\pi}\) is the momentum cutoff corresponding to a Brillouin zone volume \((2\pi)^{2}\). At \(J_{K}=0\), one has \(V_{\Omega_{1}}=0\), and the conduction electrons are completely decoupled from the "\(\mathbf{k}\)-space valence bond state" formed by the local spins [47], hence the name decoupled metal. For \(J_{K}\) and \(J_{H}\) satisfying \(\zeta\geq 1\) (below the white curve in Fig. 1(d)), one has \(V_{\Omega_{1}}=1\), such that all spins are Kondo screened by conduction electrons. This is the Kondo insulator (KI) phase with an insulating gap around the Fermi energy. In between, one has \(0<V_{\Omega_{1}}<1\), and the system is in a charge-\(2e\) metal with gapped single-particle excitations but gapless two-particle (Cooper pair) excitations. As one approaches the \(J_{H}=0\) limit from inside the charge-\(2e\) metal, the single particle gap vanishes, and the system becomes a non-Fermi liquid (NFL) metal, which we denote as M. \begin{table} \begin{tabular}{c c c c} \hline \hline \(\mathbf{k}\) & \((n_{\mathbf{k}},n_{-\mathbf{k}})\) & \(E_{\mathbf{k}}\) & Ground State \\ \hline \(\epsilon_{\mathbf{k}}-\mu>\zeta\) & (0,0) & \(-\frac{3}{4}J_{H}\) & \(\ket{00}\otimes\ket{\text{SS}}\) \\ \(|\epsilon_{\mathbf{k}}-\mu|<\zeta\) & (1,1) & \(2(\epsilon_{\mathbf{k}}-\mu)-\frac{J_{K}+\tilde{J}}{2}-\frac{J_{H}}{4}\) & \(a\ket{\text{KS}}_{\mathbf{k}}\otimes\ket{\text{KS}}_{-\mathbf{k}}+b\ket{\text{ss}}\otimes\ket{\text{SS}}\) \\ \(\epsilon_{\mathbf{k}}-\mu<-\zeta\) & (2,2) & \(4(\epsilon_{\mathbf{k}}-\mu)-\frac{3J_{H}}{4}\) & \(\ket{22}\otimes\ket{\text{SS}}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The ground states of \(H_{\mathbf{k}}\). \(E_{\mathbf{k}}\) is the ground state energy. \(\ket{00}\) and \(\ket{22}\) denote the empty and fully occupied electron states at \(\mathbf{k}\) and \(-\mathbf{k}\). \(\ket{\text{ss}}\) (\(\ket{\text{SS}}\)) denotes the spin singlet formed by the two electrons (local spins) at \(\mathbf{k}\) and \(-\mathbf{k}\), while \(\ket{\text{KS}}_{\mathbf{k}}\) denotes the Kondo singlet at \(\mathbf{k}\). The ratio between the coefficients \(a\) and \(b\) is \(2J_{K}/(J_{H}+\tilde{J}-2J_{K})\). 
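The phase boundaries in Fig. 1(d) can be reproduced directly from the definition of \(V_{\Omega_{1}}\). Below is a minimal numerical sketch; numpy is assumed, and the grid size and sampled couplings are arbitrary choices rather than values taken from the paper.

```python
# Sketch (assumes numpy): reproduce the structure of the phase diagram in Fig. 1(d).
# With eps_k = k^2/(2*pi) - 1 and mu = 0, the singly occupied volume is
#   V_O1 = fraction of the Brillouin zone with |eps_k - mu| < zeta,
#   zeta = (J_K - J_H + Jt)/4,  Jt = sqrt(J_H^2 - 2*J_H*J_K + 4*J_K^2).
import numpy as np

rng = np.random.default_rng(0)

def v_omega1(JK, JH, mu=0.0, n=200_000):
    Jt = np.sqrt(JH**2 - 2.0*JH*JK + 4.0*JK**2)
    zeta = (JK - JH + Jt) / 4.0
    # sample |k| < 2*sqrt(pi) uniformly with respect to area
    k = 2.0*np.sqrt(np.pi) * np.sqrt(rng.random(n))
    eps = k**2 / (2.0*np.pi) - 1.0
    return np.mean(np.abs(eps - mu) < zeta)

for JK, JH in [(0.0, 0.5), (0.6, 0.5), (2.0, 0.5)]:
    v1 = v_omega1(JK, JH)
    phase = "KI" if v1 == 1.0 else ("decoupled metal" if v1 == 0.0 else "charge-2e metal")
    print(f"J_K={JK:3.1f}  J_H={JH:3.1f}  V_O1={v1:.3f}  ->  {phase}")
```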
### Excitations The elementary excitations can be obtained exactly from the single-particle retarded Green's function defined as \(G_{c}(\mathbf{k},t)=-i\theta(t)\left\langle\left\{c_{\mathbf{k}\alpha}(t),c_{ \mathbf{k}\alpha}^{\dagger}\right\}\right\rangle\). Its explicit analytical expression at zero temperature is given in _Appendix B_. The poles of the Green's function are plotted in Fig. 2(a) in different phases, with the spectral weights represented by the thickness of the curves. Two additional poles in the \(\Omega_{1}\) region are not shown as they have very small weights and locate far away from the Fermi energy. For \(\zeta<1\), the following poles are most close to the Fermi energy: \[\omega_{0,\mathbf{k}} = \epsilon_{\mathbf{k}}-\mu-\frac{J_{K}-2J_{H}+2\tilde{J}^{\prime} }{4},\qquad\mathbf{k}\in\Omega_{0}\] \[\omega_{1,\mathbf{k}}^{\pm} = \epsilon_{\mathbf{k}}-\mu\pm\frac{J_{K}+2\tilde{J}-2\tilde{J}^{ \prime}}{4},\qquad\mathbf{k}\in\Omega_{1} \tag{2}\] \[\omega_{2,\mathbf{k}} = \epsilon_{\mathbf{k}}-\mu+\frac{J_{K}-2J_{H}+2\tilde{J}^{\prime} }{4},\qquad\mathbf{k}\in\Omega_{2}\] where \(\tilde{J}^{\prime}=\sqrt{J_{H}^{2}-J_{H}J_{K}+J_{K}^{2}}\). Physically, \(\omega_{0,\mathbf{k}}\) corresponds to adding one electron at \(\mathbf{k}\in\Omega_{0}\), so that the system is excited from the state \(\left|00\right\rangle\otimes\left|\mathrm{SS}\right\rangle\) to one of the lowest doublets of the \(\left(n_{\mathbf{k}},n_{-\mathbf{k}}\right)=(1,0)\) sector, for example, \(C_{1}\left|\mathrm{KS}\right\rangle_{\mathbf{k}}\otimes\left|\Downarrow_{- \mathbf{k}}+C_{2}\left|\mathrm{SS}\right\rangle\otimes\left|\downarrow\right\rangle _{\mathbf{k}}\) if the added electron has a down spin (see Fig. 2(b)). Interestingly, the component \(\left|\mathrm{KS}\right\rangle_{\mathbf{k}}\otimes\left|\Downarrow_{-\mathbf{k}}\right\rangle\) creates a charge \(-e\) excitation (anti-holon[45]) at \(\mathbf{k}\) and a spin-\(1/2\) excitation (spinon) at \(-\mathbf{k}\), while the component \(\left|\mathrm{SS}\right\rangle\otimes\left|\downarrow\right\rangle_{\mathbf{k}}\) creates an electron excitation at \(\mathbf{k}\). The former indicates spin-charge separated excitations that dominate at small \(J_{H}/J_{K}\) due to the vanishing weight \(|C_{2}|^{2}\) in the \(J_{H}\to 0\) limit as shown in Fig. 2(c). Similarly, the pole \(\omega_{-\mathbf{k}}^{-1}\) corresponds to removing one electron at \(\mathbf{k}\in\Omega_{1}\), and the resulting excited state is a superposition between a hole excitation at \(\mathbf{k}\) (with coefficient \(C_{1}\)), and a holon-spinon pair located at opposite momentum points (with coefficient \(C_{2}\)). The poles \(\omega_{1,\mathbf{k}}^{+}\) and \(\omega_{2,\mathbf{k}}\) have similar physical meanings, but with the empty states in Fig. 2(b) replaced by the double-occupied states. In the charge-\(2e\) metal, as shown in Fig. 2(a), the poles \(\omega_{0,\mathbf{k}}\) and \(\omega_{1,\mathbf{k}}^{-}\) are separated by a direct energy gap at the \(\Omega_{0}\)-\(\Omega_{1}\) boundary, and the same for \(\omega_{1,\mathbf{k}}^{+}\) and \(\omega_{2,\mathbf{k}}\) at the \(\Omega_{1}\)-\(\Omega_{2}\) boundary. We find the gap follows a scaling \(\Delta/J_{K}=\frac{1}{2}[z+(z^{2}-2z+4)^{1/2}-2(z^{2}-z+1)^{1/2}]\), with \(z=J_{H}/J_{K}\). It vanishes in the limit \(J_{H}\to 0\), leading to two "Fermi surfaces" in the M phase, as denoted by FS1 and FS2 in Fig. 2(a). 
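The quoted gap scaling is easy to evaluate numerically. A short check (numpy assumed; the sampled values of \(z\) are arbitrary) that \(\Delta/J_{K}\) indeed closes as \(z=J_{H}/J_{K}\to 0\):

```python
# Sketch (assumes numpy): evaluate the charge-2e metal gap scaling
#   Delta/J_K = 0.5*[ z + sqrt(z^2 - 2z + 4) - 2*sqrt(z^2 - z + 1) ],  z = J_H/J_K,
# and confirm that it vanishes in the J_H -> 0 limit.
import numpy as np

def gap_over_JK(z):
    return 0.5 * (z + np.sqrt(z**2 - 2*z + 4) - 2*np.sqrt(z**2 - z + 1))

for z in [0.0, 0.01, 0.1, 0.5, 1.0, 2.0]:
    print(f"z = {z:4.2f}   Delta/J_K = {gap_over_JK(z):.4f}")
# z -> 0 gives Delta/J_K -> 0, i.e. the gapless M phase at J_H = 0
```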
However, these are not usual electron Fermi surfaces, in the sense that moving an electron from one side of the Fermi surface to the other causes spin-charge separation. Therefore, the M phase at \(J_{H}=0\) is actually a NFL metal. We will see that even for \(J_{H}>0\), the physics should be qualitatively identical to the M phase at temperatures higher than the single particle gap of the charge-\(2e\) metal ground state. Inside the KI phase, both \(\Omega_{0}\) and \(\Omega_{2}\) disappear, and the single particle gap becomes an indirect gap between \(\omega_{1,\mathbf{k}}^{+}\) and \(\omega_{1,\mathbf{k}}^{-}\). This gap remains open in the \(J_{H}\to 0\) limit, and has a different nature from that of the charge-\(2e\) metal. Their difference becomes more clear when we consider the two-particle Green's function, \(G_{b}(\mathbf{k},t)=-i\theta(t)\left\langle\left[b_{\mathbf{k}}(t),b_{ \mathbf{k}}^{\dagger}\right]\right\rangle\), where \(b_{\mathbf{k}}^{\dagger}=\frac{1}{\sqrt{2}}(c_{\mathbf{k}\uparrow}^{\dagger}c_ {-\mathbf{k}\downarrow}^{\dagger}-c_{\mathbf{k}\downarrow}^{\dagger}c_{-\mathbf{ k}\uparrow}^{\dagger})\) creates a singlet pair of electrons (a Cooper pair) [46]. As shown in Fig. 2(d), \(G_{b}(\mathbf{k},\omega)\) is gapped in the KI phase but gapless in the charge-\(2e\) metal. This means, inside the charge-\(2e\) metal, adding or removing a singlet pair of electrons at \(\mathbf{k}\) and \(-\mathbf{k}\) costs no energy if \(\mathbf{k}\) locates exactly at the \(\Omega_{0}\)-\(\Omega_{1}\) or \(\Omega_{1}\)-\(\Omega_{2}\) boundaries, indicating Cooper pairs rather then electrons being its elementary charge carriers. However, because our simple model does not contain scatterings between Cooper pairs, this state can only be viewed as a completely quantum disordered superconductor without long-range phase coherence [47; 48]. Figure 1: The ground state of \(\mathbf{k}\)-space Kondo-Heisenberg model. (_A_) The momentum space contains three regions with different electron occupation number shown in (_B_). (_C_) Ground states of \(H_{\mathbf{k}}\) in each momentum region. The red arrows and blue balls with a black arrow denote the local spins and conduction electrons, respectively. The ellipses represent the entangled Kondo singlet or spin singlet. (_D_) The ground state phase diagram at \(\mu=0\), showing different phases. The color represents the volume of the singly occupied region \(\Omega_{1}\). ### Two-fluid behavior The fact that the ground state involves a superposition of the Kondo singlets and local spin singlets in the momentum space is reminiscent of the two-fluid model of heavy fermion materials, in which an "order parameter" \(f(T)=\min\{1,f_{0}(1-T/T^{*})^{3/2}\}\) was found to characterize the fraction of hybridized \(f\)-moments over a broad temperature range, with \(f_{0}\) reflecting the strength of collective hybridization (or collective Kondo entanglement) [18; 20]. \(f_{0}\geq 1\) indicates full screening below some characteristic temperature where \(f(T)\) reaches unity, while \(0<f_{0}<1\) implies that a fraction of \(f\)-electrons may remain unhybridized even down to zero temperature if the scaling is not interrupted by other orders. The two-fluid model captures a large amount of experimental properties of heavy fermion metals [22], but its microscopic theory remains to be explored [26]. 
Figure 2: Low-energy excitations. (_A_) The poles of the single particle Green’s function at \(\mu=0\) for three typical values of \(J_{K}\) and \(J_{H}\) corresponding to the M (left), charge-\(2e\) metal (middle) and KI (right) phases. The blue, red and orange curves represent the excitations with momentum \({\bf k}\in\Omega_{0}\), \(\Omega_{1}\) and \(\Omega_{2}\), respectively, and the thickness of the curves is proportional to the spectral weight of the poles. (_B_) The physical meanings of the poles \(\omega_{0,{\bf k}}\) and \(\omega_{1,{\bf k}}^{-}\). (_C_) The coefficients \(C_{1}\) and \(C_{2}\) (top) and the single particle gap (bottom) as functions of \(J_{H}/J_{K}\) in the charge-\(2e\) metal phase. (_D_) The poles of the two-particle Green’s function in the charge-\(2e\) metal (left) and KI (right) phases. Figure 3: The two-fluid “order parameter”. (_A_) A contour plot of the two-fluid “order parameter” \(f(T)\) for \(J_{H}=0\). (_B_) \(f(T)/f(0)\) as a function of \(T/T^{*}\) for \(J_{H}=0\) and different \(J_{K}\). The dashed curve shows the phenomenological scaling function \((1-T/T^{*})^{3/2}\) for comparison. The inset shows \(f(0)\), \(T^{*}\), and \(V_{\Omega_{1}}\) as functions of \(J_{K}\). (_C_)(_D_) The same as (_A_)(_B_), but with \(J_{H}=0.5\). The dashed curve in (_D_) corresponds to \(1.18(1-T/T^{*})^{3/2}\). To see how two-fluid behavior may emerge in our exactly solvable model, we introduce the projector \(P_{\bf k}=\left|{\bf K}\right\rangle\left\langle{\bf K}\right|\) with \(\left|{\bf K}\right\rangle=\frac{1}{2}\left(\left|\uparrow\Downarrow\right\rangle -\left|\downarrow\Uparrow\right\rangle\right)_{\bf k}\left(\left|\uparrow \Downarrow\right\rangle-\left|\downarrow\Uparrow\right\rangle\right)_{-{\bf k}}\), and its momentum average \(P=\frac{1}{\cal N}\sum_{\bf k}P_{\bf k}\). This gives a two-fluid “order parameter”, \[f(T)=\frac{{\rm Tr}[e^{-H/T}P]}{{\rm Tr}[e^{-H/T}]}=\frac{1}{\cal N}\sum_{\bf k }\frac{{\rm Tr}[e^{-H_{\bf k}/T}P_{\bf k}]}{{\rm Tr}[e^{-H_{\bf k}/T}]}, \tag{3}\] which reflects the fraction of Kondo singlet formation in the momentum space. With this definition, it is easy to show that a physical observable can in principle also be divided into a two-fluid form \(\langle O\rangle=f\langle O\rangle_{P}+(1-f)\langle O\rangle_{1-P}\). Figures 3(a) and 3(c) show the contour plots of the calculated \(f(T)\) at \(J_{H}=0\) and \(0.5\), respectively. In general, we see \(f(T)\) increases with decreasing temperature and saturates to a finite zero temperature value \(f(0)\). For \(J_{H}=0\), \(f(0)\) increases linearly from \(0\) to \(1\) with increasing \(J_{K}\), and stays at unity for \(J_{K}>4/3\) (inside the KI phase). For \(J_{K}<4/3\) (inside the M phase), \(f(T)\) follows a universal scaling function \(f(T)/f(0)=F(T/T^{*})\), as shown in Fig. 3(b). Quite remarkably, the low temperature part of \(F(T/T^{*})\) can be well approximated by the function \((1-T/T^{*})^{3/2}\). At high temperatures, its smooth evolution reflects a crossover rather than a phase transition of the delocalization with temperature. For \(J_{K}>4/3\), \(f(T)\) grows to unity already at a finite temperature, in good agreement with the expectation of the two-fluid picture [20]. The results for \(J_{H}=0.5\) are slightly different. We find that for small \(J_{H}\), \(f(T)\) already stays constant below a certain temperature before it reaches unity. 
This is due to the energy gap of the charge-\(2e\) metal that interrupts the two-fluid scaling. Above the gap, \(f(T)\) follows the same two-fluid scaling behavior over a broad intermediate temperature range, as shown in Fig. 3(d). The similar two-fluid behavior clearly indicates that the intermediate temperature physics above the charge-\(2e\) metal is controlled by the NFL M phase with partial Kondo screening rather than the charge-\(2e\) metal. This may have important implications for real materials, where the scaling is often interrupted or even suppressed (\(f\)-electron relocalization) by magnetic, superconducting, or other long-range orders. A second observation is that \(f(0)\) as a function of \(J_{K}\) is nearly identical to the volume of single-occupied region, as shown by the red line in the inset of Figs. 3(b) and 3(d). This confirms the previous speculation of an intimate relation between the two-fluid "order parameter" and the partial Kondo screening at zero temperature [20]. The quantum state superposition revealed in the exactly solvable model may also be the microscopic origin of the two-fluid phenomenology widely observed in real heavy fermion materials. ### Luttinger's theorem The Luttinger's theorem provides an important criterion for Landau's Fermi liquid description of interacting electron systems [49; 50; 51]. It states that the volume enclosed by the Fermi surface should be equal to the number of conduction electrons per unit cell. Mathematically, it is often quoted as [52; 53; 54; 51] \[V_{\rm LC}\equiv\frac{2}{\mathcal{N}}\sum_{\bf k}\theta({\rm Re}G_{c}({\bf k}, 0))=n_{c}, \tag{4}\] where the factor 2 arises from the up and down spins, and \(n_{c}=\frac{1}{\mathcal{N}}\sum_{\bf k}\langle n_{\bf k}\rangle\) is the electron density. For a Fermi liquid metal, \({\rm Re}G_{c}({\bf k},0)\) changes its sign only at the Fermi surface by passing through infinity, and hence Eq. (4) reduces to the simple Fermi volume statement. It was later suggested that Eq. (4) can also be applied to systems without quasiparticle poles [52; 55], such as the Mott insulator. In that case, \({\rm Re}G_{c}({\bf k},0)\) changes sign by passing through its zeros, which form a Luttinger surface [52]. However, the Luttinger surface of a Mott insulator was found to depend on the arbitrary choice of \(\mu\), such that Eq. (4) only holds with the presence of particle-hole symmetry [53; 56]. This suggests a failure of Eq. (4) and possibly nonexistence of the Luttinger-Ward functional in these strongly correlated systems [53; 54; 57]. Here, we demonstrate based on our model that the naive Fermi volume counting is in fact better than the Luttinger count \(V_{\rm LC}\) on representing the electron density. As shown in Fig. 4(a), the real part of the Green's function \({\rm Re}G_{c}({\bf k},0)\) at \(J_{H}=0\) reveals a Luttinger surface inside \(\Omega_{1}\) and two Fermi surfaces at the boundaries of \(\Omega_{1}\) and \(\Omega_{2}\). Therefore, we can define the Fermi volume as \(V_{\rm FS}\equiv 2(V_{\Omega_{1}}+V_{\Omega_{2}})\), and study its relation to the electron density. To do this, we first calculate \(n_{c}\) as a function of \(J_{K}\) and \(\mu\) at \(J_{H}=0\). The result is shown in Fig. 4(b). For nonzero \(\mu\), there exist another two metallic phases, M1 and M2, where one of the two Fermi surfaces disappears due to the absence of \(\Omega_{2}\) or \(\Omega_{0}\) region. 
Both M1 and M2 will open a single particle gap by turning on a finite \(J_{H}\), and become another two charge-\(2e\) metals. These phases have qualitatively the same physical properties as their counterparts at \(\mu=0\), and hence will not be discussed in detail. In Figs. 4(c) and 4(d), we compare \(V_{\rm LC}\) and \(V_{\rm FS}\) with the electron density \(n_{c}\) as functions of \(J_{K}\) at \(\mu=0\) and \(\mu=-0.3\), respectively. At \(\mu=0\), we found \(V_{\rm LC}=n_{c}=1\) for both the M and KI phases. On the other hand, \(V_{\rm FS}\) evolves continuously from \(n_{c}\) at \(J_{K}=0\) to \(n_{c}+1\) in the KI phase. The deviation \(V_{\rm FS}-n_{c}\) is exactly equal to the volume of \(\Omega_{1}\). In fact, the identity \[V_{\rm FS}\equiv 2(V_{\Omega_{1}}+V_{\Omega_{2}})=n_{c}+V_{\Omega_{1}} \tag{5}\] holds for arbitrary \(\mu\) and \(J_{K}\), since the electron density can always be written as \(n_{c}=V_{\Omega_{1}}+2V_{\Omega_{2}}\). Equation (5) correctly accounts for the Fermi surface enlargement due to the Kondo screening effect, an important feature of the Kondo lattice [51]. By contrast, the deviation depends explicitly on the chemical potential in the M1, M2, and KI phases, as shown in Fig. 4(d) for \(\mu=-0.3\). Figure 4: The Fermi volume evolution and the Luttinger’s theorem. (\(A\)) Real part of the Green’s function \(G_{c}({\bf k},0)\) at \(\mu=0\), \(J_{H}=0\), \(J_{K}=0.8\), showing both the Luttinger surface and the Fermi surfaces (FS1 and FS2). (\(B\)) The electron density as a function of \(\mu\) and \(J_{K}\) at \(J_{H}=0\). M, M1 and M2 denote different metallic phases, and KI is the Kondo insulator phase. (\(C\))(\(D\)) Evolution of \(V_{\rm FS}\), \(V_{\rm LC}\), and \(n_{c}\) with increasing \(J_{K}\) at \(\mu=0\) and \(-0.3\). 
Although being a simplification of the Hubbard model, the HK model has recently been shown to capture the essential physics of Mottness and some important high-\(T_{c}\) features upon doping [41; 42]. As suggested in Ref. [42], this is possibly because the HK interaction is the most relevant part of the Hubbard interaction that drives the system away from the Fermi liquid fixed point to the Mott insulator. In fact, a perfect single-occupancy constraint on every lattice site (\(n_{i}^{f}=1\)) must also imply the single-occupancy at each momentum point (\(n_{\bf k}^{f}=1\)). Therefore, we believe our model does capture the essential physics of strongly correlated \(f\)-electrons. Second, the Kondo term of our model contains a particular form of nonlocal Kondo interaction proposed in recent Schwinger boson theories of Kondo lattices with strong quantum fluctuation or geometric frustration [11; 12], \(J_{K}(|{\bf r}_{i}-{\bf r}_{j}|)c_{i\alpha}^{\dagger}c_{j\beta}f_{j\beta}^{ \dagger}f_{i\alpha}\). It is related to the term \(c_{i\alpha}^{\dagger}\mathbf{\sigma}_{\alpha\beta}c_{j\beta}\cdot\mathbf{S}_{i}\times \mathbf{S}_{j}\) that emerges naturally upon renormalization group from a Kondo lattice, and may become important in the quantum critical region [39]. In summary, we have constructed an exactly solvable Kondo-Heisenberg model in momentum space. This model displays many interesting properties: 1) it realizes a charge-\(2e\) metal phase with gapped single particle excitations but gapless Cooper pair excitations; 2) as the Heisenberg interaction vanishes, the charge-\(2e\) metal becomes a NFL metal featured with a partially enlarged Fermi volume; 3) both the charge-\(2e\) metal and the NFL metal show universal two-fluid behaviors at finite temperatures, reflecting partial Kondo screening of local spins. All these interesting properties arise from the highly nonlocal Kondo interaction in real space, which might play an important role in heavy fermion systems. Our results may help to understand the experimentally observed NFL quantum critical phase in CePdAl [34]. For other materials like YbRh\({}_{2}\)Si\({}_{2}\), such nonlocal physics might become important in the quantum critical region, causing the smooth evolution of the Fermi surface. ## Acknowledgment This work was supported by the National Natural Science Foundation of China (Grants No. 12174429, No. 11974397), the National Key R&D Program of China (Grant No. 202ZYFA1402203), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB33010100). ## Appendix A Exact diagonalization The 64-dimensional Hilbert space of \(H_{\bf k}\) can be divided into 9 subspaces according to the electron number \(n_{\bf k}\) and \(n_{-{\bf k}}\), \[(n_{\bf k},n_{-{\bf k}}) = (0,0),(2,0),(0,2),(2,2)\qquad d=4\] \[(n_{\bf k},n_{-{\bf k}}) = (1,0),(0,1),(1,2),(2,1)\qquad d=8\] \[(n_{\bf k},n_{-{\bf k}}) = (1,1)\qquad d=16 \tag{11}\] where \(d\) is the dimension of each subspace. To diagonalize the subspaces, we use the basis \(\left|\phi_{\bf k}\phi_{-{\bf k}}S_{\bf k}^{z}S_{\bf k}^{z}\right\rangle\) to compute the matrix elements, where \(\phi_{\bf k}=0,\uparrow,\downarrow,2\) denotes the four electron states and \(S_{\bf k}^{z}=\uparrow,\Downarrow\) denotes the local spin states. The lowest eigenstates within each subspace are listed in Table 2. By comparing the lowest eigenenergy \(E_{n_{\bf k},n_{-{\bf k}}}\) of different subspaces, one obtains the ground states of \(H_{\bf k}\) listed in Table 1. 
## Appendix B Green's function The retarded single-electron Green's function can be directly calculated from its definition, leading to \[G_{c}({\bf k},\omega)=\sum_{n}\frac{\left|\left\langle n\right|c_{{\bf k}, \alpha}^{\dagger}\left|0\right\rangle\right|^{2}}{\omega-E_{n}+E_{0}}+\sum_{n }\frac{\left|\left\langle n\right|c_{{\bf k},\alpha}\left|0\right\rangle\right| ^{2}}{\omega+E_{n}-E_{0}}, \tag{12}\] where \(\omega\) represents \(\omega+i0^{+}\), and \(\left|n\right\rangle\) is the \(n\)-th eigenstate of \(H_{\mathbf{k}}\) with energy \(E_{n}\). The explicit analytical results are \[G_{c}(\mathbf{k}\in\Omega_{0},\omega) = \frac{(2\tilde{J}^{\prime}+2J_{H}-J_{K})/4\tilde{J}^{\prime}}{ \omega-\epsilon_{\mathbf{k}}+\mu+\frac{J_{K}-2J_{H}+2\tilde{J}^{\prime}}{4}} \tag{10}\] \[+\frac{(2\tilde{J}^{\prime}-2J_{H}+J_{K})/4\tilde{J}^{\prime}}{ \omega-\epsilon_{\mathbf{k}}+\mu+\frac{J_{K}-2J_{H}-2\tilde{J}^{\prime}}{4}},\] \[G_{c}(\mathbf{k}\in\Omega_{2},\omega) = \frac{(2\tilde{J}^{\prime}+2J_{H}-J_{K})/4\tilde{J}^{\prime}}{ \omega-\epsilon_{\mathbf{k}}+\mu-\frac{J_{K}-2J_{H}+2\tilde{J}^{\prime}}{4}}\] (11) \[+\frac{(2\tilde{J}^{\prime}-2J_{H}+J_{K})/4\tilde{J}^{\prime}}{ \omega-\epsilon_{\mathbf{k}}+\mu-\frac{J_{K}-2J_{H}-2\tilde{J}^{\prime}}{4}},\] \[G_{c}(\mathbf{k}\in\Omega_{1},\omega) = \frac{[(\tilde{J}+\tilde{J}^{\prime})^{2}-J_{K}^{2}]/8\tilde{J} \tilde{J}^{\prime}}{\omega-\epsilon_{\mathbf{k}}+\mu-\frac{J_{K}+2\tilde{J}- \tilde{J}^{\prime}}{4}}\] (12) \[+\frac{[(\tilde{J}+\tilde{J}^{\prime})^{2}-J_{K}^{2}]/8\tilde{J} \tilde{J}^{\prime}}{\omega-\epsilon_{\mathbf{k}}+\mu+\frac{J_{K}+2\tilde{J}- \tilde{J}^{\prime}}{4}}\] \[+\frac{[J_{K}^{2}-(\tilde{J}-\tilde{J}^{\prime})^{2}]/8\tilde{J} \tilde{J}^{\prime}}{\omega-\epsilon_{\mathbf{k}}+\mu-\frac{J_{K}+2\tilde{J}+ \tilde{J}^{\prime}}{4}}\] \[+\frac{[J_{K}^{2}-(\tilde{J}-\tilde{J}^{\prime})^{2}]/8\tilde{J} \tilde{J}^{\prime}}{\omega-\epsilon_{\mathbf{k}}+\mu+\frac{J_{K}+2\tilde{J}+ \tilde{J}^{\prime}}{4}}.\] For the two-particle Green's function, we have \[G_{b}(\mathbf{k},\omega)=\sum_{n}\frac{\left|\left\langle n\right|b_{\mathbf{ k}}^{\dagger}\left|0\right\rangle\right|^{2}}{\omega-E_{n}+E_{0}}-\sum_{n}\frac{ \left|\left\langle n\right|b_{\mathbf{k}}\left|0\right\rangle\right|^{2}}{ \omega+E_{n}-E_{0}}, \tag{13}\] where \(b_{\mathbf{k}}^{\dagger}=\frac{1}{\sqrt{2}}(c_{\mathbf{k}\uparrow}^{\dagger} c_{-\mathbf{k}\downarrow}^{\dagger}-c_{\mathbf{k}\downarrow}^{\dagger}c_{- \mathbf{k}\uparrow}^{\dagger})\) is the Cooper pair creation operator. 
The analytical results are \[G_{b}(\mathbf{k}\in\Omega_{0},\omega) = \frac{(\tilde{J}+J_{H}-J_{K})/2\tilde{J}}{\omega-2(\epsilon_{ \mathbf{k}}-\mu)+\frac{J_{K}-J_{H}+\tilde{J}}{2}} \tag{14}\] \[+\frac{(\tilde{J}-J_{H}+J_{K})/2\tilde{J}}{\omega-2(\epsilon_{ \mathbf{k}}-\mu)+\frac{J_{K}-J_{H}-J}{2}},\] \[G_{b}(\mathbf{k}\in\Omega_{2},\omega) = -\frac{(\tilde{J}+J_{H}-J_{K})/2\tilde{J}}{\omega-2(\epsilon_{ \mathbf{k}}-\mu)-\frac{J_{K}-J_{H}+\tilde{J}}{2}} \tag{15}\] \[-\frac{(\tilde{J}-J_{H}+J_{K})/2\tilde{J}}{\omega-2(\epsilon_{ \mathbf{k}}-\mu)-\frac{J_{K}-J_{H}-\tilde{J}}{2}},\] \[G_{b}(\mathbf{k}\in\Omega_{1},\omega) = \frac{(\tilde{J}+J_{H}-J_{K})/2\tilde{J}}{\omega-2(\epsilon_{ \mathbf{k}}-\mu)-\frac{J_{K}-J_{H}+\tilde{J}}{2}} \tag{16}\] \[-\frac{(\tilde{J}+J_{H}-J_{K})/2\tilde{J}}{\omega-2(\epsilon_{ \mathbf{k}}-\mu)-\frac{J_{K}-J_{H}-\tilde{J}}{2}},\] \[G_{b}(\mathbf{k}\in\Omega_{1},\omega) = \frac{(\tilde{J}+J_{H}-J_{K})/2\tilde{J}}{\omega-2(\epsilon_{ \mathbf{k}}-\mu)-\frac{J_{K}-J_{H}+\tilde{J}}{2}} \tag{17}\] \[-\frac{(\tilde{J}+J_{H}-J_{K})/2\tilde{J}}{\omega-2(\epsilon_{ \mathbf{k}}-\mu)-\frac{J_{K}-J_{H}-\tilde{J}}{2}},\] \[G_{b}(\mathbf{k}\in\Omega_{1},\omega) = \frac{(\tilde{J}+J_{H}-J_{K})/2\tilde{J}}{\omega-2(\epsilon_{ \mathbf{k}}-\mu)-\frac{J_{K}-J_{H}+\tilde{J}}{2}} \tag{18}\] \[-\frac{(\tilde{J}+J_{H}-J_{K})/2\tilde{J}}{\omega-2(\epsilon_{ \mathbf{k}}-\mu)+\frac{J_{K}-J_{H}+\tilde{J}}{2}}.\] ## Appendix C Luttinger's theorem In the limit \(J_{H}=0\), the Green's functions (10)-(13) reduce to \[G_{c}(\mathbf{k},\omega)^{-1} = \begin{cases}\omega-\epsilon_{\mathbf{k}}+\mu-\frac{3J_{K}^{2}/16} {\omega-(\epsilon_{\mathbf{k}}-\mu-J_{K}/2)},&\mathbf{k}\in\Omega_{0}\\ \omega-\epsilon_{\mathbf{k}}+\mu-\frac{3J_{K}^{2}/16}{\omega-(\epsilon_{ \mathbf{k}}-\mu+J_{K}/2)},&\mathbf{k}\in\Omega_{2}\\ \omega-\epsilon_{\mathbf{k}}+\mu-\frac{\omega_{K}^{2}/16}{\omega-(\epsilon_{ \mathbf{k}}-\mu)},&\mathbf{k}\in\Omega_{1}\end{cases} \tag{19}\] \[= \omega-\epsilon_{\mathbf{k}}+\mu-\Sigma_{c}(\mathbf{k},\omega).\] The electron density is related to the time-ordered Green's function via \[n_{c}=\frac{2}{\mathcal{N}}\sum_{\mathbf{k}}\int_{-\infty}^{\infty}\frac{d\omega} {2\pi}G_{c}(\mathbf{k},i\omega)e^{i\omega 0^{+}} \tag{20}\] where we have performed a wick rotation \(\omega+i0^{+}\to i\omega\) from Eq. (10) to obtain the time-ordered Green's function. In proving the Luttinger's theorem, one uses the following identity, \[G_{c}(\mathbf{k},i\omega) = \frac{\partial}{\partial i\omega}\ln G_{c}(\mathbf{k},i\omega)^{-1} \tag{21}\] \[+G_{c}(\mathbf{k},i\omega)\frac{\partial}{\partial i\omega}\Sigma_{c }(\mathbf{k},i\omega),\] which directly follows from the Dyson's equation (11). Substituting the first term of the right-hand-side of Eq. (13) into Eq. (12) gives exactly the Luttinger's theorem Eq. (4). Therefore Eq. (4) is satisfied if and only if the following integral, \[I_{2} \equiv \frac{2}{\mathcal{N}}\sum_{\mathbf{k}}\int_{-\infty}^{\infty} \frac{d\omega}{2\pi}G_{c}(\mathbf{k},i\omega)\frac{\partial}{\partial i\omega} \Sigma_{c}(\mathbf{k},i\omega) \tag{14}\] \[= n_{c}-V_{\mathrm{LC}},\] vanishes, which was proved by Luttinger and Ward to be true to all orders of perturbation theory [49]. However, in our case, from Eq. 
(11) and the following identity, \[\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\frac{1}{i\omega-A}\frac{1}{(i \omega-B)^{2}}=\frac{\mathrm{sgn}(B)-\mathrm{sgn}(A)}{2(A-B)^{2}}, \tag{15}\] one can derive \(I_{2}=-\frac{1}{\mathcal{N}}\sum_{\mathbf{k}\in\Omega_{1}}\mathrm{sgn}( \epsilon_{\mathbf{k}}-\mu)\), which is generally nonzero. This may originate from the nonexistence of the Luttinger-Ward functional for our system, similar to the cases studied in Refs. [53; 54]. In fact, for any strictly monotonically increasing function \(\epsilon_{\mathbf{k}}=\epsilon(k)\) within the range \(k\in[0,2\sqrt{\pi}]\), one has \[I_{2} = \frac{1}{2\pi}\int_{\max[\epsilon(0),\mu-\frac{3J_{K}}{4}]}^{\mu }\frac{\epsilon^{-1}(x)}{\epsilon^{\prime}(\epsilon^{-1}(x))}dx \tag{16}\] \[-\frac{1}{2\pi}\int_{\mu}^{\min[\epsilon(2\sqrt{\pi}),\mu+\frac{ 3J_{K}}{4}]}\frac{\epsilon^{-1}(x)}{\epsilon^{\prime}(\epsilon^{-1}(x))}dx,\] where \(\epsilon^{\prime}(x)\) and \(\epsilon^{-1}(x)\) are the derivative and inverse of the function \(\epsilon(x)\), respectively. For a parabolic dispersion function \(\epsilon(x)=ax^{2}+b\), one has \(\epsilon^{-1}(x)=\sqrt{(x-b)/a}\) and \(\epsilon^{\prime}(x)=2ax\), so that \[I_{2}=\begin{cases}\frac{1}{2\pi}\left(\int_{-\frac{3J_{K}}{4}}^{\mu}-\int_{ \mu}^{\mu+\frac{3J_{K}}{4}}\right)\frac{1}{2a}dx=0,&\mathrm{M}\\ \frac{1}{2\pi}\left(\int_{b}^{\mu}-\int_{\mu}^{4\pi a+b}\right)\frac{1}{2a}dx= \frac{\mu-b}{2\pi a}-1,&\mathrm{KI}\end{cases} \tag{17}\] consistent with our numerical results for \(a=1/(2\pi)\) and \(b=-1\).
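The frequency integral used for \(I_{2}\) is also easy to verify numerically. The sketch below assumes numpy and scipy and simply compares both sides of the identity for a few values of \(A\) and \(B\); it is a sanity check, not part of the derivation above.

```python
# Sketch (assumes numpy/scipy): check numerically that
#   integral dw/(2*pi)  1/(i*w - A) * 1/(i*w - B)^2  =  (sgn(B) - sgn(A)) / (2*(A-B)^2)
import numpy as np
from scipy.integrate import quad

def lhs(A, B):
    f = lambda w: (1.0/(1j*w - A) * 1.0/(1j*w - B)**2).real / (2*np.pi)
    val, _ = quad(f, -np.inf, np.inf, limit=400)
    return val

def rhs(A, B):
    return (np.sign(B) - np.sign(A)) / (2.0*(A - B)**2)

for A, B in [(-0.4, 0.7), (0.3, 0.9), (0.5, -0.2)]:
    print(A, B, round(lhs(A, B), 6), round(rhs(A, B), 6))
```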
2310.04213
Topology-Aware Neural Networks for Fast Contingency Analysis of Power Systems
Training Neural Networks able to capture the topology changes of the power grid is one of the significant challenges towards the adoption of machine learning techniques for N-k security computations and a wide range of other operations that involve grid reconfiguration. As the number of N-k scenarios increases exponentially with increasing system size, such problems become extremely time-consuming to solve with traditional solvers. In this paper, we combine Physics-Informed Neural Networks with both a Guided-Dropout (GD) Neural Network (which associates dedicated neurons with specific line connections/disconnections) and an edge-varying Graph Neural Network (GNN) architecture to learn the setpoints for a grid that considers all probable single-line reconfigurations (all critical N-1 scenarios) and subsequently apply the trained models to N-k scenarios. We demonstrate how incorporating the underlying physical equations for the network equations within the training procedure of the GD and the GNN architectures performs with N-1, N-2, and N-3 case studies. Using the AC Power Flow as a guiding application, we test our methods on the 14-bus, 30-bus, 57-bus, and 118-bus systems. We find that these topology-aware NNs not only achieve the task of contingency screening with satisfactory accuracy but do this at up to 1000 times faster than the Newton Raphson power flow solver. Moreover, our results provide a comparison of the GD and GNN models in terms of accuracy and computational speed and provide recommendations on their adoption for contingency analysis of power systems.
Agnes M. Nakiganda, Catherine Cheylan, Spyros Chatzivasileiadis
2023-10-06T13:00:36Z
http://arxiv.org/abs/2310.04213v2
# Topology-Aware Neural Networks for Fast Contingency Analysis of Power Systems ###### Abstract Training Neural Networks able to capture the topology changes of the power grid is one of the significant challenges towards the adoption of machine learning techniques for N-\(k\) security computations and a wide range of other operations that involve grid reconfiguration. As the number of N-\(k\) scenarios increases exponentially with increasing system size, such problems become extremely time-consuming to solve with traditional solvers. In this paper, we combine Physics-Informed Neural Networks with both a Guided-Dropout (GD) Neural Network (which associates dedicated neurons with specific line connections/disconnections) and an edge-varying Graph Neural Network (GNN) architecture to learn the setpoints for a grid that considers all probable single-line reconfigurations (all critical N\(-1\) scenarios) and subsequently apply the trained models to N-\(k\) scenarios. We demonstrate how incorporating the underlying physical equations for the network equations within the training procedure of the GD and the GNN architectures performs with N\(-1\), N\(-2\), and N\(-3\) case studies. Using the AC Power Flow as a guiding application, we test our methods on the 14-bus, 30-bus, 57-bus, and 118-bus systems. We find that these topology-aware NNs not only achieve the task of contingency screening with satisfactory accuracy but do this at 100 to 1000 times faster than the Newton Raphson power flow solver. Moreover, our results provide a comparison of the GD and GNN models in terms of accuracy and computational speed and provide recommendations on their adoption for contingency analysis of power systems. AC Power Flow, Graph Neural Network, Guided Dropout, Network Topology, Physics Informed Neural Network ## I Introduction The power grid is rapidly transforming and incorporating numerous devices that operate in synchronism to maintain a balance of supply and demand. Now more than ever, it is vital that system operators can ascertain that potentially critical contingency scenarios are promptly screened, analyzed and mitigation measures devised. Power systems today are designed with inherent N\(-1\) operational reliability, however, as networks grow larger and incorporate more devices, the N-\(k\) system security must be adequately managed such that a reliable and resilient grid can be maintained [1]. Moreover, if not sufficiently handled, such multiple contingencies can result in voltage collapse and cascading failures [2, 3]. Traditionally, numerical methods such as Newton Raphson have served as a means to solve the power flow problem for critical contingency screening analysis; however, the necessary computing time for handling the combinatorial explosion of N-k scenarios becomes prohibitive with such techniques. 
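To put that combinatorial explosion in numbers, the count of distinct N-\(k\) branch-outage sets for a grid with \(L\) branches is \(\binom{L}{k}\). A one-line illustration follows; the value \(L=186\) below is an assumed branch count for the IEEE 118-bus test case, and should be replaced by the actual figure of the model at hand.

```python
# Sketch: growth of the number of N-k outage combinations that an exhaustive
# Newton-Raphson screening would have to solve.  L = 186 is an assumed figure.
from math import comb

L = 186
for k in (1, 2, 3):
    print(f"N-{k}: {comb(L, k):,} outage combinations")
```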
Machine Learning (ML) models, including Decision Trees, Support Vector Machines, Random Forests, and Neural Networks, to mention but a few, have been shown to handle complex power systems problems tractably and efficiently [4, 5, 6]. These methods eliminate the computationally intensive iterative procedures of traditional power flow solvers and scale well with increasing grid sizes. However, the downside to many ML algorithms is that they are often unable to consider grid topologies beyond the one topology on which they have been trained, i.e., they do not incorporate variables that relate to the connection/disconnection of power lines or the reconfiguration of buses. Moreover, training a single model for each potential N-\(1\) topology would also be unrealistic and impractical. This affects their ability to generalize to varying grid configurations, which is an inherent aspect of the contingency assessment problem in power systems and is, therefore, hindering their adoption in real systems. In order to leverage the enormous computational efficiency of ML methods for application to the N-\(k\) power flow problem, various ML-based architectures have been proposed. In [7], a one-hot encoding that adds extra binary variables to represent the connection/disconnection of components was presented. However, results therein show this method may not scale well to larger systems with hundreds of components. In [7] and [8], the authors introduce the so-called "Guided Dropout" method to address the topology change problem. "Guided Dropout" _sparsifies_ the neural network model by introducing
2303.13657
Policy Evaluation in Distributional LQR
Distributional reinforcement learning (DRL) enhances the understanding of the effects of the randomness in the environment by letting agents learn the distribution of a random return, rather than its expected value as in standard RL. At the same time, a main challenge in DRL is that policy evaluation in DRL typically relies on the representation of the return distribution, which needs to be carefully designed. In this paper, we address this challenge for a special class of DRL problems that rely on linear quadratic regulator (LQR) for control, advocating for a new distributional approach to LQR, which we call \emph{distributional LQR}. Specifically, we provide a closed-form expression of the distribution of the random return which, remarkably, is applicable to all exogenous disturbances on the dynamics, as long as they are independent and identically distributed (i.i.d.). While the proposed exact return distribution consists of infinitely many random variables, we show that this distribution can be approximated by a finite number of random variables, and the associated approximation error can be analytically bounded under mild assumptions. Using the approximate return distribution, we propose a zeroth-order policy gradient algorithm for risk-averse LQR using the Conditional Value at Risk (CVaR) as a measure of risk. Numerical experiments are provided to illustrate our theoretical results.
Zifan Wang, Yulong Gao, Siyi Wang, Michael M. Zavlanos, Alessandro Abate, Karl H. Johansson
2023-03-23T20:27:40Z
http://arxiv.org/abs/2303.13657v1
# Policy Evaluation in Distributional LQR ###### Abstract Distributional reinforcement learning (DRL) enhances the understanding of the effects of the randomness in the environment by letting agents learn the distribution of a random return, rather than its expected value as in standard RL. At the same time, a main challenge in DRL is that policy evaluation in DRL typically relies on the representation of the return distribution, which needs to be carefully designed. In this paper, we address this challenge for a special class of DRL problems that rely on discounted linear quadratic regulator (LQR) for control, advocating for a new distributional approach to LQR, which we call _distributional LQR_. Specifically, we provide a closed-form expression of the distribution of the random return which, remarkably, is applicable to all exogenous disturbances on the dynamics, as long as they are independent and identically distributed (i.i.d.). While the proposed exact return distribution consists of infinitely many random variables, we show that this distribution can be approximated by a finite number of random variables, and the associated approximation error can be analytically bounded under mild assumptions. Using the approximate return distribution, we propose a zeroth-order policy gradient algorithm for risk-averse LQR using the Conditional Value at Risk (CVaR) as a measure of risk. Numerical experiments are provided to illustrate our theoretical results. © 2023 Z. Wang, Y. Gao, S. Wang, M.M. Zavlanos, A. Abate & K.H. Johansson. Keywords: Distributional LQR, distributional RL, policy evaluation, risk-averse control. ## 1 Introduction In reinforcement learning, the value of implementing a policy at a given state is captured by a value function, which models the expected sum of returns following this prescribed policy. Recently, Bellemare et al. (2017) proposed the notion of distributional reinforcement learning (DRL), which learns the return distribution of a policy from a given state, instead of only its expected return. Compared to the scalar expected value function, the return distribution is infinite-dimensional and contains far more information. It is, therefore, not surprising that a few DRL algorithms, including C51 (Bellemare et al., 2017), D4PG (Barth-Maron et al., 2018), QR-DQN (Dabney et al., 2018) and SDPG (Singh et al., 2022), dramatically improve the empirical performance in practical applications over their non-distributional counterparts. In DRL, the practical effectiveness of algorithms builds on the theory by Bellemare et al. (2017), where the distributional Bellman operator is shown to be a contraction in the (maximum form of the) Wasserstein metric between probability distributions. However, it is usually difficult to characterise the exact return distribution in DRL with finite data. Approximations of the return distribution are thus necessary to make it computable in practice. To address this challenge, Bellemare et al. (2017) propose a categorical method that partitions the return distribution into a finite number of uniformly spaced atoms in a fixed region. One drawback of this method is that it relies on prior knowledge of the range of the returned values. To address this limitation, a quantile function method (Dabney et al., 2018) and a sample-based method (Singh et al., 2022) have been recently proposed. 
However, these works cannot provide an analytical expression for the approximation error, and the computational cost needs to be decided manually to guarantee approximation accuracy. In this paper, we characterise the return distribution of the random cost for the classical discounted linear quadratic regulator (LQR) problem, which we term _distributional LQR_. To the best of our knowledge, the return distribution in LQR has not been explored in the literature. Our contributions are summarised as follows: 1. We provide an analytical expression of the random return for distributional LQR problems and prove that this return function is a fixed-point solution of the distributional Bellman equation. Specifically, we show that the proposed analytical expression consists of infinitely many random variables and holds for arbitrary i.i.d. exogenous disturbances, e.g., non-Gaussian noise or noise with non-zero mean. 2. We develop an approximation of the distribution of the random return using a finite number of random variables. Under mild assumptions, we theoretically show that the sup of the difference between the exact and approximated return distributions decreases linearly with the number of random variables: this is also validated by numerical experiments. 3. The proposed analytical return distribution provides a theoretical foundation for distributional LQR, allowing for general optimality criteria for policy improvement. In this work, we employ the return distribution to analyse risk-averse LQR problems using the Conditional Value at Risk (CVaR) as the risk measure. Since the gradient of CVaR is generally difficult to compute analytically, we propose a risk-averse policy gradient algorithm that relies on zeroth-order optimisation to seek an optimal risk-averse policy. Numerical experiments are provided to showcase this application. **Related Work:** Most closely related to the problem considered in this paper is work on reinforcement learning for LQR, which focuses on learning the expected return through interaction with the environment; see, e.g., Dean et al. (2020); Tu and Recht (2018); Fazel et al. (2018); Malik et al. (2019); Li et al. (2021); Yaghmaie et al. (2022); Zheng et al. (2021). For example, Fazel et al. (2018) propose a model-free policy gradient algorithm for LQR and show its global convergence with finite polynomial computational and sample complexity. Moreover, Zheng et al. (2021) study model-based reinforcement learning for Linear Quadratic Gaussian problems, in which a model is first learnt from data and then used to design the policy. However, all these works rely on the expected return instead of the return distribution, hence these methods cannot be applied here. Since the return distribution captures the intrinsic randomness of the long-term cost, it provides a natural framework to consider more general optimality criteria, e.g., optimal risk-averse policies. There exist recent works on risk-averse policy design for DRL, including Singh et al. (2020); Dabney et al. (2018); Tang et al. (2019). For example, the work in Dabney et al. (2018) uses the quantile function to approximate the return distribution, which is then applied to design risk-sensitive policies for Atari games. On the other hand, Singh et al. (2020) show that risk-averse DRL achieves robustness against system disturbances in continuous control tasks. All these works focus on empirical improvements in specific tasks, however, without theoretical analysis. 
Related to this paper is also work on risk-sensitive LQR, which has been studied in Van Parys et al. (2015); Tsiamis et al. (2021); Kim and Yang (2021); Chapman and Lessard (2021); Kishida and Cetinkaya (2022). However, these methods similarly do not analyse the return distribution. ## 2 Problem Statement Consider a discrete-time linear dynamical system: \[x_{t+1}=Ax_{t}+Bu_{t}+v_{t}, \tag{1}\] where \(x_{t}\in\mathbb{R}^{n}\), \(u_{t}\in\mathbb{R}^{p}\), \(v_{t}\in\mathbb{R}^{n}\) are the system state, control input, and the exogenous disturbance, respectively. We assume that the exogenous disturbances \(v_{t}\), \(t\in\mathbb{N}\), have bounded moments and are sampled i.i.d. from a distribution \(\mathcal{D}\) of arbitrary form. ### Classical LQR The canonical LQR problem aims to find a control policy \(\pi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{p}\) to minimise the objective \[J(u)=\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}(x_{t}^{T}Qx_{t}+u_{t}^{T}Ru_{t})\right], \tag{2}\] where \(Q,R\) are positive-definite constant matrices and \(\gamma\in(0,1)\) is a discount parameter. Given a control policy \(\pi\), let \(V^{\pi}(x)=\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}(x_{t}^{T}Qx_{t}+u_{t}^{T}Ru_{t})\right]\) denote the expected return from an initial state \(x_{0}=x\) with \(u_{t}=\pi(x_{t})\). For the static linear policy \(\pi(x_{t})=Kx_{t}\), the value function \(V^{\pi}(x)\) satisfies the Bellman equation \[V^{\pi}(x)=x^{T}(Q+K^{T}RK)x+\gamma\underset{X^{\prime}=(A+BK)x+v_{0}}{\mathbb{E}}[V^{\pi}(X^{\prime})], \tag{3}\] where the capital letter \(X^{\prime}\) denotes a random variable over which we take the expectation. When the exogenous disturbances \(v_{t}\) are normally distributed with zero mean, the value function is known to take the quadratic form \(V^{\pi}(x)=x^{T}Px+q\), where \(P>0\) is the solution of the Lyapunov equation \(P=Q+K^{T}RK+\gamma A_{K}^{T}PA_{K}\) and \(q\) is a scalar related to the variance of \(v_{t}\). In particular, the optimal control feedback gain is obtained as \(K^{*}=-\gamma(R+\gamma B^{T}PB)^{-1}B^{T}PA\) and \(P\) is the solution to the classic Riccati equation \(P=\gamma A^{T}PA-\gamma^{2}A^{T}PB(R+\gamma B^{T}PB)^{-1}B^{T}PA+Q\). ### Distributional LQR Motivated by the advantages of DRL in better understanding the effects of the randomness in the environment and in considering more general optimality criteria, in this paper we propose a distributional approach to the LQR problem. Unlike classical reinforcement learning, which relies on expected returns, DRL (Bellemare et al., 2023) relies on the distribution of random returns. The return distribution characterises the probability distribution of different returns generated by a given policy and, as such, it contains much richer information on the performance of a given policy compared to the expected return. In the context of LQR, we denote by \(G^{\pi}(x)\) the random return using the static control strategy \(u_{t}=\pi(x_{t})\) from the initial state \(x_{0}=x\), which is defined as \[G^{\pi}(x)=\sum_{t=0}^{\infty}\gamma^{t}(x_{t}^{T}Qx_{t}+u_{t}^{T}Ru_{t}),\quad u_{t}=\pi(x_{t}),\;x_{0}=x. \tag{4}\] It is straightforward to see that the expectation of \(G^{\pi}(x)\) is equivalent to the value function \(V^{\pi}(x)\). The standard Bellman equation in (3) decomposes the long-term expected return into an immediate stage cost plus the expected return of future actions starting at the next step. 
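As a concrete reference point for the risk-neutral quantities just recalled, the following NumPy sketch (an illustration, not the authors' code) iterates the discounted Riccati equation above and forms the optimal gain \(K^{*}=-\gamma(R+\gamma B^{T}PB)^{-1}B^{T}PA\); for the scalar example with \(A=B=Q=R=1\) and \(\gamma=0.6\) used later in the paper, it recovers \(K^{*}\approx-0.4684\).

```python
import numpy as np

def discounted_lqr(A, B, Q, R, gamma, iters=500):
    """Fixed-point iteration of the discounted Riccati equation
    P = Q + gamma A'PA - gamma^2 A'PB (R + gamma B'PB)^{-1} B'PA,
    returning P and the optimal gain K* = -gamma (R + gamma B'PB)^{-1} B'PA."""
    P = np.copy(Q)
    for _ in range(iters):
        S = R + gamma * B.T @ P @ B
        P = Q + gamma * A.T @ P @ A \
            - gamma**2 * A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)
    S = R + gamma * B.T @ P @ B
    K = -gamma * np.linalg.solve(S, B.T @ P @ A)
    return P, K

# Scalar system used in the paper's experiments: A = B = Q = R = 1.
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]])
P, K = discounted_lqr(A, B, Q, R, gamma=0.6)
print(P, K)   # K should be close to -0.4684, the value quoted in Section 4.2
```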
Similarly, we can define the distributional Bellman equation for the random return as \[G^{\pi}(x)\,\raisebox{-1.72pt}{$\stackrel{{ D}}{{=}}$}\,x^{T}Qx +\pi(x)^{T}R\pi(x)+\gamma G^{\pi}(X^{\prime}),\quad X^{\prime}=Ax+B\pi(x)+v_{0}. \tag{5}\] Here we use the notation \(\raisebox{-1.72pt}{$\stackrel{{ D}}{{=}}$}\) to denote that two random variables \(Z_{1},Z_{2}\) are equal in distribution, i.e., \(Z_{1}\,\raisebox{-1.72pt}{$\stackrel{{ D}}{{=}}$}\,Z_{2}\). Note that \(X^{\prime}\) denotes a random variable, as in (3). Compared to the expected return in LQR, which is a scalar, here the return distribution is infinite-dimensional and can have a complex form. It is challenging to estimate an infinite-dimensional function exactly with finite data and thus an approximation of the return distribution is necessary in practice. In this paper, we first analytically characterise the random return for the LQR problem. Then we show how to approximate the distribution of the random return using finite random variables, so such that the approximated distribution is computationally tractable and the approximation error is bounded. The proposed distributional LQR framework allows us to consider more general optimality criteria, which we demonstrate by using the proposed return distribution to develop a policy gradient algorithm for risk-averse LQR. ## 3 Main Results ### Exact Form of the Return Distribution In this section, we precisely characterise the distribution of the random return that satisfies the distributional Bellman equation (5). Given a static linear policy \(\pi(x_{t})=Kx_{t}\), we denote by \(G^{K}(x)\) the random return \(G^{\pi}(x)\) under the policy \(\pi(x_{t})\) from the initial state \(x_{0}=x\), which is defined as \[G^{K}(x)=\sum_{t=0}^{\infty}\gamma^{t}x_{t}^{T}(Q+K^{T}RK)x_{t},\quad x_{0}=x.\] The random return \(G^{K}(x)\) satisfies the following distributional Bellman equation \[G^{K}(x)\,\raisebox{-1.72pt}{$\stackrel{{ D}}{{=}}$}\,x^{T}Q_{K}x+ \gamma G^{K}(X^{\prime}),\quad X^{\prime}=A_{K}x+v_{0}, \tag{6}\] where \(A_{K}:=A+BK\) and \(Q_{K}:=Q+K^{T}RK\). In the following theorem, we provide an explicit expression of the random return \(G^{K}(x)\). **Theorem 1**: _Suppose that the feedback gain \(K\) is stabilizing, i.e., \(A_{K}=A+BK\) is stable. Let_ \[G^{K}(x)=x^{T}Px+\sum_{k=0}^{\infty}\gamma^{k+1}w_{k}^{T}Pw_{k}+2\sum_{k=0}^{ \infty}\gamma^{k+1}w_{k}^{T}PA_{K}^{k+1}x+2\sum_{k=1}^{\infty}\gamma^{k+1}w_{k }^{T}P\sum_{\tau=0}^{k-1}A_{K}^{k-\tau}w_{\tau}, \tag{7}\] _where \(P\) is obtained from the algebraic Riccati equation \(P=Q+K^{T}RK+\gamma A_{K}^{T}PA_{K}\), and the random variables \(w_{k}\sim\mathcal{D}\) are independent from each other for all \(k\in\mathbb{N}\). Then, the random variable \(G^{K}(x)\) defined in (7) is a fixed point solution to the distributional Bellman equation (6)._ **Proof** Recall that \(X^{\prime}=A_{K}x+v_{0}\), where \(v_{0}\) is a random variable sampled from the distribution \(\mathcal{D}\) and is independent from \(w_{k}\), \(k\in\mathbb{N}\), in (7). 
Substituting (7) into the right hand side of the equation (6), we have that \[x^{T}(Q+K^{T}RK)x+\gamma G^{K}(X^{\prime})\] \[= x^{T}Q_{K}x+\gamma X^{\prime T}PX^{\prime}+\sum_{t=0}^{\infty} \gamma^{t+2}w_{t}^{T}Pw_{t}+2\sum_{t=0}^{\infty}\gamma^{t+2}w_{t}^{T}PA_{K}^{t +1}X^{\prime}\] \[+2\sum_{t=1}^{\infty}\gamma^{t+2}w_{t}^{T}PA_{K}\sum_{i=0}^{t-1} A_{K}^{t-1-i}w_{i}\] \[= x^{T}Q_{K}x+\gamma(A_{K}x+v_{0})^{T}P(A_{K}x+v_{0})+\gamma^{2} \sum_{t=0}^{\infty}\gamma^{t}w_{t}^{T}Pw_{t}+2\gamma^{2}\sum_{t=1}^{\infty} \gamma^{t}w_{t}^{T}P\sum_{i=0}^{t-1}A_{K}^{t-i}w_{i}\] \[+2\gamma^{2}\sum_{t=0}^{\infty}\gamma^{t}w_{t}^{T}PA_{K}^{t+1}(A_ {K}x+v_{0})\] \[= x^{T}(Q_{K}+\gamma A_{K}^{T}PA_{K})x+\underbrace{\gamma v_{0}^{T }Pv_{0}+\gamma^{2}\sum_{t=0}^{\infty}\gamma^{t}w_{t}^{T}Pw_{t}}_{:=T_{1}}+ \underbrace{2\gamma v_{0}^{T}PA_{K}x+2\gamma^{2}\sum_{t=0}^{\infty}\gamma^{t}w _{t}^{T}PA_{K}^{t+2}x}_{:=T_{2}}\] \[+\underbrace{2\gamma^{2}\sum_{t=1}^{\infty}\gamma^{t}w_{t}^{T}P \sum_{i=0}^{t-1}A_{K}^{t-i}w_{i}+2\gamma^{2}\sum_{t=0}^{\infty}\gamma^{t}w_{t }^{T}PA_{K}^{t+1}v_{0}}_{:=T_{3}}.\] Define \(\xi_{0}:=v_{0}\), \(\xi_{t}=w_{t-1}\), \(t=1,2,\ldots\). From the definition of the term \(T_{1}\), we have that \[T_{1}=\gamma v_{0}^{T}Pv_{0}+\gamma^{2}\sum_{t=0}^{\infty}\gamma^{t}w_{t}^{T} Pw_{t}\stackrel{{ k=t+1}}{{=}}\gamma\xi_{0}^{T}P\xi_{0}+\gamma\sum_{k=1}^{\infty} \gamma^{k}\xi_{k}^{T}P\xi_{k}=\gamma\sum_{k=0}^{\infty}\gamma^{k}\xi_{k}^{T}P \xi_{k}.\] For the term \(T_{2}\), we have that \[T_{2}=2\gamma v_{0}^{T}PA_{K}x+2\gamma^{2}\sum_{t=0}^{\infty} \gamma^{t}w_{t}^{T}PA_{K}^{t+2}x=2\gamma\xi_{0}^{T}PA_{K}x+2\gamma^{2}\sum_{t= 0}^{\infty}\gamma^{t}\xi_{k+1}^{T}PA_{K}^{t+2}x\] \[\stackrel{{ k=t+1}}{{=}}2\gamma\xi_{0}^{T}PA_{K}x+2 \gamma\sum_{k=1}^{\infty}\gamma^{k}\xi_{k}^{T}PA_{K}^{k+1}x=2\gamma\sum_{k=0}^{ \infty}\gamma^{k}\xi_{k}^{T}PA_{K}^{k+1}x.\] Using similar techniques for the term \(T_{3}\), we obtain that \(T_{3}=2\gamma\sum_{k=1}^{\infty}\gamma^{k}\xi_{k}^{T}PA_{K}\sum_{i=0}^{k-1}A_{ K}^{k-1-i}\xi_{i}\). Due to the fact that \(P=Q+K^{T}RK+\gamma A_{K}^{T}PA_{K}\), we have \[x^{T}Q_{K}x+\gamma G^{K}(X^{\prime})=x^{T}Px+T_{1}+T_{2}+T_{3}\] \[= x^{T}Px+\gamma\sum_{k=0}^{\infty}\gamma^{k}\xi_{k}^{T}P\xi_{k}+2 \gamma\sum_{k=0}^{\infty}\gamma^{k}x^{T}PA_{K}^{k+1}\xi_{k}+2\gamma\sum_{k=1}^{ \infty}\gamma^{k}\xi_{k}^{T}PA_{K}\sum_{i=0}^{k-1}A_{K}^{k-1-i}\xi_{i}, \tag{8}\] which is in the same form as in (7). Since \(\{\xi_{k}\}_{k=0}^{\infty}\) and \(\{w_{k}\}_{k=0}^{\infty}\) are i.i.d., we have that the two random variables (7) and (8) have the same distribution, i.e., \(G^{K}(x)\,\overset{D}{=}\,x^{T}Q_{K}x+\gamma G^{K}(X^{\prime})\). ### Approximation of the Return Distribution with Finite Parameters In this section, we show how to approximate the random return defined in (7) using a finite number of random variables. Considering only the first \(N\) terms in the summations in the expression in (7) and disregarding the terms for \(k\) larger than \(N\) yields the following: \[G_{N}^{K}(x)=x^{T}Px+\sum_{k=0}^{N-1}\gamma^{k+1}w_{k}^{T}Pw_{k} +2\sum_{k=0}^{N-1}\gamma^{k+1}w_{k}^{T}PA_{K}^{k+1}x+2\sum_{k=1}^{N-1}\gamma^{ k+1}w_{k}^{T}P\sum_{\tau=0}^{k-1}A_{K}^{k-\tau}w_{\tau}. \tag{9}\] Let \(F_{x}^{K}\) and \(F_{x,N}^{K}\) denote the cumulative distribution function (CDF) of \(G^{K}(x)\) and \(G_{N}^{K}(x)\), respectively. 
The following theorem provides an upper bound on the difference between \(F_{x}^{K}\) and \(F_{x,N}^{K}\), and shows that the sequence \(\{G_{N}^{K}(x)\}_{N\in\mathbb{N}}\) converges pointwise in distribution to \(G^{K}(x)\), \(\forall x\in\mathbb{R}^{n}\). **Theorem 2**: _Assume that the probability density functions of \(w_{k}\) exist and are bounded, and satisfy \(\mathbb{E}[w_{k}^{T}w_{k}]\leq\sigma_{0}^{2}\), \(\mathbb{E}[\left\|w_{k}\right\|_{2}]\leq\mu_{0}\), for \(\forall k\in\mathbb{N}\). Suppose that the feedback gain \(K\) is stabilizing such that \(\left\|A_{K}\right\|_{2}=\rho_{K}<1\). Then, the sup difference between the CDFs \(F_{x}^{K}\) and \(F_{x,N}^{K}\) is bounded by_ \[\sup_{z}|F_{x}^{K}(z)-F_{x,N}^{K}(z)|\leq C\gamma^{N}, \tag{10}\] _where \(C\) is a constant that depends on the matrices \(A,B,Q,R,K\), the initial state value \(x\), and the parameters \(\gamma,\rho_{K},\sigma_{0},\mu_{0}\)._ * Define \(Y_{N}:=G^{K}(x)-G_{N}^{K}(x)\), we have \[\sup_{z}|F_{x}^{K}(z)-F_{x,N}^{K}(z)|=\sup_{z}|\mathbb{P}(G_{N}^{K} (x)\leq z)-\mathbb{P}(G^{K}(x)\leq z)|\] \[= \sup_{z}|\mathbb{P}(G_{N}^{K}(x)\leq z)-\mathbb{P}(G_{N}^{K}(x)+ Y_{N}\leq z)|\] \[= \sup_{z}\Big{|}\mathbb{P}(G_{N}^{K}(x)\leq z)\int_{-\infty}^{ \infty}\mathbb{P}(Y_{N}=t)dt-\int_{-\infty}^{\infty}\mathbb{P}(G_{N}^{K}(x) \leq z-t)\mathbb{P}(Y_{N}=t)dt\Big{|}\] \[= \sup_{z}\Big{|}\int_{-\infty}^{\infty}\mathbb{P}(Y_{N}=t)\big{(} F_{x,N}^{K}(z)-F_{x,N}^{K}(z-t)\big{)}dt\Big{|}.\] (11) Since the random variables \(w_{t}\) are i.i.d for all \(t>0\) and the probability density function of \(w_{t}\) exists, the function \(F_{x,N}^{K}\) is continuous and differentiable. Applying the mean value theorem, when \(t>0\) there exists a point \(z^{\prime}\in[z-t,z]\) such that \(F_{x,N}^{K}(z)-F_{x,N}^{K}(z-t)=f_{x,N}^{K}(z^{\prime})t\), where \(f_{x,N}^{K}\) is the probability density function of \(G_{N}^{K}(x)\). Since the probability density function of \(w_{t}\) is bounded, it further follows that \(f_{x,N}^{K}\) is bounded. Then, we have that \(|F_{x,N}^{K}(z)-F_{x,N}^{K}(z-t)|=|f_{x,N}^{K}(z^{\prime})t|\leq L_{0}|t|\), where \(L_{0}\) is an upper bound of the probability function \(f_{x,N}^{K}\). Following a similar argument, we can show that this inequality holds when \(t\leq 0\). Substituting this inequality into (11), we obtain \[\sup_{z}|F_{x}^{K}(z)-F_{x,N}^{K}(z)|\leq\sup_{z}\Big{|}\int_{- \infty}^{\infty}\mathbb{P}(Y_{N}=t)L_{0}|t|dt\Big{|}=L_{0}\mathbb{E}|Y_{N}|.\] (12) From the definition of \(Y_{N}\), we obtain that \[Y_{N}= \sum_{k=N}^{\infty}\gamma^{k+1}w_{k}^{T}Pw_{k}+2\sum_{k=N}^{\infty} \gamma^{k+1}w_{k}^{T}PA_{K}^{k+1}x+2\sum_{k=N}^{\infty}\gamma^{k+1}w_{k}^{T}P \sum_{\tau=0}^{k-1}A_{K}^{k-\tau}w_{\tau}\] \[\overset{t=k-N}{=} \gamma^{N}\Big{(}\sum_{t=0}^{\infty}\gamma^{t+1}w_{t+N}^{T}Pw_{t+N }+2\sum_{t=0}^{\infty}\gamma^{t+1}w_{t+N}^{T}PA_{K}^{t+N+1}x\] \[+2\sum_{t=0}^{\infty}\gamma^{t+1}w_{t+N}^{T}P\sum_{\tau=0}^{t+N-1} A_{K}^{t+N-\tau}w_{\tau}\Big{)}.\] Taking the expectation of the absolute value of \(Y_{N}\), we have \[\mathbb{E}|Y_{N}|\leq \gamma^{N}\Big{(}\sum_{t=0}^{\infty}\gamma^{t+1}\mathbb{E}|w_{t+N} ^{T}Pw_{t+N}|+2\sum_{t=0}^{\infty}\gamma^{t+1}\mathbb{E}|w_{t+N}^{T}PA_{K}^{t+N +1}x|\] \[+2\sum_{t=0}^{\infty}\gamma^{t+1}\mathbb{E}|w_{t+N}^{T}P\sum_{ \tau=0}^{t+N-1}A_{K}^{t+N-\tau}w_{\tau}|\Big{)}.\] We handle the terms in the above inequality one by one. 
For the first term, we have that \[\sum_{t=0}^{\infty}\gamma^{t+1}\mathbb{E}|w_{t+N}^{T}Pw_{t+N}|\leq \sum_{t=0}^{\infty}\gamma^{t+1}\mathbb{E}|\lambda_{\max}(P)w_{t+N}^{T}w_{t+N}| \leq\lambda_{\max}(P)\sigma_{0}^{2}\frac{\gamma}{1-\gamma}. \tag{13}\] For the second term, we have that \[2\sum_{t=0}^{\infty}\gamma^{t+1}\mathbb{E}|w_{t+N}^{T}PA_{K}^{t+N +1}x|\leq 2\mu\sum_{t=0}^{\infty}\gamma^{t+1}\left\|P\right\|_{2}\left\|A_{ K}^{t+N+1}\right\|_{2}\left\|x\right\|_{2}\] \[\leq 2\mu\sum_{t=0}^{\infty}\gamma^{t+1}\left\|P\right\|_{2}\rho_{K}^{ t+N-1}\left\|x\right\|_{2}\leq 2\mu\left\|P\right\|_{2}\left|x\right|\frac{ \gamma\rho_{K}^{N-1}}{1-\gamma\rho_{K}}\leq 2\mu\left\|P\right\|_{2}\left|x \right|\frac{\gamma}{1-\gamma\rho_{K}}, \tag{14}\] where the second inequality is due to the fact that \(\left\|A_{K}^{t+N+1}\right\|_{2}\leq(\left\|A_{K}\right\|_{2})^{t+N+1}\leq\rho_ {K}^{t+N+1}\) and the last inequality follows from the fact that \(N\geq 1\). For the third term, we have that \[2\sum_{t=0}^{\infty}\gamma^{t+1}\mathbb{E}|w_{t+N}^{T}P\sum_{ \tau=0}^{t+N-1}A_{K}^{t+N-\tau}w_{\tau}|\leq 2\sum_{t=0}^{\infty}\gamma^{t+1} \mathbb{E}\left[\left\|w_{t+N}^{T}\right\|_{2}\left\|P\right\|_{2}^{t+N-1} \sum_{\tau=0}^{t+N-\tau}A_{K}^{t+N-\tau}w_{\tau}\right\|_{2}\right]\] \[\leq 2\mu\left\|P\right\|_{2}\sum_{t=0}^{\infty}\gamma^{t+1} \mathbb{E}\left[\left\|\sum_{\tau=0}^{t+N-1}A_{K}^{t+N-\tau}w_{\tau}\right\|_{2 }\right]\leq 2\mu\left\|P\right\|_{2}\sum_{t=0}^{\infty}\gamma^{t+1}\mathbb{E} \left[\left.\sum_{\tau=0}^{t+N-1}\left\|A_{K}^{t+N-\tau}\right\|_{2}\left\|w_{ \tau}\right\|_{2}\right]\] \[\leq 2\mu^{2}\left\|P\right\|_{2}\sum_{t=0}^{\infty}\gamma^{t+1} \sum_{\tau=0}^{t+N-1}\rho_{K}^{t+N-\tau}\leq 2\mu^{2}\left\|P\right\|_{2}\sum_{t=0}^{ \infty}\gamma^{t+1}\frac{\rho_{K}}{1-\rho_{K}}\leq 2\mu^{2}\left\|P\right\|_{2} \frac{\gamma\rho_{K}}{(1-\gamma)(1-\rho_{K})}, \tag{15}\] where the second inequality is due to the fact that \(w_{\tau}\) and \(w_{t+N}\) are independent and the second to last inequality follows from the fact that \(\sum_{\tau=0}^{t+N-1}\rho_{K}^{t+N-\tau}=\sum_{\tau=1}^{t+N}\rho_{K}^{\tau} \leq\frac{\rho_{K}}{1-\rho_{K}}\). Combining (13), (14) and (15), we have that \[\sup_{z}|F_{x}^{K}(z)-F_{x,N}^{K}(z)|\leq L_{0}\mathbb{E}|Y_{N}|\] \[\leq L_{0}\gamma^{N}\Big{(}\lambda_{\max}(P)\sigma_{0}^{2}\frac{\gamma }{1-\gamma}+2\mu\left\|P\right\|_{2}|x|\frac{\gamma}{1-\gamma\rho_{K}}+2\mu^{2 }\left\|P\right\|_{2}\frac{\gamma\rho_{K}}{(1-\gamma)(1-\rho_{K})}\Big{)}:=C \gamma^{N}.\] The proof is complete and also yields the expression of the constant \(C\). **Remark 3**: _The bound on the distribution approximation in (10) relies on the conditions of Theorem 2, which ensure that the PDF of \(G_{N}^{K}\) is continuous and bounded. Note that these conditions are not particularly strict, and indeed hold for many noise distributions commonly used in linear dynamical systems, including Gaussian and uniform. Future work will investigate relaxations of these conditions._ ### Numerical Experiments on Quality of the Approximation of the Return Distribution In the following experiment, we consider a scalar model with matrices \(A=B=1\). Similarly, the weighting matrices in the LQR cost are chosen as \(Q=R=1\). The exogenous disturbances are standard normal distributions with zero mean. Even for this scalar system, it is impossible to simplify the expression of the exact return distribution, which still depends on an infinite number of random variables. 
Thus, as a baseline for the return distribution, we generate an empirical distribution that approximates the true distribution of the random return. More specifically, we use the Monte Carlo (MC) method to obtain 10000 samples of the random return and use the sample frequency over evenly-divided regions as an approximation of the probability density function. According to the law of large numbers, the empirical distribution approaches the real one as the number of trials increases. Note that, although the MC method provides an alternative way to approximate the return distribution, it relies on using sufficiently many samples that can be time-consuming, and its (statistical) approximation error is generally difficult to analyse. Thus, the MC method is not applicable for practical policy evaluation of distributional LQR, and in this experiment, it is used only to verify our approximate return distribution. In comparison, the approximate return distribution using finite number of random variables in this paper is analytical for policy evaluation and the corresponding approximation error can be bounded: as such, it is further usable for policy optimisation, as shown in Section 4. We denote here by \(f_{N}\) the distribution of the approximated random return \(G_{N}^{K}(x_{0})\) obtained considering \(N\) random variables. We fix the feedback gain as \(K=-0.4684\) and select different values of \(\gamma\) and \(x_{0}\). The results are shown in Fig. 1. Specifically, Fig. 1 (a) and (c) show that when \(\gamma\) is small, the return distribution can be well approximated using only few random variables (\(N=3\) works well). However, when \(\gamma\) approaches \(1\), more random variables are needed for an accurate approximation: we employ \(N=15\) and \(N=20\) random variables in the case of \(\gamma=0.8\) and \(\gamma=0.85\), respectively, as shown in Fig. 1 (b) and (d). Moreover, the value of the initial state \(x_{0}\) has an influence on the shape of the return distribution, which can be clearly observed from the scalar case. When \(x_{0}\) is large, the random variable \(w_{k}^{T}PA_{K}^{k+1}x_{0}\) dominates and, therefore, its distribution is close to a Gaussian distribution, as shown in Fig. 1 (c) and (d). If instead \(x_{0}\) is small, then the random variable \(w_{k}^{T}Pw_{k}\) plays a leading role, so the overall distribution is close to the chi-square one, as shown in Fig. 1 (a) and (b). In conclusion, when \(N\) is large, the approximate distribution is closer to the distribution obtained from the MC method, and thus to the true distribution. ## 4 Application to Risk-Averse LQR In this section, we consider a risk-averse LQR problem and leverage the closed-form expression of the random return \(G^{K}(x)\) to obtain an optimal policy. Since the distribution of the random return \(G^{K}(x)\) consists of an infinite number of random variables, it is computationally unwieldy. Instead, we employ the approximate random return \(G^{K}_{N}(x)\) proposed in Section 3.2. As a risk measure for the problem at hand, we select the well-known Conditional Value at Risk (CVaR) (Rockafellar et al., 2000). We then construct an approximate risk-averse objective function, as \(\hat{\mathcal{C}}_{N}(K):=\mathrm{CVaR}_{\alpha}\left[G^{K}_{N}(x)\right]\). 
For a random variable \(Z\) with the CDF \(F\) and a risk level \(\alpha\in(0,1]\), the \(\mathrm{CVaR}\) value is defined as \(\mathrm{CVaR}_{\alpha}[Z]=\mathbb{E}_{F}[Z|Z>Z^{\alpha}]\), where \(Z^{\alpha}\) is the \(1-\alpha\) quantile of the distribution of the random variable \(Z\). Given this objective function, the goal is to find the optimal risk-averse controller, that is, to select the feedback gain \(K\) that minimises \(\hat{\mathcal{C}}_{N}(K)\). ### Risk-Averse Policy Gradient Algorithm In what follows, we propose a policy gradient method to solve this problem. We assume that the matrices \(A,B,Q,R\) are known. The first-order gradient descent step is hard to compute as it hinges on the gradient of the CVaR function. Therefore, we rely on zeroth-order optimisation to derive the policy gradient, as detailed in Algorithm 1. ``` 0: initial values \(K_{0}\), \(x\), step size \(\eta\), smoothing parameter \(\delta\), and dimension \(n\) 1:for\(episode\)\(t=1,\dots,T\)do 2: Sample \(\hat{K}_{t}=K_{t}+U_{t}\), where \(U_{t}\) is drawn at random over matrices whose norm is \(\delta\); 3: Compute the distribution of the random variable \(G^{\hat{K}_{t}}_{N}\); 4: Compute \(\hat{\mathcal{C}}_{N}(\hat{K}_{t})\); 5:\(K_{t+1}=K_{t}-\eta g_{t}\), where \(g_{t}=\frac{n}{\delta^{2}}\Big{(}\hat{\mathcal{C}}(\hat{K}_{t})-\hat{\mathcal{ C}}(\hat{K}_{t-1})\Big{)}U_{t}\). 6:endfor ``` **Algorithm 1** Risk-Averse Policy Gradient Specifically, at each episode \(t\), we sample an approximate feedback gain \(\hat{K}_{t}=K_{t}+U_{t}\), where \(U_{t}\) is drawn uniformly at random from the set of matrices with norm \(\delta\). Given \(\hat{K}_{t}\), we compute the approximate distribution of the random return \(G^{\hat{K}_{t}}_{N}(x)\) in (9) and the value of \(\hat{\mathcal{C}}_{N}(\hat{K}_{t})\). Then, we can perform the feedback gain update as \(K_{t+1}=K_{t}-\eta g_{t}\), where \(g_{t}=\frac{n}{\delta^{2}}\Big{(}\hat{\mathcal{C}}(\hat{K}_{t})-\hat{\mathcal{ C}}(\hat{K}_{t-1})\Big{)}U_{t}\). Figure 1: Return distribution and its approximation with finite number of random variables for different \(\gamma\) and \(x_{0}\). MC denotes the distribution returned by the Monte Carlo method and \(f_{N}\) denotes the distribution of the approximated random return \(G^{K}_{N}(x_{0})\). Here, the zeroth-order residual feedback technique proposed in Zhang et al. (2022) is used to reduce the variance. The theoretical analysis of this algorithm is left as our future work. ### Numerical Experiments Next, we consider a risk-averse LQR problem and experimentally illustrate the performance of Algorithm 1. We illustrate our approach for the same scalar system with the same cost function as in Section 3.3. The other parameters are selected as \(\gamma=0.6\), \(\delta=0.1\), \(\eta=0.0004\), \(N=10\), respectively. The initial controller is set as \(K_{0}=-0.2\), which is a stable one. We first set \(\alpha=1\): in this case, the risk-averse control problem is reduced to a risk-neutral control problem. Therefore, we can use traditional LQR techniques to compute the optimal feedback gain \(K^{*}=-0.4684\). We run the proposed risk-averse policy gradient Algorithm 1 and the simulation results are presented in Fig. 2 (a) and (b). Specifically, in Fig. 2 (a), the controller \(K\) returned by Algorithm 1 converges to \(K^{*}\), which verifies our proposed method for the risk-neutral case. Fig. 2 (b) illustrates the values of \(\mathrm{CVaR}\) achieved by Algorithm 1. 
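To make the mechanics of Algorithm 1 concrete, here is a self-contained NumPy sketch for the scalar example of Section 4.2 (an illustration only, not the authors' implementation). Instead of computing the distribution of \(G^{K}_{N}\) explicitly, it estimates \(\mathrm{CVaR}_{\alpha}\) from Monte Carlo samples of the truncated return in Eq. (9) and then applies the zeroth-order residual-feedback update \(g_{t}=\frac{n}{\delta^{2}}\big{(}\hat{\mathcal{C}}(\hat{K}_{t})-\hat{\mathcal{C}}(\hat{K}_{t-1})\big{)}U_{t}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_return(K, x0, gamma, N, n_samples, Q=1.0, R=1.0, A=1.0, B=1.0):
    """Samples of the truncated return G_N^K(x0) of Eq. (9) for the scalar
    system with i.i.d. standard normal disturbances."""
    a_k = A + B * K
    q_k = Q + K * R * K
    P = q_k / (1.0 - gamma * a_k**2)          # scalar P = Q_K + gamma A_K^2 P (needs gamma A_K^2 < 1)
    w = rng.standard_normal((n_samples, N))   # w_0, ..., w_{N-1}
    G = np.full(n_samples, P * x0**2)
    for k in range(N):
        cross = sum(a_k**(k - tau) * w[:, tau] for tau in range(k))   # sum_{tau<k} A_K^{k-tau} w_tau
        G += gamma**(k + 1) * P * (w[:, k]**2
                                   + 2.0 * w[:, k] * a_k**(k + 1) * x0
                                   + 2.0 * w[:, k] * cross)
    return G

def cvar(samples, alpha):
    """Empirical CVaR_alpha: mean of the costs above their (1 - alpha) quantile."""
    q = np.quantile(samples, 1.0 - alpha)
    return samples[samples >= q].mean()

# Zeroth-order residual-feedback updates of Algorithm 1 (scalar case, n = 1).
gamma, x0, alpha, delta, eta, N = 0.6, 1.0, 1.0, 0.1, 4e-4, 10
K, prev_cost = -0.2, None
for t in range(500):
    U = delta * rng.choice([-1.0, 1.0])       # random perturbation of norm delta
    cost = cvar(sample_return(K + U, x0, gamma, N, 5000), alpha)
    if prev_cost is not None:
        K -= eta * (cost - prev_cost) / delta**2 * U
    prev_cost = cost
print(K)   # with alpha = 1 the iterates should drift toward the risk-neutral gain of about -0.4684
```
Convergence of this crude sketch is slow and noisy, since each CVaR value is itself a Monte Carlo estimate; the point is only to show how little machinery the approximate return distribution requires once Eq. (9) is available.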
Additionally, we select \(\alpha=0.4\) to find the optimal risk-averse controller. The simulation results are presented in Fig. 2 (c) and (d). We see that \(K\) converges to \(-0.55\), which leads to a smaller \(A+BK\) compared to \(K^{*}=-0.4684\). ## 5 Conclusions We have proposed a new distributional approach to the classic discounted LQR problem. Specifically, we first provided an analytic expression for the exact random return that depends on infinitely many random variables. Since the computation of this expression is difficult in practice, we also proposed an approximate expression for the distribution of the random return that only depends on a finite number of random variables, and have further characterised the error between these two distributions. Finally, we utilised the proposed random return to obtain an optimal controller for a risk-averse LQR problem using the CVaR as a measure of risk. To the best of our knowledge, this is a first framework for distributional LQR: it inherits the advantages of DRL methods compared to standard RL methods that rely on the expected return to evaluate the effect of a given policy, but it also provides an analytic expression for the return distribution, an area where current DRL methods significantly lack. Future research includes analyzing the theoretical convergence of risk-averse policy gradient algorithms and exploring a model-free setup where the system matrices are unknown. Figure 2: Risk-averse control using Algorithm 1. The solid lines are averages over 20 runs. ## Acknowledgments This work is supported in part by the Knut and Alice Wallenberg Foundation, the Swedish Strategic Research Foundation, the Swedish Research Council, AFOSR under award #FA9550-19-1-0169, and NSF under award CNS-1932011.
2303.14138
Constant sound speed and its thermodynamical interpretation in $f(Q)$ gravity
On the basis of homogeneous and isotropic Friedmann-Lemaitre-Robertson-Walker (FLRW) geometry, solutions to the issues of cosmic acceleration and dark energy are being put forth within the context of $f\left( Q\right)$ gravity. We take into account a power law $f(Q)$ model using $f\left( Q\right) =\alpha Q^{n}$, where $\alpha $ and $n$ are free model parameters. In the current scenario, we may establish the energy density and pressure for our $f(Q)$ cosmic model by applying the constant sound speed parameterizations, i.e., $\vartheta_{s}^{2}=\beta$, where a barotropic cosmic fluid is described in terms of $\beta$. The field equations are then derived, and their precise solutions are established. We obtain the constraints on the model parameters using the updated Hubble (Hz) data sets consisting of 31 data points, the recently published Pantheon samples (SNe) with 1048 points, and Baryon acoustic oscillations (BAO) data sets. We also examine the physical behaviour of the deceleration parameter, the equation of state (EoS) parameter, the statefinder diagnostic, and the Om diagnostic. We conclude that our $f\left( Q\right) $\ cosmic model predicts a transition in the universe from deceleration to acceleration. Further, to investigate the feasibility of the model, we discussed some of its thermodynamic aspects.
M. Koussour, Simran Arora, Dhruba Jyoti Gogoi, M. Bennai, P. K. Sahoo
2023-03-23T11:08:09Z
http://arxiv.org/abs/2303.14138v1
# Constant sound speed and its thermodynamical interpretation in \(f(Q)\) gravity ###### Abstract On the basis of homogeneous and isotropic Friedmann-Lemaitre-Robertson-Walker (FLRW) geometry, solutions to the issues of cosmic acceleration and dark energy are being put forth within the context of \(f\left(Q\right)\) gravity. We take into account a power law \(f(Q)\) model using \(f\left(Q\right)=\alpha Q^{n}\), where \(\alpha\) and \(n\) are free model parameters. In the current scenario, we may establish the energy density and pressure for our \(f(Q)\) cosmic model by applying the constant sound speed parameterizations, i.e., \(\theta_{\text{s}}^{2}=\beta\), where a barotropic cosmic fluid is described in terms of \(\beta\). The field equations are then derived, and their precise solutions are established. We obtain the constraints on the model parameters using the updated Hubble (Hz) data sets consisting of 31 data points, the recently published Pantheon samples (SNe) with 1048 points, and Baryon acoustic oscillations (BAO) data sets. We also examine the physical behaviour of the deceleration parameter, the equation of state (EoS) parameter, the statefinder diagnostic, and the Om diagnostic. We conclude that our \(f\left(Q\right)\) cosmic model predicts a transition in the universe from deceleration to acceleration. Further, to investigate the feasibility of the model, we discussed some of its thermodynamic aspects. ## I Introduction General Relativity (GR) has successfully explained various aspects of the Universe, including gravitational waves, black holes, compact stars, etc. However, GR is not entirely free from issues and suffers from significant problems in the UV and infrared regimes [18]. The theory and a number of observable findings, such as the accelerated expansion of the universe and galaxy rotation curves, are very different in the infrared spectrum. To deal with the infrared issues of GR, a simple but quite effective extension was suggested, which is known as the \(\Lambda\)CDM model. Although this model could explain the experimental deviations of GR, it is burdened with the presence of dark matter and dark energy. Dark matter and dark energy are unknown forms of the matter and energy content of the Universe which have not been directly detected until now. Moreover, the dark energy predicted by the \(\Lambda\)CDM model is static. The accelerated expansion of the Universe indicates the nature and properties of the unknown energy content, i.e. dark energy present in the Universe. It is mainly supported by observational studies like Type Ia supernovae [1; 2], baryon acoustic oscillations [5; 6], large-scale structure [3; 4] and cosmic microwave background radiation [7; 8]. Apart from the \(\Lambda\)CDM model, several models support the existence of such exotic matter and energy in the Universe and, as hypothesized by such models, around 70% of the Universe is filled with dark energy. It is worth mentioning that although the \(\Lambda\)CDM model was able to explain the observational results, it again comes with some drawbacks like the cosmic coincidence problem [9]. It implies that the density of non-relativistic matter and dark energy are the same in the present Universe. Another issue with this model is the cosmological constant problem, which shows a vast discrepancy between the astronomically observed value of the cosmological constant \(\Lambda\) [1; 2] and the theoretically predicted value of the quantum vacuum energy [10]. 
To overcome these issues, dynamical dark energy models like the Chaplygin gas model [11; 12], k-essence [13; 14], quintessence [15; 16] etc. have been introduced. In these models, the energy-momentum part of the field equations of GR is modified to explain the observational results. There is another class of theories in which the geometrical part of the field equations of GR is modified. Such theories are termed modified theories of gravity (MTGs). Some of the promising MTGs are \(f(R)\) gravity [18], \(f(R,T)\) gravity, \(f(R,L_{m})\) gravity etc. The simplest type of MTG is \(f(R)\) gravity, where the Ricci scalar in the action of the theory is replaced by a well-motivated function of the Ricci scalar [18]. Although higher-order terms in the gravity action had previously been included by Utiyama and De Witt [17], Buchdahl used the idea of \(f(R)\) gravity for the first time in 1970 [19]. Apart from such MTGs, there are two other approaches beyond the curvature representation, _viz._, teleparallel gravity and symmetric teleparallel gravity. In teleparallel gravity, the gravitational force is governed by the torsion \(T\) [20; 21; 22; 23; 24]. Einstein used the other approach, i.e. symmetric teleparallel gravity, in an attempt to unify field theories. Such theories account for vanishing curvature and torsion with non-vanishing non-metricity, which analyses how the length of a vector changes when parallel transported. In this work, we shall use one of the promising MTGs known as \(f(Q)\) gravity, where the Lagrangian is a function of the non-metricity scalar \(Q\) [25]. One may note that \(f(Q)\) gravity has gained the attention of researchers in the last few years [26; 27; 28; 29; 30; 31], and a significant number of studies have been done in this MTG to investigate different properties of dark energy, including its evolution [32; 33; 34; 35]. Here in this work, we consider homogeneous and isotropic FLRW geometry in the power law model of \(f(Q)\) gravity and discuss the solutions to the issue of cosmic acceleration by constraining the model with observational data sets. For the completeness of the study, we also consider a black hole solution in this particular model and briefly investigate its horizon thermodynamics. Such an investigation will help us to comment on the viability of the model in terms of its thermodynamical aspects. The paper is organized as follows. In section II, we discuss the field equations and basics of \(f(Q)\) gravity. In section III, we construct the cosmological model for the power law \(f(Q)\) gravity model with constant sound speed parameterization. The observational constraints on the model are obtained in section IV. We discuss the behaviour of cosmological parameters, such as the deceleration and equation of state parameters, and implement diagnostic methods like the statefinder diagnostic and the \(Om(z)\) diagnostic in section V. In section VI, we briefly analyze the thermodynamic parameters of a vacuum black hole solution in \(f(Q)\) theory and study its horizon thermodynamics and the first law. Finally, in section VII, we include a discussion and conclusion of our work. ## II Some basics of \(f(Q)\) gravity theory The action for \(f(Q)\) gravity is written as, \[S=\int\left(\frac{1}{2}f(Q)+L_{m}\right)\sqrt{-g}d^{4}x, \tag{1}\] where \(f(Q)\) is an arbitrary function related to the non-metricity scalar \(Q\). In addition, \(g=det(g_{\mu\nu})\), and \(L_{m}\) denotes the matter Lagrangian. 
Furthermore, throughout this study, we will use units with the coupling constant \(\kappa\) and the speed of light \(c\) as \(1\). We further define the non-metricity scalar \(Q\) as follows \[Q\equiv-g^{\mu\nu}(L^{\beta}_{\ \alpha\gamma}L^{\alpha}_{\ \nu\beta}-L^{\beta}_{\ \alpha\beta}L^{\alpha}_{\ \mu\nu}), \tag{2}\] where \(L^{\beta}_{\ \alpha\gamma}\) is the disformation tensor, \[L^{\beta}_{\ \alpha\gamma}=-\frac{1}{2}g^{\beta\mu}(\nabla_{\gamma}g_{\alpha \eta}+\nabla_{\alpha}g_{\eta\gamma}-\nabla_{\eta}g_{\alpha\gamma}). \tag{3}\] The non-metricity tensor is given by \[Q_{\gamma\mu\nu}=\nabla_{\gamma}g_{\mu\nu}, \tag{4}\] with the non-metricity traces as \[Q_{\beta}=g^{\mu\nu}Q_{\beta\mu\nu}\qquad\widetilde{Q}_{\beta}=g^{\mu\nu}Q_{ \mu\beta\nu}. \tag{5}\] A superpotential or the non-metricity conjugate can also be defined as \[P^{\beta}_{\ \mu\nu}=-\frac{1}{2}L^{\beta}_{\ \mu\nu}+\frac{1}{4}(Q^{\beta}- \widetilde{Q}^{\beta})g_{\mu\nu}-\frac{1}{4}\delta^{\beta}_{(\mu}Q_{\nu)}. \tag{6}\] expressing the scalar of non-metricity as \[Q=-Q_{\beta\mu\nu}P^{\beta\mu\nu}\,. \tag{7}\] Additionally, it is known that the energy-momentum tensor is defined by \[T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}L_{m})}{\delta g^{\mu \nu}}. \tag{8}\] We obtain the following field equations by equating the variation of action (1) with respect to the metric to zero, \[\frac{2}{\sqrt{-g}}\nabla_{\beta}\left(f_{Q}\sqrt{-g}P^{\beta}_{\ \mu\nu}\right)+\frac{1}{2}fg_{\mu\nu}+f_{Q}(P_{\mu\beta \alpha}Q_{\nu}^{\ \beta\alpha}-2Q^{\beta\alpha}_{\ \mu}P_{\beta\alpha\nu})=-T_{\mu\nu}, \tag{9}\] where \(f_{Q}=\dfrac{df}{dQ}\). One can obtain the following equation by varying the action in relation to the connection. \[\nabla_{\mu}\nabla_{\nu}(\sqrt{-g}f_{Q}P^{\mu\nu}{}_{\lambda})=0. \tag{10}\] Recent CMB data show that our Universe is homogeneous and isotropic on a large scale, that is, on a scale more significant than that of galaxy clusters. For this reason, in the analysis we provide here, we take into consideration a flat FLRW background geometry in Cartesian coordinates with a metric, \[ds^{2}=-dt^{2}+a^{2}(t)[dx^{2}+dy^{2}+dz^{2}], \tag{11}\] where \(a(t)\) is the scale factor of the Universe. Additionally, the non-metricity scalar produced from the metric (11) is as follows: \[Q=6H^{2}, \tag{12}\] where \(H\) is the Hubble parameter, which measures the expansion rate of the Universe. The perfect cosmic fluid, or cosmological fluid without taking into account viscosity effects, is the most frequently used energy-momentum tensor in cosmology. Hence, we have \[T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}, \tag{13}\] where \(u^{\mu}=(1,0,0,0)\) denotes the four-velocity vector components that define the fluid, \(\rho\) and \(p\) denote, respectively, the cosmic energy density and isotropic pressure of the perfect cosmic fluid. The \(f(Q)\) gravity dynamics of the universe are described by modified Friedmann equations, which are as follows. \[3H^{2} =\dfrac{1}{2f_{Q}}\left(-\rho+\dfrac{f}{2}\right), \tag{14}\] \[\dot{H}+3H^{2}+\dfrac{f_{Q}}{f_{Q}}H =\dfrac{1}{2f_{Q}}\left(p+\dfrac{f}{2}\right), \tag{15}\] where an overhead dot points out the differentiation of the quantity with respect to the cosmic time \(t\). It is important to note that if the function \(f(Q)=-Q\) is assumed, the standard Friedmann equations of GR can be obtained. We obtain the following evolution equation for \(H\) by eliminating the term \(3H^{2}\) thorugh the previous two equations. 
\[\dot{H}+\dfrac{\dot{f}_{Q}}{f_{Q}}H=\dfrac{1}{2f_{Q}}\left(p+\rho\right). \tag{16}\] We can rewrite equations (14) and (15) and define the effective energy density and pressure as \[3H^{2} =\rho+\rho_{Q},\] \[3H^{2}+2\dot{H} =-\left(p+p_{Q}\right).\] Hence, we obatin \[\rho_{Q} =3H^{2}\left(1+2f_{Q}\right)-\dfrac{f}{2},\] \[p_{Q} =-\left[2\dot{H}\left(1+f_{Q}\right)-\dfrac{f}{2}+3H^{2}\left(1 +2f_{Q}+8f_{QQ}\dot{H}\right)\right].\] Consequently, the matter conservation equation is obtained as given below. \[\dot{\rho}+3H\left(\rho+p\right)=0. \tag{17}\] According to the following equation of state, the cosmic fluid's normal isotropic pressure and energy density are related by \[p=\omega\rho. \tag{18}\] Here, \(\omega\) is the equation of state (EoS) parameter. ## III Cosmological \(f(Q)\) model with constant speed of sound For our investigation, we assume a specific power law model for the \(f(Q)\) function, which is expressed as \[f\left(Q\right)=\alpha Q^{n}, \tag{19}\] where \(\alpha\neq 0\) and \(n\) are the model free parameters. Using Eqs. (18), (19) and (16), we obtain a first-order differential equation for the Hubble parameter as \[\dot{H}-\dfrac{H^{2(1-n)}}{2\alpha n6^{n-1}\left(2n-1\right)}\rho\left(\omega +1\right)=0. \tag{20}\] Since the isotropic pressure and energy density of a barotropic fluid are related, the EoS can be stated implicitly as \[G\left(\rho,p\right)=0. \tag{21}\] Thus, using Eq. (18), one can write Eq. (21) as \(F(\rho,\omega)=0\) and \(G(\rho,\omega)=F(\rho,p)\). We can think of the energy density \(\rho=\rho\left(\omega\right)\) and isotropic pressure \(p=p\left(\omega\right)=\omega\rho\left(\omega\right)\) as functions of \(\omega\). Moreover, the inversion of \(F(\rho,\omega)=0\) suggests that other solutions to the equations \(p\left(\omega\right)\) and \(\rho\left(\omega\right)\) can be derived. There may be numerous values of \(\rho\left(\omega\right)\), particularly for specific values of \(\omega\). One of the strictest tests to determine whether a cosmological model is valid is the speed of sound \(\theta_{s}^{2}\). If the speed of sound \(\theta_{s}^{2}\) is lower than the speed of light \(c\), a model is considered to be physically plausible. The relation \(0\leq\theta_{s}^{2}\leq c\) specifies the stability prerequisite for the cosmological models. In this study, we have assumed that the speed of light is \(c=1\). Thus, if condition \(0\leq\theta_{s}^{2}\leq 1\) is met, the model is physically plausible. These constraints make this kind of modeling more appropriate, and certain models with variable sound speed have been described in the literature [36; 37; 38; 39; 40]. The squared speed of sound (\(\theta_{s}^{2}\)) in barotropic cosmic fluid can be described as \[\theta_{s}^{2}=\frac{dp}{d\rho}. \tag{22}\] Differentiating Eq. (21) gives \[\frac{\partial G}{\partial\rho}d\rho+\frac{\partial G}{\partial p}dp=0, \tag{23}\] which brings about \[\theta_{s}^{2}=-\frac{\frac{dG}{d\rho}}{\frac{dG}{d\rho}}. \tag{24}\] Using Eqs. (18), (23), and Eq. (24), we have \[\frac{d\rho}{\rho}=\frac{d\omega}{\theta_{s}^{2}-\omega}. \tag{25}\] Combining Eqs. (25) and (17), one can obtain \[\frac{d\omega}{\left(\theta_{s}^{2}-\omega\right)\left(1+\omega\right)}=3 \frac{dz}{1+z}, \tag{26}\] where we used \(\frac{dz}{dt}=-\left(1+z\right)H\). 
Since, we know that \(\rho\) and \(p\) are functions of \(\omega\), the sound speed, \(\theta_{s}^{2}\) can also be thought of as a function of \(\omega\), i.e., \(\theta_{s}^{2}=\theta_{s}^{2}\left(\omega\right)\) \[\theta_{s}^{2}=\frac{dp}{d\rho}=\frac{\frac{dp}{d\omega}}{\frac{d\rho}{d\omega }}. \tag{27}\] Thus, Eq. (26) governs the dynamics of the EoS parameter \(\omega\). Here, we consider a constant sound speed parameterizations [41; 42; 43], \[\theta_{s}^{2}=\beta, \tag{28}\] where \(\beta\) is a constant. Integration of Eq. (26) generate the parameter of EoS as \[\omega\left(z\right)=\frac{\beta\frac{1+\omega_{0}}{\beta-\omega_{0}}\left(1+ z\right)^{3\left(1+\beta\right)}-1}{\frac{1+\omega_{0}}{\beta-\omega_{0}}\left(1+ z\right)^{3\left(1+\beta\right)}+1}. \tag{29}\] By integrating Eq. (25), we obtain the following relation \[\rho =\rho_{0}\frac{\beta-\omega_{0}}{\beta-\omega}, \tag{30}\] \[p =\beta\rho-\rho_{0}\left(\beta-\omega_{0}\right), \tag{31}\] where \(\omega\left(0\right)=\omega_{0}\). Eqs. (30) and (31) can further used to obtain energy density and isotropic pressure as follows. \[\rho\left(z\right)=\rho_{0}\frac{\beta-\omega_{0}}{1+\beta}\left(\frac{1+ \omega_{0}}{\beta-\omega_{0}}\left(1+z\right)^{3\left(1+\beta\right)}+1\right), \tag{32}\] \[p\left(z\right)=\rho_{0}\frac{\beta-\omega_{0}}{1+\beta}\left(\beta\frac{1+ \omega_{0}}{\beta-\omega_{0}}\left(1+z\right)^{3\left(1+\beta\right)}-1\right). \tag{33}\] Further, we can define the relation for \(t\) and \(z\) using the formula \(a=a_{0}\left(1+z\right)^{-1}\), as shown below, \[\frac{d}{dt}=\frac{dz}{dt}\frac{d}{dz}=-\left(1+z\right)H\left(z\right)\frac{d }{dz}. \tag{34}\] Setting the present value of scale factor to \(a_{0}=a(0)=1\) as a standard. The Hubble parameter can be expressed mathematically as, \[\dot{H}=-\left(1+z\right)H\left(z\right)\frac{dH}{dz}. \tag{35}\] Now, by resolving Eq. (20), in terms of redshift, we found the following expression for the Hubble parameter: \[H(z)=\left[H_{0}^{2n}+\frac{2(6^{-n})(1+\omega_{0})\rho_{0}(1-(1+z)^{3+3\beta })}{(2n-1)\alpha(1+\beta)}\right]^{\frac{1}{2n}} \tag{36}\] where \(H_{0}\) is the present value of the Hubble parameter. ## IV Observational constraints This section presents the cosmological constraints of the considered model. The statistical method we use helps us to constrain the parameters such as \(\alpha\), \(\beta\), \(n\), \(\omega_{0}\), \(H_{0}\), and \(\rho_{0}\). We chose the Markov Chain Monte Carlo (MCMC) with the conventional Bayesian approach. The following data sets are used: * **Hubble data:** We use a standard collection of 31 measurements obtained from the differential age method (DA) [44; 45]. The DA method is employed to calculate the rate of expansion at redshift \(z\). The following formula is used to determine chi-square (\(\chi^{2}\)). \[\chi_{Hz}^{2}=\sum_{j=1}^{31}\frac{\left[H(z_{j})-H_{obs}(z_{j},p_{s})\right] ^{2}}{\sigma(z_{j})^{2}}.\] (37) Here, \(H_{obs}\) represents the observational value, \(p_{S}\) is the parameter space. \(\sigma^{2}\) is the observed error. * **SNe Ia data:** The supernovae (SNe Ia) observation is crucial to understand how the universe is expanding. Significantly, the SNe Ia data is recorded from the Panoramic Survey Telescope and Rapid Response system (Pan-STARSS1), Sloan Digital Sky Survey (SDSS), Supernova Legacy Survey (SNLS), and Hubble Space Telescope (HST) survey [46]. 
We use the Pantheon sample consisting of 1048 points of distance modulus \(\mu_{j}\) in the range \(0.01<z_{j}<2.26\) at different redshift. We perform the analysis using the expressions \[\mu^{th}(z_{j}) =25+5log_{10}\left[\frac{d_{l}(z)}{1Mpc}\right],\] (38) \[d_{l}(z) =c(1+z)\int_{0}^{z}\frac{dy}{H(y,\rho_{s})},\] (39) \[\chi^{2}_{SN} =\sum_{j,i=1}^{1048}\Delta\mu_{j}(C_{SN}^{-1})_{ji}\Delta\mu_{i}.\] (40) Here, \(\Delta\mu_{j}=u_{th}(z_{j},p_{s})-\mu_{obs}\), \(p_{s}\) is the parameter space, \(C_{SN}\) is the covariance matrix. * **BAO data:** We consider the sample from \(SDSS\), \(6dFGS\), \(Wiggle\) Z surveys at various redshifts. The following cosmology to establish BAO constraints \(\left(\frac{d_{A}(z)}{D_{v}(z)}\right)\) are as follows: \[d_{A}(z) =c\int_{0}^{z}\frac{dx}{H(x,p_{s})},\] (41) \[D_{v}(z) =\left[\frac{d_{A}^{2}(z)cz}{H(z)}\right]^{\frac{1}{3}},\] (42) \[\chi^{2}_{BAO} =Y^{T}C_{BAO}^{-1}\gamma.\] (43) where \(Y\) depends on the survey considered and \(C_{BAO}\) is the covariance matrix [47]. * **Results:** The constraints on the model parameters for the joint \((Hz+SNe+BAO)\) are obtained using \(\chi^{2}=\chi^{2}_{Hz}+\chi^{2}_{BAO}+\chi^{2}_{SNe}\). The outcomes and results are shown in Table 1. Additionally, figures 1 and 2 illustrate the likelihood contours for \(Hz\), \(SNe\) and \(Joint\) analysis. One can observe that the observations from \(Hz\) and \(SNe\) are more consistent than the joint \(Hz+SNe+BAO\) data-sets. It is worth mentioning that the values of parameter \(H_{0}\) align with the observations [48]. ## V Cosmological parameters In modern cosmology, studying cosmological parameters has attracted much interest in understanding the expansion dynamics of the Universe better. This part will explore the cosmological parameters for the earlier built model, including the deceleration parameter, EoS parameter, statefinder diagnostics, and Om diagnostic. ### Deceleration parameter One of the crucial elements needed to explain the behavior of the Universe is the deceleration parameter (\(q\)). The sign of the deceleration parameter, which can be negative or positive, determines whether the Universe is accelerating or decelerating. The definition of the deceleration parameter is \[q=-1-\frac{\dot{H}}{H^{2}}. 
\tag{44}\] According to our cosmological \(f\left(Q\right)\) model, the deceleration parameter is obtained as \begin{table} \begin{tabular}{l c c c c c c} \hline \hline data-sets & \(\alpha\) & \(\beta\) & \(n\) & \(\omega_{0}\) & \(H_{0}\) & \(\rho_{0}\) \\ \hline \(Hubble(Hz)\) & \(-0.77^{+0.11}_{-0.11}\) & \(0.039^{+0.011}_{-0.043}\) & \(0.5097^{+0.0037}_{-0.0056}\) & \(-0.872^{+0.0052}_{-0.075}\) & \(68.90^{+0.10}_{-0.10}\) & \(1.05^{+0.11}_{-0.094}\) \\ \(Pantheon(SNe)\) & \(-0.78^{+0.10}_{-0.10}\) & \(0.075^{+0.027}_{-0.073}\) & \(0.5085^{+0.0034}_{-0.0048}\) & \(-0.868^{+0.051}_{-0.089}\) & \(68.90^{+0.10}_{-0.10}\) & \(1.04^{+0.11}_{-0.099}\) \\ \(Hz+SNe+BAO\) & \(-0.797^{+0.010}_{-0.010}\) & \(0.0042^{+0.0014}_{-0.0051}\) & \(0.5034^{+0.0014}_{-0.0020}\) & \(-0.9867^{+0.0054}_{-0.0084}\) & \(68.8976^{+0.0098}_{-0.0098}\) & \(1.0406^{+0.0098}_{-0.0098}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Best-fit values of model parameters \[q\left(z\right)=\frac{3(\beta+1)\rho_{0}(\omega_{0}+1)(z+1)^{3\beta+3}}{n\left(2 \rho_{0}(\omega_{0}+1)\left(z^{3}(z+1)^{3\beta}+3z^{2}(z+1)^{3\beta}+3z(z+1)^{3 \beta}+(z+1)^{3\beta}-1\right)-\alpha(\beta+1)6^{n}(2n-1)H_{0}^{2n}\right)}-1 \tag{45}\] According to the values of model parameters imposed by the \(Hz\), \(SNe\), and \(Hz+SNe+BAO\) data-sets, Fig. 3 illustrates the behavior of the deceleration parameter \(q\) versus redshift \(z\). It shows that our cosmic \(f(Q)\) model is capable of producing both the early deceleration expansion (\(q>0\)) and the late-time cosmic acceleration (\(q<0\)). Also, for the data-sets from \(Hz\), \(SNe\), and \(Hz+SNe+BAO\), the deceleration parameter currently has values of \(q_{0}=-0.71^{+0.07}_{-0.04}\), \(q_{0}=-0.66^{+0.06}_{-0.09}\), and \(q_{0}=-0.91^{+0.00005}_{-0.008}\), respectively [49; 50]. ### Equation of State parameter As seen above, the relation between energy density \(\rho\) and isotropic pressure \(p\) is called the EoS parameter denoted by \(\omega\). The EoS parameter is employed to characterize the accelerated and decelerated expansion of the Universe, and it divides different epochs into three categories. The EoS parameter is employed to characterize the accelerated and decelerated expansion of the Universe, and it divides different epochs into three categories. gories: The radiation-dominated phase is shown by the model when \(\omega=\frac{1}{3}\), the matter-dominated phase by \(\omega=0\), and the stiff fluid phase by \(\omega=1\). In the current stage of accelerated evolution, \(-1<\omega\leq-\frac{1}{3}\), indicates the quintessence phase, \(\omega=-1\) indicates the cosmological constant, or \(\Lambda\)CDM model, and \(\omega<-1\) indicates the phantom era. Figs. 4 and 5 depict the behavior of both density parameter for the non-metricity component and energy density of the universe for the parameter values constrained by the \(Hz\), \(SNe\), and \(Hz+SNe+BAO\) data-sets, respectively. It can be shown that the two densities behave positively with redshift \(z\) for all data-sets. Further, the EoS parameter as seen in Fig. 6 suggests that our cosmic \(f(Q)\) model with a constant speed of sound behaves in a similar way to quintessence dark energy for larger values of \(z\) and approaches the \(\Lambda\)CDM point for lower values of \(z\). 
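As a quick illustration of these closed forms, the short NumPy sketch below (not part of the original analysis) evaluates the EoS parameter of Eq. (29) and the energy density of Eq. (32) at the joint \(Hz+SNe+BAO\) best-fit values quoted in Table 1; at \(z=0\) it reproduces \(\omega_{0}\), while at large \(z\) the EoS tends to \(\beta\), the trend discussed around Fig. 6.

```python
import numpy as np

# Joint (Hz + SNe + BAO) best-fit values from Table 1.
beta, omega0, rho0 = 0.0042, -0.9867, 1.0406

def eos(z):
    """EoS parameter omega(z) of Eq. (29) for constant sound speed beta."""
    c = (1.0 + omega0) / (beta - omega0)
    x = c * (1.0 + z) ** (3.0 * (1.0 + beta))
    return (beta * x - 1.0) / (x + 1.0)

def energy_density(z):
    """Energy density rho(z) of Eq. (32)."""
    c = (1.0 + omega0) / (beta - omega0)
    return rho0 * (beta - omega0) / (1.0 + beta) * (c * (1.0 + z) ** (3.0 * (1.0 + beta)) + 1.0)

for z in (0.0, 0.5, 1.0, 2.0):
    print(f"z = {z:3.1f}   omega = {eos(z):+.4f}   rho = {energy_density(z):.4f}")
# eos(0.0) returns omega0 and eos(z) -> beta as z grows, i.e. the quintessence-like
# behaviour that approaches the LambdaCDM point at low redshift.
```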
The current values of the EoS parameter are \(\omega_{0}=-0.872^{+0.052}_{-0.075}\), \(\omega_{0}=-0.868^{+0.051}_{-0.083}\), and \(\omega_{0}=-0.9867^{+0.0054}_{-0.0084}\), respectively, for the \(Hz\), \(SNe\), and \(Hz+SNe+BAO\) data-sets [51]. Figure 3: The graphical behavior of the deceleration parameter with the constraint values from the \(Hz\), \(SNe\), and \(Hz+SNe+BAO\) data-sets. Figure 4: The graphical behavior of the density parameter for the non-metricity component with the constraint values from the \(Hz\), \(SNe\), and \(Hz+SNe+BAO\) data-sets. Figure 5: The graphical behavior of the energy density with the constraint values from the \(Hz\), \(SNe\), and \(Hz+SNe+BAO\) data-sets. Figure 6: The graphical behavior of the EoS parameter with the constraint values from the \(Hz\), \(SNe\), and \(Hz+SNe+BAO\) data-sets. ### Statefinder diagnostics Sahni et al. developed the statefinder cosmological diagnostic pair \(\{r,s\}\) in [52]. Similar to the geometrical parameters \(H(z)\) and \(q(z)\) discussed in the previous sections, the parameters \(r\) and \(s\) are dimensionless and are constructed from the scale factor of the Universe \(a(t)\) and its temporal derivatives. The statefinder makes it easier to distinguish and contrast various dark energy scenarios. As shown below, there are certain fixed points in the \(s-r\) plane and the \(q-r\) plane for the cosmological constant model (\(\Lambda\)CDM) and the standard cold dark matter model (SCDM). Any obtained model can be checked against these standard models to determine how closely it conforms to or differs from them. Following is a definition of these parameters: \[r =\frac{\dddot{a}}{aH^{3}}, \tag{46}\] \[s =\frac{(r-1)}{3\left(q-\frac{1}{2}\right)}. \tag{47}\] The parameter \(r\) can be rewritten as \[r=2q^{2}+q-\frac{\dot{q}}{H}. \tag{48}\] The statefinder pair \(\{r,s\}\) represents the following dark energy models for various values: * \(\Lambda\)CDM model is equivalent to (\(r=1,s=0\)), * Holographic dark energy model is equivalent to (\(r=1,s=\frac{2}{3}\)), * Chaplygin gas model is equivalent to (\(r>1,s<0\)), * Quintessence model is equivalent to (\(r<1,s>0\)). The \(s-r\) and \(q-r\) graphs for our cosmic \(f(Q)\) model are presented in Figs. 7 and 8 using the values of the parameters imposed by the \(Hz\), \(SNe\), and \(Hz+SNe+BAO\) data-sets. Fig. 7 shows that the trajectory initially departs from the \(\Lambda\)CDM model before eventually converging to it. Also, the Chaplygin gas zone (which is symbolized by \(r>1,s<0\)) perfectly accounts for the trajectory's evolution. According to Fig. 8, our model begins with the Chaplygin gas and moves on to the de-Sitter point (\(q=-1,r=1\)) at the end. Therefore, the statefinder diagnostic effectively demonstrates how the provided model differs from other DE models. ### Om diagnostic We will now describe the \(Om\) diagnostic, known as \(Om(z)\). The typical \(\Lambda\)CDM model is distinguished from numerous dark energy models using \(Om(z)\). Because the Hubble parameter depends only on the first temporal derivative of the scale factor \(a(t)\), only first-order derivatives are employed in the \(Om\) diagnostic. The definition of \(Om(z)\) for a flat Universe is [53; 54], \[Om\left(z\right)=\frac{\left(\frac{H(z)}{H_{0}}\right)^{2}-1}{\left(1+z\right)^{3}-1}. \tag{49}\] As a result, the \(\Lambda\)CDM model, phantom, and quintessence cosmological models all have different values for \(Om(z)\).
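Since the model's Hubble function (Eq. (36)) is not reproduced in this part, one simple way to trace the diagnostics of Figs. 7-9 numerically is to reconstruct \(H(z)/H_{0}\) from \(q(z)\) through the standard relation \(d\ln H/dz=(1+q)/(1+z)\) and then apply Eqs. (47)-(49). The sketch below does exactly that; it assumes the `q_of_z` function from the previous snippet (Eq. (45)), and the integration rule and finite-difference derivative are implementation choices, not part of the paper.

```python
import numpy as np

def om_and_statefinder(z, q):
    """Given q(z) on an increasing grid starting at z = 0, reconstruct H/H0 from
    dlnH/dz = (1 + q)/(1 + z), then return Om(z) (Eq. 49) and the statefinder
    pair {r, s} (Eqs. 47-48, using qdot/H = -(1 + z) dq/dz)."""
    integrand = (1.0 + q) / (1.0 + z)
    lnE = np.concatenate(([0.0],
        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))))
    E = np.exp(lnE)                                      # H(z)/H0
    with np.errstate(divide="ignore", invalid="ignore"):
        Om = (E**2 - 1.0) / ((1.0 + z)**3 - 1.0)         # Eq. (49); indeterminate at z = 0
    r = 2.0 * q**2 + q + (1.0 + z) * np.gradient(q, z)   # Eq. (48)
    s = (r - 1.0) / (3.0 * (q - 0.5))                    # Eq. (47)
    return Om, r, s

# Example: z = np.linspace(0.0, 2.5, 501); Om, r, s = om_and_statefinder(z, q_of_z(z))
```

A positive slope of the resulting \(Om(z)\) signals phantom-like behavior and a negative slope quintessence-like behavior, which is the criterion used to read Fig. 9.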
We can categorize the behavior of dark energy as quintessence type (\(\omega>-1\)), which corresponds to a negative slope, phantom type (\(\omega<-1\)), which corresponds to a positive slope, and \(\Lambda\)CDM type (\(\omega=-1\)), which corresponds to zero slope. Fig. 9 shows a positive slope throughout the entire range of the \(Om(z)\) diagnostic parameter for the values of the model parameters constrained by the \(Hz\), \(SNe\), and \(Hz+SNe+BAO\) data-sets. Our cosmic \(f(Q)\) model thus exhibits phantom-type behavior, according to the \(Om(z)\) diagnostic test. Figure 7: The graphical behavior of the \(r-s\) plane with the constraint values from the \(Hz\), \(SNe\), and \(Hz+SNe+BAO\) data-sets. Figure 8: The graphical behavior of the \(r-q\) plane with the constraint values from the \(Hz\), \(SNe\), and \(Hz+SNe+BAO\) data-sets. Figure 9: The graphical behavior of the Om diagnostic with the constraint values from the \(Hz\), \(SNe\) and \(Hz+SNe+BAO\) data-sets. ## VI Thermodynamics aspects of the model We consider the following ansatz for a spherically symmetric static black hole in \(f(Q)\) gravity, \[ds^{2}=-h(r)dt^{2}+1/g(r)dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}). \tag{50}\] For this case, we have the non-metricity scalar given by, \[Q=\frac{\left(g(r)-1\right)\left(g(r)h^{\prime}(r)-h(r)g^{\prime}(r)\right)}{rh(r)g(r)}. \tag{51}\] One may note from the non-metricity scalar that one cannot simply choose \(h(r)=g(r)\), as in the case of the Schwarzschild black hole, because this makes the non-metricity scalar vanish, i.e., \(Q=0\)[55]. Hence, for this analysis, we shall pick \(h(r)\neq g(r)\) to ensure that \(Q\) survives. Now we follow Ref. [56], where a black hole solution of the following form has been obtained for the power law model: \[h(r) =\left(\frac{r}{r_{T}}\right)^{\beta}\left[1-\left(\frac{r_{s}}{r}\right)^{-\gamma}\right], \tag{52}\] \[g(r) =\frac{1}{C}\left[1-\left(\frac{r_{s}}{r}\right)^{-\gamma}\right]. \tag{53}\] Here \(r_{T}\), \(\beta\), \(\gamma\) and \(C\) are constants associated with the solution and \(r_{s}\) stands for the horizon radius. One may note that this black hole solution is not physically viable [56] and suffers from several issues. To date, no other physically viable black hole solution has been obtained in the power law model of \(f(Q)\) gravity. Hence, we pick this solution for a brief qualitative analysis to see how this model and non-metricity may affect the horizon thermodynamics and the related parameters. A detailed analysis, as well as the physical viability of such black hole solutions, is kept as future scope of the study. From previous studies, it is evident that the thermodynamics is approximately identical for both the event and cosmological horizons [57]. So we can use the above metric functions of the black hole space-time to study thermodynamics for the power law model of \(f(Q)\) gravity. Several studies deal with a flat universe to check the same in different frameworks [57; 58; 59]. We can study the horizon thermodynamics of the theory by following Refs. [60; 61]. However, here we have considered the power law model of the \(f(Q)\) gravity framework, for which no physically viable black hole solution has yet been obtained [56]. A rigorous study of black holes in this new theory is still lacking in the literature. Therefore, in this study, we discuss a few properties in brief only. For this purpose, we consider the black hole solution mentioned above.
One may note that the above solution of the black hole for power law \(f(Q)\) gravity does not have a Schwarzschild limit, and at an infinite distance away from the black hole horizon, it can't provide a Minkowski space-time [56]. The surface gravity for this black hole is calculated as \[\kappa_{b}=-\frac{\gamma\sqrt{C\left(\frac{r_{s}}{r_{T}}\right) ^{\beta}}}{2Cr_{s}}. \tag{54}\] Another important thermodynamical parameter is the Hawking temperature of the black hole which is given by \[T=-\frac{\gamma\sqrt{C\left(\frac{r_{s}}{r_{T}}\right)^{\beta}} }{4\pi Cr_{s}}. \tag{55}\] Now, if one considers \(r_{s}=2M\), where \(M\) is the mass of the black hole, one may arrive at an expression for the entropy of the black hole given by: \[S=\frac{64\pi CM^{2}}{(\beta-4)\gamma\sqrt{4^{\beta}C\left(\frac{M}{r_{T}} \right)^{\beta}}}, \tag{56}\] which satisfies the first law of thermodynamics of the black hole: \(dM=TdS\) with pressure \(P=0\). Following Ref. [60; 61], if we consider the \(rr\) components of field equations, \[8\pi P=\frac{r^{2}f(Q)}{g(r)}-\frac{\left(\frac{1}{g(r)}-1\right)f ^{\prime}(Q)\left(\frac{rh^{\prime}(r)}{h(r)}-\frac{rg^{\prime}(r)}{g(r)}+2 \right)-\frac{2rh^{\prime}(r)}{h(r)}+2Q^{\prime}r\left(\frac{1}{g(r)}-1 \right)f^{\prime\prime}(Q)}{2r^{2}}, \tag{57}\] we can see that due to the behaviour of the \(f(Q)\) gravity field equations, at horizon, several terms in the equation diverges for an ansatz \(h(r)\neq g(r)\). Hence we consider the black hole solution obtained in Ref. [56] and simplify the field equation. At the horizon \(r=r_{s}\), we obtain, \[\left(-\frac{\beta}{r^{2}}\right)^{n-1}\left(2(n-1)nrg^{\prime}(r)+4n^{2}-( \beta+6)n-2\beta r^{2}\right)\] \[+2rg^{\prime}(r)=0. \tag{58}\] Using the definition of black hole temperature at the horizon, we can further write the above expression as \[\frac{8\pi T\left(\beta+\beta(n-1)n\left(-\frac{\beta}{r^{2}}\right) ^{n-1}\right)}{\sqrt{C\left(\frac{r_{s}}{r_{T}}\right)^{\beta}}}+\\ r_{s}\left(-4n^{2}+(\beta+6)n+2\beta r_{s}^{2}\right)\left(- \frac{\beta}{r_{s}^{2}}\right)^{n}=0. \tag{59}\] One may note that terms with \(P\) will vanish here due to the behaviour of the black hole solution (no effective cosmological constant). We may identify the additional terms with temperature \(T\) as \(d\tilde{S}\) in the above expression. However, one may note that \(\tilde{S}\) may not be precisely an entropy term here; instead, it is a normalized or scaled term mimicking the properties of entropy. Due to the property of the field equations in \(f(Q)\) gravity, several terms in the denominator diverge at the horizon, so a re-scaling has been done in the limit \(r\to r_{s}\). From the above expression, we identify the following: \[d\tilde{S}=\frac{\left(\beta+\beta(n-1)n\left(-\frac{\beta}{r_ {s}^{2}}\right)^{n-1}\right)}{\sqrt{C\left(\frac{r_{s}}{r_{T}}\right)^{\beta} }}, \tag{60}\] \[d\tilde{E}=r_{s}\frac{\left(-4n^{2}+(\beta+6)n+2\beta r_{s}^{2} \right)}{8\pi}\left(-\frac{\beta}{r_{s}^{2}}\right)^{n}. \tag{61}\] Here \(\tilde{E}\) term mimics the energy of the black hole. These expressions satisfy the first law of thermodynamics for the power law model in \(f(Q)\) gravity. However, for a better understanding of the actual behaviour of these parameters, we might need to look for more viable black hole solutions in the framework of \(f(Q)\) gravity. This is because, in the \(f(Q)\) gravity power law model, the black hole solution considered here is not physically viable [56]. 
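For a rough feel of the horizon quantities, the temperature and entropy of Eqs. (55)-(56) (with the surface gravity of Eq. (54) entering through \(T=\kappa_{b}/2\pi\)) can be evaluated once the solution constants are specified. In the Python lines below, the values of \(r_{T}\), \(\beta\), \(\gamma\) and \(C\) are arbitrary placeholders, since the text does not fix them, and geometrized units with \(r_{s}=2M\) are assumed; the output therefore only illustrates the scaling with the black hole mass.

```python
import numpy as np

# Placeholder solution constants (not fixed by the text); gamma < 0 keeps T and S positive here.
r_T, beta, gamma_s, C = 1.0, 0.5, -1.0, 1.0

def hawking_temperature(M):
    """Hawking temperature, Eq. (55), with horizon radius r_s = 2M."""
    r_s = 2.0 * M
    return -gamma_s * np.sqrt(C * (r_s / r_T)**beta) / (4.0 * np.pi * C * r_s)

def entropy(M):
    """Black hole entropy, Eq. (56)."""
    return 64.0 * np.pi * C * M**2 / (
        (beta - 4.0) * gamma_s * np.sqrt(4.0**beta * C * (M / r_T)**beta))

M = np.linspace(0.5, 5.0, 4)
print(hawking_temperature(M), entropy(M))
```

Since the underlying solution is not physically viable, such numbers carry only qualitative meaning.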
Hence, to study horizon thermodynamics adequately, one needs to obtain a physically viable black hole solution at first, which is beyond the scope of this study. In the above investigation, we have considered a theoretically motivated black hole solution of the model considered here and briefly discussed the thermodynamic variables. One may obtain the black hole temperature, surface gravity, and entropy as shown above, assuming that the first law of thermodynamics is valid. Otherwise, to realize the horizon thermodynamics, one can follow Ref.s [60; 61] to obtain an equivalent form of the first law from the \(rr\) component of the field equations. In this case, we have considered a black hole with no effective cosmological constant resulting in the pressure \(P\) associated with it being zero. This reduces the first law to the form \(dE=TdS\). In our analysis, recovering \(T\) from the previous definition, we only obtained an equivalent expression of the first law. To obtain an exact expression as well as to study the relevant properties, one needs to obtain a physically viable black hole solution in this theory, which we leave as a future prospect of this work. Similarly, generalized second law also may have several issues due to the presence of non-metricity and the form of field equations, and hence a detailed investigation in this regard is necessary to have a clear picture. ## VII Conclusion In this study, we investigated the \(f(Q)\) gravity theory to examine the late cosmic expansion of the universe. A power law \(f(Q)\) model, especially \(f(Q)=\alpha Q^{n}\), where \(\alpha\) and \(n\) are free model parameters, was taken into consideration. The field equations for the flat FLRW geometry were then derived. Using the constant sound speed parameterizations, i.e., \(\theta_{s}^{2}=\beta\), we can create the energy density and pressure for our \(f(Q)\) cosmic model in the current scenario, where a barotropic cosmic fluid is described in terms of \(\beta\). For this model, several cosmological parameters in terms of redshift as well as the EoS parameter are studied. Further, we resolved the field equations using these factors and found the exact solution represented by the Hubble parameter in Eq. (36). We were also able to determine the model parameters that fit the data sets the best using the updated Hubble data-sets (Hz), which have 31 data points, the recently published Pantheon samples (SN), which contain 1048 points, and Baryon acoustic oscillations data sets (BAO). The best-fit values are determined using these data sets and are displayed in Table 1. Fig. 3 displays the graphical behavior of the deceleration parameter \(q\). Accelerated and decelerated phases are observed for \(q\) in our \(f(Q)\) cosmic model. According to Fig. 6, which depicts the evolution of the EoS parameter \(\omega\) about redshift, \(\omega\) approaches the \(\Lambda\)CDM point for lower values of \(z\) and coincides with the quintessence epoch for larger values of \(z\) in our \(f(Q)\) cosmic model. The estimated present values of the deceleration parameter corresponding to the values of the model parameters imposed by \(Hz\), \(\mathrm{S}N\omega\), and the combined \(Hz+SNe+BAO\) data sets are \(q_{0}=-0.71^{+0.07}_{-0.04}\), \(q_{0}=-0.66^{+0.06}_{-0.09}\), and \(q_{0}=-0.91^{+0.00005}_{-0.008}\), respectively. 
Further, the present values of the EoS parameter are \(\omega_{0}=-0.872^{+0.052}_{-0.075}\) for the Hz data-sets, \(\omega_{0}=-0.868^{+0.051}_{-0.083}\) for the SN data-sets, and \(\omega_{0}=-0.9867^{+0.0054}_{-0.0084}\) for the \(Hz+SNe+BAO\). Finally, we used the statefinder and \(Om\left(z\right)\) diagnostics to examine how our model differed from other dark energy models. We can see from Fig. 7 that the trajectory of the \(r-s\) plan initially departs from the \(\Lambda\)CDM model. The \(\Lambda\)CDM model, which aligns with accepted cosmology, coincides with it in the late period. Also, the trajectory of the \(r-q\) plan begins with the Chaplygin gas and moves on to the de-Sitter point (\(q=-1,r=1\)) at the end. The \(Om\left(z\right)\) diagnostic parameter, as illustrated in Fig. 9 represents the phantom-like era for the model. The consistency of the acquired results with accepted cosmological models and observational data sets indicates the validity of our model, making it far more attractive to scholars in this field for further research. Finally, we investigated the horizon thermodynamics in \(f(Q)\) gravity model. For this purpose, we considered a black hole solution recently obtained in Ref. [56]. However, this black hole solution is not physically viable and suffers several unsolved issues. Getting a physically viable black hole solution is not directly associated with the primary objective of this study. Hence, for a qualitative analysis, we pick this solution from Ref. [56] and calculate the thermodynamics parameters associated with the model. We follow Ref.s [60; 61] to analyze the field equations further and observe that a first law equivalent relation could be extracted from the \(rr\) component of the field equation. However, the further detailed analysis would be necessary with a physically viable black hole solution to obtain an exact relation. We keep this as a prospect of the study. ## Data availability statement There are no new data associated with this article. ###### Acknowledgements. S.A. acknowledges BITS-Pilani, Hyderabad Campus, India for an Institute fellowship. PKS acknowledges Science and Engineering Research Board, Department of Science and Technology, Government of India for financial support to carry out Research project No.: CRG/2022/001847 and IUCAA, Pune, India for providing support through the visiting Associateship program. We are very much grateful to the honorable referee and to the editor for the illuminating suggestions that have significantly improved our work in terms of research quality, and presentation.
2303.03444
Photoionization and Opacity
Opacity determines radiation transport through material media. In a plasma source the primary contributors to atomic opacity are bound-bound line transitions and bound-free photoionization into the continuum. We review the theoretical methodology for state-of-the-art photoionization calculations based on the R-matrix method as employed in the Opacity Project, the Iron Project, and solution of the heretofore unsolved problem of plasma broadening of autoionizing resonances due to electron impact, Stark (electric microfields), Doppler (thermal), and core-excitations. R-matrix opacity calculations entail huge amount of atomic data and calculations of unprecedented complexity. It is shown that in high-energy-density (HED) plasmas Photoionization cross sections become 3-D energy-temperature-density dependent owing to considerable attenuation of autoionizing resonance profiles. Hence, differential oscillator strengths and monochromatic opacities are redistributed in energy. Consequently, Rosseland and Planck mean opacities are affected significantly.
Anil Pradhan
2023-03-06T19:06:45Z
http://arxiv.org/abs/2303.03444v1
# Photoionization and Opacity ###### Abstract Opacity determines radiation transport through material media. In a plasma source the primary contributors to atomic opacity are bound-bound line transitions and bound-free photoionization into the continuum. We review the theoretical methodology for state-of-the-art photoionization calculations based on the R-matrix method as employed in the Opacity Project, the Iron Project, and solution of the heretofore unsolved problem of plasma broadening of autoionizing resonances due to electron impact, Stark (electric microfields), Doppler (thermal), and core-excitations. R-matrix opacity calculations entail huge amount of atomic data and calculations of unprecedented complexity. It is shown that in high-energy-density (HED) plasmas Photoionization cross sections become 3-D energy-temperature-density dependent owing to considerable attenuation of autoionizing resonance profiles. Hence, differential oscillator strengths and monochromatic opacities are redistributed in energy. Consequently, Rosseland and Planck mean opacities are affected significantly. ## 1 Introduction Physically, the opacity depends on all possible intrinsic light-atom interactions that may absorb, scatter, or re-emit photons emanating from the source and received by the observer. In addition, the opacity depends on external conditions in the source and the medium. In recent years there have been a number of theoretical and experimental studies of opacities (viz. [1, 2, 3]). Whereas photoionization and opacity are linked in all plasma sources, we focus especially on high-energy-density (HED) environments such as stellar interiors and laboratory fusion devices, that are characterized by temperatures and densities together, typically \(T>10^{6}K\) and densities \(N>10^{15}\) cm\({}^{-3}\). Computed atomic cross sections and transition probabilities are markedly perturbed by plasma effects. Monochromatic opacity consist of four terms of bound-bound (bb), bound-free (bf), free-free (ff), and scattering (sc): \[\kappa_{ijk}(\nu)=\sum_{k}A_{k}\sum_{j}F_{j}\sum_{i,i^{\prime}}[\kappa_{bb}((i,i^{\prime};\nu)+\kappa_{bf}(i,\epsilon_{i^{\prime}};\nu)+\kappa_{ff}(\epsilon _{i},\epsilon_{i^{\prime}}^{\prime};\nu)+\kappa_{sc}(\nu)]\,. \tag{1}\] In Eq. (1) \(A_{k}\) is element abundance \(k\), its ionization fraction \(F_{j}\), \(i\) and initial bound and final bound/continuum states \(i,i^{\prime}\), of a given atom; the \(\epsilon\) represents electron energy in the continuum. To determine emergent radiation, a harmonic mean \(\kappa_{R}\), is defined, _Rosseland Mean Opacity_ (RMO), with monochromatic opacity \(\kappa_{ijk}(\nu)\) \[\frac{1}{\kappa_{R}}=\frac{\int_{0}^{\infty}g(u)\kappa_{\nu}^{-1}du}{\int_{0}^{ \infty}g(u)du}\quad\mbox{with}\quad g(u)=u^{4}e^{-u}(1-e^{-u})^{-2}. \tag{2}\] Here, \(g(u)\) is the derivative of the Planck function including stimulated emission, \(\kappa_{bb}(i,i^{\prime})=(\pi e^{2}/m_{e}c)N_{i}f_{ii^{\prime}}\phi_{\nu}\), and \(\kappa_{bf}=N_{i}\sigma_{\nu}\). The \(\kappa_{\nu}\) then depends on \(bb\) oscillator strengths, \(bf\), photoionization cross sections \(\sigma_{\nu}\), on the equation-of-state (EOS) that gives level populations \(N_{i}\). We describe large-scale computations using the coupled channel or close coupling (hereafter CC) approximation implemented via the R-matrix (RM) method for opacity in Eq. (1) primarily for: (i) the \(bb\) transition probabilities and (ii) the \(bf\) photoionization cross sections. 
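As a concrete illustration of Eq. (2), the following Python sketch evaluates the Rosseland mean from a monochromatic opacity tabulated on a grid of \(u=h\nu/kT\). The grid, the toy opacity function, and the trapezoidal quadrature below are choices made purely for illustration; they are not part of the OP/RM opacity codes.

```python
import numpy as np

def rosseland_mean(u, kappa_u):
    """Harmonic mean of Eq. (2): 1/kappa_R = int g(u) kappa_u^-1 du / int g(u) du,
    with the weight g(u) = u^4 e^{-u} (1 - e^{-u})^{-2} and u = h*nu/kT."""
    g = u**4 * np.exp(-u) / (1.0 - np.exp(-u))**2
    inv_kappa_R = np.trapz(g / kappa_u, u) / np.trapz(g, u)
    return 1.0 / inv_kappa_R

# Toy monochromatic opacity: a flat background plus one broad absorption feature.
u = np.linspace(0.05, 20.0, 2000)
kappa = 1.0 + 5.0 * np.exp(-((u - 3.0) / 0.5)**2)
print(rosseland_mean(u, kappa))
```

Because the mean is harmonic, low-opacity "windows" in \(\kappa_{\nu}\) dominate \(\kappa_{R}\); this is why redistributing bound-free resonance strength into such windows can change the Rosseland mean significantly.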
In this review we focus on the \(bf\)-opacity, and in particular on resonant phenomena manifest in myriad series of autoionizing resonances that dominate photoionization cross sections throughout the energy ranges of interest in practical applications. ## 2 Photoionization Photoionization (PI) of an ion \(X^{+z}\) with ion charge \(z\) into the (e + ion) continuum is \[X^{+z}+h\nu\to X^{+z+1}+\ e. \tag{3}\] PI also entails the indirect process of resonances via formation of autoionizing (AI) doubly-excited states, and subsequent decay into the continuum, as \[h\nu+X^{+Z}\rightleftharpoons(X^{+Z})^{**}\rightleftharpoons X^{+Z+1}+\ e \tag{4}\] Infinite series of AI resonances are distributed throughout the photoionization cross section and generally dominate at lower energies encompassing and converging on to ionization thresholds corresponding to excited levels of the residual ion in the (e + ion) continua. A large number of photoionization cross section values for all bound levels are needed to compute plasma opacities. Total photoionization cross section (\(\sigma_{PI}\)) of each bound level of the (e + ion) system are required, from the ground state as well as from all excited states. Practically however we consider \(n(SLJ)<10\), and approximate relatively small number of energies below thresholds. Total \(\sigma_{PI}\) corresponds to summed contribution of all ionization channels leaving the residual ion in the ground and various excited states. AI resonances in photoionization cross sections are dissolved by plasma density and temperature, resulting in an enhanced continuum background, as discussed later. However the strong and isolated resonances can be seen in absorption spectra. Moreover, a sub-class of AI resonances corresponding to strong dipole transitions within the core ion, known as Photoexcitation-of-core (PEC) or Seaton resonances, correspond to the inverse process of dielectronic recombination [5, 7]. Transition matrix for photoionization \(S=<\Psi_{F}||{\bf D}||\Psi_{B}>\) is obtained from bound and continuum wave functions which give the line strength using the expression above. Photoionization cross section is obtained as \[\sigma_{PI}=\frac{4\pi}{3c}\frac{1}{g_{i}}\omega S, \tag{5}\] where \(\omega\) is the incident photon energy in Rydberg units. ### The Opacity Project and R-matrix Method Astrophysical opacity calculations using the RM method were initiated under the Opacity Project (circa 1983) [4, 5, 6, 7]. The RM opacity codes were developed to compute large-scale and accurate bound-bound (bb) transition oscillator strengths, and bound-free (bf) photoionization cross sections, Considerable effort was devoted to precise delineation of the _intrinsic_ AI resonance profiles in terms of shapes, heights, energy ranges, and magnitudes determined by numerous coupled channels of the (e + ion) system. In the CC-RM method the total (e + ion) system is expressed in terms of the eigenfunctions of the target or core states and a free-electron \[\Psi(E)={\cal A}\sum_{i}\chi_{i}\theta_{i}+\sum_{j}c_{j}\Phi_{j}. \tag{6}\] The \(\chi_{i}\) are target ion wavefunctions in a specific \(S_{i}L_{i}\) state, \(\theta_{i}\) is the free-electron wavefunction, and \(\Phi_{j}\) are bound channel correlation functions with coefficient \(c_{j}\) (viz. [5, 7]). The coupled channel labeled as \(S_{i}L_{i}k_{i}^{2}\ell_{i}(SL\pi)\); \(k_{i}^{2}\) is the incident kinetic energy. 
In contrast, the distorted wave approximation used in current opacity models neglects the summation over channels in Eq. 6, and therefore coupling effects are not considered as in the RM method in an _ab inito_ manner, due to possibly hundreds to thousands of coupled channels for complex ions. That approximation in principle implies neglect of quantum superposition in the distorted wave method, and interference that manifests in autoionizing resonance profiles. The \(bb\), \(bf\) transition matrix elements for the (e + ion) wave functions \(\Psi_{B}(SL\pi;E)\) and \(\Psi_{F}(SL\pi;E^{\prime})\) respectively, bound state \(B\) and \(B^{\prime}\) line strengths (a.u.) are given by \[S(B;B^{\prime})=|\langle\Psi_{B}(E_{B})||{\bf D}||\Psi_{B^{\prime}}(E_{B^{ \prime}})\rangle|^{2}. \tag{7}\] For opacity computations we consider the \(\bf D\) dipole operator, since non-dipole transitions do not in general significant contributors. With the final continuum state represented by \(\Psi_{F}(E^{\prime})\) and the initial state by \(\Psi_{B}(E)\), the photoionization cross section is \[\sigma_{\omega}(B;E^{\prime})=\frac{4}{3}\frac{\alpha\omega}{g_{i}}|\langle \Psi_{B}(E_{B})||{\bf D}||\Psi_{F}(E^{\prime})\rangle|^{2}. \tag{8}\] The \(\omega\) is photon frequency and \(E^{\prime}\) is the photoelectron energy of the outgoing electron. The Breit-Pauli R-matrix (BPRM) incorporates relativistic effects using the the Breit-Pauli (BP) Hamiltonian for the (e + ion) system in BPRM codes in intermediate coupling with a pair-coupling scheme \(S_{i}L_{l}(J_{i})l_{i}(K_{i})s_{i}(J\pi)\)[11], whereby states split into fine-structure levels \(S_{i}L_{i}J_{i}\). Consequently, the number of channels becomes several times larger than the corresponding \(LS\) coupling case. The IP work generally is based on BPRM codes, as for example the large amount of radiative and collisional data in the database NORAD [10]. ### R-Matrix Calculations for Opacities The \(R\)-Matrix codes employed in opacities calculations are considerably different and extensions of the original \(R\)-Matrix codes [6, 5, 7]. The OP codes were later extended under the Iron Project [8] to incorporate relativistic effects and fine structure in the Breit-Pauli approximation [11]. The RM opacity codes were further adapted with new extensions at Ohio State University for complete RM opacity calculations [12, 3]. Fig. 1 shows the flowchart of the RM codes at the Ohio Supercomputer Center (OSC). The atomic structure codes SUPERSTRUCTURE [17] and CIV3 [18], are first utilized to obtain an accurate configuration-interaction representation of the core-ion states. Next, The two \(R\)-Matrix codes STG1 and STG2 are employed to generate multipole integrals and algebraic coefficients for the (e + ion) Hamiltonian corresponding to coupled integro-differential equations in the CC approximation. In the BPRM codes, the code RECUPD recouples the \(LSJ\) pair coupling representation including fine structure explicitly. The total (e + ion) Hamiltonian matrix is diagonalized in STGH. The \(R\)-Matrix basis functions and dipole matrix elements thus obtained are input to code STGB for bound state wavefunctions B, code STGF for continuum wavefunctions, \(bb\) transitions code STGBB, and code STGBF to compute photoionization cross sections. Code STGF(J) may also be used to obtain electron impact excitation collision strengths. 
The immense complexity of RM calculations, compared to DW method and atomic structure calculations, requires substantial computational effort and resources. In particular, inner-shell transitions are often dominant contributors to opacity. But those could not be completed in OP work, except for outer-shell radiative transtions using the RM or BPRM methods due to computational constraints and then available high-performance computing platforms. Therefore, the simpler DW method was used for most of the OP opacity calculations, such as in DW-type methods in other opacity models that also neglect channel couplings and hence _ab initio_ reconsideration of autoionizing resonances in the bound-free continua. A prominent exemplar is the extensive role of _photoexcitation-of-core_ (PEC) resonances, or Seaton resonances [5, 7], associated with strong dipole transitions (viz. [12, 3] for Fe XVII). Despite unprecedented effort and advances, the OP-RM work faced several then intractable difficulties that limited the scope of atomic calculations. Primarily, the limitations were due to computational constraints which, in turn, did not enable accounting for important physical effects and a complete RM calculation of atomic opacities. The main features and deficiencies of OP are as follows: (I) The calculations were in LS coupling neglecting relativistic fine structure, (II) The close coupling wavefunction expansion for the target or the core ion in the (e + ion) system included only a few ground configuration LS terms, (III) Inner-shell excitations could not be included owing to the restricted target ion expansion that precluded photoexcitation of levels from inner shells into myriad resonances in the continua of the residual (e + ion) system, (IV) autoionizing resonances in bound-free photoionization cross sections were delineated within the few excited target terms, (V) Total angular and spin (e + ion) symmetries with large orbital angular-spin quantum numbers were not computed. All of these factors are crucial for a complete, converged and accurate opacity calculation. As mentioned, the OP work initially began with the \(R\)-matrix codes, albeit with very small wavefunction expansions (e + ion) system, usually limited to the ground configuration of the core ion. Thus OP opacities incorporated a small subset of RM data. Rather, most of the opacities contributions were obtained using atomic structure codes and the Distorted Wave (hereafter DW) approximation, similar to other opacity models [6-10]. Figure 1: The R-matrix codes for opacities calculations. Atomic data produced is further processed by a suite of equation-of-state, plasma broadening, and opacity codes to obtain monochromatic and mean opacities at each temperature and density [20]. The first complete RM calculation leading up to the calculation of opacities was carried out for the ion Fe xvii that is of considerable importance in determining the opacity at the base of the solar convection zone (BCZ) ([12], hereafter NP16). The solar radius of the BCZ has been accurately determined through Helioseismology to be 0.713\(\pm\)0.001 R\({}_{\odot}\). Other new physical issues also emerged in RM calculations for opacities. 
There are three major problems that need to be solved: (A) convergence of large coupled channel wavefunction expansions necessary to include sufficient atomic structures manifest in opacity spectra, (B) completeness of high \(n\ell\) contributions up to \(n\equiv\infty\), and (C) attenuation of resonance profiles due to _intrinsic_ autoionization broadening (included in RM calculations in an ab initio manner) and _extrinsic_ plasma effects due to temperature and density, as generally considered for bound-bound line opacity. RM photoionization calculations have been carried for several Fe ions [16]. In particular, large-scale computations of cross sections and transition probabilities have been done for Fe ions that determine iron opacity at the solar BCZ: Fe xvii, Fe xviii, Fe xix, Fe xx and Fe xxi (to be published; S.N. Nahar, private communication). ### R-matrix and Distorted Wave Methods Current opacity models employ the DW approximation or variants thereof. based on an atomic structure calculation coupled to the continuum. Oscillator strengths and photoionization cross sections are computed for all possible bound-bound and bound-free transitions among levels specified by electronic configurations included in the atomic calculation. However, since the DW approximation includes only the coupling between initial and final states, the complexity of interference between the bound and continuum wavefunction expansions involving other levels is neglected, and so are the detailed profiles of autoionizing resonances embedded in the continua. DW models employ the independent resonance approximation that treats the bound-bound transition probability independently from coupling to the continuum. Apart from relative simplicity of atomic computations, the advantages of DW models is that well-established plasma line broadening treatments may be used. On the other hand, RM opacities calculations are computationally laborious and time-consuming. However, as demonstrated in the erstwhile OP-RM work, albeit severely limited in scope, coupling effects are important. Opacity in the bound-free continuum is dominated by autoionizing resonances, as shown in recently completed works (viz. [12, 3, 19]. The most important consequence of neglecting detailed resonance profiles in DW models and missing opacity is that _intrinsic_ autoionizing broadening and _extrinsic_ plasma broadening thereof are not fully accounted for. It has now been shown that AI resonances are broadened much wider in the continuum than lines, and thereby enhance opacity significantly [12, 3]. Recent work ([14], D21) extended Fe xvii RM calculations by including more configurations than NP16a. Whereas that confirmed our earlier results for photoionization cross sections, D21 do not consider plasma broadening of autoionizing resonances and therefore do not obtain a complete description of bound-free opacity from RM calculations (discussed below). The unbroadened cross sections in D21 appear to similar to ours but they did not compare those in detail with previously published data in [12] for Fe xvii, and publicly available from the electronic database NORAD [10]. Also, D21 report 10% lower Rosseland mean opacities than OP2005, which is at variance with other DW models which are higher by up to a factor of about 1.5 ([12, 3], possibly because of incomplete number of bound Fe xvii levels. 
## 3 Inner- and Outer-Shell Excitations Being simpler and based on pre-specified electronic configurations as in atomic structure calculations, inner-shell excitation DW data may be readily computed treating resonances as bound levels in the continuum. Although OP opacities were computed using DW data, OP atomic codes were originally developed to implement the RM methodology that could not be carried through owing to computational constraints. Most importantly it could not be employed for opacities due to inner-shell excitations that are dominant contributors because most electrons in complex ions are in closed shells and whose excitation energies lie above the first ionization threshold, giving rise to series of autoionizing resoances, and in particular PEC resonances due to strong dipole inner-shell trasitions in the core ion [12, 19]. On the other hand, the much simpler DW treatment in opacity models is readily implemented but is inaccurate in the treatment of important resonance phenomena. Extensive comparison of RM and DW calculations for Fe xvii considered herein, and implications for plasma opacities, is given in [12, 13]. ## 4 Plasma broadening of resonances Whereas line broadening has long been studied and its treatments are generally and routinely incorporated in opacity models (viz. [5]), plasma broadening of autoionizing resonance profiles is not heretofore considered. Attenuation of shape, height, energies, and magnitude of autoionizing resonances in photoionization cross sections must be delineated in detail, as in the RM method, as function of density and temperature in order to determine the distribution of total differential oscillator strength and structure of the bound-free continua. AI resonances are fundamentally different from bound-bound lines as related to quasi-bound levels with _intrinsic_ quantum mechanical autoionization widths. Broadening has significant contribution to mean opacities, enhancing the Rosseland mean opacity by factors ranging from 1.5 to 3, as shown in other works and discussed below [19]. However, line broadening processes and formulae may be to develop a theoretical treatment and computational algorithm outlined herein (details to be presented elsewhere). The convolved bound-free photoionization cross section of level may be written as: \[\sigma_{i}(\omega)=\int\tilde{\sigma}(\omega^{\prime})\phi(\omega^{\prime}, \omega)d\omega^{\prime}, \tag{9}\] where \(\sigma\) and \(\tilde{\sigma}\) are the cross sections with plasma-broadened and unbroadened AI resonance structures, \(\omega\) is the photon energy (Rydberg atomic units are used throughout), and \(\phi(\omega^{\prime},\omega)\) is the normalized Lorentzian profile factor in terms of the _total_ width \(\Gamma\) due to all AI broadening processes included: \[\phi(\omega^{\prime},\omega)=\frac{\Gamma(\omega)/\pi}{x^{2}+\Gamma^{2}}, \tag{10}\] where \(x\equiv\omega-\omega^{\prime}\). The crucial difference with line broadening is that AI resonances in the (e + ion) system correspond to and are due to quantum mechanical interference between discretized continua defined by excited core ion levels in a multitude of channels. The RM method (viz. [6, 5, 7]), accounts for AI resonances in an (e + ion) system with generally asymmetric profiles (unlike line profiles that are usually symmetric). 
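A minimal numerical sketch of the convolution in Eqs. (9)-(10) is given below: each energy point of an unbroadened cross section is redistributed with a normalized Lorentzian whose width may vary with energy. The energy grid, the synthetic narrow resonance, and the constant width in the example are placeholders for illustration only; they are not the physical widths discussed below.

```python
import numpy as np

def broaden(omega, sigma_tilde, gamma):
    """Convolve an unbroadened cross section with the normalized Lorentzian of
    Eq. (10), phi(w', w) = (Gamma(w)/pi) / ((w - w')^2 + Gamma(w)^2), as in Eq. (9)."""
    sigma = np.empty_like(sigma_tilde)
    for k, w in enumerate(omega):
        phi = (gamma[k] / np.pi) / ((w - omega)**2 + gamma[k]**2)
        phi /= np.trapz(phi, omega)          # renormalize on the finite grid
        sigma[k] = np.trapz(sigma_tilde * phi, omega)
    return sigma

omega = np.linspace(1.0, 2.0, 4000)          # photon energies (placeholder grid)
resonance = 0.002**2 / ((omega - 1.4)**2 + 0.002**2)
sigma0 = 1.0 + 40.0 * resonance              # background plus one narrow AI-like feature
sigma_b = broaden(omega, sigma0, np.full_like(omega, 0.01))
```

The broadened feature comes out lower and wider than the intrinsic one while, to the accuracy of the quadrature, its integrated strength is preserved, which is the qualitative behaviour described for Fig. 2.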
Given \(N\) core ion levels corresponding to resonance structures, \[\sigma(\omega)=\sum_{i}^{N}\left[\int\tilde{\sigma}(\omega^{\prime})\left[ \frac{\Gamma_{i}(\omega)/\pi}{x^{2}+\Gamma_{i}^{(}\omega)}\right]d\omega^{ \prime}\right]. \tag{11}\] With \(x\equiv\omega^{\prime}-\omega\), the summation is over all excited thresholds \(E_{i}\) included in the \(N\)-level RM wavefunction expansion, and corresponding to total damping width \(\Gamma_{i}\) due to all broadening processes. The profile \(\phi(\omega^{\prime},\omega)\) is centered at each continuum energy \(\omega\), convolved over the variable \(\omega^{\prime}\) and relative to each excited core ion threshold \(i\). In the present formulation we associate the energy to the effective quantum number relative to each threshold \(\omega^{\prime}\rightarrow\nu_{i}\) to write the total width as: \[\Gamma_{i}(\omega,\nu,T,N_{e}) = \Gamma_{c}(i,\nu,\nu_{c})+\Gamma_{s}(\nu_{i},\nu_{s}^{*})\] \[+ \Gamma_{d}(A,\omega)+\Gamma_{f}(f-f;\nu_{i},\nu_{i}^{\prime}),\] pertaining to collisional \(\Gamma_{c}\), Stark \(\Gamma_{s}\), Doppler \(\Gamma_{d}\), and free-free transition \(\Gamma_{f}\) widths respectively, with additional parameters as defined below. We assume a Lorentzian profile factor that subsumes both collisional broadening due to electron impact, and Stark broadening due to ion microfields, that dominate in HED plasmas. This approximation should be valid since collisional profile wings extend much wider as \(x^{-2}\), compared to the shorter range \(exp(-x^{2})\) for thermal Doppler, and \(x^{-5/2}\) for Stark broadening (viz. [5, 19]). In Eq. (11) the limits \(\mp\infty\) are then replaced by \(\mp\Gamma_{i}/\sqrt{\delta}\); \(\delta\) is chosen to ensure the Lorentzian profile energy range for accurate normalization. Convolution by evaluation of Eqs. (1-3) is carried out for each energy \(\omega\) throughout the tabulated mesh of energies used to delineate all AI resonance structures, for each cross section, and each core ion threshold. We employ the following expressions for computations: \[\Gamma_{c}(i,\nu) = 5\left(\frac{\pi}{kT}\right)^{1/2}a_{o}^{3}N_{e}G(T,z,\nu_{i})( \nu_{i}^{4}/z^{2}), \tag{13}\] where T, \(N_{e}\), \(z\), and \(A\) are the temperature, electron density, ion charge and atomic weight respectively, and \(\nu_{i}\) corresponds to a given core ion threshold \(i:\omega\equiv E=E_{i}-\nu_{i}^{2}/z^{2}\) is a continuous variable. The Gaunt factor [19]\(G(T,z,\nu_{i})=\sqrt{3}/\pi[1/2+ln(\nu_{i}kT/z)]\) Another factor \((n_{x}/n_{g})^{4}\) is introduced for \(\Gamma_{c}\) to allow for doubly excited AI levels with excited core levels \(n_{x}\) relative to the ground configuration \(n_{g}\) (e.g. for Fe xviii\(n_{x}=3,4\) relative to the ground configuration \(n_{g}=2\)). A treatment of the Stark effect for complex systems entails two approaches, one where both electron and ion perturbations are combined, or separately (viz. [5, 19]) employed herein. Excited Rydberg levels are nearly hydrogenic, the Stark effect is linear and ion perturbations are the main broadening effect, though collisional broadening competes increasingly with density as \(\nu_{i}^{4}\) (Eq. 13). The total Stark width of a given \(n\) -complex is \(\approx(3F/z)n^{2}\), where F is the plasma electric microfields. Assuming the dominant ion perturbers to be protons and density equal to electrons, we take \(F=[(4/3)\pi a_{o}^{3}N_{e})]^{2/3}\), consistent with the Mihalas-Hummer-Dappen equation-of-state formulation [5]. 
\[\Gamma_{s}(\nu_{i},\nu_{s}^{*})=[(4/3)\pi a_{o}^{3}N_{e}]^{2/3}\nu_{i}^{2}. \tag{14}\] In employing Eq. (12) a Stark ionization parameter \(\nu_{s}^{*}=1.2\times 10^{3}N_{e}^{-2/15}z^{3/5}\) is introduced such that AI resonances may be considered fully dissolved into the continuum for \(\nu_{i}>\nu_{s}^{*}\) (analogous to the Inglis-Teller series limit for plasma ionization of bound levels). Calculations are carried out with and without \(\nu_{s}^{*}\) as shown later in Table 1. The Doppler width is: \[\Gamma_{d}(A,T,\omega)=4.2858\times 10^{-7}\sqrt{(}T/A), \tag{15}\] where \(\omega\) is _not_ the usual line center but taken to be each AI resonance energy. The last term \(\Gamma_{f}\) in Eq. (5) accounts for free-free transitions among autoionizing levels with \(\nu_{i},\nu_{i}^{\prime}\) such that \[X_{i}+e(E_{i},\nu_{i})\longrightarrow X_{i}^{\prime}+e^{\prime}(E_{i}^{\prime },\nu_{i}^{\prime}). \tag{16}\] The large number of free-free transition probabilities for \(+ve\) energy AI levels \(E_{i},E_{i}^{\prime}>0\) may be computed using RM or atomic structure codes (viz. [15]). We utilize new results from an extensive Breit-Pauli R-Matrix (BPRM) calculation with 218 fine structure levels dominated by \(n\leq 4\) levels of the core ion Fe xviii (to be reported elsewhere). A total of 587 Fe xvii bound levels (\(E<0\)) are considered, dominated by configurations \(1s^{2}2s^{2}2p^{6}(^{1}S_{0}),1s^{2}2s^{p}2p^{q}n\ell,[SLJ]\) (\(p,q=0-2,\ n\leq 10,\ \ell\leq 9,\ J\leq 12\)). The core Fe xvii levels included in the RM calculation for the (e + Fe xviii ) \(\rightarrow\)Fe xvii system are:\(1s^{2}2s^{2}2p^{5}(^{2}P_{1/2,3/2}^{o}),1s^{2}2s^{2}2p^{q},n\ell,[S_{i}L_{i}J_{i}]\) (\(p=4,5,\ n\leq 4,\ell\leq 3\)). The Rydberg series of AI resonances correspond to \((S_{i}L_{i}J_{i})\ n\ell,\ n\leq 10,\ell\leq 9\), with effective quantum number defined as a continuous variable \(\nu_{i}=z/\sqrt{(}E_{i}-E)\) (\(E>0\)), throughout the energy range up to the highest 218\({}^{th}\) Fe xviii core level; the \(n=2,3,4\) core levels range from E=0-90.7 Ry ([12]). The Fe xvii BPRM calculations were carried out resolving the bound-free cross sections at \(\sim\)40,000 energies for 454 bound levels with AI resonance structures. Given 217 excited core levels of Fe xviii, convolution is carried out at each energy or approximately \(10^{9}\) times for each (T,\(N_{e}\)) pair. Fig. 2 displays detailed results for unbroadened photoionization cross section (black) and plasma broadened (red and blue, without and with Stark ionization cut-off) The excited bound level of Fe xvii is \(2s^{2}2p^{2}\)\({}^{3}D_{2}\) at temperature-density T=\(2\times 10^{6}\)K and \(N_{e}=10^{23}\)cm\({}^{-3}\). The cross section is shown on the Log\({}_{1}\)0 scale in the top panel, and on a linear scale in the bottom panel isolating the energy region of highest and strongest AI resonances. The main features evident in the figure are as follows. (i) AI resonances show significant plasma broadening and smearing of a multitude of overlapping Rydberg series at The narrower high-\(n\)\(l\) resonances dissolve into the continua but stronger low-\(n\)\(l\) resonance retain their asymmetric shapes with attenuated heights and widths. (ii) At the \(N_{e}=10^{23}\)cm\({}^{-3}\), close to that at the solar BCZ, resonance structures not only broaden but their strengths shift and are redistributed over a wide range determined by total width \(\Gamma(\omega,\nu_{i},T,N_{e})\) at each energy \(\hbar\omega\) (Eq. 12). 
(iii) Stark ionization cut-off (blue curve) results in step-wise structures that represent the average due to complete dissolution into continua. (iv) Integrated AI resonance strengths are conserved, and are generally within 5-10% of each other for all three curves in Fig. 2, It is found that the ratio of RMOs with and without plasma broadening may be up to a factor of 1.6 or higher ([19]); recent work for other ions shows the ratio may be up to factor of 3. Figure 2: Energy-temperature-density dependent photoionization cross section of of highly excited bound level \(2s^{2}2p^{5}3p\)\({}^{2}D_{2}\) of Fe xvii\(\longrightarrow\) e + Fe xviii, due to plasma broadening of autoionizing resonances: unbroadened — black curve, broadened — red and blue (see text). Top panel: Log\(\sigma\) (MB) in the full energy range up to the highest ionization threshold of core ion Fe xviii, bottom panel: Linear-scale \(\sigma_{PI}\) in the energy range of the largest AI structures. The scale and magnitude of new opacity calculations is evident from the fact that photoionization cross sections of 454 bound levels of Fe xvii are explicitly calculated using the RM opacity codes, 1154 levels of Fe xviii, and 899 levels Fe xix. Plasma broadening is then carried out for for each temperature and density of interest throughout the solar and stellar interiors or HED plasma sources. ## 5 Energy Dependence Photoionization cross sections vary widely in different approximations used to calculate opacities. Simple methods such as the _quantum defect method_ and the central-field approximation, yield a feature-less background cross section. High-\(n\) levels in a Rydberg series of levels behave hydrogenically at sufficiently high energies, and the photoionization cross section may be approximated using Kramer's formula (discussed in [7]) \[\sigma_{PI}=(\frac{8\pi}{3^{1.5}c})\frac{1}{n^{5}\omega^{3}}. \tag{17}\] Eq. 17 is used in OP work to extrapolate photoionization cross sections in the high-energy region. However, it is not accurate, as seen in Fig. 3. At high energies inner shells and sub-shells are ionized, and their contribution must also be included in total photoionization cross sections. At inner (sub-)shell ionization thresholds there is a sharp upward jump or edge and enhancement of the photoionization cross section. Fig. 3 shows results from a relativistic distorted wave (RDW) calculation and Kramer's fomula Eq. 17. The RDW results do not include resonances, and differ from the OP results with resonance structures in the relatively small energy region near the ioniization threshold. ## 6 From Convergence to Completeness The NP16 work [12] also addressed an important point that a reasonably complete expansion of target configurations and levels in BPRM photoionization calculations is necessary to ensure converged bound-free opacities. The criteria for accuracy and completeness are: (i) _convergence_ of the wavefunction expansion (Eq. 6), and (ii) _completeness_ of PI cross sections, monochromatic and mean opacities with respect to possibly large number of multiply excited configurations. While NP16 demonstrated convergence with respect to \(n\)=2,3,4 levels of the Fe xviii target ion included in the RM calculations, more highly excited configurations that might affect high-energy behavior were not included. Subsequent work using and comparing with the DW method was therefore carried out to ascertain the effect of high-\(n\ell\) configurations on opacities [15]. 
Specifying excited configurations is straightforward in an atomic structure-DW calculation, but it is more complex and indirect in RM calculations. For example, in order to investigate the role of more excited configurations the NP16 BPRM calculations that yield 454 bound levels Fe xvii, were complemented with \(>50\,000\) high\(n,\ell\) "topup" levels to compute opacities and RMOs. Photoionization cross sections of the 454 strictly bound levels computed (negative eigenenergies) take into account embedded autoionizing resonances that are treated as distinct levels in DW calculations; therefore, in total there are commensurate number of levels to ensure completeness. However, the large number of highly-excited configurations made only a small contribution to opacities, relative to the main BPRM cross sections, and only to the background cross sections. without resonances. Therefore, the simpler DW method may be used for topup cotributions without loss of accuracy as to supplement RM calculations. Recent work has shown that the topup contribution to RM opacities does not exceed 5% to RMOs [20]. ## 7 Sum Rule and Oscillator Strength Distribution The total \(bb\) and integrated \(bf\) oscillator strength, when summed over all possible bb and bf transitions, must satisfy the definition of the oscillator strength as fractional excitation probability, i.e. \(\sum_{j}f_{ij}=N\), where \(N\) is the number of active electrons. But while the \(f\)-sum rule ensures completeness, it does not ensure accuracy of atomic calculations _per se_. That depends on the precise energy distribution of differential oscillator \(df/dE\), strength or photoionization cross section \(\sigma_{PI}\). To wit: the hydrogenic approximation, if used for complex atoms would satisfy the \(f\)-sum rule but would obviosuly be inaccurate. As disussed herein, the RM method is concerned primarily with \(df/dE\) in the \(bf\)-continuum Figure 3: Photoionization cross section \(\sigma_{PI}\) of the ground state of C I, \(1s^{2}2s^{2}2p^{2}\;{}^{3}P\), computed using the relativistic distorted wave (RDW) code by H.L. Zhang (discussed in [7]) compared with the Kramer’s hydrogenic formula Eq. 17. The large jump is due to photoionization of the inner \(1s\)-shell or the K-edge. The resonance structures at very low energies are obtained from the coupled channel RM calculations in the Opacity Project. based on full delineation of autoionizing resonance profiles. As the end result, the RMO depends on energy distribution of monochromatic opacity, convolved over the Planck function at a given temperature. Compared with OP results, the distribution of RM Fe xvii monochromatic opacity is quite different, and much more smoothed out without sharp variations that stem mainly from the treatment of resonances as \(bb\) lines, even with limited autoionization broadening included perturbatively in DW opacity models. Experimentally, a flatter opacity distribution is also observed, in contrast to theoretical opacity models that exhibit larger dips in opacity at "opacity windows" [21, 22, 12, 3]. ## 8 Conclusion This review describes photoionization work related to opacities. The state-of-the-art R-matrix calculations are discussed in comparison with the distorted wave data currently employed in opacity models. Atomic and plasma effects such as channel coupling, broadening of autoionizing resonances, high-energy behavior, and oscillator strength sum-rule are described. 
Existing OP and IP radiative data for photoionization and transition probabilities for astrophysically abundant elements have been archived in the databases TOPbase and TIPbase. OP opacities and radiative accelerations are available online from OPserver [9]. R-matrix data for nearly 100 atoms and ions, from up-to-date and more accurate calculations, are available from the database NORAD at OSU [10]. **Acknowledgements** I would like to thank Sultana Nahar for Fe xvii atomic data and discussions.
2305.01863
GPTutor: a ChatGPT-powered programming tool for code explanation
Learning new programming skills requires tailored guidance. With the emergence of advanced Natural Language Generation models like the ChatGPT API, there is now a possibility of creating a convenient and personalized tutoring system with AI for computer science education. This paper presents GPTutor, a ChatGPT-powered programming tool, which is a Visual Studio Code extension using the ChatGPT API to provide programming code explanations. By integrating Visual Studio Code API, GPTutor can comprehensively analyze the provided code by referencing the relevant source codes. As a result, GPTutor can use designed prompts to explain the selected code with a pop-up message. GPTutor is now published at the Visual Studio Code Extension Marketplace, and its source code is openly accessible on GitHub. Preliminary evaluation indicates that GPTutor delivers the most concise and accurate explanations compared to vanilla ChatGPT and GitHub Copilot. Moreover, the feedback from students and teachers indicated that GPTutor is user-friendly and can explain given codes satisfactorily. Finally, we discuss possible future research directions for GPTutor. This includes enhancing its performance and personalization via further prompt programming, as well as evaluating the effectiveness of GPTutor with real users.
Eason Chen, Ray Huang, Han-Shin Chen, Yuen-Hsien Tseng, Liang-Yi Li
2023-05-03T02:30:13Z
http://arxiv.org/abs/2305.01863v2
# GPTutor: a ChatGPT-powered ###### Abstract Learning new programming skills requires tailored guidance. With the emergence of advanced Natural Language Generation models like the ChatGPT API, there is now a possibility of creating a convenient and personalized tutoring system with AI for computer science education. This paper presents GPTutor, a ChatGPT-powered programming tool, which is a Visual Studio Code extension using the ChatGPT API to provide programming code explanations. By integrating Visual Studio Code API, GPTutor can comprehensively analyze the provided code by referencing the relevant source codes. As a result, GPTutor can use designed prompts to explain the selected code with a pop-up message. GPTutor is now published at the Visual Studio Code Extension Marketplace, and its source code is openly accessible on GitHub. Preliminary evaluation indicates that GPTutor delivers the most concise and accurate explanations compared to vanilla ChatGPT and GitHub Copilot. Moreover, the feedback from students and teachers indicated that GPTutor is user-friendly and can explain given codes satisfactorily. Finally, we discuss possible future research directions for GPTutor. This includes enhancing its performance and personalization via further prompt programming, as well as evaluating the effectiveness of GPTutor with real users. Keywords:ChatGPT, Tutoring System, Developer Tool, Prompt Engineering, Natural Language Generation. ## 1 Introduction Lately, there has been a rise in the need for skilled programmers, and as a result, many individuals are opting to learn coding and pursue lucrative software-related careers. At school, students are crowded in programming courses [1]. Moreover, the gap between learning and practical application requires students to continue learning after entering the workforce. For example, in 2020, 42% of beginner-level technology workers joined the US job market via the Coding Boot Camp [2]. Because of the strong demand for coding education, there is a shortage of teachers, which makes it difficult to provide personalized learning in these classrooms. Some students may feel frustrated. While self-studying and using Google to find solutions to problems can be helpful, there are times when students may require assistance when reading documents or examples for an unfamiliar programming language. Furthermore, it is especially challenging when novice people onboarding a new job and need to catch up by reading others' codes [3]. The code could include domain-specific business logics, which might be unfamiliar to them, and may be uncomment, poorly maintained, or even unclean. This paper presents GPTutor as a remedy to relieve programmers from aforementioned issues. GPTutor is a plugin for Visual Studio Code that uses ChatGPT to provide detailed explanations of source code. With GPTutor, students can conveniently receive personalized explanations for coding problems they encounter. Additionally, those seeking to learn a new programming language can use GPTutor to understand example code. Finally, new employees needing to quickly familiarize themselves with a codebase can use GPTutor to gain insights into the business logic behind each line of code. In sum, the main contributions of this paper are: 1. We developed GPTutor, a Visual Studio Code extension that utilizes the OpenAI ChatGPT API to provide detailed explanations of the given source code. 2. 
We demonstrated and explained why GPTutor surpasses other code explanation applications, such as vanilla ChatGPT or GitHub Copilot, through advanced prompt designs. 3. We discussed potential applications, limitations, and future research directions for programming code explanation applications like GPTutor. ## 2 Background ### Natural Language Generation Natural Language Generation (NLG) is a subfield of artificial intelligence (AI) that uses computer algorithms to produce human-like language output from the given input [4]. NLG aims to generate coherent and contextually appropriate language indistinguishable from human-written language. NLG applications may appear to provide intelligent responses to given questions, but in reality, they simply guess the next words based on the vast amount of data they have read [5]. For example, in Figure 1, suppose the NLG model uses the Artificial Intelligence in Education Conference websites as its training data and receives the prompt input "International Conference on Artificial Intelligence in". In that case, the NLG model may deem "Education" a more probable follow-up to the given input than other words. As a result, the NLG model will complete the phrase with "Education" and then continue to generate possible follow-up text such as "will take place July 3-7, 2023 in Tokyo". The model may also produce results such as "July 27-31, 2022 in Durham" or even a fictitious outcome like "July 20-24, 1969 on the Moon". By providing additional contextual information in the prompt, the likelihood of the desired text being generated increases. Figure 1 demonstrates this phenomenon. When we add "The 24th" to the beginning of the prompt input, the model will be more inclined to generate "July 3-7, 2023 in Tokyo" as output, since the website states that the 24th AIED will be held in 2023 in Tokyo. The technique of designing proper prompts to get the desired output is known as prompt programming [6]. ### Using NLG for Programming Code Explanation We could use prompt programming to employ large language models, such as GPT-3, as a tutor that answers questions based on the given context [4]. For example, if the NLG model was trained with many documents containing programming code and its comments/explanations, the model will be able to explain given code as in the example in Figure 2. Many existing applications, such as GPT-3, ChatGPT, and GitHub Copilot, can perform the NLG explanation shown above in Figure 2. Nevertheless, these applications still have three main limitations, as presented in Figure 3. First, existing NLG code explainers are superficial, as they can only offer insights based on the code present in the current file. Consequently, they may overlook or speculate about the domain logic behind a function. This issue becomes particularly noteworthy when analyzing code with an object-oriented design that imports objects from other files. Second, existing NLG code explainers tend to offer excessive, irrelevant, or even fictitious information. For instance, if a user asks about a single line of code with GitHub Copilot, it may explain the entire file from top to bottom, which is often unnecessary. Figure 1: Example of the probabilities of generating different outputs during the NLG process. Figure 2: Example of input and output when using an NLG model as a code explainer. Lastly, existing NLG code explainers may not be up to date. For example, ChatGPT was only trained with data until 2021 and, therefore, may perform well with popular libraries which had a lot of training data at that time. 
However, it may not provide a satisfactory explanation when dealing with new, unpopular, or private libraries. GPTutor was developed to surpass the aforementioned limitations, as shown in Figure 3. It offers the most concise and accurate explanations. Additionally, it can provide a comprehensive analysis of the provided code by examining the function's source code. **Figure 3.** Example code and the comparison of the explanations from ChatGPT, GitHub Copilot, and GPTutor. ## 3 Implementation of GPTutor In this section, we first describe how we built the GPTutor extension with the Visual Studio Code API. Then, we discuss how we enhance its performance with the ChatGPT API. ### Building GPTutor as a Visual Studio Code Extension We built GPTutor in the Visual Studio Code extension development environment in TypeScript. During the initial setup, the extension asks users to provide their OpenAI API key, which is stored in the extension's global state. Then, when users request an explanation of code through the GPTutor extension by command or hotkey, the extension performs the following steps: 1. Use the "editor.document.languageId" API to determine the language of the file. 2. Use the "editor.document.getText" API to obtain the code for the current file. 3. If the cursor is positioned on a function, GPTutor will additionally use the "editor.action.revealDefinition" API to retrieve the source code behind the function. ### Getting answers from the ChatGPT API with prompt programming Using the data obtained from the above steps, GPTutor creates the prompt shown in Figure 4 for the _gpt-3.5-turbo_ model via the OpenAI API, which was just released on March 1, 2023. We tried several prompts and found that the format shown in Figure 4 yielded the most favorable results. ## 4 Current Results GPTutor has been published on the Visual Studio Code Extension Marketplace at [https://marketplace.visualstudio.com/items?itemName=gptutor.gptutor](https://marketplace.visualstudio.com/items?itemName=gptutor.gptutor), and its source code is openly accessible at [https://github.com/GPTutor/gptutor-extension](https://github.com/GPTutor/gptutor-extension). Preliminary user interviews with students, programming teachers, and coding boot camp tutors indicated that GPTutor is user-friendly and can explain any given code satisfactorily. GPTutor especially impresses users with its remarkable ability to incorporate the relevant source code behind functions into prompts to provide a thorough explanation. Figure 4: The prompt GPTutor feeds into the gpt-3.5-turbo model. ## 5 Discussion and Future Works ### Enhance performance and personalization by prompt programming GPTutor's superior performance compared to other similar applications can be attributed to its use of more relevant code in its prompts. This enables the NLG model to provide more desirable answers. We will continue to enhance GPTutor's performance by optimizing prompts. One possible way is to use heuristic search to identify relevant code in the code base. Then, after transforming the code into many possible prompts, GPTutor could provide various explanations [7] to discover users' preferences and then offer them personalized explanations and a better user experience. ### Evaluate the effectiveness of using GPTutor in the real world We will investigate the impact of GPTutor on students' comprehension of programming by observing how they interact with it to complete programming assignments. 
To assess the effectiveness of GPTutor, we will collaborate with coding course lecturers and utilize the Between-Subjects Design and the Interrupted Time Series Analysis to measure the relationship between the student grades and the frequency of the use of GPTutor. ## 6 Conclusion We created GPTutor, an extension for Visual Studio Code that leverages ChatGPT to provide programming code explanations. GPTutor collects relevant code and utilizes the OpenAI ChatGPT API to explain the chosen code. Comparisons indicate that GPTutor delivers the most concise and accurate explanations compared to Vanilla ChatGPT and GitHub Copilot. We believe that GPTutor can enhance computer science education and offer each student a convenient and personalized learning experience in the future. **Acknowledgement.** This work was supported by the Ministry of Science and Technology of Taiwan (R.O.C.) under Grants 109-2410-H-003-123-MY3 and 110-2511-H-003-031-MY2. We thank the KryptoCamp for the use cases and preliminary evaluation.
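As a closing illustration of the pipeline described in Sections 3.1-3.2, the sketch below shows the prompt-assembly and ChatGPT call in Python. It is only a sketch: the published extension itself is written in TypeScript, the prompt wording is hypothetical (the exact template of Figure 4 is not reproduced here), and the call assumes the pre-v1 `openai` Python client with the `gpt-3.5-turbo` model.

```python
# Minimal sketch of GPTutor's explanation step (hypothetical prompt wording).
# Assumes the pre-v1 openai Python client; the real extension is TypeScript.
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # users supply their own key

def explain_code(language: str, file_code: str, selected_code: str,
                 definition_source: str = "") -> str:
    """Build a context-rich prompt and ask gpt-3.5-turbo to explain the selection."""
    context = f"Here is the {language} file being read:\n{file_code}\n"
    if definition_source:
        # Source code behind the function under the cursor (revealDefinition step).
        context += f"Relevant definition from another file:\n{definition_source}\n"
    prompt = (context
              + f"Explain concisely what the following selected line(s) do:\n{selected_code}")
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful programming tutor."},
            {"role": "user", "content": prompt},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

In the actual extension, `language`, `file_code`, `selected_code`, and `definition_source` would come from the Visual Studio Code APIs listed in Section 3.1, and the explanation string is shown to the user as a pop-up message.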
2301.02645
The Generalized Kauffman-Harary Conjecture is True
For a reduced alternating diagram of a knot with a prime determinant $p,$ the Kauffman-Harary conjecture states that every non-trivial Fox $p$-coloring of the knot assigns different colors to its arcs. In this paper, we prove a generalization of the conjecture stated nineteen years ago by Asaeda, Przytycki, and Sikora: for every pair of distinct arcs in the reduced alternating diagram of a prime link with determinant $\delta,$ there exists a Fox $\delta$-coloring that distinguishes them.
Rhea Palak Bakshi, Huizheng Guo, Gabriel Montoya-Vega, Sujoy Mukherjee, Józef H. Przytycki
2023-01-06T18:47:29Z
http://arxiv.org/abs/2301.02645v1
# The generalized Kauffman-Harary conjecture is true ###### Abstract. For a reduced alternating diagram of a knot with a prime determinant \(p\), the Kauffman-Harary conjecture states that every non-trivial Fox \(p\)-coloring of the knot assigns different colors to its arcs. In this paper, we prove a generalization of the conjecture stated nineteen years ago by Asaeda, Przytycki, and Sikora: for every pair of distinct arcs in the reduced alternating diagram of a prime link with determinant \(\delta\), there exists a Fox \(\delta\)-coloring that distinguishes them. Key words and phrases:Determinants of links, double branched cover, Fox colorings, Kauffman-Harary conjecture, knots and links, pseudo colorings 2020 Mathematics Subject Classification: Primary: 57K10 Secondary: 57M12 ###### Contents * 1 History of the alternation conjecture * 2 Preliminaries * 3 Proof of the generalized Kauffman-Harary conjecture * 4 Non-prime alternating links * 5 Examples of Fox colorings * 6 Odds and ends * 6.1 Pseudo colorings * 6.2 Future directions ## 1. History of the alternation conjecture In 1998, Louis H. Kauffman and Frank Harary formulated the following conjecture [HK]: **Alternation Conjecture**.: _Let \(D\) be a reduced, alternating diagram of a knot \(K\) having determinant \(p\), where \(p\) is prime. Then every non-trivial \(p\)-coloring of \(D\) assigns different colors to different arcs._ This conjecture is now known as the Kauffman-Harary conjecture. It was proved for rational knots [KL, PDDGS], Montesinos knots [APS], some Turk's head knots [DMMS], and for algebraic knots [DS]. In 2009, Thomas W. Mattman and Pablo Solis proved this conjecture using the notion of pseudo colorings. A generalization of this conjecture, known as the generalized Kauffman-Harary (GKH) conjecture, was formulated by Marta M. Asaeda, Adam S. Sikora, and the fifth author in 2004 [APS]. They proved this conjecture for Montesinos links in the same paper. In this paper, we prove it in full generality. The paper is structured as follows. In the next section we introduce the GKH conjecture and we prove it in Section 3. In Section 4, we reformulate and prove the conjecture for non-prime alternating links. We illustrate the results with some examples in Section 5. In the last section, we discuss pseudo colorings followed by some open questions. ## 2. Preliminaries In this section, we state the original and alternate versions of the GKH conjecture. The difference between the original and generalized versions of the conjecture is that the former is about links with prime determinant, while the generalized version is about links with determinant not necessarily prime. It is important to note that the only link whose determinant is prime is the Hopf link. **Generalized Kauffman-Harary Conjecture**.: _If \(D\) is a reduced alternating diagram of a prime link \(L\), then different arcs of \(D\) represent different elements of \(H_{1}(M_{L}^{(2)},\mathbb{Z})\), where \(M_{L}^{(2)}\) denotes the double branched cover of \(S^{3}\) branched along \(L\)._ The GKH conjecture was formulated in [1] using the homology of the double branched cover of \(S^{3}\) branched along \(L\). In this paper we use a diagrammatic version of this conjecture by using the universal1 group of Fox colorings \(Col(D)\) for a prime link \(L\) with diagram \(D\). Footnote 1: Analogous to the fundamental group and the fundamental quandle, this group is often called the fundamental group of Fox colorings. 
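For orientation, here is a small worked instance of the original conjecture (an illustration added here, not taken from the paper). The trefoil knot \(3_{1}\) has determinant \(3\); in its standard reduced alternating diagram the three arcs \(a_{1},a_{2},a_{3}\) all meet at each of the three crossings, so the crossing relations are \[2a_{1}-a_{2}-a_{3}\equiv 0,\qquad 2a_{2}-a_{3}-a_{1}\equiv 0,\qquad 2a_{3}-a_{1}-a_{2}\equiv 0\pmod{3},\] and modulo \(3\) each of them is equivalent to \(a_{1}+a_{2}+a_{3}\equiv 0\pmod{3}\). The non-trivial solutions are exactly the assignments in which \(a_{1},a_{2},a_{3}\) take the three distinct values \(0,1,2\) in some order, for instance \((a_{1},a_{2},a_{3})=(0,1,2)\). Hence every non-trivial Fox \(3\)-coloring of the trefoil assigns different colors to different arcs, as the conjecture predicts for the prime determinant \(p=3\).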
**Definition 2.1**.: _The group \(Col(D)\) is the abelian group whose generators are indexed by the arcs of \(D\), denoted by \(arcs(D)\), and whose relations are \(2b-a-c=0\) given by the crossings of \(D\). More precisely,_ \[Col(D)=\langle\,\text{arcs}(D)\mid 2b-a-c=0\ \text{at each crossing of }D\,\rangle,\] _where at each crossing \(b\) denotes the over-arc and \(a\), \(c\) the under-arcs (see Figure 3.1)._ ## 3. Proof of the generalized Kauffman-Harary conjecture The proof of the GKH conjecture is organized as follows. First, we define the crossing matrix \(C^{\prime}(D)\) and coloring matrix \(L(D)\) of a link diagram \(D\). Following [10] we prove that every column of the coloring matrix represents a non-trivial Fox \(\delta(D)\)-coloring. Then using the fact that the coloring matrix of the mirror image of \(D\) is the transpose of \(L\), we prove part (b), and equivalently, part (a) of Conjecture 2.3. Additionally, we show that the columns of the coloring matrix generate the group \(Col^{red}(D)\) and use this fact to prove part (c) of Conjecture 2.3. **Definition 3.1**.: _A **Fox \(k\)-coloring** of a diagram \(D\) is a function \(f:\text{arcs}(D)\rightarrow\mathbb{Z}_{k}\), satisfying the property that every arc is colored by an element of \(\mathbb{Z}_{k}=\{0,1,2,3,\ldots,k-1\}\) in such a way that at each crossing the sum of the colors of the undercrossings is equal to twice the color of the overcrossing modulo \(k\). That is, if at a crossing \(v\) the overcrossing is colored by \(b\), and the undercrossings are colored by \(a\) and \(c\), then \(2b-a-c\equiv 0\) modulo \(k\). See Figure 3.1 for an illustration. The group of Fox \(k\)-colorings of a diagram \(D\) is denoted by \(Col_{k}(D)\) and the number of Fox \(k\)-colorings is denoted by \(col_{k}(D)\). Analogous to Definition 2.2, we divide the group \(Col_{k}(D)\) by the group of trivial colorings and denote the quotient group by \(Col_{k}^{red}(D)\)._ The matrix describing the space of colorings \(Col(D)\) is referred to, by Mattman and Solis, as the crossing matrix for a fixed arbitrary ordering of the crossings [10]. Here we do not assume that the diagram is alternating. **Definition 3.2**.: _Fix an ordering of the crossings of a reduced link diagram \(D\). Then the set of arcs inherits the order of the set of crossings. In this way, the over-arc has the same index as the crossing. The **crossing matrix2** of \(D\), denoted by \(C^{\prime}(D)\), is an \(n\times n\) matrix such that each row corresponds to a crossing that gives the relation \(2b-a-c=0\) (see Figure 3.1). The entries of the matrix are defined as follows3:_ Footnote 2: The alternative, more descriptive, name could be _unreduced fundamental Fox colorings matrix_. Footnote 3: It is possible that two under-arcs at a crossing are not distinct. Then the relation \(2b-a-c=0\) becomes \(2b-2a=0\). For instance, this may occur for the Hopf link. \[C^{\prime}_{ij}=\left\{\begin{array}{ccc}2&\text{if $a_{j}$ is the over-arc at $c_{i}$ (that is, $i=j$),}\\ -1&\text{if $a_{j}$ is an under-arc at $c_{i}$ ($i\neq j$), and}\\ 0&\text{otherwise.}\end{array}\right.\] Figure 3.1. Fox coloring relation at crossing \(v\). The following lemma holds only for alternating links and plays an important role in the proof of the GKH conjecture. 
**Lemma 3.3**.: _Let \(D\) be a reduced alternating link diagram with crossing matrix \(C^{\prime}(D)\) and let \(\overline{D}\) be its mirror image. Then the matrix \(C^{\prime T}\) is a crossing matrix for \(\overline{D}\)._ Proof.: Denote the crossings of the diagram \(D\) by \(c_{1},\ldots,c_{n}\) and let the over-arc at the crossing \(c_{i}\) be denoted by \(a_{i}\). Notice that, in the matrix \(C^{\prime}(D)\) all entries on the diagonal are \(2\). We obtain \(\overline{D}\) by crossing-change operations and we keep the ordering and names of the crossings. Now, let \(\overline{a_{i}}\) denote the over-arc at the crossing \(c_{i}\) in the diagram \(\overline{D}\). In the row corresponding to the crossing \(c_{i}\), suppose the columns corresponding to the arcs \(a_{j}\) and \(a_{k}\) have \(-1\) as entries. Then in the matrix \(C^{\prime}(\overline{D})\), the column corresponding to \(\overline{a_{i}}\) must have entries \(-1\) in the rows corresponding to the crossings \(c_{j}\) and \(c_{k}\); see Figure 3.2. Recall that if \(\delta(D)\neq 0\), then \(Col^{red}(D)\) is a finite group whose invariant factor decomposition is \(Col^{red}(D)=\mathbb{Z}_{n_{1}}\oplus\mathbb{Z}_{n_{2}}\oplus\cdots\oplus \mathbb{Z}_{n_{s}}\), with \(n_{i+1}|n_{i}\) for all \(i\). Notice that, \(s\) is the minimum number of generators of this group and \(n_{1}\) is the annihilator of the group. Let \(C(D)\) denote the reduced crossing matrix of \(D\), which is the matrix obtained from \(C^{\prime}(D)\) by removing its last row and last column. We call the arc corresponding to the last column of \(C^{\prime}(D)\) the **base arc**. This matrix describes the group \(Col^{red}(D)\). The matrix \(C^{-1}(D)\) is a matrix with rational entries. However, \(n_{1}C^{-1}(D)\) is an integral matrix, which we denote by \(L_{n_{1}}(D)\). Observe that the columns of \(L_{n_{1}}(D)\) modulo \(n_{1}\) represent Fox \(n_{1}\)-colorings of the diagram \(D\) after coloring the base arc by color \(0\). The following result also holds for reduced non-alternating links. **Theorem 3.4**.: _Let \(D\) be a reduced diagram of a link with non-zero determinant. Then the columns of \(L_{n_{1}}(D)\) modulo \(n_{1}\) generate the space of Fox \(n_{1}\)-colorings of \(D\)._ Proof.: Let \(C(D)\) be the reduced crossing matrix of \(D\) and let \(Col^{red}(D)=\mathbb{Z}_{n_{1}}\oplus\mathbb{Z}_{n_{2}}\oplus\cdots\oplus \mathbb{Z}_{n_{s}}\), with \(n_{i+1}|n_{i}\) for all \(i\). After row and column operations, \(C(D)\) can be reduced to its Smith normal form, denoted by \(C_{SNF}(D)\), given below. \[C_{SNF}(D)=\begin{pmatrix}n_{1}&&&&&&&\\ &n_{2}&&&&\text{\Large{0}}&\\ &&\ddots&&&&\\ &&&n_{s}&&&\\ &&&1&&&\\ &\text{\Large{0}}&&&&\ddots&\\ &&&&1\end{pmatrix}\] Its inverse matrix, \(C_{SNF}^{-1}(D)\), with entries in \(\mathbb{Q}\) has the following form. \[C_{SNF}^{-1}(D)=\begin{pmatrix}1/n_{1}&&&&&&&\\ &1/n_{2}&&&&\text{\Large{0}}&\\ &&\ddots&&&&\\ &&&1/n_{s}&&&\\ &\text{\Large{0}}&&&&\ddots&\\ &&&&&&&1\end{pmatrix}\] Thus, we obtain the following integral matrix \(L_{n_{1}}^{SNF}(D)\). 
\[L_{n_{1}}^{SNF}(D)=n_{1}C_{SNF}^{-1}(D)=\begin{pmatrix}n_{1}/n_{1}&&&&&&\\ &n_{1}/n_{2}&&&&&\\ &&\ddots&&&&\\ &&&n_{1}/n_{s}&&&\\ &&&&n_{1}&&\\ &&&&&\ddots&\\ &&&&&&n_{1}\end{pmatrix}\] **Theorem 3.6**.: _If \(Col^{red}(D)=\mathbb{Z}_{n_{1}}\oplus\mathbb{Z}_{n_{2}}\oplus\cdots\oplus\mathbb{Z }_{n_{s}}\), with \(n_{i+1}|n_{i}\), then there are \(s\) Fox \(n_{1}\)-colorings (not necessarily corresponding to the columns of the coloring matrix) which distinguish all arcs. That is, for every pair of arcs of \(D\), one of these \(n_{1}\)-colorings distinguishes them._ Proof.: Denote the generators of the group \(Col^{red}(D)\) by \(a_{1}\), \(a_{2}\),..., \(a_{s}\). Every generator \(a_{i}\) is a linear combination of some columns of the coloring matrix \(L_{n_{1}}(D)\) modulo \(n_{1}\) (see Theorem 3.4). Therefore, they correspond to some coloring of the diagram \(D\). Hence, for every pair of arcs there is a column of \(L_{n_{1}}(D)\) modulo \(n_{1}\) that distinguishes them. **Corollary 3.7**.: _If \(Col^{red}(D)\) is the cyclic group \(\mathbb{Z}_{n_{1}}\),_ 1. _then there exists a non-trivial Fox_ \(n_{1}\)_-coloring that distinguishes all arcs._ 2. _Additionally, if_ \(n_{1}\) _is a prime number, then the original Kauffman-Harary conjecture holds. That is, every non-trivial Fox_ \(n_{1}\)_-coloring distinguishes all arcs._ Proof.: Part (a) follows directly from Theorem 3.6, for \(s=1\). Part (b) follows because every non-zero element of \(\mathbb{Z}_{n_{1}}\) is its generator. ## 4. Non-prime alternating links Theorems 3.5 and 3.6 do not hold as stated for the connected sum of alternating links4 (see part (a) of Lemma 4.1). In Theorem 4.2, we present a version of the GKH conjecture which holds for non-prime alternating links. Footnote 4: The connected sum of alternating links is an alternating link. For example, see [11]. **Lemma 4.1**.: [12] _Let \(D=D_{1}\#D_{2}\) be the connected sum of two link diagrams. Then,_ 1. _the arcs connecting the two components represent the same element in_ \(Col(D)\)_, and_ 2. \(Col^{red}(D_{1}\#D_{2})\cong Col^{red}(D_{1})\oplus Col^{red}(D_{2})\)_._ **Theorem 4.2**.: _Let \(D=D_{1}\#\ D_{2}\#\ \cdots\#D_{n}\), where \(D_{i}\) is a reduced alternating diagram of a prime link \(L_{i}\), for \(i=1,2,\ldots,n\). Then,_ 1. _for any pair of arcs different from arcs joining_ \(D_{i}\) _with_ \(D_{i+1}\)_, there exists a Fox_ \(n_{1}\)_-coloring which distinguishes them, and_ 2. _there are_ \(t\) _(_\(t\leq s\)_) Fox_ \(n_{1}\)_-colorings such that any pair of arcs different from the ones joining_ \(D_{i}\) _with_ \(D_{i+1}\)_, is distinguished by one of them._ Proof.: This result follows from Theorems 3.4 and 3.5, and Lemma 4.1. **Remark 4.3**.: _Theorem 4.2 was formulated for connected sums of diagrams. However, from William W. Menasco's result (see [13]), it follows that if an alternating diagram represents the connected sum of alternating links, then it is already a connected sum of diagrams._ **Example 4.4**.: _Let \(D\) be an alternating diagram of the square knot, that is \(D=\overline{3}_{1}\#3_{1}\), with reduced crossing matrix \(C(D)\) (see Figure 4.1). Then \(Col^{red}(\overline{3}_{1}\#3_{1})=\mathbb{Z}_{3}\oplus\mathbb{Z}_{3}\). Observe that columns 3 and 5 of \(L_{3}(D)\) modulo \(3\) (Figure 4.2) distinguish all pairs of arcs except the ones connecting \(\overline{3}_{1}\) with \(3_{1}\). 
Also, the third row (corresponding to the third crossing in the chosen ordering and, therefore, to the third arc) has all zero entries. That is, the third arc cannot be distinguished from the base arc._ \[C(D)=\begin{pmatrix}2&-1&0&0&0\\ -1&2&-1&0&0\\ -1&-1&2&0&0\\ 0&0&-1&2&-1\\ 0&0&0&-1&2\end{pmatrix}\] 1. _For_ \(n=5\)_,_ \(D(W_{5})\) _is_ \(10_{123}\) _in Rolfsen's table_ [Rol]_._ \(Col^{red}(D(W_{5}))=\mathbb{Z}_{11}\oplus\mathbb{Z}_{11}\)_._ 2. _For_ \(n=6\)_,_ \(D(W_{6})\) _is the link_ \(12_{474}^{3}\) _in Thistlethwaite's tables_ [7, Thi]_._ \(Col^{red}(D(W_{6}))=\mathbb{Z}_{40}\oplus\mathbb{Z}_{8}\)_._ **Example 5.3**.: _The group \(Col^{red}\) for pretzel links is given in Proposition 7 in [APS] and its generalization to Montesinos links is given in Proposition 8 in [APS]. Here we show two examples together with their coloring matrices modulo \(n_{1}\)._ 1. _Let_ \(P(3,3,3,3,3)\) _be a pretzel knot with_ \(15\) _crossings. Its group_ \(Col^{red}\) _is equal to_ \(\mathbb{Z}_{15}\oplus\mathbb{Z}_{3}\oplus\mathbb{Z}_{3}\oplus\mathbb{Z}_{3}\)_. See its coloring matrix,_ \(L(P(3,3,3,3,3))\) _modulo_ \(15\) _in Figure_ 5.4_._ 2. _Let_ \(P(3,3,3,6)\) _be a pretzel knot with_ \(15\) _crossings. Its group_ \(Col^{red}\) _is equal to_ \(\mathbb{Z}_{21}\oplus\mathbb{Z}_{3}\oplus\mathbb{Z}_{3}\)_. See its coloring matrix,_ \(L(P(3,3,3,6))\) _modulo_ \(21\) _in Figure_ 5.5_._ ## 6. Odds and ends ### Pseudo colorings An important tool in our proof of Theorem 3.5 is the idea of pseudo colorings. In [MS] and in this paper, it is shown that no pseudo colorings exist for reduced, prime, alternating link diagrams. However, the existence of pseudo colorings can be used to see how far a diagram is from being an alternating link diagram. In this section, we briefly explore this concept. Figure 5.2. The knot \(7_{7}\) with a Fox \(21\)-coloring which distinguishes all arcs. Figure 5.3. The knot \(D(W_{5})\) with two Fox colorings distinguishing all arcs (on the left), and the link \(D(W_{6})\) with two Fox colorings distinguishing all arcs (on the right). In [MS], Proposition 3.2 depends on the fact that for reduced alternating diagrams the rows of the crossing matrix add to zero. This does not hold for non-alternating diagrams, as we illustrate in the following examples. **Definition 6.1**.: _Let \(D\) be a link diagram and \(\epsilon\in\{-1,+1\}\). Following Mattman and Solis [MS], we define an \(\epsilon\)**-pseudo coloring** of \(D\) as colorings of the arcs of \(D\) such that, at all but two crossings the Fox coloring convention \(2b-a-c=0\) is satisfied. We denote the other two crossings by \(c_{+1}\) and \(c_{\epsilon}\), where the coloring conventions are \(2b-a-c=+1\) and \(2b-a-c=\epsilon\), respectively. To obtain the pseudo colorings as defined in [MS], put \(\epsilon=-1\)._ For an alternating link diagram \(D\), our convention was to order crossings first and then, the set of arcs inherits the order of the set of crossings. Compare Definition 3.1. The reason for such a choice is that \(C^{\prime}(\overline{D})\) is the same as \(C^{\prime}(D)^{T}\). This does not work for non-alternating link diagrams. Figure 5.4. \(L(P(3,3,3,3,3))\) modulo \(15\). The colorings given by the columns \(3\), \(4\), and \(10\) distinguish all arcs. Figure 5.5. 
\(L(P(3,3,3,6))\) modulo \(21\). The colorings given by columns \(1\) and \(6\) distinguish all arcs. In general, we can arbitrarily order crossings and arcs. In Figure 6.1 we give an example of ordering crossings and arcs for the knot \(8_{19}\). We first choose a base point and an orientation (shown by an arrow on the left-hand side of Figure 6.1). Starting at this base point, we move along the knot and order crossings. Next, arcs can be ordered arbitrarily with the base arc always being the last one. In Figure 6.1 the first coordinate gives the number of the crossing and the second one gives the number of the arc. In the following example, we analyze non-split, non-prime alternating diagrams. **Example 6.2**.: _Let \(D=D_{1}\# D_{2}\) be a non-split, non-prime alternating link diagram. \(D\) always has a \(-1\)-pseudo coloring using color \(1\) on \(D_{1}\) and color \(0\) on \(D_{2}\). We illustrate this idea for the square knot \(\overline{3}_{1}\# 3_{1}\) in Figure 6.2._ On the other hand, non-alternating link diagrams often have \(-1\)-pseudo colorings and \(+1\)-pseudo colorings. See Examples 6.3 and 6.4. If the determinant of a knot with diagram \(D\) is equal to \(1\), we have \(L(D)=C^{-1}(D)\) and every column of \(C^{-1}(D)\) colors the first \(n-1\) arcs of the diagram. Then for a complete \(\epsilon\)-pseudo coloring of \(D\), we color the last (base) arc \(a_{n}\) by color \(0\). **Example 6.3**.: _Consider the braid word \(\sigma_{2}^{3}\sigma_{1}\sigma_{3}^{-1}\sigma_{2}^{-2}\sigma_{1}\sigma_{2}^{ -1}\sigma_{1}\sigma_{3}^{-1}\) whose closure is the Conway knot. The determinant of this knot is \(1\) and its crossing matrix \(C^{\prime}(D)\) is given in Figure 6.4. The \(+1\)-pseudo coloring given by column \(4\) and the \(-1\)-pseudo coloring given by column \(1\) in the matrix shown in Figure 6.5 are illustrated in Figure 6.3 on the left and on the right, respectively._ Figure 6.1. The torus knot \(T(3,4)\) (\(8_{19}\) in Rolfsen's table [Rol]) depicted as the pretzel knot \(P(3,3,-2)\) showing ordering of crossings and arcs (on the left). On the right, there is a pseudo coloring given by the second column; compare Remark 6.5. Figure 6.2. \(-1\)-pseudo coloring of the square knot with the \(+1\)-crossing denoted by \(c_{+1}\) and the \(-1\)-crossing denoted by \(c_{-1}\). **Example 6.4**.: _Consider the torus knot \(T(3,4)\) with diagram \(D\) and crossings and arcs ordered as illustrated in Figure 6.1. Its crossing matrix is shown in Figure 6.7. Three columns of \(C^{-1}(D)\) (shown in Figure 6.8) are integral and they yield \(\epsilon\)-pseudo colorings. Column 5 gives a \(-1\)-pseudo coloring (shown on the right in Figure 6.6) and columns 1 and 2 give \(+1\)-pseudo colorings. The \(+1\)-pseudo coloring corresponding to column \(1\) is shown on the left of Figure 6.6._ Non-alternating link diagrams always have \(\epsilon\)-pseudo colorings, as we describe in the following remark. **Remark 6.5**.: _Let \(D\) be a non-alternating link diagram._ 1. _Every integral column of_ \(C^{-1}(D)\) _leads to some_ \(\epsilon\)_-pseudo coloring._ 2. \(D\) _has an_ \(\epsilon\)_-pseudo coloring. This follows from the fact that every non-alternating diagram has a tunnel of length at least two. Now, we can color_ \(D\) _by coloring one of the arcs of the tunnel by color_ \(-1\) _and Figure 6.3. The Conway knot with \(+1\)-pseudo coloring (on the left) and with \(-1\)-pseudo coloring (on the right). 
The last crossing \(c_{+1}\) in the left figure changes to \(c_{-1}\) in the right figure. Figure 6.4. The crossing matrix of the Conway knot. Notice that the rows of the crossing matrix satisfy the linear equation \(R_{1}-R_{2}-R_{3}-R_{4}-R_{5}-R_{6}-R_{7}+R_{8}+R_{9}+R_{10}+R_{11}=0\). all other arcs by color \(0\) to get the \(+1\)-pseudo coloring. An example of such a coloring is shown on the right-hand side of Figure 6.1._ ### Future directions The Kauffman-Harary conjecture was extended to the case of virtual knots by Mathew Williamson [Wil] and proved by Zhiyun Cheng [Che]. A natural question is to ask whether the conjecture in [APS] holds for virtual links whose determinants are not prime. Another path of further Figure 6.5. Coloring matrix for the Conway knot. The last row of zeroes correspond to the coloring of the base arc. Figure 6.6. The torus knot \(T(3,4)\) (\(8_{19}\) in the Rolfsen’s table [Rol]) depicted as the pretzel knot \(P(3,3,-2)\). Figure 6.7. The crossing matrix \(C^{\prime}(P(3,3,-2))\). The rows satisfy the linear relation \(R_{1}+R_{2}-R_{3}-R_{4}-R_{5}-R_{6}-R_{7}-R_{8}=0\). research is to look for a natural generalization to non-alternating diagrams using a set theoretic Yang-Baxter operator or a general Yang-Baxter operator. An interesting prospect is to approach the generalized Kauffman-Harary conjecture from the perspective of incompressible surfaces in the double branched cover \(M_{L}^{(2)}\) of \(S^{3}\) branched along \(L\). This was outlined in [1] with the hope of proving the GKH conjecture. Now that the GKH conjecture is proved, we can proceed in the opposite direction and analyze incompressible surfaces in \(M_{L}^{(2)}\). ## Acknowledgements The first author acknowledges the support of Dr. Max Rossler, the Walter Haefner Foundation, and the ETH Zurich Foundation. The third author acknowledges the support of the National Science Foundation through Grant DMS-2212736. The fourth author was supported by the American Mathematical Society and the Simons Foundation through the AMS-Simons Travel Grant. The fifth author was partially supported by the Simons Collaboration Grant 637794.
2310.14766
End-to-End Learning of Behavioural Inputs for Autonomous Driving in Dense Traffic
Trajectory sampling in the Frenet(road-aligned) frame, is one of the most popular methods for motion planning of autonomous vehicles. It operates by sampling a set of behavioural inputs, such as lane offset and forward speed, before solving a trajectory optimization problem conditioned on the sampled inputs. The sampling is handcrafted based on simple heuristics, does not adapt to driving scenarios, and is oblivious to the capabilities of downstream trajectory planners. In this paper, we propose an end-to-end learning of behavioural input distribution from expert demonstrations or in a self-supervised manner. Our core novelty lies in embedding a custom differentiable trajectory optimizer as a layer in neural networks, allowing us to update behavioural inputs by considering the optimizer's feedback. Moreover, our end-to-end approach also ensures that the learned behavioural inputs aid the convergence of the optimizer. We improve the state-of-the-art in the following aspects. First, we show that learned behavioural inputs substantially decrease collision rate while improving driving efficiency over handcrafted approaches. Second, our approach outperforms model predictive control methods based on sampling-based optimization.
Jatan Shrestha, Simon Idoko, Basant Sharma, Arun Kumar Singh
2023-10-23T10:06:13Z
http://arxiv.org/abs/2310.14766v1
# End-to-End Learning of Behavioural Inputs for Autonomous Driving in Dense Traffic ###### Abstract Trajectory sampling in the Frenet(road-aligned) frame, is one of the most popular methods for motion planning of autonomous vehicles. It operates by sampling a set of behavioral inputs, such as lane offset and forward speed, before solving a trajectory optimization problem conditioned on the sampled inputs. The sampling is handcrafted based on simple heuristics, does not adapt to driving scenarios, and is oblivious to the capabilities of downstream trajectory planners. In this paper, we propose an end-to-end learning of behavioral input distribution from expert demonstrations or in a self-supervised manner. We embed a novel differentiable trajectory optimizer as a layer in neural networks, allowing us to update behavioral inputs by considering the optimizer's feedback. Moreover, our end-to-end approach also ensures that the learned behavioral inputs aid the convergence of the optimizer. We improve the state-of-the-art in the following aspects. First, we show that learned behavioral inputs substantially decrease collision rate while improving driving efficiency over handcrafted approaches. Second, our approach outperforms model predictive control methods based on sampling-based optimization. ## I Introduction The planning layer for autonomous driving includes two hierarchical components. At the top level, the behavioral layer computes decisions such as lane change, speeding up, and braking based on the traffic scenario and the driving task. The behavioral inputs can be parameterized as set points for longitudinal velocity, lateral offsets from the center line, and goal positions. Such representation naturally integrates with the downstream optimal trajectory planner [1][2, 3]. _Existing Gaps:_ Existing approaches [1, 2, 4] for computing optimal behavior inputs and motion plans consist of two steps (see Fig.1). The behavioral inputs are sampled based on simple heuristics and then fed to the downstream trajectory optimizer. The resulting trajectories are then ranked based on their performance on the driving tasks, modeled through some meta costs- cruising speed, collision avoidance, etc. There are three fundamental problems associated with the existing approaches. First, the behavioral input sampling is handcrafted, usually sampled from a pre-specified grid. Second, the sampling does not adapt to driving scenarios and the capabilities of downstream trajectory optimizers. Third, the planner itself is just a simple Quadratic Programme (QP) without explicit collision-avoidance and kinematic constraints. This paper presents an end-to-end learning method addressing the core problems discussed above. The end-to-end aspect of our approach signifies that we jointly learn behavioral inputs and initialization for our trajectory optimizer while considering their interactions (see Remark III-B). Our core innovations and their associated benefits are summarized below. _Algorithmic Contribution:_ * We propose a supervised and also self-supervised approach for learning behavioral inputs. For the former, we use a Conditional Variational Autoencoder(CVAE) [5] that directly learns a distribution over optimal behavioral inputs. For the latter, we use Multi-Layer Perceptron (MLP) and treat its output as the mean of the distribution. * We propose a differentiable constrained optimizer that improves QP-based planning and can also be embedded as a layer in neural networks. 
The resulting backpropagation traces the gradient of the loss function through the differentiable constrained optimizer. We show that our optimizer has an efficient batchable structure and allows for the pre-storing of expensive computations such as matrix factorizations. _State-of-the-Art Performance:_ * Our end-to-end learning approach outperforms planning with handcrafted behavioural inputs (e.g. [6]) in collision rate and achieved speed metrics. The performance gap increases as the traffic becomes dense. * We also achieve a lower collision rate than Model Predictive Path Integral (MPPI), a state-of-the-art sampling-based optimizer. Fig. 1: Comparison between existing and proposed pipeline for motion planning in autonomous driving. We differ in two respects. First, we sample behavioral inputs from a learned distribution. Second, the downstream trajectory planner has a projection optimizer to aid in the satisfaction of collision and kinematic constraints. We also learn the optimizer initialization along with the behavioral inputs. ## II Mathematical Preliminaries Symbols and NotationNormal font lower-case letters will represent scalars, and bold font variants will represent vectors. The upper-case bold font letters will represent matrices. The superscript \(T\) will denote the transpose of a matrix or a vector. ### _Frenet Frame and Trajectory Parametrization_ We formulate motion planning of the ego-vehicle in the road-aligned reference known as the Frenet frame. In this setting, the longitudinal and lateral motions of the ego-vehicle are always aligned with the \(X\) and \(Y\) axes of the Frenet-frame respectively. We parametrize the positional space (\(x(t),y(t)\)) of the ego-vehicle in the Frenet frame at any time instant \(t\) in terms of polynomials: \[\left[x(t_{0}),\ldots,x(t_{f})\right]=\textbf{W}\textbf{c}_{x},\left[y(t_{0}),\ldots,y(t_{f})\right]=\textbf{W}\textbf{c}_{y}, \tag{1}\] where, **W** is a matrix formed with time-dependent polynomial basis functions and (\(\textbf{c}_{x},\textbf{c}_{y}\)) are the coefficients of the polynomial. We can also express the derivatives in terms of \(\dot{\textbf{W}},\dot{\textbf{W}}\). ### _Behavioral Input Parametrization_ We summarize the commonly used behavioural inputs below * \(\textbf{p}_{d}=(y_{d},v_{d})\): The desired lateral offset from the centre line and longitudinal speed. * \(\textbf{p}_{term}=(x_{f},y_{f},\dot{x}_{f},\dot{y}_{f},\ddot{x}_{f},\ddot{y}_ {f})\): Final states along the longitudinal and lateral directions. We stack all the behavioural inputs into one parameter vector: \[\textbf{p}=\left[\textbf{p}_{d},\textbf{p}_{term}\right]. \tag{2}\] Note that not all elements of **p** need to be used simultaneously in the downstream planner. For example, [7], use a single set-point for lateral offset and desired velocity as behavioural inputs while authors in [6] uses only \((x_{f},y_{f})\). It is also possible to expand the list. For longer horizons, we can split the planning horizon segments into \(m\) parts and assign individual lateral offsets \(y_{d,m}\) and desired speed \(v_{d,m}\) to each of these segments. ### _Existing Behavioural and Trajectory Planning_ #### Ii-C1 Downstream Trajectory Planner We can obtain different formulations for the trajectory planner, depending on the choice of behavioural inputs. We present below a generic construction that draws inspiration from [1, 2, 4, 6] and work for all the behavioural inputs presented in the previous subsection. 
\[\min\sum_{t}c_{s}+c_{l}+c_{v} \tag{3a}\] \[(x^{(r)}(t_{0}),y^{(r)}(t_{0}))=\textbf{b}_{0},(x^{(r)}(t_{f}),y^ {(r)}(t_{f}))=\textbf{p}_{term}\] (3b) \[c_{s}(\ddot{x}(t),\ddot{y}(t))=\ddot{x}(t)^{2}+\ddot{y}(t)^{2}\] (4a) \[c_{l}(\ddot{y}(t),\dot{y}(t))=(\ddot{y}(t)-\kappa_{p}(y(t)-y_{d} )-\kappa_{v}\dot{y}(t))^{2}\] (4b) \[c_{v}(\dot{x}(t),\ddot{x}(t))=(\ddot{x}(t)-\kappa_{p}(\dot{x}(t) -v_{d}))^{2} \tag{4c}\] The first term \(c_{s}(.)\) in the cost function (3a) ensures smoothness in the planned trajectory by penalizing high accelerations at discrete time instants. The last two terms (\(c_{l}(.),c_{v}(.)\)) model the tracking of lateral offset (\(y_{d}\)) and forward velocity (\(v_{d}\)) set-points respectively and is inspired from works like [7]. For the former, we define a Proportional Derivative (PD) like tracking with gain \((\kappa_{p},\kappa_{v})\). It induces lateral accelerations that will make the ego-vehicle converge to the \(y_{d}\). The derivative terms in \(c_{l}\) minimize oscillations while converging to the desired lateral offset. For velocity tracking, we only use a proportional term. Equality constraints (3b) ensures boundary conditions on the \(r^{th}\) derivative of the planned trajectory. We use \(r=\{0,1,2\}\) in our formulation. Optimization (3a)-(3b) is a convex QP. To make this form more explicit, we can use the parametrization proposed in (1) to put the above optimization into a more compact form \[\operatorname*{arg\,min}_{\boldsymbol{\xi}}\frac{1}{2}\boldsymbol{\xi}^{T} \textbf{Q}\boldsymbol{\xi}+\textbf{q}^{T}(\textbf{p})\boldsymbol{\xi}, \tag{5a}\] \[\textbf{A}\boldsymbol{\xi}=\textbf{b}(\textbf{p}) \tag{5b}\] where \(\boldsymbol{\xi}=(\textbf{c}_{x},\textbf{c}_{y})\). A part of **p** that models lateral offsets and desired velocity enters the cost function while the rest enters the r.h.s of the equality constraints. #### Ii-C2 Sampling and meta Cost Let \(\textbf{p}_{j}\) be the \(j^{th}\) behavioral input sampled from a fixed distribution (or a grid). Existing works like [1, 2, 4, 6] solve (5a)-(5b) for all \(\textbf{p}_{j}\) and rank the resulting trajectories based on some higher-level (meta) cost function. Let \(\boldsymbol{\xi}_{j}^{*}=(\textbf{c}_{x}^{*},\textbf{c}_{y}^{*})\) be the resulting optimal trajectory coefficients corresponding to \(\textbf{p}_{j}\). Accordingly, the meta cost used in this work to model the driving task can be defined as follows: \[c_{m}(\boldsymbol{\xi})=c_{res}(\boldsymbol{\xi}^{*})+\left\|\dot{\textbf{W}} \textbf{c}_{x}^{*}-v_{des}\right\|_{2}^{2}, \tag{6}\] where \(c_{res}\) measures the residual (violation) of kinematic and collision avoidance constraints. The second term in \(c_{m}\) measures the deviation from some desired longitudinal speed. In dense traffic scenarios, a heuristic sampling of \(\textbf{p}_{j}\) is likely to lead to a high meta-cost for all trajectories. In the next section, we introduce our main result; replacing the hand-crafted sampling with a neural network trained in an end-to-end fashion. ## III Main Results Fig.1(b) presents an overview of our main algorithmic results that have two key components. First, our trajectory planner consists of QP (5a)-(5b) augmented with a differentiable projection module. Second, the behavioural inputs are sampled from a learned distribution. We present the first component next. 
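To make the structure of (5a)-(5b) concrete before moving on, the following is a minimal illustrative sketch (not the paper's released code; toy matrices only) of how such an equality-constrained QP is solved through its KKT system. Because there are no inequality constraints, the optimum is an affine function of \((\textbf{q}(\textbf{p}),\textbf{b}(\textbf{p}))\), so a batch of behavioural-input samples \(\textbf{p}_{j}\) can share a single factorization; this is the same structure that the projection step exploits later in (11)-(12).

```python
# Illustrative sketch: the equality-constrained QP (5a)-(5b),
#   min_xi 0.5 * xi^T Q xi + q(p)^T xi   s.t.   A xi = b(p),
# solved via its KKT linear system. The KKT matrix is shared across all
# behavioural-input samples, so one solve handles the whole batch.
import numpy as np

def solve_eq_qp_batch(Q, A, q_batch, b_batch):
    """Q: (n, n), A: (m, n); q_batch: (B, n), b_batch: (B, m) -> xi: (B, n)."""
    n, m = Q.shape[0], A.shape[0]
    kkt = np.block([[Q, A.T],
                    [A, np.zeros((m, m))]])            # same matrix for every sample
    rhs = np.concatenate([-q_batch, b_batch], axis=1)  # one right-hand side per sample
    sol = np.linalg.solve(kkt, rhs.T).T                # single solve, many RHS
    return sol[:, :n]                                  # discard the multipliers

# Toy usage: 3 behavioural-input samples, 4 trajectory coefficients, 2 boundary conditions.
rng = np.random.default_rng(0)
Q, A = np.eye(4), rng.standard_normal((2, 4))
q_batch, b_batch = rng.standard_normal((3, 4)), rng.standard_normal((3, 2))
xi = solve_eq_qp_batch(Q, A, q_batch, b_batch)
assert np.allclose(A @ xi.T, b_batch.T)                # boundary conditions hold
```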
### _Differentiable Constrained Optimizer_ Our projection optimizer has the following form \[\overline{\boldsymbol{\xi}}_{j}^{*} =\operatorname*{arg\,min}_{\overline{\boldsymbol{\xi}}_{j}^{*}} \frac{1}{2}\|\overline{\boldsymbol{\xi}}_{j}^{*}-\boldsymbol{\xi}_{j}^{*}\|_{2}^ {2} \tag{7}\] \[\textbf{A}\overline{\boldsymbol{\xi}}_{j}^{*} =\textbf{b}(\textbf{p}_{j}),\qquad\textbf{g}(\overline{\boldsymbol {\xi}}_{j}^{*})\leq\textbf{0} \tag{8}\] The cost function (7) aims to perform a minimal change to the output of the QP (5a)-(5b) in order to satisfy the constraints. The inequalities in (8) model collision avoidance, kinematic and lane bounds. We present their algebraic form in Appendix VII. Therein, we also show that inequality constraints can be reformulated to induce a special structure in our projection optimization. In particular, (7)-(8) can be reduced to the fixed point iterations (9)-(10), wherein \(k\) represents the iteration index. \[{}^{k+1}\mathbf{e}_{j},{}^{k+1}\mathbf{\lambda}_{j}=\mathbf{h}({}^{k}\overline{ \mathbf{\xi}}_{j}^{*},{}^{k}\mathbf{\lambda}_{j}) \tag{9}\] \[{}^{k+1}\overline{\mathbf{\xi}}_{j}^{*}=\arg\min_{\overline{\mathbf{\xi}}_{j} }\frac{1}{2}\|\overline{\mathbf{\xi}}_{j}^{*}-\mathbf{\xi}_{j}^{*}\|_{2}^{2}+ \frac{\rho}{2}\left\|\mathbf{F}\overline{\mathbf{\xi}}_{j}^{*}-{}^{k+1} \mathbf{e}_{j}\right\|_{2}^{2}\] \[-{}^{k+1}\mathbf{\lambda}_{j}^{T}\overline{\mathbf{\xi}}_{j}^{*},\qquad \mathbf{A}\overline{\mathbf{\xi}}_{j}^{*}=\mathbf{b}(\mathbf{p}_{j}) \tag{10}\] In (9)-(10), \(\mathbf{F}\) represents a constant matrix and \(\mathbf{h}\) is some closed-form analytical function. We derive these entities in Appendix VII. The main cost of projection optimization stems from solving the QP (10). However, since there are no inequality constraints in (10), the QP essentially boils down to an affine transformation of the following form: \[({}^{k+1}\overline{\mathbf{\xi}}_{j}^{*},{}^{k+1}\nu)=\mathbf{M}\boldsymbol{ \eta}({}^{k}\overline{\mathbf{\xi}}_{j}^{*}), \tag{11}\] \[\mathbf{M}=\begin{bmatrix}\mathbf{I}+\rho\mathbf{F}^{T}\mathbf{F}&\mathbf{A}^ {T}\\ \mathbf{A}&\mathbf{0}\end{bmatrix}^{-1},\boldsymbol{\eta}=\begin{bmatrix}-\rho \mathbf{F}^{Tk+1}\mathbf{e}_{j}+{}^{k+1}\mathbf{\lambda}_{j}+\mathbf{\xi}_{j} ^{*}\\ \mathbf{b}(\mathbf{p}_{j})\end{bmatrix} \tag{12}\] Fig.2 presents an unrolled perspective of our projection optimizer. As can be seen, it takes \(\mathbf{\xi}_{j}^{*}\) as the input along with the initial guess for the solution \({}^{k}\overline{\mathbf{\xi}}_{j}^{*}\), and parameter \({}^{k}\mathbf{\lambda}_{j}\) at \(k=0\). The latter is the so-called Lagrange multiplier associated with inequality constraints. The initial guesses are then gradually updated by recursively passing them through the \(\mathbf{h}(.)\) and \(QP(.)\) blocks a specified number of times. The following important features of our projection optimizer are crucial for building our end-to-end learning pipeline. **Differentiability:** Both the \(\mathbf{h}(.)\) and \(QP(.)\) blocks are differentiable since the former is a closed-form function and the latter reduces to simply an affine transformation (11). This allows us to compute how the output of the projection optimizer will vary if either the input or the initialization values will change. More generally, let \(\mathcal{L}(\overline{\mathbf{\xi}}_{j}^{*})\) be some loss function defined over the output of the projection optimizer. 
Due to the differentiability property, we can efficiently obtain the gradients \(\nabla_{\mathbf{\xi}_{j}^{*}}\mathcal{L}\), \(\nabla_{\mathbf{\xi}_{j}^{*}}\mathcal{L}\), \(\nabla_{\mathbf{\lambda}_{j}}\mathcal{L}\), etc. **Batchable Structure:** Besides, being differentiable, we need the projection optimizer to be batchable for it to be easily embedded into the neural network pipeline [8]. In other words, we should be able to compute the projection for several \(\mathbf{\xi}_{j}^{*}\) in parallel. To this end, we recall (10)-(11) and note that the \(QP(.)\) block in Fig.2 essentially reduces to a matrix-vector product that can be trivially batched. Moreover, the matrix \(\mathbf{M}\) in (11) is independent of the batch index and thus needs to be computed only once. In fact, for the learning pipelines discussed later, we pre-store \(\mathbf{M}\) before the training is started. ### _Supervised Learning with CVAE_ In this section, we derive a Behaviour Cloning (BC) framework to learn a policy that maps observations \(\mathbf{o}\) directly to optimal behavioral inputs \(\mathbf{p}\). Typically in BC, we assume that we have access to a dataset \((\mathbf{o},\boldsymbol{\tau}_{e})\) that demonstrates the expert (optimal) trajectory \(\boldsymbol{\tau}_{e}\) for each observation vector \(\mathbf{o}\). However, we cannot directly access a demonstration of the optimal behavioral input \(\mathbf{p}\) employed by the expert. Instead, we have their indirect observation through \(\boldsymbol{\tau}_{e}\). Thus, our problem is more complicated than the typical BC setup. We address these challenges using an unconventional architecture combining feedforward and differentiable optimization layers [8] to learn the optimal behavioral inputs from expert trajectory demonstrations. An overview of our approach is illustrated in Fig.3 (a). The learnable weights are present only in the feedforward layers. It takes in observations \(\mathbf{o}\) to output the behavioral inputs \(\mathbf{p}\) and the Lagrange multipliers \(\boldsymbol{\lambda}\) (recall (10)), which is fed to the differentiable optimizer resulting in optimal trajectory coefficients \(\overline{\mathbf{\xi}}^{*}\). The BC loss is computed over \(\overline{\mathbf{\xi}}^{*}\). The backpropagation required for updating the weights of the feedforward layer needs to trace the gradient of the loss function through the optimization layer. **Need for CVAE:** We want our learned policy to induce a distribution over \(\mathbf{p}\) so that for each observation \(\mathbf{o}\), we can then draw samples \(\mathbf{p}_{j}\) from it and solve the trajectory optimizations conditioned on them. With this motivation, we use a deep generative model called CVAE [5], illustrated in Fig.3 (b) as our learning pipeline. It consists of an encoder-decoder architecture constructed from a multi-layer Fig. 2: Unrolled representation of our projection optimizer formed by a repeated stacking of an analytical function \(\mathbf{h}(.)\) and a \(QP(.)\) block. We can compute the gradient of any loss function defined on the output \(\overline{\mathbf{\xi}}_{j}\) with respect to any intermediate value of the unrolling pipeline. perceptron (MLP) with weights \(\mathbf{\phi}\) and \(\mathbf{\theta}\) respectively. The decoder network also has an optimization layer that takes the output (\(\mathbf{p}\)) of its MLP to produce an estimate of optimal trajectory coefficients \(\overline{\mathbf{\xi}}^{*}\). 
The encoder network maps \((\mathbf{o},\mathbf{\tau}_{e})\) to a latent variable \(\mathbf{z}\) with distribution \(\mathbf{q}_{\mathbf{\phi}}(\mathbf{\mu}(\mathbf{\phi}),\mathbf{\Sigma}(\mathbf{\phi}))\). The covariance matrix \(\mathbf{\Sigma}(\mathbf{\phi})\) is diagonal and formed with the vector \(\mathbf{\sigma}^{2}\) produced by the encoder. The decoder then maps this latent distribution to \(\mathbf{p}_{\mathbf{\theta}}(\overline{\mathbf{\xi}}^{*}|\mathbf{z},\mathbf{o})\) through its MLP and optimization layers. In the training (offline) phase, both the networks are trained end-to-end with loss function (13), where \(\overline{\mathbf{W}}=\begin{bmatrix}\mathbf{W}&\mathbf{0}\\ \mathbf{0}&\mathbf{W}\end{bmatrix}\). The first term is the reconstruction loss responsible for bringing the output of the decoder network as close as possible to the expert trajectory. The second term in (13) acts as a regularizer that aims to make the learned latent distribution \(\mathbf{q}_{\mathbf{\phi}}(\mathbf{z}|\mathbf{o},\mathbf{\tau}_{e})\) as close as possible to the prior isotropic normal distribution \(\mathcal{N}(\mathbf{0},\mathbf{I})\). The \(\mathbf{\beta}\) hyper-parameter acts as a trade-off between the two cost terms. \[\mathcal{L}_{\text{CVAE}}=\sum\lVert\overline{\mathbf{W}}\, \overline{\mathbf{\xi}}^{*}(\mathbf{\theta},\mathbf{\phi})-\mathbf{\tau}_{e}\rVert_{2}^{2}+ \beta\,D_{\text{KL}}[\mathbf{q}_{\mathbf{\phi}}(\mathbf{z}\,|\,\mathbf{o},\,\mathbf{ \tau}_{e})\,|\mathcal{N}(\mathbf{0},\mathbf{I})] \tag{13}\] In the inferencing (online) phase, we draw samples of \(\mathbf{z}\) from the prior isotropic normal distribution and then pass them through the decoder MLP to get samples of optimal behavioural inputs \(\mathbf{p}\) along with \(\mathbf{\lambda}\). Finally, these are passed through the optimization layers to generate distribution for the optimal trajectory coefficients \(\overline{\mathbf{\xi}}^{*}\). **Incorporating Self Supervision Loss:** Let us assume a simplified world model where the neighboring vehicles are non-reactive dynamic obstacles. Moreover, we have some approximate predictions for their trajectories over a future time horizon. For example, we can take the current velocity and positions of the neighboring vehicles from the observation vector \(\mathbf{o}\) and perform a linear prediction. Under this simplified world mode, we can augment the meta-cost (6) into our learning pipeline as a self-supervision cost. That is, we can modify our loss function as \[\mathcal{L}_{vae}+s\sum_{k}c_{m}(^{k}\overline{\mathbf{\xi}}^{*}), \tag{14}\] where \({}^{k}\overline{\mathbf{\xi}}^{*}\) is the output at the \(k^{th}\) stage (iteration) of the unrolling. The scalar \(s\) trades off the CVAE and self-supervision loss. As discussed in (5a), a part of \((c_{m})\) is the constraint (kinematic, collision, lane) residuals which are in fact same as the residuals of our projection optimizer. Thus, the addition of the self-supervision loss forces the network to learn \(\mathbf{p}\) such that it aids in faster convergence of the optimizer. **Remark 1**: _We used the output of the QP (5a)-(5b) in Fig.3 along with the predicted \(\mathbf{\lambda}\) from the decoder MLP to initialize the differentiable projection layer. This design choice couples the learning of behavioral inputs and the initialization for the lower-level trajectory optimizer._ ### _Self-Supervised Learning with MLP_ In this subsection, we formulate a behavioral input learning pipeline with purely self-supervision loss. 
The primary motivation stems from the fact that demonstrations could be sub-optimal or hard to obtain in dense traffic conditions. To this end, we construct a simple feedforward network using an MLP (see Fig.4 ) with learnable parameter \(\mathbf{\omega}\). The network is trained with the loss function (15). \[\underset{\mathbf{\omega}}{min.}\ \mathbb{E}_{\mathbf{\omega}\sim\mathbf{p}_{\mathbf{ \omega}}}c_{m}(\mathbf{\pi}_{\mathbf{\omega}}(\mathbf{o});\mathbf{o})\approx\frac{1}{n} \sum_{j}c_{m}(\mathbf{\pi}_{\mathbf{\omega}}(\mathbf{o}_{j});\mathbf{o}_{j}) \tag{15}\] where \(\mathbf{\pi}_{\mathbf{\omega}}(.)\) represents the MLP policy of Fig.4 that take in observations \(\mathbf{o}\) and outputs the optimal trajectory coefficients. The Expectation operator in (15) is approximated by empirical mean. The observation samples are the ones encountered during the collection of expert demonstrations for supervised learning. The learned MLP provides only a single output. However, our planning approach needs a distribution from where multiple samples of \(\mathbf{p},\mathbf{\lambda}\) can be drawn. Thus, we treat the output of the MLP as the mean of a Gaussian distribution. For the Covariance, we use a constant diagonal matrix. ## IV Connections to Existing Works _Trajectory Sampling Approaches:_ As mentioned earlier, [2, 9] sample behavioral inputs from a pre-discretized grid that is oblivious to how the resulting trajectories are performing on the driving task. Authors in [10] address this drawback to some extent as they adapt the sampling strategy based on optimal trajectories obtained in the past planning cycles. However, none of these cited works explicitly enforce collision avoidance constraints in their approach. Our prior work [6] addressed constraint handling but the behavioral input sampling was still handcrafted. Fig. 4: Fig. shows an MLP combined with differentiable optimization layers used for self-supervised learning of behavioural inputs Fig. 3: (a) Our BC framework features a niche neural network architecture using a combination of feedforward and differentiable optimization layers. Fig.(b) shows the overall CVAE encoder-decoder architecture comprising feedforward layers as MLPs. The differentiable optimizer consists of equality-constrained QP and our custom projection operator. _Differentiable Optimization Layers:_ Embedding optimization layers into neural network pipelines has recently garnered much attention. Although technically, any off-the-shelf optimizer can be embedded into neural architectures [11], the efficiency of the resulting training could be limited. Thus, a strong focus has been on developing batchable GPU accelerated optimizers [8]. Our projection optimizer satisfies both of these requirements. Moreover, its unique structure allows us to avoid matrix factorizations during training (recall (11)) as these can be pre-stored. As a result, our whole training pipeline ran stably on 32-bit precision on Graphical Processing Units (GPUUs). In contrast, [8] strongly recommends running their differentiable optimizer in 64-bit, which could be slow. ## V Validation and benchmarking In this section, we qualitatively validate the performance of our projection optimizer and answer the following research questions: * **Q1:** How do learned behavioral inputs perform as compared to handcrafted heuristics? * **Q2:** How does our approach fare compare to the State-of-the-art trajectory planners and Model Predictive Control (MPC) methods? 
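Referring back to Section III-B, the sketch below condenses the CVAE objective (13) into code. It is an illustrative PyTorch sketch rather than the released implementation: the layer sizes and dimensions are assumptions, the `planner` callable stands in for the differentiable QP-plus-projection layer, and the multiplier output \(\boldsymbol{\lambda}\) as well as the self-supervision term in (14) are omitted for brevity.

```python
# Condensed sketch of the CVAE loss (13); shapes and names are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, obs_dim=55, traj_dim=200, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + traj_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * z_dim))
    def forward(self, o, tau_e):
        mu, log_var = self.net(torch.cat([o, tau_e], dim=-1)).chunk(2, dim=-1)
        return mu, log_var

class DecoderMLP(nn.Module):
    def __init__(self, obs_dim=55, z_dim=16, p_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, p_dim))
    def forward(self, o, z):
        return self.net(torch.cat([o, z], dim=-1))     # behavioural inputs p

def cvae_loss(encoder, decoder, planner, W_bar, o, tau_e, beta=1e-3):
    mu, log_var = encoder(o, tau_e)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)   # reparameterization
    p = decoder(o, z)
    xi_bar = planner(p, o)                  # differentiable optimizer layer (stub here)
    recon = ((xi_bar @ W_bar.T - tau_e) ** 2).sum(dim=-1).mean()
    kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(dim=-1).mean()
    return recon + beta * kl
```

In training, `planner` would be the unrolled projection of Fig. 2, so gradients of the reconstruction term reach the decoder and encoder weights through the optimizer layer, which is what couples the learned behavioural inputs to the downstream planner.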
### _Implementation Details_ We implemented our trajectory planner comprising of QP (5a)-(5b) and projection (7)-(8) in Python using JAX [12] library as our GPU-accelerated linear algebra back-end. The matrix \(\mathbf{W}\) in (1) is constructed from a \(10^{th}\) order polynomial. We also created equivalent PyTorch implementations for embedding into the training pipeline. Our simulation pipeline was built on the Highway Environment (highway-env) simulator [13]. The neighboring vehicles use simple rule-based approach for lateral and longitudinal control. #### V-A1 Hyper-parameter Selection The behavioral input \(\mathbf{p}\) was modeled as four set-points for lateral offsets and desired longitudinal velocities. That is, \(\mathbf{p}=\left[y_{d,1},\ldots,y_{d,4},v_{d,1},\ldots,v_{d,4}\right]\). We divided the planning horizon into four segments and associated one pair of lateral offsets and desired velocity to each of these. #### V-A2 CVAE and MLP Training The details of the encoder-decoder network architecture of our CVAE are presented in the accompanying video. During training, the input to the CVAE is the expert trajectory and a 55-dimensional observation vector (\(\mathbf{o}\)), containing the state of the ego-vehicle, the ten closest obstacles, and the road boundary. For the ego-vehicle, the state consists of heading, lateral and longitudinal speeds. The obstacle state consists of longitudinal/lateral positions and the corresponding velocities for the ten closest obstacles. We express all the position-level information with respect to the center of the ego vehicle. During inference, the decoder network of CVAE only needs \(\mathbf{o}\), and samples \(\mathbf{z}\) are drawn from an isotropic Gaussian. For MLP, only the observation vector is needed. We used the cross-entropy method [14], run offline with a batch size of 5000, to collect the demonstration of optimal trajectories for training our CVAE. We note that our demonstrations could be sub-optimal and sparse. However, even with such a simple data set, our CVAE and MLP were able to learn valuable behavioral inputs. #### V-A3 Baselines We used our trajectory planner in a receding horizon manner to create two MPC variants. We will henceforth refer to it as MPC-Supervised and MPC-Self-Supervised depending on whether the behavioral inputs are obtained from either supervised CVAE or self-supervised MLP. Both the MPC variants take the same observation vector \(\mathbf{o}\) as the input and output coefficients of the optimal trajectories. These are converted to steering and acceleration input vectors. We compare our MPC with the following baselines and SOTA approaches: **MPC-Grid**: This baseline operates with handcrafted behavioral inputs. The vector \(\mathbf{p}\) consists of some set-points for lateral offsets and desired velocities sampled from a pre-specified grid instead of a neural network. The grid is centered around the lane center-line and desired speed. **Batch-MPC** of [6]: This SOTA MPC uses a different set of behavioral inputs, namely goal positions for the longitudinal and lateral components of the trajectory. That is, \(\mathbf{p}=\left[x_{f},y_{f}\right]\). Again, the behavioral input is sampled from a pre-specified grid. **Model Predictive Path Integral (MPPI)**[15]: This is the SOTA approach for receding horizon planning. It operates by sampling trajectories, evaluating the cost \(c_{m}\) (recall (6)) and then updating the sampling distribution. 
The MPPI baseline directly works in the space of trajectories and does not use any behavioral input. We leverage the insight presented in [16] where the covariance matrix is also adapted for better optimization. **Remark 2**: _MPC-Grid has the same trajectory planner as our MPC-Supervised and MPC-Self-supervised variants. The only difference stems from the sampling of behavioral inputs. Batch-MPC [6] also explicitly enforces kinematic and collision avoidance constraints. In contrast, MPPI operates by rolling all the constraints as penalties in the cost function._ #### V-A4 Environments, Tasks, and Metrics The highway driving scenarios are presented in Fig.5. For each scenario, we had three different traffic densities. We use the internal parameter of highway-env named "density" to control how closely each vehicle is placed at the start of the simulation. We evaluate two sets of 50 configurations spawned using different random seeds for each density in two or four-lane driving settings. We fixed the random seed of the simulator to ensure that all MPC baselines are tested across the same set of traffic configurations. Fig. 5: Two and four-lane highway driving scenarios with varying traffic density used for benchmarking our approach with different baseline MPCs. The task in the experiment was for the ego-vehicle to drive as fast as possible without colliding with the obstacles and without going outside the lane boundary. Thus, \(v_{des}=v_{max}\) was used in the meta-cost (6). Our evaluation metric has two components: (i) collision rate and (ii) average velocity achieved within an episode. Since the ego-vehicle can achieve arbitrarily high velocity while driving rashly, we only consider velocities from collision-free episodes. ### _Empirical Validation of Projection Optimizer_ Fig.6(a) shows a typical output of our projection optimizer for a scene with static obstacles (blue rectangles). We consider 400 randomly sampled \(\mathbf{p}_{j}\) that were passed to QP (5a)-(5b), resulting in the trajectory distribution shown in Fig.6(a)(top). These were then passed to our projection optimizer (7)-(8), which led to collision-free trajectories residing in different homotopies. Fig.6(b) shows the constraint residuals across iterations for every instance of the batch. Typically, 100 iterations were enough to drive the constraint residuals to zero for a majority of the trajectory samples. ### _Effect of Learned Behavioural Inputs_ A two-lane driving scenario offers minimal scope for maneuvers. Thus, in a low traffic density, all the baselines and our two approaches perform equally well (Fig.7(a)). This shows that a simple handcrafted grid search performed in MPC-Grid and Batch-MPC of [6] is enough to come up with the right set of behavioral inputs. Moreover, the performance of MPPI shows that one can even bypass the behavioral input sampling altogether and search directly in the space of trajectories. As the traffic density increases in the two-lane scenarios, we can see the benefit of behavioral input sampling (MPC-Grid outperforming MPPI) and, even more importantly, of going beyond handcrafted heuristics (ours outperforming hand-crafted behavioral input sampling). The trend is particularly stark in dense four-lane scenarios, where our MPC-Supervised and MPC-Self-supervised provide a \(4-10\times\) reduction in collision rate (Fig.7(b)). Fig.7(c)-(d) show that the average speed achieved by our approaches is either better than or on par with the baselines in all traffic densities.
Among our approaches, MPC-Supervised performs better in medium traffic densities; two-lane(1.0, 1.5) and four-lane (1.5, 2.5). In contrast, MPC-Self-supervised outperforms in two and four-lane scenarios with a traffic density of 3.0. This can be attributed to the fact that the expert demonstrations were sparse in challenging scenarios. Moreover, this pattern also showcases the importance of our self-supervised learning pipeline. Table I correlates the number of iterations of our projection optimizer with collision rate and max achieved speed. As can be seen, the learned behavioral inputs also aid in the convergence of the optimizer. For those learned from the supervised training, the projection optimizer needs around 75 iterations to achieve its best performance, documented in Fig.7. The self-supervised training led to even faster convergence. **Remark 3**: _The poor performance of MPPI is attributed to two reasons. First, we have observed that sampling in the space of behavioral inputs provides a more focused search than sampling direct trajectories for autonomous driving benchmarks. Second, MPPI rolls the collision constraints into the costs, and thus this soft-constraint handling proves detrimental in dense scenarios._ **Remark 4**: _The success rate in Fig.7(a)-(b) is based on the number of collision-free runs across all the episodes for a particular benchmark. Hence, this metric doesn't have an error bar. In contrast, the velocity profiles of Fig.7 vary within a simulation episode and across the whole data set. Thus, we present the error bars to capture this variability._ Fig. 6: Fig.(a) (top): Trajectories produced by QP (5a)-(5b) for randomly sampled \(\mathbf{p}_{j}\). The green and blue rectangle represents the ego vehicle and the obstacle, respectively. Fig.(a)(bottom): The projection of trajectories onto the feasible set of collision avoidance and velocity/acceleration bounds. Fig.(b): The trend of constraint residuals across iterations for every instance in the batch. Most trajectories residuals converge to zero within 100 iterations. Fig. 7: Comparison of MPC baselines with ours; MPC-Supervised and MPC-Self-supervised in two-lane and four-lane driving scenarios ### _Ablation: Effect of Training with Projection Layer_ Fig.8 presents the second key result of our work. It showcases the importance of embedding our custom projection operator in the training pipeline shown in Fig.3 and Fig.4. As can be seen, the performance severely degrades in the absence of the projection layer because the network does not get corrective feedback from the optimizer during training. An alternative to our approach could be to directly penalize the network output. However, as shown in [17], such an approach shows poor generalization. We note that our supervised approach shows higher degradation in performance than the self-supervised variant. We believe this is due to the neural network predictions mimicking the (sub-optimal) expert demonstration at the cost of violating the constraints. For self-supervised training, such conflicting objectives do not exist. ## VI Conclusions and Future Work We showed how behavioral inputs can be learned while considering the ability of the downstream trajectory optimizer. To this end, we proposed a differentiable optimizer and embedded it as a layer in a neural network. We adopted both supervised and self-supervised learning approaches. The latter generalized better in high-density traffic scenarios where expert demonstrations are hard to obtain and thus sparse. 
To validate our approach, we extensively compared against strong baselines, including MPPI and [6]. Finally, we showed how training without our projection optimizer leads to severely degraded performance due to the lack of constraints on the neural network predictions. Our differentiable optimizer opens up new possibilities, especially in the context of autonomous navigation. The specialized structure offers computational advantages over off-the-shelf libraries like [8] designed for a broader application spectrum. ## VII Appendix **Reformulating Constraints:** Table II presents the list of all the constraints included in our projection optimizer. The collision avoidance constraints presented there can be re-written in the following form: \[\mathbf{f}_{o,i}=\left\{\begin{array}{l}x(t)-x_{o,i}(t)-d_{o,i}(t)\cos\alpha_{o,i}(t)\\ y(t)-y_{o,i}(t)-d_{o,i}(t)\sin\alpha_{o,i}(t)\end{array}\right\},\quad d_{o,i}(t)\geq 1 \tag{16}\] where \(\alpha_{o,i}(t)\) represents the angle that the line-of-sight vector between the ego-vehicle and its \(i^{th}\) neighbor makes with the \(X\) axis. Similarly, the variable \(d_{o,i}(t)\) represents the ratio of the length of this vector to the minimum distance separation required for collision avoidance. Following a similar approach, we can rephrase the velocity and acceleration bounds from Table II as: \[\mathbf{f}_{v}=\left\{\begin{array}{l}\dot{x}(t)-d_{v}(t)\cos\alpha_{v}(t)\\ \dot{y}(t)-d_{v}(t)\sin\alpha_{v}(t)\end{array}\right\},\quad v_{min}\leq d_{v}(t)\leq v_{max} \tag{17}\] \[\mathbf{f}_{a}=\left\{\begin{array}{l}\ddot{x}(t)-d_{a}(t)\cos\alpha_{a}(t)\\ \ddot{y}(t)-d_{a}(t)\sin\alpha_{a}(t)\end{array}\right\},\quad 0\leq d_{a}(t)\leq a_{max} \tag{18}\] The variables \(\alpha_{o,i}(t)\), \(\alpha_{v}(t)\), \(\alpha_{a}(t)\), \(d_{o,i}(t)\), \(d_{v}(t)\), and \(d_{a}(t)\) are additional variables that our batch projection optimizer will obtain along with \(\overline{\boldsymbol{\xi}}_{j}^{*}\). **Reformulated Problem:** Using the developments in the previous section and the trajectory parametrization presented in (1), we can now replace the projection optimization (7)-(8) with the following. Note that (19e) is the matrix representation of the lane boundary constraints presented in Table II.
\[\overline{\boldsymbol{\xi}}_{j}^{*}=\arg\min_{\overline{\boldsymbol{\xi}}_{j}^{*}}\frac{1}{2}\|\overline{\boldsymbol{\xi}}_{j}^{*}-\boldsymbol{\xi}_{j}^{*}\|_{2}^{2} \tag{19a}\] \[\mathbf{A}\overline{\boldsymbol{\xi}}_{j}^{*}=\mathbf{b}(\mathbf{p}_{j}) \tag{19b}\] \[\tilde{\mathbf{F}}\,\overline{\boldsymbol{\xi}}_{j}^{*}=\tilde{\mathbf{e}}(\boldsymbol{\alpha}_{j},\mathbf{d}_{j}) \tag{19c}\] \[\mathbf{d}_{min}\leq\mathbf{d}_{j}\leq\mathbf{d}_{max} \tag{19d}\] \[\mathbf{G}\overline{\boldsymbol{\xi}}_{j}^{*}\leq\mathbf{y}_{lane} \tag{19e}\] \[\tilde{\mathbf{F}}=\begin{bmatrix}\begin{bmatrix}\mathbf{F}_{o}\\ \dot{\mathbf{W}}\\ \ddot{\mathbf{W}}\end{bmatrix}&\mathbf{0}\\ \mathbf{0}&\begin{bmatrix}\mathbf{F}_{o}\\ \dot{\mathbf{W}}\\ \ddot{\mathbf{W}}\end{bmatrix}\end{bmatrix},\qquad\tilde{\mathbf{e}}=\begin{bmatrix}\mathbf{x}_{o}+\mathbf{a}\,\mathbf{d}_{o,j}\cos\boldsymbol{\alpha}_{o,j}\\ \mathbf{d}_{v,j}\cos\boldsymbol{\alpha}_{v,j}\\ \mathbf{d}_{a,j}\cos\boldsymbol{\alpha}_{a,j}\\ \mathbf{y}_{o}+\mathbf{b}\,\mathbf{d}_{o,j}\sin\boldsymbol{\alpha}_{o,j}\\ \mathbf{d}_{v,j}\sin\boldsymbol{\alpha}_{v,j}\\ \mathbf{d}_{a,j}\sin\boldsymbol{\alpha}_{a,j}\end{bmatrix}, \tag{20}\] \[\mathbf{G}=\begin{bmatrix}\mathbf{W}\\ -\mathbf{W}\end{bmatrix},\qquad\boldsymbol{y}_{lane}=\begin{bmatrix}y_{ub}&\ldots&y_{ub}&-y_{lb}&\ldots&-y_{lb}\end{bmatrix}^{T} \tag{21}\] \[\boldsymbol{\alpha}_{j}=(\boldsymbol{\alpha}_{o,j},\boldsymbol{\alpha}_{v,j},\boldsymbol{\alpha}_{a,j}),\qquad\mathbf{d}_{j}=(\mathbf{d}_{o,j},\mathbf{d}_{v,j},\mathbf{d}_{a,j})\] Constraints (19c)-(19e) act as substitutes for \(\mathbf{g}(\overline{\boldsymbol{\xi}}_{j}^{*})\leq 0\) in the projection optimization (7)-(8). The matrix \(\mathbf{F}_{o}\) is obtained by stacking the matrix \(\mathbf{W}\) from (1) as many times as the number of neighboring vehicles considered for collision avoidance at a given planning cycle. The vectors \(\mathbf{x}_{o},\mathbf{y}_{o}\) are formed by appropriately stacking \(x_{o,i}(t),y_{o,i}(t)\) at different time instants and for all the neighbors. A similar construction is followed to obtain \(\boldsymbol{\alpha}_{o},\boldsymbol{\alpha}_{v},\boldsymbol{\alpha}_{a},\mathbf{d}_{o},\mathbf{d}_{v},\mathbf{d}_{a}\). The vector \(\mathbf{y}_{lane}\) is formed by stacking the upper and lower lane bounds after repeating them \(m\) times (the length of the planning horizon). Similarly, the vectors \(\mathbf{d}_{min},\mathbf{d}_{max}\) are formed by stacking the lower and upper bounds for \(d_{o,i}(t),d_{a}(t),d_{v}(t)\). Note that the upper bound for \(d_{o,i}(t)\) can simply be some large number (recall (16)). Moreover, these bounds are the same across all batches. **Solution Process:** We relax the non-convex equality (19c) and the affine inequality constraints as \(l_{2}\) penalties and augment them into the projection cost (19a). Fig. 8: Driving performance achieved with neural networks trained with and without our differentiable projection optimizer.
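As an illustration of how the stacked right-hand side \(\tilde{\mathbf{e}}\) in (20) is assembled from the polar variables of (16)-(18), a short numpy sketch is given below; the array sizes, the ellipse axes, and the numerical values are placeholders rather than the parameters used in our experiments.

```python
import numpy as np

m, n_obs = 100, 10                        # planning steps, neighbours considered (assumed)
a, b = 5.0, 3.0                           # ellipse axes of the collision constraint (assumed)
x_o = np.zeros(m * n_obs); y_o = np.zeros(m * n_obs)   # stacked obstacle positions
alpha_o = np.zeros(m * n_obs); d_o = np.ones(m * n_obs)
alpha_v = np.zeros(m);         d_v = 10.0 * np.ones(m)
alpha_a = np.zeros(m);         d_a = np.ones(m)

e_tilde = np.concatenate([
    x_o + a * d_o * np.cos(alpha_o),      # collision avoidance, x-component
    d_v * np.cos(alpha_v),                # velocity bound, x-component
    d_a * np.cos(alpha_a),                # acceleration bound, x-component
    y_o + b * d_o * np.sin(alpha_o),      # collision avoidance, y-component
    d_v * np.sin(alpha_v),                # velocity bound, y-component
    d_a * np.sin(alpha_a),                # acceleration bound, y-component
])
print(e_tilde.shape)                      # (2 * (m * n_obs + 2 * m),)
```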
\begin{tabular}{|c|c|c|} \hline Constraint Type & Expression & Parameters \\ \hline Collision Avoidance & \(-\frac{(x(t)-x_{o,i}(t))^{2}}{a^{2}}-\frac{(y(t)-y_{o,i}(t))^{2}}{b^{2}}+1\leq 0\) & \(\frac{a}{2},\frac{b}{2}\): axes of the circumscribing ellipse of the vehicle footprint \\ \hline Velocity bounds & \(\sqrt{\dot{x}(t)^{2}+\dot{y}(t)^{2}}\leq v_{max}\) & \(v_{max}\): maximum velocity of the ego-vehicle \\ \hline Acceleration bounds & \(\sqrt{\ddot{x}(t)^{2}+\ddot{y}(t)^{2}}\leq a_{max}\) & \(a_{max}\): maximum acceleration of the ego-vehicle \\ \hline Lane boundary & \(y_{lb}\leq y(t)\leq y_{ub}\) & \(y_{lb},y_{ub}\): lane bounds \\ \hline \end{tabular} \[\mathcal{L}=\frac{1}{2}\left\|\overline{\mathbf{\xi}}_{j}^{*}-\mathbf{\xi}_{j}^{*}\right\|_{2}^{2}-\mathbf{\lambda}_{j}^{T}\overline{\mathbf{\xi}}_{j}^{*}+\frac{\rho}{2}\left\|\tilde{\mathbf{F}}\,\overline{\mathbf{\xi}}_{j}^{*}-\tilde{\mathbf{e}}\right\|_{2}^{2}+\frac{\rho}{2}\left\|\mathbf{G}\overline{\mathbf{\xi}}_{j}^{*}-\mathbf{y}_{lane}+\mathbf{s}_{j}\right\|_{2}^{2}=\frac{1}{2}\left\|\overline{\mathbf{\xi}}_{j}^{*}-\mathbf{\xi}_{j}^{*}\right\|_{2}^{2}-\mathbf{\lambda}_{j}^{T}\overline{\mathbf{\xi}}_{j}^{*}+\frac{\rho}{2}\left\|\mathbf{F}\,\overline{\mathbf{\xi}}_{j}^{*}-\mathbf{e}\right\|_{2}^{2} \tag{22}\] \[\mathbf{F}=\begin{bmatrix}\tilde{\mathbf{F}}\\ \mathbf{G}\end{bmatrix},\qquad\mathbf{e}=\begin{bmatrix}\tilde{\mathbf{e}}\\ \mathbf{y}_{lane}-\mathbf{s}_{j}\end{bmatrix} \tag{23}\] Note the introduction of the Lagrange multiplier \(\mathbf{\lambda}_{j}\) that drives the residuals of the second and third quadratic penalties to zero. We minimize (22) subject to (19b) through Alternating Minimization (AM), which reduces to the following steps [18]: \[{}^{k+1}\mathbf{\alpha}_{j}=\arg\min_{\mathbf{\alpha}_{j}}\mathcal{L}({}^{k}\overline{\mathbf{\xi}}_{j},{}^{k}\mathbf{d}_{j},\mathbf{\alpha}_{j},{}^{k}\mathbf{\lambda}_{j},{}^{k}\mathbf{s}_{j}) \tag{24a}\] \[{}^{k+1}\mathbf{d}_{j}=\arg\min_{\mathbf{d}_{j}}\mathcal{L}({}^{k}\overline{\mathbf{\xi}}_{j},\mathbf{d}_{j},{}^{k+1}\mathbf{\alpha}_{j},{}^{k}\mathbf{\lambda}_{j},{}^{k}\mathbf{s}_{j}) \tag{24b}\] \[{}^{k+1}\mathbf{s}_{j}=\max\Big{(}0,-\mathbf{G}\,{}^{k}\overline{\mathbf{\xi}}_{j}^{*}+\mathbf{y}_{lane}\Big{)} \tag{24c}\] \[{}^{k+1}\mathbf{\lambda}_{j}=\overbrace{{}^{k}\mathbf{\lambda}_{j}+\rho\mathbf{F}^{T}(\mathbf{F}\,{}^{k}\overline{\mathbf{\xi}}_{j}^{*}-{}^{k}\mathbf{e}_{j})}^{\mathbf{h}_{1}} \tag{24d}\] \[{}^{k+1}\mathbf{e}_{j}=\overbrace{\tilde{\mathbf{e}}({}^{k+1}\mathbf{\alpha}_{j},{}^{k+1}\mathbf{d}_{j})}^{\mathbf{h}_{2}} \tag{24e}\] \[{}^{k+1}\overline{\mathbf{\xi}}_{j}^{*}=\arg\min_{\overline{\mathbf{\xi}}_{j}}\mathcal{L}(\overline{\mathbf{\xi}}_{j},{}^{k+1}\mathbf{\lambda}_{j},{}^{k+1}\mathbf{e}_{j}) \tag{24f}\] As can be seen, we optimize over only one group of variables at each AM step while the others are held fixed at the values obtained in the previous updates. Steps (24d)-(24e) provide the function \(\mathbf{h}\) presented in (9). That is, \(\mathbf{h}=(\mathbf{h}_{1},\mathbf{h}_{2})\). Step (24f) represents (10). An important thing to note is that (24a), (24b) have closed-form solutions in terms of \({}^{k}\overline{\mathbf{\xi}}_{j}^{*}\) and thus do not require any matrix factorization [18].
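A compact sketch of the AM iterations (24a)-(24f) for a single batch instance is given below; the closed-form \(\mathbf{\alpha}\), \(\mathbf{d}\), and \(\mathbf{s}\) updates are abstracted into a single callable, so this should be read as a structural illustration under that assumption rather than the exact solver of [18].

```python
import numpy as np

def am_projection(xi_star, F, A, b_eq, e_of, rho=1.0, iters=100):
    """Alternating-minimization projection following the structure of (22)-(24)."""
    n = xi_star.size
    xi, lam = xi_star.copy(), np.zeros(n)
    # KKT matrix of the equality-constrained QP solved in (24f); it is fixed,
    # so in the real pipeline its factorization would be pre-stored.
    K = np.block([[np.eye(n) + rho * F.T @ F, A.T],
                  [A, np.zeros((A.shape[0], A.shape[0]))]])
    K_inv = np.linalg.inv(K)
    for _ in range(iters):
        e = e_of(xi)                           # (24a)-(24c), (24e): alpha, d, s updates (abstracted)
        lam = lam + rho * F.T @ (F @ xi - e)   # (24d): multiplier update
        rhs = np.concatenate([xi_star + lam + rho * F.T @ e, b_eq])
        xi = (K_inv @ rhs)[:n]                 # (24f): minimize (22) subject to (19b)
    return xi

# toy usage with random data and a trivial placeholder for e_of
rng = np.random.default_rng(0)
F, A = rng.normal(size=(8, 6)), rng.normal(size=(2, 6))
xi0, b = rng.normal(size=6), rng.normal(size=2)
print(am_projection(xi0, F, A, b, e_of=lambda xi: np.zeros(8)).shape)   # (6,)
```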
2308.10992
The Baryonic Content of Galaxies Mapped by MaNGA and the Gas Around Them
We analyze the cool gas in and around 14 nearby galaxies (at $z$$<$0.1) mapped with the SDSS-IV MaNGA survey by measuring absorption lines produced by gas in spectra of background quasars/AGN at impact parameters of 0-25 effective radii from the galaxy center. Using HST/COS, we detect absorption at the galaxy redshift and measure or constrain column densities of neutral (H I, N I, O I, Ar I), low-ionization (Si II, S II, C II, N II Fe II), and high-ionization (Si III, Fe III, N V, O VI) species for 11 galaxies. We derive the ionization parameter and ionization-corrected metallicity using CLOUDY photo-ionization models. The H I column density ranges from $\sim$$10^{13}$ to $\sim$$10^{20}\,{\rm cm^{-2}}$ and decreases with impact parameter for $r \ge R_{e}$. Galaxies with higher stellar mass have weaker H I absorption. Comparing absorption velocities with MaNGA radial velocity maps of ionized gas line emissions in galactic disks, we find that the neutral gas seen in absorption co-rotates with the disk out to $\sim$10 $R_{e}$. Sight lines with lower elevation angles show lower metallicities, consistent with the metallicity gradient in the disk derived from MaNGA maps. Higher elevation angle sight lines show higher ionization, lower H I-column density, super-solar metallicity, and velocities consistent with the direction of galactic outflow. Our data offer the first detailed comparisons of CGM properties (kinematics and metallicity) with extrapolations of detailed galaxy maps from integral field spectroscopy; similar studies for larger samples are needed to more fully understand how galaxies interact with their CGM.
Viacheslav V. Klimenko, Varsha Kulkarni, David A. Wake, Suraj Poudel, Matthew A. Bershady, Celine Peroux, Britt Lundgren
2023-08-21T19:17:59Z
http://arxiv.org/abs/2308.10992v1
# The Baryonic Content of Galaxies Mapped by MaNGA and the Gas Around Them ###### Abstract We analyze the cool gas in and around 14 nearby galaxies (at \(z\)\(<\)0.1) mapped with the SDSS-IV MaNGA survey by measuring absorption lines produced by gas in spectra of background quasars/AGN at impact parameters of 0-25 effective radii from the galaxy center. Using HST/COS, we detect absorption at the galaxy redshift and measure or constrain column densities of neutral (H i, N i, O i, Ar i), low-ionization (Si ii, S ii, C ii, N ii Fe ii), and high-ionization (Si iii, Fe iii, N v, O vi) species for 11 galaxies. We derive the ionization parameter and ionization-corrected metallicity using cloudy photo-ionization models. The H i column density ranges from \(\sim\)10\({}^{13}\) to \(\sim\)10\({}^{20}\) cm\({}^{-2}\) and decreases with impact parameter for \(r\gtrsim R_{e}\). Galaxies with higher stellar mass have weaker H i absorption. Comparing absorption velocities with MaNGA radial velocity maps of ionized gas line emissions in galactic disks, we find that the neutral gas seen in absorption co-rotates with the disk out to \(\sim\)10 \(R_{e}\). Sight lines with lower elevation angles show lower metallicities, consistent with the metallicity gradient in the disk derived from MaNGA maps. Higher elevation angle sight lines show higher ionization, lower H i-column density, super-solar metallicity, and velocities consistent with the direction of galactic outflow. Our data offer the first detailed comparisons of CGM properties (kinematics and metallicity) with extrapolations of detailed galaxy maps from integral field spectroscopy; similar studies for larger samples are needed to more fully understand how galaxies interact with their CGM. Observational cosmology; Galaxy evolution; Star formation; Quasar absorption line spectroscopy; Interstellar medium ## 1 Introduction Galaxies interact with their surroundings through gas flows. Inflows of cool gas bring in fresh material for star formation. Outflows of enriched gas carry the chemical elements produced by star formation back into the intergalactic medium (IGM). These gas flows pass through the circumgalactic medium (CGM) that acts as an interface between the galaxy and the IGM. Many aspects of the physical interactions between galaxies and the IGM are not well-understood. Examples include how galaxies acquire their gas, what processes affect the chemical abundances of stars and gas, and how processes such as accretion, mergers, and secular evolution affect the growth of galaxy components. Constraining these physical processes observationally requires spatially resolved information about the kinematics and chemical composition within and around galaxies and CGM. Integral field spectroscopy (IFS) enables spatially resolved measurements of emission line fluxes and line ratios, allowing construction of maps of important physical properties and their gradients such as gas kinematics, ionization, metallicity, and star formation rate (SFR). Comparisons of these rich datasets with predictions of galaxy structure and evolution models can then shed light on how disks and bulges assemble and how baryonic components of galaxies interact with their dark matter halos. 
A number of interesting studies using IFS have been carried out at intermediate and high redshifts to investigate the gas flows passing through the CGM (e.g., Bouche et al., 2007; Peroux et al., 2011, 2016, 2019, 2022; Schroetter et al., 2016, 2019; Fumagalli et al., 2016; Lofthouse et al., 2020, 2023). Many of these studies were based on absorption-selected samples. However, connecting these studies to local galaxies requires a parallel study of low-redshift, galaxy-selected samples. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA; Bundy et al., 2015) survey of the Sloan Digital Sky Survey IV (SDSS IV; Blanton et al., 2017) is particularly useful in this context. MaNGA has obtained IFS data for 10,000 nearby (0.01\(<z<\)0.15) galaxies with 19 to 127 fibers, spanning 3600-10300 A with a resolution of \(\sim\)2000. This survey has led to a number of interesting results relevant to the CGM. For example, extraplanar ionized gas with a variety of emission lines has been detected in edge-on or highly inclined MaNGA galaxies out to \(\sim\)4-9 kpc (e.g., Bizyaev et al., 2017; Jones and Nair, 2019). In a significant fraction of MaNGA galaxies, this gas appears to lag in rotation compared to the gas closer to the galactic plane (Bizyaev et al., 2017). While the MaNGA survey provides information about the structure and kinematics of the disk and bulge components in the inner 1.5\(-\)2.5 effective radii of the galaxies, it does not offer much insight about the gaseous halos of these galaxies. The H i-MaNGA program is obtaining 21-cm observations for a large fraction of MaNGA galaxies (e.g., Masters et al., 2019; Stark et al., 2021). However, this 21-cm emission survey is sensitive to relatively high H i column densities (\(\geq 10^{19.8}\) cm\({}^{-2}\)) 1, making it difficult to access the lower column-density outskirts of galaxies and the CGM. Footnote 1: The estimate corresponds to a 3 \(\sigma\) upper limit of the H i mass surface density for non-detections, see Masters et al. (2019). A powerful technique to probe the gaseous galaxy halos and the CGM is by means of the absorption signatures from the gas against the light of background sources such as quasars or gamma-ray bursts (GRBs). Indeed, halo/CGM gas has been detected extending to \(\gtrsim\)200 kpc around individual galaxies (e.g., Tumlinson et al., 2013) and in large samples of stacked spectra out to \(\sim\)10 Mpc (e.g., Perez-Rafols et al., 2015). Probing the outskirts of MaNGA galaxies with this absorption-line technique provide a mean to establish a local sample of exquisitely imaged galaxies studied in neutral and ionized gas. Such a local sample is essential for placing IFS observations of high-redshift galaxies and their CGM in perspective, and thereby developing a systematic understanding of the interactions between galaxies and the gas flows around them, and the evolution of these interactions with time. With these improvements in mind, we have started to investigate the ISM and CGM of MaNGA galaxies using the Hubble Space Telescope (HST) Cosmic Origins Spectrograph (COS). Here we describe the results from our COS observations of 14 MaNGA galaxies, and compare the gas properties deduced from the COS data to the properties of the ionized gas measured from the MaNGA data. This paper is organized as follows. Section 2 describes the sample selection, observations, and data reduction. Section 3 describes results from our COS spectroscopy. 
Section 4 presents a discussion of our results, including comparisons with the MaNGA data and other studies from the literature. Section 5 presents our conclusions. Throughout this paper, we adopt the "concordance" cosmological parameters (\(\Omega_{m}\)=0.3, \(\Omega_{\Lambda}\)=0.7, and \(H_{0}\)=70 km s\({}^{-1}\) Mpc\({}^{-1}\)). ## 2 Observations and Data Reduction ### Sample selection Our sample consists of 14 nearby galaxies (at \(z<0.1\)) mapped in the SDSS/MaNGA survey with UV-bright quasars/AGNs at impact parameters between 0 to 140 kpc from the galaxy centers. The targeted quasars have GALEX FUV mag \(<\)19.5, and their impact parameters range from 0 to 25 times the effective radii of the corresponding MaNGA galaxies. For 1-635629, a bonus galaxy covered in the same setting as 1-180522, the impact parameter is 38.7 \(R_{e}\). We performed HST COS spectroscopy for these quasars/AGNs, as described in section 2.2. The targets are listed in Table 1. We divide the targets into two groups by the value of impact parameter. The first group contains four objects with zero impact parameter, because in these cases we observed the central source, hereafter referred to as AGNs (see Fig. 1). In these cases, the absorbing gas can be at any distance from the central source along the line of sight, and could potentially be associated with the central engine. Physical conditions in the absorbing gas in such cases can be very different from those in the CGM and ISM. The objects in the second group introduce non-zero impact parameter (more than 20 kpc) and likely probe gas related to the CGM of galaxies (see Fig. 2). In two cases (J2130\(-\)0025 and J0838+2453), the quasar sight lines cross two galaxies at different impact parameters and redshifts. ### HST/COS Data Our sample of 11 quasars was observed with HST COS under program ID 16242 (PI V. Kulkarni) during September-December 2020. These observations are summarized in Table 2. The FUV channel of COS was used in TIME-TAG mode. The G130M FUV grating and the 2.5" Primary Science Aperture (PSA) were used. The grating was centered at 1222 A and 1291 A to cover the absorption lines of interest. This leads to a resolving power across the dispersion axis of \(R\sim 10,000-18,000\). The grating settings were optimized so that the key lines do not fall in the gaps in the wavelength coverage or in geocoronal emission lines. Target acquisitions were performed using the ACQ/IMAGE modes, after which 3 to 11 exposures ranging from 515 s to 1339 s each were obtained for each target. For a majority of the targeted galaxies, we clearly detect strong absorption lines of H i, Si ii and Si iii at redshifts close to the galaxy redshifts (within \(\pm 200\) kms\({}^{-1}\)). Since there are no other galaxies at these redshifts around the quasar sight lines (within \(|z_{\rm photom}-z_{\rm gal}|<0.05\) and \(\sim 100\) kpc and down to SDSS magnitude \(r\simeq 22\)), our HST COS spectra probe the CGM of the selected MaNGA galaxies.2 The profile fits to the HST COS absorption line data and the measurements of column densities are presented in detail in the Appendix B. Footnote 2: In the case of the quasar-galaxy pair J1629+4007-1-564490, there is another galaxy (SDSS J162842.25+400726.1) which is closer to the quasar sightline than the targeted MaNGA galaxy, but it has a higher redshift \(z=0.03357\) versus \(z=0.02588\). 
For this quasar sightline we detect a weak H i absorption at the redshift \(z=0.033\), associated with the SDSS J162842.25+400726.1 galaxy, and do not detect any absorption within \(\pm 1000\) km s\({}^{-1}\) at the redshift of MaNGA galaxy 1-564490. #### 2.2.1 Data Reduction and Spectral Extraction The original CaLCOS pipeline v3.4.0 was first used to reduce the HST COS exposures and extract the one-dimensional spectra. However, a reanalysis of the data was found to be necessary, because some of our exposures had low counts (\(N_{\rm counts}\simeq 1-10\)/pix). For these exposures, the flux uncertainties in individual exposures were found to be overestimated using the original pipeline. The flux variance was \(\sim\)2 times lower than the flux uncertainties estimated by the standard pipeline, and the difference was found to be correlated with the flux value. The procedure for estimating the flux uncertainty in the original pipeline was therefore modified in our reanalysis of the data. This problem was described in the ISR COS 2021-03 (Johnson et al., 2021), where it is shown that the number of received counts is described by a binomial distribution with an asymmetric shape at low count levels (\(N\leq 10\)). The CaLCOS pipeline uses the method developed by Gehrels (1986) to estimate flux uncertainties in this case. However, we found that the 1-\(\sigma\) uncertainties derived by Gehrels (1986) corresponds to 63.8% quantile interval, which is shifted to positive values relative to the uncertainties derived with the maximum likelihood estimate. This shift slightly reduces the negative uncertainty and increases the positive uncertainty. In spectra corrected for this shift, the 1-\(\sigma\) uncertainties correspond well to the flux dispersion values. Therefore, flux errors were reevaluated based on the modified estimates. Further details are provided in Appendix A. The procedure for the subtraction of the noise background also does not work well in a low-counts regime, and was therefore also modified. Originally, for each exposure the average background flux was calculated from the detector area free from the science target and wavelength calibration lamp signals. This average background flux was subtracted from the science spectrum in each exposure. However, in cases of low flux levels, the number of noise counts is also very low (e.g., 1-2 counts per 10 pixels), therefore the average background flux corresponded to a fractional number of counts (about \(\sim 0.2\) counts/pixel) and its subtraction shifted the zero-flux level to negative values, which was also observed in the final spectrum of the coadded exposures in the cores of saturated absorption lines. Therefore, instead of using this method for background subtraction, we used the following approach: we derived the background flux from the final spectrum of coadded "background" exposures, which were extracted by the same method as the method used for the extraction of the science exposures, but the method was applied to a shifted extraction box in the detector area free from the science target and wavelength calibration lamp signals (and with a minimum content of bad quality pixels, including the gain sag hole and poorly calibrated regions). The "background" exposures were next coadded, rebinned, smoothed by 10 pixels and then subtracted from the science exposure. 
This approach allows for a more accurate estimate of the average background flux level (since it gives a number of noise counts per bin strongly exceeding 1) and enables the calculation of the wavelength-dependence of the background. #### 2.2.2 Spectral Analysis For each quasar/AGN, we analyzed the absorption systems at the redshifts of associated MaNGA galaxies. Fig. 3 shows examples of our analysis for the four galaxies shown in Fig. 2. Fits for the remaining sight lines are shown in Appendix B. To perform this analysis, we used a custom modification of the Voigt profile fitting code3 to derive the redshifts, column densities and Doppler parameters of velocity components for H i, N i, O i, Ar i and various low-ionization (Si ii, S ii, C ii, N ii, Fe ii) and high-ionization (Si iii, Fe iii, N v, O vi) metal absorption lines. The wavelengths and oscillator strengths for these transition were taken from Morton (2004). Further details about this code and examples of its usage can be found in Balashev et al. (2017); Balashev et al. (2019). Footnote 3: [https://github.com/balashev/spectro](https://github.com/balashev/spectro) For each absorption line (except a case of damped H i Ly\(\alpha\) described below) we derived the local continuum using a B-spline interpolation matched on the adjacent unabsorbed spectral regions. The spectral pixels used to constrain the fit were selected visually. The number of velocity components was also defined visually and increased in case of remaining structure in the residuals. Since our spectra have low signal-to-noise ratio (\(\sim\)1-10) and given medium spectral resolution (\(\sim 20\) km s\({}^{-1}\)), we can not resolve the velocity structure in detail, therefore the redshifts and Doppler parameters were tied for all \begin{table} \begin{tabular}{l c c c c c c c c} \hline MaNGA & \(z_{\rm gal}\) & \(\log M_{\star}\) & SFR & \(\log\) sSFR & \(D_{n}(4000)\)a & quasar & \(z_{\rm quasar}\) & Imp & Par. 
& \(R_{e}\) & \(b/R_{e}\) \\ ID & & [\(M_{\odot}\)] & \(M_{\odot}\) yr\({}^{-1}\) & [yr\({}^{-1}\)] & & & (\(b\)) kpc & kpc & \\ \hline 1-71974 & 0.03316 & 10.30 & 1.72 & \(-\)10.06 & 1.30 & J0755+3911 & 0.0332 & 0 & 4.9 & 0 \\ 1-385099b & 0.02866 & 10.69 & 0.20 & \(-\)11.38 & 1.54 & J0838+2453 & 0.0287 & 0 & 5.4 & 0 \\ 12-192116 & 0.02615 & 8.80 & 0.59 & \(-\)9.03 & 1.22 & J1338+2620 & 0.0261 & 0 & 3.3 & 0 \\ 1-594755 & 0.03493 & 10.78 & N/A & N/A & 1.23 & J1653+3945 & 0.0349 & 0 & 1.3 & 0 \\ 1-575668 & 0.06018 & 11.02 & N/A & N/A & 1.87 & J1237+4447 & 0.4612 & 39 & 10.6 & 3.8 \\ 1-166736 & 0.01708 & 8.98 & 0.08 & \(-\)10.07 & 1.31 & J0950+4309 & 0.3622 & 23 & 3.4 & 6.9 \\ 1-180522c & 0.02014 & 9.31 & 0.55 & \(-\)9.56 & 1.26 & J2130\(-\)0025 & 0.4901 & 34 & 4.1 & 8.5 \\ 1-561034 & 0.09008 & 10.86 & N/A & N/A & 1.86 & J1709+3421 & 0.3143 & 75 & 6.0 & 12.8 \\ 1-58520rb & 0.02825 & 10.09 & N/A & N/A & 1.93 & J0838+2453 & 0.0287 & 37 & 2.4 & 15.5 \\ 1-113242 & 0.04372 & 10.88 & N/A & N/A & 2.14 & J2106+0909 & 0.3896 & 116 & 5.5 & 22.9 \\ 1-44487d,e & 0.03157 & 10.47 & 1.34 & \(-\)10.34 & 1.74 & J0758+4219 & 0.2111 & 136 & 6.2 & 22.6 \\ 1-44487d,e & 0.03174 & N/A & N/A & N/A & N/A & J0758+4219 & 0.2111 & 137 & N/A & N/A \\ 1-564490 & 0.02588 & 10.15 & 2.60 & \(-\)9.73 & 1.27 & J1629+4007 & 0.2725 & 132 & 5.7 & 23.8 \\ 1-635629e & 0.01989 & 9.56 & 1.26 & \(-\)9.45 & 1.34 & J2130\(-\)0025 & 0.4901 & 66 & 1.7 & 38.7 \\ \hline \end{tabular} \end{table} Table 1: The physical properties of the sample of MaNGA galaxies and background quasars Figure 1: The images of four galaxies with zero-impact parameter. Each panel shows the SDSS three-color image of the area near the MaNGA galaxy. The image is centered at the position of the galaxy. The pink hexagons show the sky coverage of the MaNGA IFU. The yellow circles show the position of the HST/COS aperture centered on the galaxy nucleus. species for each velocity component. The fit to each absorption lines was calculated by the convolution of the synthetic spectrum with the COS line spread function (LSF) chosen for the appropriate COS setting.4 For weak lines, we present measurements of column densities where possible, and 3-\(\sigma\) upper limits in cases of non-detections. Footnote 4: The COS LSF has broad non-Gaussian wings and the shape varies with the wavelength. We used the approximation by the piece-wise function taken from the COS documentation, see e.g. [https://www.stsci.edu/hst/instrumentation/cos/performance/spectral-resolution](https://www.stsci.edu/hst/instrumentation/cos/performance/spectral-resolution) The H i column density was measured from the Voigt profile fit to the Ly\(\alpha\) line5. For most of our spectra the H i line is not damped (with \(N({\rm HI})\leq 10^{18}\) cm\({}^{-2}\)) and corresponds to the linear or flat parts of the curve of growth. In these cases, the number of components and Figure 2: The images of galaxy-quasar pairs with non-zero impact parameter. The panels are arranged in order of increasing impact parameter. Each panel shows the SDSS three-color image of the area near the MaNGA galaxy. The image is centered at the position between the quasar and the galaxy. The pink hexagon shows the sky coverage of the MaNGA IFU. The yellow circle represents the position of the quasar and has the size of the HST/COS aperture. The distance between the quasar and the center of the MaNGA galaxy at the galaxy redshift (the impact parameter) is shown by the pink link. \(b\)-values should be accurately constrained. 
Therefore the number of components was defined visually from fitting to the profiles of associated metal lines (Si ii, Si iii, C ii), and increased in case of remaining structure in the residuals. The range of \(b\)-parameters was constrained to \(15-100\) km s\({}^{-1}\) (the values of \(b\)-parameters measured for H i absorbers at \(z\leq 1\) in the COS CGM Compendium (Lehner et al., 2018), where H i lines were fitted using several transitions in the Lyman series). Then the redshift, \(b\)-parameter and H i and metal column densities for each component were varied together using the AffineInvariantMCMC sampler by MADS6 to obtain the posterior probability density function (PDF) of the fitting parameters. The column density of H i and metals and the \(b\)-parameters can be rather uncertain for individual components that are blended, however the total column densities summed over the components are usually well constrained (with uncertainty \(\sim 0.3\) dex). We demonstrate the posterior PDF of fitting parameters for systems shown in Fig. 3 in Appendix B (see Fig. 17), along with comments to fits for individual systems. Footnote 6: [http://madsjulia.github.io/Mads.jl/](http://madsjulia.github.io/Mads.jl/) For one case, the galaxy J1338+2620, the absorption Ly\(\alpha\) line shows damping wings and is located close to the galaxy Ly\(\alpha\) emission line. An interesting feature of this spectrum is that both the emission Ly\(\alpha\) line and absorption Ly\(\alpha\) line are shifted relative to their expected positions. The emission Ly\(\alpha\) line is redshifted by \(150\) km s\({}^{-1}\) with respect to the the galaxy redshift derived by positions of other emission lines (H\(\alpha\), H\(\beta\), S ii, N ii, O iii) seen in the SDSS spectrum. The emission lines are very narrow \(\sim 300\) km s\({}^{-1}\) (similar to those for type II Seyfert galaxies), which allows the redshift to be well-constrained. The Ly\(\alpha\) absorption line has a broad core (\(\sim 300\) km s\({}^{-1}\) wide) and its center is blue-shifted by \(-150\) km s\({}^{-1}\) with respect to the strongest component of the metal absorption lines (Si ii, S ii, O i). Additionally we detect the decrease of the local continuum near the Ly\(\alpha\) absorption and Ly\(\alpha\) emission lines, which is consistent with the presence of the damped Ly\(\alpha\) (DLA) absorption line with broad damping wings and high H i column density (\(10^{20.3}\) cm\({}^{-2}\)). However, such a H i Ly\(\alpha\) line is expected to have a broad bottom \(\sim 600\) km s\({}^{-1}\), twice the observed value. We believe this situation is similar to studies of proximate DLA absorption systems in quasar spectra, which work as a natural coronagraph for the Ly\(\alpha\) emission from the accretion disk, while the leaking Ly\(\alpha\) emission remains partially blended in the wings of the DLA system (see e.g. Noterdaeme et al., 2021). In this case we simultaneously fitted both the absorption profile and the unabsorbed quasar continuum. We consider two potential ways to fit the Ly\(\alpha\) line in this spectrum: (i) it could be a sub-DLA system which covers the Ly\(\alpha\) emission line only partially. In this case the local continuum was modeled as the sum of a smooth component and a Gaussian emission line. The smooth component represents the flat part of the quasar continuum and was reconstructed locally by fitting with a B-spline interpolation. The Ly\(\alpha\) emission line was fitted by a Gaussian function centered on the redshift of the quasar. 
The H i Ly\(\alpha\) absorption line was fitted by the sum of four velocity components, whose redshifts were tied to the redshifts of the velocity components in metal absorption lines (O i, Si ii, S ii, Fe ii, Si iii). The detailed fit to metal lines is provided in \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Quasar & \(z_{\rm quasar}\) & RA & DEC & Date & COS Setting & \(T_{\rm exp}^{\rm a}\) & SNRb \\ & & (J2000.0) & (J2000.0) & & & (s) & \\ \hline J0755+0311 & 0.0332 & 118.85 & 39.18 & 2020 Sep 11 & G130M/1291 2192 & 13.5 \\ J0758+4219 & 0.2111 & 119.58 & 42.32 & 2020 Sep 06 & G130M/1222 2147 & 5.3 \\ J0838+2453 & 0.0287 & 129.55 & 24.89 & 2020 Oct 29 & G130M/1222 7436 & 7.4 \\ J0950+4309 & 0.3622 & 147.56 & 43.15 & 2020 Nov 27 & G130M/1222 15254 & 9.4 \\ J1237+4447 & 0.4612 & 189.39 & 44.79 & 2020 Dec 16 & G130M/1222 4986 & 9.9 \\ J1338+2620 & 0.0261 & 204.51 & 26.34 & 2020 Dec 17 & G130M/1222 2062 & 4.5 \\ J1629+4007 & 0.2725 & 247.26 & 40.13 & 2020 Sep 02 & G130M/1222 7631 & 10.4 \\ J1653+3945 & 0.0349 & 253.47 & 39.76 & 2020 Oct 08 & G130M/1222 2086 & 13.7 \\ J1709+3421 & 0.3143 & 257.49 & 34.36 & 2020 Sep 03 & G130M/1291 9592 & 12.0 \\ J2106+0909 & 0.3896 & 316.71 & 9.16 & 2020 Oct 10 & G130M/1222 7376 & 4.5 \\ J2130\(-\)0025 & 0.4901 & 322.59 & \(-\)0.43 & 2020 Sep 12 & G130M/1222 19310 & 7.8 \\ \hline \end{tabular} \end{table} Table 2: HST COS observing log Appendix B. In this case, we derive the total H i column density (\(10^{19.2\pm 0.1}\) cm\({}^{-2}\)), and the best fit is shown in the top panel of Fig. 4. We note, however, that (i) we needed to decrease the continuum level manually in the vicinity of the sub-DLA system and (ii) the strongest H i component (\(10^{19.2\pm 0.1}\)) is shifted to \(-150\) km s\({}^{-1}\) relative to the strongest component in the metal absorption lines. The second possibility is a combination of a broader, more damped Ly\(\alpha\) absorption line and leaked Ly\(\alpha\) emission. The damped Ly\(\alpha\) line was centered at the redshift of the strongest metal component at \(z=0.026043\). The profile of the leaked Ly\(\alpha\) emission is non-Gaussian, therefore it was fitted by a sum of three Gaussian lines. In this case we derive the H i column density \(10^{20.2\pm 0.1}\) cm\({}^{-2}\). The fit is shown in the middle panel of Fig. 4. The absorption Ly\(\alpha\) line is broad (\(\sim 600\) km s\({}^{-1}\)) and likely cover the emission Ly\(\alpha\) line from the galactic center completely. In the bottom panel we also show the fit by a model, where the galaxy Figure 3: The absorption lines of H i, Si iii, Si ii, C ii in the HST COS spectra of J0758+4219, J0950+4309, J1237+4447 and J2130\(-\)0025 at the redshifts of the corresponding galaxies (1-44487, 1-166736, 1-575668 and 1-180522, respectively). The synthetic profile is shown in red and the contribution from each component, associated with the studied galaxy, is shown in green, blue, purple and orange. Dashed vertical lines represent the position of each component. Vertical ticks indicate the position of absorption lines, associated with the Milky Way (MW, magenta sticks) and remote galaxies. Ly\(\alpha\) emission line at the quasar redshift is added to the fit. However the difference in the fit profile and the derived H i column density is small, compared to for the fit (middle panel) without including the galaxy Ly\(\alpha\) emission (\(\sim 0.1\) dex). 
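To make the adopted model concrete, the following minimal Python sketch evaluates a damped Ly\(\alpha\) absorption profile applied to a smooth continuum, with leaked Ly\(\alpha\) emission added on top of (i.e., not attenuated by) the absorber. The flat continuum and the emission-line amplitude, centre, and width are placeholders rather than the fitted values.

```python
import numpy as np
from scipy.special import wofz

def tau_lya(wav_A, z, logN, b_kms):
    """Ly-alpha optical depth (Voigt profile) for log N(HI) [cm^-2] and b [km/s]."""
    lam0, f_osc, gamma = 1215.67, 0.4164, 6.265e8     # Angstrom, f-value, damping constant [1/s]
    c_kms = 2.998e5
    lam_rest = wav_A / (1.0 + z)
    x = c_kms * (lam_rest - lam0) / (lam0 * b_kms)     # velocity offset in units of b
    a = gamma * (lam0 * 1e-8) / (4.0 * np.pi * b_kms * 1e5)
    H = wofz(x + 1j * a).real                          # Voigt function H(a, x)
    tau0 = 1.497e-2 * 10.0**logN * f_osc * (lam0 * 1e-8) / (b_kms * 1e5)
    return tau0 * H

z_abs = 0.026043                                       # redshift of the strongest metal component
wav = np.linspace(1230.0, 1265.0, 2000)                # observed-frame wavelength grid [Angstrom]
continuum = np.ones_like(wav)                          # placeholder flat continuum
absorbed = continuum * np.exp(-tau_lya(wav, z_abs, logN=20.2, b_kms=30.0))
leaked = 0.5 * np.exp(-0.5 * ((wav - 1215.67 * (1 + z_abs) - 1.0) / 0.8) ** 2)
model = absorbed + leaked                              # leaked emission is not absorbed
```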
The advantage of the fitting approach including the leaked Ly\(\alpha\) emission is that (i) it can describe the decrease of the local continuum near the Ly\(\alpha\) absorption without manual modification of the smooth B-spline fit and (ii) the redshift of H i component matches the redshift of the strongest metal components well. Therefore we adopt the column density of H i for this system to be \(10^{20.2\pm 0.1}\,\mathrm{cm}^{-2}\). ### SDSS-IV/MaNGA Data The Mapping Nearby Galaxies at APO (MaNGA; Bundy et al., 2015) survey is one of the three main components making up the Sloan Digital Sky Survey IV (SDSS-IV; Blanton et al., 2017). Completed in June 2020, MaNGA made integral field unit (IFU) spectroscopic observation of just over 10,000 galaxies using the 2.5m Sloan Telescope at Apache Point Observatory (Gunn et al., 2006). These galaxies were selected from the extended version of the NASA-Sloan Atlas (NSA; Blanton et al., 2017; Wake et al., 2017) to be in the redshift range of \(0.01<z<0.15\) and have an approximately flat number density distribution as a function of stellar mass between \(10^{9}\) and \(10^{12}\) M\({}_{\odot}\). The targets were further chosen so that they could be covered by the MaNGA IFU bundles out to either a radius of 1.5 or 2.5 times the effective radius (\(R_{e}\)). Full details of the MaNGA sample selection are given in Wake et al. (2017). The 17 MaNGA IFU bundles are hexagonal in shape with sizes ranging from 12" to 32" matched to the typical angular size distribution of the target sample of galaxies. In addition, there are 12 seven-fiber mini-bundles which are placed on flux calibration stars, and 92 single fibers for sky subtraction (Drory et al., 2015). All the fibers feed the dual-channel Baryon Oscillation Spectroscopic Survey (BOSS) spectrographs (Smee et al., 2013), which cover a wavelength range of 3,622A to 10,354A with a median spectral resolution of \(\sim\)2,000. In this paper we use the reduced MaNGA data produced by the MaNGA Data Reduction Pipeline (DRP; Law et al., 2016) as well as derived data products produced by the MaNGA Data Analysis Pipeline (DAP; Westfall et al., 2019). These derived products include maps of various emission lines ([O ii], H\(\beta\), [O ii], [N ii], H\(\alpha\), [S ii]), and emission line and stellar velocities. We access and interact with MaNGA data using the Marvin (Cherinka et al., 2019) Python package. We also make use of integrated galaxy properties included in the MaNGA dataset that are derived from the extended version of the NSA. These include redshift, total stellar mass (\(M_{*}\)), elliptical effective radius (\(R_{e}\)), and inclination, all derived from elliptical Petrosian aperture photometry (see Wake et al., 2017, for details). #### 2.3.1 Nebular Metallicity and Ionization Parameter In order to connect the properties of the CGM absorption systems detected in our COS spectra with the gas within the MaNGA galaxies, we derive maps of the metallicity and ionization parameter of emission lines originating in the nebulae photoionized by massive stars. To make these measurements, we use of the Bayesian strong emission line (SEL) fitting software IZI initially presented by Blanc et al. (2015) and extended to utilize MCMC, additionally fitting for extinction by Mingozzi et al. (2020). 
IZI compares a grid of photoionization models with a set of SELs and their uncertainties, to derive the marginalized posterior probability density functions (PDFs) for the metallicity (\(12+\log(\mathrm{O/H})\)), the ionization parameter (\(\log(q)\)), and the line-of-sight extinction (\(E(B-V)\)). Such an approach takes into account the covariance between these parameters, which is not insignificant. In this work we follow the approach of Mingozzi et al. (2020), who ran IZI on a subset of the MaNGA sample. We use the photoionization model grids presented in Dopita et al. (2013) fitting for [O ii]\(\lambda 3726,3729\), H\(\beta\), [O iii]\(\lambda 4959,5007\), [N ii]\(\lambda 6548,6584\), H\(\alpha\), [S ii]\(\lambda 6717\), and [S ii]\(\lambda 6731\). We only fit spaxels that are classified as star-forming according to either the [N ii]- or [S ii]-Baldwin, Phillips & Terlevich (BPT) diagrams (using the regions defined by Kauffmann et al. (2003) and by Kewley et al. (2001)). We further restrict to spaxels with an H\(\alpha\) \(S/N>15\), which ensures sufficient \(S/N\) in the remaining SELs we use. Fig. 5 presents an example of MaNGA observations of the galaxy 1-564490. It shows maps of H\(\alpha\) flux, gas kinematics, SFR and the physical conditions derived from IZI modelling. #### 2.3.2 Galaxy Rotational Velocities Beyond the ionization properties described above we are also interested to see if there is any association between the velocity of the CGM absorption systems and galaxy rotational velocity. One might imagine the absorption systems tracing the gas dynamics at large radii. In order to make such a connection we fit disk rotation models to the stellar and gas velocity fields using models similar to those described in Bekiaris et al. (2016). We assume a flat thin disc in all cases linking the observed coordinates \((x,y)\) to the projected major and minor axes coordinates of the disc \((x_{\rm e},y_{\rm e})\) using: \[x_{\rm e}=-(x-x_{\rm o})\sin PA+(y-y_{\rm o})\cos PA, \tag{1}\] \[y_{\rm e}=-(x-x_{\rm o})\cos PA-(y-y_{\rm o})\sin PA, \tag{2}\] where PA is the position angle and \((x_{0},y_{0})\) are the coordinates of the center of the disk. We define the radius of the disk \(r\) in the disk plane at the observed coordinates \((x,y)\) as: \[r=\sqrt{x_{\rm e}^{2}+\Big{(}\frac{y_{\rm e}}{\cos i}\Big{)}^{2}}, \tag{3}\] where \(i\) is the inclination of the disc to the line of sight. The position angle relative to the major axis of the disc, \(\theta\), at \((x,y)\) is given by \[\cos\theta=\frac{x_{\rm e}}{r}. \tag{4}\] Figure 4: Fit to the H i Ly\(\alpha\) absorption line in the spectrum of galaxy J1338+2620. Top, middle and bottom panels show different solutions: sub-DLA + galactic Ly\(\alpha\) line, DLA + leaked Ly\(\alpha\) emission, DLA + leaked Ly\(\alpha\) emission + galactic Ly\(\alpha\) line, respectively (see details in the text). The black line represents the observed HST/COS spectrum, the red line shows the best fit. The red shaded area represents a variation of the synthetic fit due to the variation of H i column density within the derived uncertainty. The profile of the absorption Ly\(\alpha\) line is shown by the red dashed curve. The smooth part of the reconstructed continuum is shown by the dashed pink curve. The cyan dashed curves in the top and bottom panels represent the reconstructed emission Ly\(\alpha\) lines from the galactic center. The blue, orange and green dashed lines in middle and bottom panels show the components used to fit the leaked Ly\(\alpha\) emission.
The red and black vertical lines denote the redshift of the strongest metal absorption component and the redshift of quasar, respectively. The derived total H i column density is given in the top left corner of each panel. To model the rotation curve we make use of a two-parameter \(arctan\) profile (Willick et al., 1997): \[V_{\rm rot}(r)=\frac{2}{\pi}V_{\rm t}\arctan\frac{r}{r_{\rm t}}, \tag{5}\] where \(V_{\rm rot}(r)\) gives the rotation velocity at radius \(r\), \(r_{\rm t}\) is the turnover radius and \(V_{\rm t}\) is the asymptotic circular velocity. At large radii beyond \(r_{\rm t}\) this model represents a very slowly rising rotation curve. Our final model for the velocity in the plane of the sky is given by \[v_{\rm model}(x,y)=V_{\rm sys}+V_{\rm rot}(r)\sin i\cos\theta. \tag{6}\] where \(V_{\rm sys}\) represents any velocity offset from the systemic redshift used to generate the MaNGA velocity field. This model contains seven free parameters that we must fit for. For all galaxies we attempt to fit both the emission line and stellar velocity maps provided by the MaNGA DAP. We make use of the default MILESHC-MASTARSSP hybrid maps, which use a Voronoi binning scheme for the stellar velocities and individual spaxels for the emission line velocities (see Westfall et al., 2019, for details). For the stellar velocity maps we fit to all Voronoi bins that have not been masked by the DAP and have a S/N \(>\) 10. For the emission line maps we again exclude all masked spaxels and fit to those spaxels where any of the H\(\alpha\), [O ii], or [O iii] lines have a S/N \(>\) 5. We also mask any regions of the maps not associated with the target galaxy, for instance the very close satellite galaxy of 1-44487. We fit our model using the MCMC code emcee (Foreman-Mackey et al., 2013). We make an initial simpler fit to estimate the position angle and use that as our initial guess for that fit parameter. For the center, inclination, and \(r_{\rm t}\) we make initial estimates based on the NSA photometry. For \(V_{\rm sys}\) our initial estimate is the median velocity within 0.5 \(R_{e}\). Finally, we set \(V_{\rm t}\) to 200 km s\({}^{-1}\) as our initial guess for all galaxies. For each fit, use 64 walkers each with 20,000 steps, discarding the first 10,000. We fit both the emission and stellar velocity maps simultaneously and each independently, potentially giving three fits for each galaxy. ## 3 HST/COS Fitting Results We detect associated absorption for 11 out of the 14 MaNGA galaxies. H i Ly\(\alpha\) absorption is detected in all 11 of these cases, while Si ii and Si iii are detected in 7 of the 11 cases. For two sight lines, each of which has two galaxies with closely spaced redshifts, we detect absorption in H i, Si ii, Si iii (and C ii in one case), but we can not reliably determine which galaxy corresponds to which velocity component in the detected absorption. In three cases, we do not detect any absorption (in H i or any of the metal ions) within the range of \(\pm 800\) km s\({}^{-1}\) relative to the galaxy redshifts; in these cases, we set upper limits on \(N({\rm HI})\sim 10^{13}\) cm\({}^{-2}\). For two of these sight lines, J1709+3421 and J2106+0909, the absence of any absorption may be because of high values of the Figure 5: The physical properties of the SDSS MaNGA galaxy 1-564490. Top panels show from left to right the SDSS three-color image, the H\({}_{\alpha}\) line emission map, the H\({}_{\alpha}\) line velocity map, and the stellar velocity map. 
Bottom panels shows the BPT diagram and maps of IZI metallicity, IZI ionization parameter and the density of star formation rate. impact parameters 75 and 116 kpc, respectively (with \(b/R_{e}\) of 12.8 and 22.9). The absence of any lines is more surprising in the third case J1653+3945, a galaxy sight line with zero impact parameter, and may be a result of high ionization of the gas. We discuss ionization corrections in Section 3.1 below. Table 3 summarizes the results of our fits. We present the absorption redshifts, total column densities of H i and associated strongest metal ions (Si ii, Si iii and C ii) and results of the photo-ionization code simulations. We refer to the sight lines with zero impact parameters as the "galactic" sight lines and list them in the first five lines of Table 3 before the remaining sight lines that we refer to as "quasar sight lines". The H i column density ranges from \(\sim 10^{13}\) cm\({}^{-2}\) to \(\sim 10^{20.2}\) cm\({}^{-2}\) for the AGN sight lines with H i detections and from \(\sim 10^{14}\) cm\({}^{-2}\) to \(\sim 10^{19}\) cm\({}^{-2}\) for the quasar sight lines with H i detections (i.e. with \(N(\rm HI)>10^{13}\) cm\({}^{-2}\)). For most of the systems we detect associated absorptions of low (Si ii, C ii, N ii) and high (Si iii) ionization ions and also set upper limits on weak absorption by N i, N v, O i, and Fe ii. The detailed fit for each system is shown in Appendix B and the fit results to individual velocity components are presented in Table 5. In Fig. 6, we examine the dependence of the column densities of the strongest metal ions detected in absorption (Si ii, Si iii and C ii) on the H i column density. There is an overall increase of Si ii, Si iii and C ii column densities with \(N(\rm HI)\). A similar trend was also reported previously by Lehner et al. (2018), Muzahid et al. (2018) and Werk et al. (2013) in the HST COS surveys of H i absorption systems in quasar spectra: COS CGM Compendium (at \(z_{\rm abs}<1\) ), COS-Weak (\(z_{\rm abs}<0.3\)) and COS-Halos (\(z_{\rm abs}<0.35\)), respectively. The results for our quasar sightlines are consistent with these trends. A difference is observed for the AGN sight lines: for J0838+2453 and J0755+0311 we detect higher metal column densities than those predicted by the trend for quasar absorption systems, for J1338+2620, the metal column densities are slightly lower. To demonstrate this in detail, we also show in Fig. 6 theoretical constraints on the metal and H i column densities calculated under simple assumptions: \(N(\rm X)/N(HI)=(X/H)_{\odot}Zf_{\rm X}/f_{\rm HI}\), where \((\rm X/H)_{\odot}\) is the solar abundance of element \(X\), \(Z\) is the metallicity relative to the solar level from Asplund et al. (2009), \(f_{\rm HI}=N(\rm HI)/N(H_{\rm tot})\) is the H i fraction, and \(f_{X}=N_{X}/N(X_{\rm tot})\) is the fraction of element X in the particular ionization stage considered. For physical conditions expected in the ISM and CGM, we assume the ranges \(Z=0.1-1\), \(f_{\rm HI}=10^{-3}-10^{-1}\) and \(f_{\rm X}=0.1-1\), and vary the factor \(Zf_{\rm X}/f_{\rm HI}\) between \(10^{-1}\) and \(10^{3}\). These constraints are fulfilled for all quasar absorption systems from our sample and from other COS surveys, while the detections and upper limits for our AGN sight lines are beyond these constraints. 
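To make these theoretical constraints concrete, the short sketch below evaluates the expected Si ii column density for a given \(N(\rm HI)\) from the scaling \(N({\rm X})/N({\rm HI})=({\rm X/H})_{\odot}\,Z\,f_{\rm X}/f_{\rm HI}\) quoted above. This is a minimal illustration under stated assumptions: the solar silicon abundance is taken as \(12+\log({\rm Si/H})_{\odot}=7.51\) (Asplund et al. 2009), and the function and variable names are ours rather than part of the analysis code.

```python
import numpy as np

# Solar silicon abundance from Asplund et al. (2009): 12 + log(Si/H) = 7.51
SI_H_SOLAR = 10 ** (7.51 - 12.0)

def log_n_metal(log_n_hi, Z, f_x, f_hi, xh_solar=SI_H_SOLAR):
    """Expected metal column density (log10 cm^-2) for a given log10 N(HI),
    using N(X)/N(HI) = (X/H)_sun * Z * f_X / f_HI."""
    return log_n_hi + np.log10(xh_solar * Z * f_x / f_hi)

# Ranges quoted in the text: Z = 0.1-1, f_HI = 1e-3 - 1e-1, f_X = 0.1-1,
# i.e. the factor Z * f_X / f_HI varies between 1e-1 and 1e3.
log_n_hi = np.linspace(13.0, 20.5, 50)
lower = log_n_hi + np.log10(SI_H_SOLAR * 1e-1)   # Z * f_X / f_HI = 0.1
upper = log_n_hi + np.log10(SI_H_SOLAR * 1e3)    # Z * f_X / f_HI = 1000

for nhi, lo, hi in zip(log_n_hi[::10], lower[::10], upper[::10]):
    print(f"log N(HI) = {nhi:4.1f}: log N(SiII) in [{lo:5.1f}, {hi:5.1f}]")
```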
We also present the Spearman rank-order correlation coefficient (\(r_{S}\)) and the probability that the observed value of \(r_{S}\) could arise purely by chance (\(p\)-value) for all our systems and quasars only in the left top corner of each panel in Fig. 6. The ratio \(N(\rm SiIII)/N(\rm SiII)\) shows a statistically significant correlation (\(r_{\rm S}=1.0\) and \(p=0.0\)) with the H i column density for our quasar sight lines (although we caution that our sample consists of only three measurements). This correlation is consistent with the correlation seen in the COS-Weak survey, which, however, had low statistical significance (\(r_{\rm S}=0.13\) and \(p=0.63\)). For sightlines in the COS-Halos survey, there are mainly lower limits. We also note that the samples of quasar absorption systems from the COS-Weak and COS CGM Compendium surveys were selected by a blind method (or based on availability in the HST archives), while the quasar sight lines in our sample and from the COS-Halos survey were selected to have relatively small impact parameters. The consistency of our results with those of these other studies suggests that, on average, H i absorption with \(N(\rm HI)>10^{15}\) cm\({}^{-2}\) and associated metal features can correspond to the CGM of galaxies with impact parameters \(\leq\sim 140\) kpc. ### Ionization corrections Since our systems are not self-shielded from ionizing UV radiation, we need to calculate the ionization corrections to estimate the physical conditions and metallicity. We used the photo-ionization code cloudy to infer the ionization structure of systems and estimate the metallicity, ionization parameter and total hydrogen column density. We assumed a constant density model in a plane parallel geometry illuminated by the radiation field and cosmic rays (CRs). The radiation field was modeled as consisting of two parts: the extragalactic UV background (UVB) radiation at \(z=0.1\) as computed by Khaire & Srianand (2019)7, and the galaxy light component modelled by the interstellar radiation field as per the cloudy template, which is consistent with the Draine model in the UV range (Draine, 1978). The interstellar radiation field was scaled by the factor \(I_{\rm UV}\) to characterize the strength of the UV radiation from the nearby galaxy. This factor is especially important for our AGN sight lines (i.e., those with zero impact parameter), for which the distance of the absorbing region from the galaxy center is unknown. The UV and X-ray radiation produced by the galaxy (by stars and AGN) is generally ignored for the CGM absorption systems because the H i ionizing photons produced within the galaxy are assumed to be absorbed by the neutral hydrogen and dust within the galaxy. Indeed the average escape fraction of the H i ionizing photons from galaxies is assumed to be very low (\(<1\%\)) at \(z<1\), see e.g. (Khaire & Srianand, 2019). However, the UV spectral observations of nearby galaxies (including our AGN sightlines) do not show the strong damped Ly\(\alpha\) absorption line associated with the neutral hydrogen in those galaxies. Moreover, the spectra usually have strong Ly\(\alpha\) emission lines. This indicates that the H i ionizing radiation can leak out of the galaxies along these sightlines and increase the UV background around the galaxies. The intrinsic spectral energy distribution (SED) of the galactic radiation is unknown for our galaxies, therefore we chose one of the cloudy templates to model this radiation. 
The SED of this interstellar radiation model is similar to that for the UVB model, and about 100 times more intense in the range of 1-100 eV at \(I_{\rm UV}=1\). Therefore we varied this parameter in the range of \(-3\leq\log I_{\rm UV}\leq 1.0\) to allow for a wide range of values of the escape fraction and the galactic star formation rate/AGN activity. We also took into account the ionization of the CGM by cosmic rays (CR), given that simulations predict a strong effect of CR on the evolution of the CGM up to the distance about several hundreds of kpc from the galaxy (Salem et al., 2016). The intensity of CR ionization rate was consistently scaled with the same factor \(I_{\rm UV}\). The initial value of CR ionization rate was set to to the average value in the MW (\(2\times 10^{-16}\) s\({}^{-1}\)). The number density in the models is characterized by the parameter (\(n_{\rm H}\)) and the chemical composition by the parameter of gas metallicity [X/H]. The element abundance pattern was chosen according to the model by Jenkins (2009), where the parameter \(F_{\star}\) regulates the value of dust depletion. \(F_{\star}\) is varied from 0 to 1, where \(F_{\star}=0\) and \(F_{\star}=1\) denote the minimum and maximum level of depletion, respectively. The depletion pattern in these cases roughly corresponds to typical values seen in the MW halo and MW ISM (Welty et al., 1999). The size of the model cloud is calculated by cloudy in such a way that the modeled H i column density was equal to the observed value. Since the observed values of the H i column density (total and for individual components) are not very well constrained for the quasar sightlines (within \(\sim 0.1-0.7\) dex), we set the H i column \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \multicolumn{2}{c}{quasar} & \(z_{\rm abs}\) & \(\log N({\rm HI})\) & \(\log N({\rm SiII})\) & \(\log N({\rm SiIII})\) & \(\log N({\rm CII})\) & Si iii iii & [X/H] & \(\log q\) & \(\log N(H_{\rm tot})\) & \(\log f({\rm HI})\) & \(F_{\star}^{\star}\) \\ & & [cm\({}^{-2}\)] & [cm\({}^{-2}\)] & [cm\({}^{-2}\)] & [cm\({}^{-2}\)] & [cm\({}^{-2}\)] & & & & [cm\({}^{-2}\)] & \\ \hline J0755\(+\)0311 & 0.0330 & \(13.6^{+0.1}_{-0.1}\) & \(12.7^{+0.3}_{-0.5}\) & \(12.4^{+0.2}_{-0.5}\) & \(13.0^{+0.2}_{-0.3}\) & \(-0.3^{+0.5}_{-0.6}\) & \(1.2^{+0.2}_{-0.2}\) & \(-1.5^{+0.3}_{-0.3}\) & \(16.5^{+0.3}_{-0.3}\) & \(-2.8^{+0.3}_{-0.3}\) & \(0.0^{+0.4}_{-0.0}\) \\ J0838\(+\)2453A & 0.0280 & \(13.2^{+0.1}_{-0.1}\) & \(13.4^{+0.2}_{-0.2}\) & \(13.8^{+0.6}_{-0.5}\) & N/C & \(0.4^{+0.6}_{-0.5}\) & \(1.9^{+0.3}_{-0.3}\) & \(-0.8^{+0.2}_{-0.2}\) & \(16.7^{+0.3}_{-0.3}\) & \(-3.5^{+0.3}_{-0.3}\) & \(0.0^{+0.1}_{-0.0}\) \\ J0838\(+\)2453B & 0.0256 & \(14.0^{+0.1}_{-0.1}\) & \(<13.6\) & \(13.6^{+0.5}_{-1.0}\) & N/C & N/A & \(0.5^{+1.5}_{-0.5}\) & \(-1.1^{+0.5}_{-0.5}\) & \(17.1^{+1.5}_{-0.3}\) & \(-3.1^{+0.3}_{-1.5}\) & \(<1\) \\ J1338\(+\)2620 & 0.0260 & \(20.2^{+0.1}_{-0.1}\) & \(13.9^{+0.2}_{-0.2}\) & \(13.9^{+0.4}_{-0.2}\) & N/C & \(0.0^{+0.5}_{-0.3}\) & \(-0.4^{+0.4}_{-0.1}\) & \(-2.8^{+0.3}_{-0.2}\) & \(20.5^{+0.2}_{-0.2}\) & \(-0.4^{+0.4}_{-0.3}\) & \(0.8^{+0.2}_{-0.2}\) \\ J1653\(+\)3945 & 0.0341 & \(<12.8\) & \(<12.7\) & \(<12.0\) & N/C & N/A & N/A & N/A & N/A & N/A & N/A \\ \hline J1237\(+\)4447A & 0.0597 & \(17.2^{+0.3}_{-0.4}\) & \(<13\) & \(12.9^{+0.1}_{-0.1}\) & \(<14.7\) & N/A & \(-1.8^{+0.8}_{-0.8}\) & \(-2.9^{+0.9}_{-0.6}\) & \(19.4^{+0.5}_{-0.8}\) & \(-2.2^{+0.6}_{-0.1}\) & \(0.0^{+0.2}_{-0.0}\) \\ J1237\(+\)4447B & 0.0597 & 
\(15.2^{+0.5}_{-0.3}\) & \(<13\) & \(12.9^{+0.1}_{-0.1}\) & \(14.0^{+0.3}_{-1.2}\) & N/A & \(-0.2^{+0.5}_{-0.5}\) & \(-2.5^{+0.6}_{-0.4}\) & \(18.1^{+0.9}_{-0.5}\) & \(-2.9^{+0.7}_{-0.9}\) & \(0.0^{+0.2}_{-0.0}\) \\ J0950\(+\)4309 & 0.0170 & \(17.6^{+0.3}_{-0.7}\) & \(13.0^{+0.1}_{-0.1}\) & \(13.6^{+0.1}_{-0.1}\) & \(14.1^{+0.2}_{-0.2}\) & \(0.6^{+0.2}_{-0.2}\) & \(-0.6^{+0.2}_{-0.7}\) & \(-3.2^{+0.2}_{-0.2}\) & \(19.0^{+0.6}_{-0.2}\) & \(-1.4^{+0.6}_{-0.6}\) & \(0.0^{+0.2}_{-0.0}\) \\ J2130\(-\)0025 & 0.0195 & \(18.8^{+0.1}_{-0.1}\) & \(13.8^{+0.1}_{-0.2}\) & \(14.6^{+0.9}_{-0.5}\) & \(15.3^{+0.9}_{-0.5}\) & \(0.8^{+0.9}_{-0.9}\) & \(-1.1^{+0.2}_{-0.2}\) & \(-2.1^{+0.4}_{-0.5}\) & \(21.1^{+0.4}_{-0.4}\) & \(-2.3^{+0.6}_{-0.4}\) & \(0.1^{+0.3}_{-0.1}\) \\ J1709\(+\)3421 & 0.0880 & \(<13.5\) & \(<12.5\) & \(<13.0\) & N/C & N/A & N/A & N/A & N/A & N/A \\ J2106\(+\)0909 & 0.0442 & \(13.7^{+0.2}_{-0.2}\) & \(<13.0\) & \(<13.2\) & N/C & N/A & N/A & N/A & N/A & N/A \\ J0758\(+\)4219 & 0.0320 & \(15.3^{+0.3}_{-0.2}\) & \(13.2^{+0.1}_{-0.1}\) & \(13.4^{+0.1}_{-0.1}\) & N/C & \(0.2^{+0.1}_{-0.1}\) & \(0.8^{+0.3}_{-0.3}\) & \(-2.0^{+0.3}_{-0.4}\) & \(18.1^ density as an additional fitting parameter. Then we calculated a grid of models that uniformly covers the parameter space in the ranges of \(-3.5\leq\log n_{\rm H}/{\rm cm}^{-3}\leq 1.0\) (with a 0.5 dex step), \(-3\leq\log I_{\rm UV}\leq 1.0\) (with a 0.5 dex step), \(-3\leq\log[{\rm X}/{\rm H}]\leq 2.0\) (with a 0.5 dex step), \(0<F_{\star}<1\) (with a 0.25 step), and \(13<\log N({\rm HI})<20.5\) (with a 0.5 dex step). For each node of the grid, we saved the column densities of metals (Si ii, Si iii, S ii, C ii, N i, N ii, N v, Fe ii, O i) and the ionization parameter \(q=Q/4\pi R^{2}n_{\rm H}c\), and calculated interpolations of metal column densities and \(q\) on the grid. Then we calculated the likelihood function for the fitting parameters (\(n_{\rm H}\), \(I_{\rm UV}\), [X/H], \(F_{\star}\), \(N({\rm HI})\)) based on a least-squares comparison of the observed and modeled column densities for the various ionic species. For this, we used the Monte Carlo Markov Chain approach with implementation of the affine-invariant ensemble sampler. The parameters were varied simultaneously to derive maximum probability values and their uncertainties corresponded to 63.8% interval. The results are presented in Table 3 for the total column densities and Table 5 for the individual velocity components. The comparison of metal column densities predicted by cloudy with the observed one in the absorption systems is shown in Fig. 36 in Appendix B. The cloudy models allow us to describe the observed column densities relatively well. As can be seen from Fig. 36, the observed column density values and their uncertainties show good consistency with the predicted ranges of values for the ions of Si ii, Si iii, and C ii (which show strong absorption lines and are detectable in our data even at relatively low S/N ratio). For other ions whose lines are relatively weak (e.g., N i, N v, S ii), the cloudy models predict lower column densities than the observed values. This difference may Figure 6: The comparison of total column densities of Si ii, Si iii, C ii and the Si iii/ S ii ratio with \(N({\rm HI})\). Our systems are shown by diamonds (AGN sight lines) and circles (quasar sight lines). The color of points encodes the name of the systems in our sample. 
We also show upper limits for three non-detections: J1709+3421 (yellow circle), J1629+4007 (black circle), J1653+3945 (purple diamond). Grey, blue and cyan squares represent data from different COS surveys: COS-Weak (Muzahid et al., 2018), COS CGM Compendium (Lehner et al., 2018) and COS-Halos (Tumlinson et al., 2013; Werk et al., 2013) respectively. Spearman rank order correlation coefficient \(r_{s}\) and the \(p\)-value for our sample (all systems and only QSO absorptions) are given at the left top corner of each panel. Black lines indicate the range of theoretical constraints on the column densities of metal ions and H i for the parameters (\(Z\), \(f_{\rm HI}\)\(f_{\rm X}\)) typical for the ISM/CGM (see text). be caused by an overestimate of the measured column densities from the noisy spectra. In Table 6 we also present estimates of the metallicity and the ionization parameter in the individual velocity components. In some systems (e.g. J1237+4447 or J0950+4309), large differences are observed between the different components which may indicate substantial differences in physical conditions between the components (e.g., in the ionization parameters, which can cause different Si iii/Si ii ratios). In such cases, the fit to the total column densities is not reliable, and we need to analyze the parameters in individual components. In some cases, we are not able to resolve individual absorption components in the H i Ly\(\alpha\) line and therefore cannot accurately measure their H i column densities, although the total column density is well-constrained (e.g. J1338+2620). In the case of J1338+2620, we analyze the physical conditions assuming equal metallicity for the components. We also note that, in some cases (e.g., J0758+4219), there is good consistency between the physical conditions inferred for the different velocity components. Fig. 7 shows the comparison of the parameters derived with cloudy (metallicity, depletion level, ionization parameter, total hydrogen column density) and column density of H i. The metallicity spans over three orders of magnitude from -1.5 dex to 2 dex and is anti-correlated with the hydrogen column density. Low H i column density systems tend to have higher metallicities and higher ionization. Similar results were found by Muzahid et al. (2018) and Werk et al. (2014). There is a good agreement, but it is probably caused by fitting with the same photo-ionization code. Also we should remind that cloudy indeed contains many assumptions: perhaps most importantly, it is a 1-dimensional calculation. We note an interesting difference in the physical conditions between the absorbing regions in our quasar sight lines and AGN sight lines. The AGN absorbing regions are located at different ends of the distributions of the physical conditions. For J0755+0311 and J0838+2453, we detect high values of the metallicity and the \(q\) parameter, whereas J1338+2620 has a low metallicity and a low \(q\) parameter. The depletion level for J1338+2620 is also unusually high \(F_{\star}=0.8\pm 0.2\), while it is low for other systems (\(F_{\star}<0.3\)). A high value of \(F_{\star}\) is typical for the cold neutral phase of the ISM of the MW, while lower depletion level (\(\sim 0.2\)) corresponds to gas in the warm phase and galaxy halo (Welty et al., 1999). We speculate that the absorption in the J0755+0311 and J0838+2453 may be associated with outflowing gas driven by those AGN, while the absorption in J1338+2620 may be associated with inflowing cold gas falling into the AGN. 
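For concreteness, the grid-interpolation and MCMC fitting procedure described earlier in this section can be sketched as follows. This is a minimal illustration rather than the actual analysis code: the cloudy grid values (`grid_logN`), the observed column densities in `obs`, and the flat-prior Gaussian (least-squares) likelihood are placeholders and assumptions made for illustration; the affine-invariant ensemble sampler is provided by emcee (Foreman-Mackey et al. 2013).

```python
import numpy as np
import emcee
from scipy.interpolate import RegularGridInterpolator

# Grid axes with the steps quoted in Sect. 3.1.
log_nH  = np.arange(-3.5, 1.01, 0.5)
log_Iuv = np.arange(-3.0, 1.01, 0.5)
log_XH  = np.arange(-3.0, 2.01, 0.5)
F_star  = np.arange(0.0, 1.01, 0.25)
log_NHI = np.arange(13.0, 20.51, 0.5)
axes = (log_nH, log_Iuv, log_XH, F_star, log_NHI)

# grid_logN[ion] would hold the cloudy-predicted log column densities on this
# grid; here the arrays are zero-filled placeholders just to show the shapes.
shape = tuple(len(a) for a in axes)
grid_logN = {ion: np.zeros(shape) for ion in ("SiII", "SiIII", "CII")}
interp = {ion: RegularGridInterpolator(axes, grid_logN[ion]) for ion in grid_logN}

# Hypothetical observed total column densities and uncertainties (dex).
obs = {"SiII": (13.0, 0.1), "SiIII": (13.6, 0.1), "CII": (14.1, 0.2)}

def log_prob(theta):
    """Gaussian log-likelihood with flat priors restricted to the grid."""
    for value, axis in zip(theta, axes):
        if not (axis[0] <= value <= axis[-1]):
            return -np.inf
    chi2 = 0.0
    for ion, (logN_obs, sigma) in obs.items():
        logN_mod = interp[ion](theta)[0]
        chi2 += ((logN_obs - logN_mod) / sigma) ** 2
    return -0.5 * chi2

ndim, nwalkers = 5, 32
p0 = np.array([-2.0, -1.0, -0.5, 0.25, 17.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
```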
Also, we note that there is no detection of low metallicity and low \(N\)(HI) gas, which could correspond to infalling metal free gas in the outskirts of the galaxies. This may be caused by a selection effect due to the difficulty of detecting weak metal lines: for low \(N\)(HI) and low metallicity, we can set only upper limits on the metal column densities, that do not allow us to constrain physical conditions with cloudy, so the estimates for such absorbers are very uncertain. ## 4 Discussion The combination of the HST COS spectroscopic data for the targeted sight lines and the MaNGA maps of the galaxies provides a powerful way to directly compare the CGM properties of the sample galaxies with their stellar properties. We now consider the relations between the various galaxy and CGM properties derived from the available data and discuss our results. To put our work in broader perspective, we compare our results along with those for other galaxies from the literature (Kulkarni et al., 2022; Muzahid et al., 2018; Tumlinson et al., 2013) and references therein. ### H i column density and impact parameter First, we check the correlation of the total H i column density with the impact parameter8. We find that the quasar and galaxy sight lines in our sample show different behaviors. The quasar sight lines probe gas around galaxies with impact parameters ranging from 20 to 130 kpc. For them we find a decrease in the H i column density with increasing impact parameter. This result is in line with other studies of quasar-galaxy pairs at low redshift, such as the COS-Halos, COS-Weak, and the Galaxies on Top of Quasars (within impact parameters \(\sim 1-7\) kpc) surveys (e.g., Tumlinson et al., 2013; Muzahid et al., 2018; Kulkarni et al., 2022), and at higher redshift (\(z=0.3-1.2\), e.g., the MUSE-ALMA Halos (MAH) survey, Weng et al., 2023; Karki et al., 2023). A comparison of our results with these other studies is shown in Fig. 8. Footnote 8: The impact parameter denotes the lower limit to the distance between the galactic center and the absorption system along the quasar sightline. The real distance can be higher, however it is believed that the distribution of gas around galaxies strongly decreases with the distance, and therefore the impact parameter has the highest probability of gas detection. The AGN sight lines have, technically, zero impact parameter, but the absorbing gas can be separated from the galactic center at any distance along the sight line. In one case, J1338+2620, we found a high H i column density (\(\simeq 10^{20.2}\,\rm{cm}^{-2}\)), consistent with what is seen in quasar sightlines at very low impact parameters (Kulkarni et al., 2022). In other AGNs, the H i absorption lines are weak (\(\simeq 10^{13}\,\rm{cm}^{-2}\)) or not detected. A natural explanation in these latter cases may be a high ionization of the gas in the central outflow or (less likely) highly ionized gas in the IGM. To avoid confusion, we do not show the cases of the AGN sight lines in Fig. 8. The top left and right panels of Fig. 8 show the H i column density plotted versus the impact parameter in physical (proper) and comoving9 units, respectively. The physical units correspond to the distance in the rest frame of each galaxy and can be meaningfully compared to simulations in physical units. The comoving units factor out the cosmological expansion, allowing a comparison of the properties of galaxies at different redshifts. 
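For reference, the proper and comoving conventions differ only by a factor of \(1+z\); a trivial sketch (with illustrative numbers):

```python
def comoving_impact_parameter(b_physical_kpc, z):
    """Comoving impact parameter from the physical (proper) one."""
    return b_physical_kpc * (1.0 + z)

# Example: a 100 kpc (proper) impact parameter at z = 0.03 versus z = 1.0.
for z in (0.03, 1.0):
    print(z, comoving_impact_parameter(100.0, z))
```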
The difference between the physical and comoving impact parameters is not significant for our MaNGA galaxies due to their low redshifts (\(z=0.01-0.10\)), but is larger for higher-redshift galaxies in the literature (\(z=0.15-0.35\) for the COS-Halos galaxies, \(z=0-0.3\) for the COS-Weak galaxies, and \(z=0.3-1.2\) for the MAH galaxies), and for the simulations of the higher-redshift CGM.

Figure 7: The parameters of the ionization model ([X/H], \(F_{\star}\), \(\log q\), \(\log N({\rm H_{tot}})\)) and the observed H i column density of the absorption systems. The circles and diamonds represent results for our sample. The large symbols show the average values (from fitting to the total column densities), the small symbols show the values for individual components. The color scheme is the same as in Fig. 6. The grey and cyan squares represent data from Muzahid et al. (2018) and Werk et al. (2014). The additional panel in the top right corner shows the observed H i column density versus the fitted value from our cloudy simulation. The model values of \(N\)(HI) correspond well with the observed values.

We also plot in Fig. 8 the median radial profile of the H i column density and the 1\(\sigma\) scatter around it from magnetohydrodynamic simulations of an isolated Milky Way-mass galaxy at \(z=0-0.3\) by van de Voort et al. (2019) (based on the Auriga project, Grand et al., 2017), and from the study of the distribution of cold gas in the CGM around galaxy groups (\(\log M_{\rm halo}\sim 13.2-13.8\) and \(\log M_{*}\sim 11.3-11.8\)) at \(z=0.5\) from the post-processing of the TNG50 simulations by Nelson et al. (2020). Combining the observational data from the different studies mentioned above, we cover a wide range of impact parameters, from 1 to 150 kpc, relatively well. There is agreement between most of the observations and the radial profile from the simulations within the uncertainties, although we note higher H i column densities compared to the simulations for a few MAH galaxies at high impact parameters. These outliers have \(z>0.7\), which is higher than for the rest of the observed galaxies. A similar increase in the \(N\)(HI) profile at high impact parameters for high-redshift galaxies was reported earlier by Kulkarni et al. (2022). In comoving coordinates, the median \(N\)(HI) radial profiles from the Auriga and TNG50 simulations are in better agreement with each other, suggesting that the difference between them is probably due to the difference in redshift. Most of our MaNGA galaxies show weaker H i absorption than the simulated Auriga galaxy. The agreement with the simulated TNG50 galaxies is better than with the Auriga galaxy. We note, however, that our MaNGA galaxies are lower in redshift than the TNG50 galaxies, and are also lower in stellar mass than the Auriga and TNG50 galaxies.

The bottom left and right panels of Fig. 8 show the H i column density plotted versus the impact parameter normalized to the effective radius (\(R_{\rm e}\)) and virial radius (\(R_{\rm vir}\)). The virial radius was estimated as \(\left(3M_{\rm halo}/(4\pi\cdot 200\rho_{\rm cr})\right)^{1/3}\), where \(M_{\rm halo}\) was estimated from the \(M_{*}-M_{\rm halo}\) relation by Girelli et al. (2020) and \(\rho_{\rm cr}\) is the critical density at the redshift of the galaxy (a schematic version of this estimate is sketched below). For galaxies at low redshift, all absorbers classified as DLAs and several classified as sub-DLAs are associated with the region within \(\sim 3\) effective radii. Most LLSs appear to correspond to the region from \(\sim 3\) to \(\sim 30\) effective radii.
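A schematic version of this virial-radius estimate, with the halo mass left as a placeholder standing in for the Girelli et al. (2020) stellar-to-halo-mass relation (which we do not reproduce here), could look like the following:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck15 as cosmo

def virial_radius(m_halo, z):
    """R_vir such that the mean density inside it is 200 * rho_crit(z)."""
    rho_cr = cosmo.critical_density(z)          # mass density, g / cm^3
    r_vir = (3.0 * m_halo / (4.0 * np.pi * 200.0 * rho_cr)) ** (1.0 / 3.0)
    return r_vir.to(u.kpc)

# Placeholder halo mass; in the paper M_halo comes from the M_* - M_halo
# relation of Girelli et al. (2020).
m_halo = 10 ** 12.0 * u.Msun
print(virial_radius(m_halo, z=0.03))   # roughly 200 kpc for this halo mass
```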
The trend of \(N\)(HI) with \(b/R_{\rm e}\) is similar to the trend with \(b\). Comparison of \(N\)(HI) with \(b/R_{\rm vir}\) shows that all the detected H i absorbers are within the virial radius.

### H i column density versus stellar mass, sSFR and \(D_{n}(4000)\)

Fig. 9 shows the relations between the H i column density of the associated absorbers and the stellar mass, the specific star formation rate (sSFR = SFR/\(M_{*}\)) and the \(D_{n}(4000)\) index of the host galaxies, based on our sample and the literature. The stellar mass and sSFR in our sample range from \(10^{7}\,M_{\odot}\) to \(10^{12}\,M_{\odot}\) and from \(10^{-12}\,{\rm yr}^{-1}\) to \(10^{-9}\,{\rm yr}^{-1}\), respectively. Most of our galaxies are star-forming (sSFR \(>10^{-11}\,{\rm yr}^{-1}\)). The \(D_{n}(4000)\) index ranges from 1.27 to 2.14 and characterizes the star formation history in the center of the galaxy.

Absorption systems with a high H i column density are more likely related to low stellar mass galaxies (Kulkarni et al., 2022), while systems with a lower H i column density are associated with the halos of more massive galaxies (e.g., Kulkarni et al., 2010; Augustin et al., 2018; Tumlinson et al., 2013). The MaNGA galaxies from our sample also follow this trend. Three systems with the highest H i column density (\(N\)(HI) \(\geq 10^{18}\,{\rm cm}^{-2}\)) are observed near galaxies with \(M_{*}\leq 10^{9}\,M_{\odot}\), while the other systems correspond to high stellar mass galaxies with \(M_{*}\simeq 10^{10}-10^{11}\,M_{\odot}\). For the sample of low-redshift \(z<0.35\) systems (our data; Tumlinson et al., 2013; Kulkarni et al., 2022) we obtain a correlation coefficient \(r_{S}=-0.34\) with a \(p\)-value of \(5\times 10^{-3}\); for the entire sample, including the MUSE-ALMA observations, \(r_{S}=-0.44\) and \(p=2\times 10^{-5}\). We do not see a difference in the H i content for galaxies with low or high specific star formation rates. However, a strong negative correlation is observed between the sSFR and stellar mass for AGNs in all the samples examined here, including our own galaxies and those from the literature (\(r_{S}=-0.66\), \(p=2\times 10^{-9}\)).

We also report a strong dependence of \(N\)(HI) decreasing with increasing \(D_{n}(4000)\) for the quasar sight lines (discounting the non-detection of 1-564490 due to its large impact parameter). The Spearman rank-order correlation coefficient is \(r_{S}=-0.94\) with a \(p\)-value of 0.015. It is interesting that this correlation connects the gas content at very large radii to the star-formation history in the center of the galaxy. We suspect we would see the same trend with sSFR if we had sSFR measurements for the three quasar sight-line galaxies with \(D_{n}(4000)>1.7\). For the galaxy sight lines (where the spectrum is that of the central AGN) we do not see such a dependence, probably due to the contamination of the observed spectra by the AGN.

Comparing the correlations between \(N\)(HI) and \(b\), \(M_{*}\), and \(D_{n}(4000)\), we believe that the primary correlation is likely with stellar mass. Higher-mass galaxies have higher \(D_{n}(4000)\) indices, and their higher past star-formation activity is expected to affect the cool gas around them, both because cool gas is consumed in star formation and because AGN radiation and stellar winds blow out gas around these galaxies. We do not have many observations of cool gas around massive galaxies at low impact parameters to check this statement in further detail.
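The rank statistics quoted in this section can be computed with scipy; the sketch below uses hypothetical arrays in place of the actual measurements (which are listed in Table 3 and taken from the literature compilations cited above).

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical arrays of log N(HI) and log M* for a combined sample; the real
# values come from Table 3 and the literature samples discussed in the text.
log_nhi   = np.array([13.6, 14.0, 15.3, 17.2, 17.6, 18.8, 20.2])
log_mstar = np.array([10.8, 10.5, 10.2,  9.8,  8.9,  8.7,  8.5])

rs, p_value = spearmanr(log_nhi, log_mstar)
print(f"Spearman r_S = {rs:.2f}, p = {p_value:.3f}")

# Note: upper limits (non-detections) are not handled here; survival-analysis
# methods would be needed to include them rigorously.
```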
### Galaxy geometry and kinematics

At the spectral resolution of COS G130M, we can obtain fairly reliable velocity profiles for the absorbing gas along the sight line through the galaxy. This can be compared with the kinematics of the ionized gas from the MaNGA data. With this in mind, we used the radial velocity maps of the stellar disk and H\(\alpha\)-emitting gas for our galaxies to reconstruct the position of the quasar sight lines relative to the gaseous disks of the galaxies, and examined the correspondence between the radial velocities of the absorbers and the rotation of the galactic gaseous disks.

First, we fitted both the gas velocity map (H\(\alpha\) line emission) and the stellar velocity map with symmetric models of thin disk rotation. The formalism was described in Section 2.3.2. We choose the best fit by giving priority first to the joint fit (stellar + H\(\alpha\) emission), then to the H\(\alpha\) emission-line fit, and we use the stellar fit only if the other fits failed. The position angle (\(PA\)), inclination angle (\(i\)), and maximal rotation velocities (\(V_{\rm max}\)) are presented in Table 4. The fitted velocity map and rotation curve for each galaxy are shown in Figs. 18-26.

Figure 8: The comparison of H i column density against the impact parameter measured in physical kpc (top left panel), in comoving kpc (top right panel), effective radii (bottom left panel) and virial radii (bottom right panel). Our systems are shown by circles (quasar sightlines). The color scheme is the same as in Fig. 6. Red squares represent “galaxies on top of quasars” from Kulkarni et al. 2022 (\(z<0.15\)), cyan squares are from the COS-Halos survey (Tumlinson et al., 2013; Werk et al., 2013) (\(z=0.14-0.35\)), grey squares are from the COS-Weak survey (Muzahid et al., 2018) (\(z<0.32\)), and the orange diamonds are from the MUSE-ALMA Halos survey (Karki et al., 2023; Weng et al., 2023) (\(z=0.3-1.2\)). The effective radii of the COS-Halos and MUSE-ALMA Halos galaxies were estimated based on the relation with stellar mass from Mowla et al. (2019). The green and blue curves with the shaded areas show the median radial profiles of the H i column density and the 1\(\sigma\) scatter around those from high resolution galaxy simulations: the Auriga project (van de Voort et al., 2019) and TNG50 (Nelson et al., 2020).

#### 4.3.1 Elevation angle and the position of absorbers

The analysis of velocity maps gives us the orientation of the galactic disk relative to the quasar or galaxy sight line. Here we determine the orientation of the absorption system along the quasar sight line with respect to the disk plane. To be consistent with Peroux et al. (2020), we adopt \(\phi\) to be the elevation angle (\(90^{\circ}-\) polar angle), or latitude with respect to the disk plane, and \(\theta\) to be the deprojected angle in the disk plane with respect to the major axis10. Footnote 10: In Péroux et al. (2020) the angle \(\phi\) is referred to as the “azimuthal angle”, but we reserve that term for the disk in-plane angle with respect to the major axis.

We estimated the elevation angle (\(\phi\)) of absorption systems in two ways: (a) using the standard approach, as the angle between the galaxy's major axis and the line joining the galaxy center to the quasar on the sky plane (Bouche et al., 2012), and (b) by integrating the elevation angle along the quasar sight line using a model for the gas distribution around the galaxy.
In this approach, we assume that the probability of detecting gas absorption along the sight line can be described as follows:

\[f(r,\phi)=C\times f_{\Omega}f_{r}f_{\phi}, \tag{7}\]

where \((r,\phi)\) are the radial coordinate and elevation angle of the point along the sight line, \(C\) is a normalization coefficient, \(f_{\Omega}=1/r^{2}\) characterizes the decrease in the gas cross section with increasing distance from the galaxy center (lower solid angles are probed at larger distances), and \(f_{r}\) and \(f_{\phi}\) are model distributions of the gas density around the galaxy. For \(f_{r}\), we adopt the Navarro-Frenk-White halo density profile (Navarro et al., 1997)

\[f_{r}=r_{s}/r(r+r_{s})^{2}, \tag{8}\]

with the parameter \(r_{s}=6R_{e}\). For the elevation angle distribution \(f_{\phi}\), we adopt

\[f_{\phi}=\mathcal{N}(0,\pi/6)+\mathcal{N}(\pi/2,\pi/6), \tag{9}\]

where \(\mathcal{N}(\mu,\sigma)\) is a Gaussian distribution with a mean \(\mu\) and a width \(\sigma\). Thus, \(f_{\phi}\) is a bimodal distribution with two peaks, one near the galaxy plane and the other near the polar axis, with opening angles of about 30 degrees, consistent with the range of outflow opening angles from \(\theta_{\rm max}=30\) to 45 degrees estimated from galaxy spectra with outflow detections (e.g., Martin et al., 2012). Using the probability function (7), we calculate the mean value of the elevation angle as:

\[\overline{\phi}=\int_{-\infty}^{\infty}\phi(x)f(r(x),\phi(x))\,dx, \tag{10}\]

where \(x\) is the coordinate along the quasar sight line.

Fig. 10 compares these two estimates of the elevation angle. The values agree mostly within the uncertainties. Approach (b) usually predicts lower elevation angles, because it takes into account a higher probability of detection for directions along the galactic plane, while the standard method corresponds to the direction with the smallest impact parameter. For galaxy sight lines (i.e., those with zero impact parameters), we estimated the elevation angle of absorption systems as \((\pi/2)-i\), where \(i\) is the inclination angle of the galaxy. Using approach (b), we also calculated the deprojected radial coordinate of the absorption system in the disk plane (\(d\)) and the height of the absorption system above the disk plane (\(h\)) as follows:

\[d=r\cos\overline{\phi},\,h=r\sin\overline{\phi}, \tag{11}\]

where \(r\) is the radial coordinate of the point along the sight line with the highest probability \(f(r,\phi)\).

Figure 9: The comparison of H i column density against the stellar mass (left panel), the specific star formation rate (middle panel) and the \(D_{n}(4000)\) index (right panel). Symbols are the same as in Fig. 8. In addition, the large diamonds represent our AGN sight lines with “zero impact parameter”. The H i column density is anti-correlated with the galaxy stellar mass, but not with sSFR. The quasar sightlines suggest \(N\)(HI) decreasing sharply with increasing \(D_{n}(4000)\) (excluding the non-detection of 1-564490 due to the large impact parameter). This indicates the connection of the gas content at very large radii to the star-formation history in the center of the galaxy.

### Gas kinematics

#### 4.4.1 Quasar sight lines

We now compare the absorber velocities in the 5 quasar sight lines that show absorption detections with the corresponding best-fit models of galactic disk rotation in 6 MaNGA galaxies.
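For reference, a minimal implementation of the disk-rotation model of Section 2.3.2 (Equations 1-6) used in these comparisons is sketched below; the parameter values in the example are placeholders, and in practice all seven parameters are fitted with emcee as described in Section 2.3.2.

```python
import numpy as np

def model_velocity(x, y, x0, y0, pa, incl, r_t, v_t, v_sys):
    """Line-of-sight velocity of a thin rotating disk (Eqs. 1-6).

    Angles are in radians; x, y, x0, y0 and r_t share the same length unit.
    """
    # Sky coordinates rotated onto the projected major/minor axes (Eqs. 1-2).
    x_e = -(x - x0) * np.sin(pa) + (y - y0) * np.cos(pa)
    y_e = -(x - x0) * np.cos(pa) - (y - y0) * np.sin(pa)
    # Deprojected radius in the disk plane (Eq. 3) and azimuth (Eq. 4).
    r = np.sqrt(x_e ** 2 + (y_e / np.cos(incl)) ** 2)
    cos_theta = np.where(r > 0, x_e / np.maximum(r, 1e-12), 0.0)
    # Arctan rotation curve (Eq. 5) projected along the line of sight (Eq. 6).
    v_rot = (2.0 / np.pi) * v_t * np.arctan(r / r_t)
    return v_sys + v_rot * np.sin(incl) * cos_theta

# Example: predicted line-of-sight velocity 16 kpc along the major axis for a
# disk with V_t = 200 km/s, r_t = 2 kpc, i = 50 deg (placeholder values only).
v = model_velocity(x=16.0, y=0.0, x0=0.0, y0=0.0,
                   pa=np.pi / 2, incl=np.radians(50), r_t=2.0,
                   v_t=200.0, v_sys=0.0)
print(f"v_model = {float(v):.1f} km/s")
```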
Using the fits to MaNGA emission-line velocities maps we calculate the radial velocity of the galactic disk along the direction towards the quasar sight line. The comparison is shown in Fig. 11. We show both the components seen in H i alone, and the components seen in H i as well as metals. Additionally, we show each of these quasar-galaxy pairs in Figs. 18, 19,21, 22, and 23 in Appendix B. Figure 10: The comparison of elevation angles derived by the two methods: from the modeling of gas distribution around the galaxy (vertical axis) and by the standard method (horizontal axis). See detail in text. The colors and shapes of symbols are the same as in previous figures. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline MaNGA & \(z_{\rm gal}\) & \(R_{e}\) & [O/H] & \(\nabla_{\rm R}\)[O/H] & \(\log q_{\rm ion}\) & \(\nabla_{\rm R}\)[q] & \(V_{\rm max}^{a}\) & PA & Incl. & \(\phi_{\rm stand}^{b}\) & \(\phi_{\rm model}^{c}\) \\ ID & & kpc & & \(10^{-3}\)kpc\({}^{-1}\) & & \(10^{-3}\)kpc\({}^{-1}\) & km s\({}^{-1}\) & deg & deg & deg \\ \hline 1-71974 & 0.03316 & 4.9 & \(0.31\pm 0.04\) & \(-7\pm 2\) & \(7.16\pm 0.07\) & \(-13\pm 2\) & 151 & 147 & 33 & \(57^{+4}_{-4}\) & \(57^{+4}_{-4}\) \\ 1-385099 & 0.02866 & 5.4 & N/A & N/A & N/A & N/A & 200 & 21 & 32 & \(58^{+2}_{-2}\) & \(58^{+2}_{-2}\) \\ 1-585207 & 0.02825 & 2.4 & N/A & N/A & N/A & N/A & 155 & 13 & 47 & \(34^{+2}_{-2}\) & \(18^{+20}_{-20}\) \\ 12-192116 & 0.02615 & 3.3 & \(-0.25\pm 0.08\) & \(-18\pm 2\) & \(7.05\pm 0.20\) & \(-40\pm 2\) & 64 & 141 & 36 & \(54^{+1}_{-1}\) & \(54^{+2}_{-2}\) \\ 1-594755 & 0.03493 & 1.3 & N/A & N/A & N/A & N/A & 144 & 162 & 22 & \(68^{+4}_{-4}\) & \(68^{+4}_{-4}\) \\ 1-575668 & 0.06018 & 10.6 & N/A & N/A & N/A & N/A & 500 & 172 & 8 & \(10^{+4}_{-4}\) & \(0^{+24}_{-24}\) \\ 1-166736 & 0.01708 & 3.4 & \(-0.16\pm 0.10\) & \(-18\pm 8\) & \(6.96\pm 0.18\) & \(-8\pm 1\) & 58 & 156 & 53 & \(50^{+8}_{-8}\) & \(28^{+25}_{-23}\) \\ 1-180522 & 0.02014 & 4.1 & \(0.06\pm 0.07\) & \(-26\pm 2\) & \(7.02\pm 0.12\) & \(3\pm 1\) & 124 & 122 & 74 & \(4^{+12}_{-12}\) & \(4^{+15}_{-15}\) \\ 1-635629 & 0.01989 & 1.7 & \(0.35\pm 0.05\) & \(-14\pm 4\) & \(7.04\pm 0.10\) & \(-50\pm 5\) & 124 & 16 & 65 & \(28^{+2}_{-2}\) & \(27^{+15}_{-15}\) \\ 1-561034 & 0.09008 & 6.0 & N/A & N/A & N/A & N/A & 236 & 61 & 50 & \(44^{+3}_{-3}\) & \(28^{+27}_{-18}\) \\ 1-113242 & 0.04372 & 5.5 & N/A & N/A & N/A & N/A & 550 & 12 & 26 & \(17^{+2}_{-2}\) & \(5^{+22}_{-23}\) \\ 1-44487 & 0.03157 & 6.2 & \(0.33\pm 0.05\) & \(-14\pm 1\) & \(7.06\pm 0.10\) & \(-11\pm 1\) & 225 & 25 & 78 & \(6^{+2}_{-2}\) & \(9^{+6}_{-8}\) \\ 1-564490 & 0.02588 & 5.7 & \(0.36\pm 0.05\) & \(-20\pm 2\) & \(7.10\pm 0.13\) & \(-60\pm 3\) & 150 & 129 & 52 & \(56^{+2}_{-2}\) & \(30^{+25}_{-22}\) \\ \hline \end{tabular} * The maximal rotation velocity of the galaxy derived from fitting by“arctan” model. * The elevation angle derived by the standard method and using the model of gas distribution around the galaxy, respectively. \end{table} Table 4: The properties of MaNGA galaxies For three of the six galaxies (1-166736, 1-180522, 1-575668) there is good agreement between the velocities of the strongest H i absorption components and the predicted radial velocities of the galactic disks within \(\pm 50\) km s\({}^{-1}\). We note that these quasar sight lines are located within 10 effective radii from the corresponding galaxies. For two other galaxies (1-635629 and 1-113242), the absorption velocity is in opposite direction to that expected from the galactic disk rotation. 
Finally, for one galaxy (1-44487), the absorption components are spread over a wide velocity range of \(\sim 350\) km s\({}^{-1}\). However, this range is comparable with the galaxy rotation velocity in the direction of the quasar sight line (\(\sim 250\) km s\({}^{-1}\)). The middle panel of Fig. 11 shows the velocity offset normalized by the predicted galactic disk rotation radial velocity at the appropriate distance. It is clear that this normalized velocity offset is within \(\sim\pm 1\) in most cases. In other words, the absorbing gas velocity is generally consistent with co-rotation with the galactic disk within \(\sim\)25 effective radii.

We now comment on the difference between the kinematics of the absorbing gas components seen in both H i and metal lines, and those seen only in H i lines. In all cases, only H i absorption is observed in components with a low H i column density (\(\sim 10^{13}\) cm\({}^{-2}\)), whereas absorption in both H i and metals is observed in components with \(N(\rm HI)>10^{15}\) cm\({}^{-2}\). Thus the absence of metal absorption in the H i-only components is likely due to a limit in the sensitivity for detecting weak lines at the S/N reached. Second, 2 out of the 3 cases of large normalized velocity offsets are for H i-only absorbers, possibly suggesting that the H i-only absorption may be related to the galaxy halo or the IGM and thus not participate in the galaxy disk's rotation. The metal-bearing H i absorbers are, however, likely to relate to the disk gas and hence co-rotate with it.

It is of interest to understand whether or not the H i-only absorption is bound to the galaxies. To examine this, we compare the velocity offsets of the absorbers relative to the systemic redshifts of the galaxies with the expected escape velocities at the impact parameters of the quasar sight lines. To estimate the escape velocity at a distance \(r\) from the center of a galaxy with stellar mass \(M_{*}\), we used the methodology described in Kulkarni et al. (2022). The right panel of Fig. 11 shows the velocity offsets with respect to the galaxy redshifts11 for the quasar sight lines in our sample. The curves show the escape velocity as a function of the distance for stellar masses of \(\log M_{*}/M_{\odot}=9\), 9.5, 10, 10.5, 11. There are two galaxies (1-166736 and 1-44487) for which the velocity offset of the H i-only components exceeds the escape velocity at the corresponding distance: these components may be associated with an unbound outflow or formed from the IGM. At the same time, the other two cases of H i-only absorption correspond to more massive galaxies (1-575668 and 1-113242, \(M_{*}\sim 10^{11}M_{\odot}\)), where the gas is likely bound to the galaxies. The metal-bearing H i absorbers appear to be bound to the galaxies (only one such absorber, associated with the galaxy 1-180522, has a radial velocity very close to the escape velocity). Footnote 11: The galaxy redshift was corrected for the systematic velocity offset \(V_{\rm sys}\) derived by fitting to the MaNGA maps, see Sect. 2.3.2.

The most interesting case is that of the quasar-galaxy pair J2130\(-\)0025 and 1\(-\)180522. The galaxy is observed with a high inclination angle of about 70\({}^{\circ}\) (nearly “edge-on”), and the quasar sight line is located very close to the galactic plane (elevation angle of \(\sim 3\pm 7^{\circ}\)) at \(\sim 8.5\) effective radii. In this case we find very good consistency between the absorption velocity and the galaxy disk rotation velocity.
The sight line of J2130\(-\)0025 is also close to another galaxy, 1-635629, which has a similar redshift as 1-180522, but is located at a distance of about 39 effective radii. In fact, the velocity of the absorption system is opposite to the expected disk velocity for 1-635629. We therefore infer that the absorption system corresponds to only one galaxy, 1-180522, and that there is no detection for 1-635629. In two cases, 1-166736 and 1-575668, we also detected high velocity components with velocity offset \(v\simeq-200\) km s\({}^{-1}\) and \(+400\) km s\({}^{-1}\), respectively. Since these components have low H i column densities \(\sim 10^{13.5}\) cm\({}^{-2}\), they are likely to be highly ionized clouds. In the case of 1-166736, the direction of the cloud velocity is consistent with the direction of the gas outflow, which can be detected with the targeted quasar sight line (see Fig. 18). Assuming that the galaxy has two cone-shaped outflows from the center in both directions along the polar axis, the quasar sight line can probe the outflow only in the direction to the observer (with negative radial velocity) and cannot probe the outflow in the opposite direction (with positive radial velocity), because this part of sight line is located far from the galaxy center, see the "\(Z-Y\)" projection in Fig. 18. In the second case, the galaxy 1-575668 is observed nearly "face-on" with a small inclination \(\sim 8^{\circ}\) (see Fig. 22). The quasar sight line can probe both outflows, however the distance between the sight line and galaxy polar axis is lower from the side of positive velocity outflow. Therefore, the probability of detecting an absorption system with a positive radial velocity is higher, which is in line with observations. Two other galaxies, 1-44487, and 1-113242 show velocity offsets, with the velocity of the strongest H i components about \(-100\) and \(+200\) km s\({}^{-1}\), respectively. In both of these cases, the quasar sight lines are located at \(\sim 23\) effective radii, that is about twice the distance of the first group, where we detect a good agreement. In the case of 1-44487 we find at least 4 absorption components with velocities spanning a wide range of \(\sim 300\) km s\({}^{-1}\). Since the galaxy is relatively far from the quasar and the absorption has a complex structure, relatively high \(N(\rm HI)\sim 10^{15}\) cm\({}^{-2}\) and super-solar metallicity ([X/H] = +0.8 ), we checked the area around the quasar for other galaxies with similar redshift, but found none. However, we note that this galaxy is merging with another smaller galaxy and it is likely that the observed high metallicity is due to outflows in a region of enhanced star formation caused by the merger. Summing up, we find consistency with gas co-rotation along with the galactic disk within at least 10 effective radii in most cases. The sign of the velocity of higher-velocity absorption, when detected, is consistent with the direction of the central galactic outflows, which have a higher probability of detection in these sight lines. For quasar sight lines at larger impact parameters, the situation is less clear. #### 4.4.2 AGN sight lines We now discuss the gas kinematics for three of the four AGN sight lines (those with zero impact parameters) in our sample that show detections of absorption. We observed the AGNs of these galaxies at elevation angles of about 60\({}^{\circ}\). 
These directions are within the outflow opening angles (\(\theta_{\rm max}=30^{\circ}\) to 45\({}^{\circ}\)) reported by Martin et al. (2012), and therefore these sight lines can probe gas in the central outflows. Of course, the AGN sight lines can also probe gas in the CGM/IGM at a large distance, and these scenarios can be difficult to distinguish. For two sight lines (1-71974 and 1-385099), the H i absorption lines are weak (\(N(\rm HI)\sim 10^{13}-10^{14}\) cm\({}^{-2}\)) and blue-shifted by \(-50\) and \(-250\) km s\({}^{-1}\), and by \(-50\) and \(-750\) km s\({}^{-1}\), respectively, with respect to the galaxy redshift. The velocity of the low-velocity components is comparable to the galactic disk rotation velocity measured by MaNGA (\(\sim 100\) km s\({}^{-1}\)). The high-velocity components in these sight lines can not be described by the galactic disk rotation model. The absorption in these components is characterized by high ionization and high metallicity, about two orders higher than measured for the absorption in the quasar sight lines. The high ionization could potentially arise in either outflowing gas ionized by the AGN radiation, or in low-density IGM gas. However, the high metallicity suggests that the outflow scenario is more likely, since the IGM is not expected to be metal-rich. The third AGN sight line, 12-192116, probes gas with high neutral hydrogen content (\(N(\rm HI)\sim 10^{20.2}\) cm\({}^{-2}\)), low ionization and low metallicity ([X/H] \(\simeq-1\)). This absorption can not be related to the galactic disk due to the high elevation angle, but may arise in a "satellite" galaxy, similar to what may be observed by extragalactic observers as absorption from the Magellanic Clouds toward the center of the Milky Way. Alternatively, this absorption could arise in gas clouds tidally interacting with the galaxy, similar to high velocity clouds (HVCs) with \(N(\rm HI)>10^{20}\) cm\({}^{-2}\)(e.g., Putman et al., 2002; Hsu et al., 2011). We also note that in the spectrum of the AGN of 1-385099, we find additional absorption of H i at a very high velocity \(v\simeq-800\) km s\({}^{-1}\). Since this galaxy is part of a group along with at least 2 other galaxies, 1-585207 and SDSS J083804.94+245327, with similar redshifts and projected distances of \(\sim 50\) kpc from the observed sight line, the high-velocity cloud we detect may correspond to the intra-group gas perturbed due to the interaction of these galaxies. The SDSS image of this region shows the presence of long tidal tails for all galaxies (see, e.g., Fig. 1). As an analog from the local universe, we note that absorption at such high velocities (much higher than the velocities associated with the Milky Way's halo gas or the Magellanic Stream) is observed in the Sculptor group galaxies (e.g., Putman et al., 2003). ### Metallicity gradient Combining the cool-gas metallicity along with the warm-gas metallicity is essential to build a complete census of metals in and around galaxies. While such comparisons of cool-gas metallicity and warm-gas metallicity have been performed in integral field spectroscopic studies of quasar absorbers at higher redshifts (e.g., Peroux et al., 2012, 2014), such comparisons have not been performed for the \(z\sim 0\) galaxies that have much more detailed information. Our study of the CGM of MaNGA galaxies offers an opportunity to study differences in metallicity in the inner vs. 
outer parts of galaxies in some of the closest venues available, and can thus provide fresh insights into processes affecting galaxy evolution. With this in mind, we study how the gradient of IZI metallicity derived from the fit to MaNGA emission-line maps within a few effective radii corresponds to the metallicity measured in the absorption systems along the targeted sight lines. Simulations predict a change in the metallicity gradient from an almost linear relation in the galactic disk (e.g. Mingozzi et al., 2020) to a flatter behavior in the CGM (Peroux et al., 2020). Our sightlines probe the transition region between these two limits. Fig. 12 shows the comparison for five galaxies, where we simultaneously measured metallicity in the galaxy and the absorption system. For the absorption systems we show the average metallicity and local metallicity in individual components derived from fits with the cloudy photoionization models (see section 3.1). For the studied systems, the average and local values are in good agreement with each other. For absorption in quasar sight lines we use the deprojected radial coordinate of the absorption system in the disk plane (\(d\)) calculated in Section 4.3.1. For the galaxies, we calculate the gradient of the IZI metallicity in the galactic disk in two ways: (i) by averaging over all directions and (ii) by averaging over only the spaxels within \(\pm 15^{\circ}\) opening angle around the direction to the quasar sight line. The second way is possible only for quasar-galaxy pairs, when we have a preferred direction. The gradients are shown by grey and pink lines, respectively. In Figs. 18-26 we also present the IZI metallicity maps and fits to their radial profiles. The deprojected radial coordinate of each spaxel in the MaNGA maps was derived from the best fit model of the galaxy radial velocity map (see Section 2.3.2). The model for the metallicity radial gradient in the CGM has been taken from post-processing of the TNG50 galaxy simulation presented by Peroux et al. (2020) in their Fig. 5. It represents measurements of the CGM metallicity in the TNG50 simulation at four values of impact parameter: \(b=25\), \(50\), \(100\) and \(200\) kpc, and four values of azimuthal angle: \(0^{\circ}\), \(30^{\circ}\), \(60^{\circ}\) and \(90^{\circ}\). Other parameters were set to the appropriate values, which are: redshift \(z=0\), stellar mass \(M_{*}\) equal to stellar mass of the MaNGA galaxies. The metallicity is decreased by \(0.4-0.7\) dex between \(25\) and \(200\) kpc in outflow (at \(90^{\circ}\)) and galactic disk (at \(0^{\circ}\)) directions, respectively. It is greater than the metallicity gradient due to the elevation angle change, \(0.1-0.4\) dex at \(25\) and \(200\) kpc, respectively. Peroux et al. (2020) suggested that this is because fountains do not yet promote metal mixing over the full volume (i.e. range of elevation angles) at the distances \(b\sim 100\) kpc, as occurs closer to the galaxy. For the absorption systems in the AGN sight lines we show the level of metallicity in the absorption system with the horizontal red line, since the distance of the absorbing region from the galaxy center is not known. Comparing the five panels of Fig. 12, we note that we confirm the increase of the galaxy central metallicity with the stellar mass, previously reported by Mingozzi et al. (2020) for star-forming galaxies from the MaNGA survey. 
Second, we detect a consistency of the metallicity in the absorption systems in the quasar sight lines with the prediction from the metallicity gradient for two galaxies (1-180522 and 1-166736). These quasar sight lines (J2130\(-\)0025 and J0950+4309, respectively) more likely probe gas near the galactic plane at \(\sim 7\) and \(\sim 8\) effective radii. In these cases, the metallicity gradient measured from the fit to the MaNGA maps (at \(2-3\) effective radii) can persist over a longer distance. The metallicities in these absorption systems are also consistent with the prediction for the CGM metallicity, showing that the simulations may reproduce the CGM properties well.

For the third galaxy, 1-44487, we measure a metallicity about two orders of magnitude higher than the prediction from the “galaxy” gradient. Since the impact parameter is high (137 kpc), this quasar sight line should probe the metallicity in the CGM, which is expected to be higher than that predicted from the “galaxy” gradient. For a stellar mass \(\sim 10^{10.5}M_{\odot}\), the CGM metallicity is expected to be about [X/H] \(=-0.5\), which is still \(\sim\)1.5 orders of magnitude lower than the measured value. This discrepancy may result from the activity of the merging galaxy. For the absorption systems located toward the galactic centers of the 1-71974 and 12-192116 galaxies, we found an excess of metallicity in the first case, which may be caused by a high-metallicity central outflow, and a metallicity one order of magnitude lower in the second, which may reflect the metallicity of the “satellite” galaxy or an HVC (see the discussion of this case in the previous section).

Figure 11: Left panel: the difference between the radial velocities of H i absorption components in quasar sight lines and the predicted velocities of the galaxy rotation models, shown as a function of the radial coordinate. Circles and squares represent components with both H i and metal lines and with only H i, respectively. The color encodes the H i column density of the components. The horizontal dotted lines show the range of velocities from \(-50\) km s\({}^{-1}\) to 50 km s\({}^{-1}\). The middle panel is the same as the left panel, but shows the velocity offset in units of the predicted velocity of the galaxy rotation models. Dotted lines represent the \(\pm 1\) offsets relative to the predicted velocity (offset \(=0\)). Right panel: the radial velocity of the absorption calculated in the galaxy rest frame as a function of the radial coordinate. The symbols are the same as in the other panels. Color encodes the stellar mass of nearby galaxies. The curves show the escape velocity as a function of the distance for stellar masses of \(\log M_{*}/M_{\odot}=9,9.5,10,10.5,11\). See the text for more details.

### Dependence of metallicity on the elevation angle

Hydrodynamic simulations of galaxy evolution predict an increase of the CGM metallicity with the elevation angle with respect to the disk plane (see e.g. Peroux et al., 2020). Metal-free gas is expected to fall into the galaxy along directions close to the galactic disk plane, while metal-rich gas is expected to flow out of the galactic disk, driven by stellar winds and supernova explosions, in directions close to the perpendicular to the plane. Therefore, we can expect an increase of the gas metallicity with the elevation angle (see e.g. Fig. 5 in Peroux et al., 2020). The angular gradient of metallicity is predicted to be around \(+0.4\) dex/90\({}^{\circ}\) for galaxies at \(z\simeq 0\).
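As a point of reference for the two elevation-angle estimates compared below, a minimal version of the "standard" sky-plane estimate (approach (a) of Section 4.3.1) is sketched here; the coordinate convention and the example numbers are our own assumptions, not values from the paper.

```python
import numpy as np

def standard_elevation_angle(pa_major_deg, ra_gal, dec_gal, ra_qso, dec_qso):
    """Angle (deg) between the galaxy major axis and the direction from the
    galaxy center to the quasar, measured on the sky plane and folded to 0-90."""
    # Position angle of the quasar relative to the galaxy, east of north.
    dra = (ra_qso - ra_gal) * np.cos(np.radians(dec_gal))
    ddec = dec_qso - dec_gal
    pa_qso = np.degrees(np.arctan2(dra, ddec)) % 360.0
    delta = abs(pa_qso - pa_major_deg) % 180.0
    return min(delta, 180.0 - delta)

# Hypothetical coordinates (degrees); purely illustrative.
print(standard_elevation_angle(pa_major_deg=147.0,
                               ra_gal=150.000, dec_gal=2.200,
                               ra_qso=150.010, dec_qso=2.195))
```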
Fig. 13 presents the dependence of metallicity in absorption systems on the elevation angle in our sample. For each system, we show the metallicity measured in the velocity components to test the variation of physical conditions. We also show the two estimates of the elevation angle discussed in Section 4.3.1 ((a) from the standard method, and (b) from the model of the gas distribution). We find that, if we consider the metallicity only in quasar absorption systems (apart from the case of the merging galaxy, 1-44487), there is good agreement between the observed absorption metallicities and the simulations. The main difference in the measured metallicities is due to the difference in galaxy stellar masses, whereas the gradient with the elevation angle is small.

The metallicity measurements in the AGN sight lines are not expected to follow the trend seen in the simulations, because these sight lines probably do not probe the CGM. The galaxy sight lines can probe gas at very small distances and in central AGN ejections, which could explain the high metallicity and high ionization of these systems. However, these processes were not considered in the simulations by Peroux et al. (2020), van de Voort et al. (2021) and Wendt et al. (2021), which do not include AGN in such low-\(M_{\star}\) galaxies. At the same time, the galaxy 12-192116 shows good consistency. In this case, we may be dealing with a galactic absorption system with an unusually large elevation angle, and we suggest that it may be caused by absorption from a “satellite” galaxy, or from metal-poor inflowing gas.

We also compare our results with metallicity measurements in the COS-Halos survey (Werk et al., 2014). The effective radii of the COS-Halos galaxies were derived from the relation between the effective radius and the stellar mass by Mowla et al. (2019). We do not detect a significant correlation between [X/H] and \(b/R_{e}\) for the joint sample, with a Spearman rank-order correlation coefficient of 0.1 and a \(p\)-value of 0.46.

### Ionization parameter

Fig. 14 shows the ionization parameter vs. impact parameter and elevation angle for our sample galaxies. We find the ionization parameter to be roughly constant for the MaNGA galaxies, with a mean of \(\sim\)10\({}^{-3.3}\) and a dispersion of \(\sim\)0.1 dex (see Table 3). In contrast, the ionization parameter in the absorption systems spans a range of over two orders of magnitude above the average galactic value. The difference may be primarily due to the difference in gas number density between the ISM (\(\sim 10^{2}\) cm\({}^{-3}\), e.g., Mingozzi et al., 2020) and the CGM (\(\sim 10^{-1}-10^{-3}\) cm\({}^{-3}\)). The ionization parameter \(q=n_{\gamma}/n_{\rm H}\) is proportional to the ratio \(I_{\rm UV}/n_{\rm H}\). Assuming the ionization of the CGM is due to the extragalactic background only (whose intensity is about \(10^{-2}\) of the average galactic UV radiation), the difference in the \(q\) parameter scales as \((n_{\rm H}^{\rm CGM})^{-1}\), which gives a factor of \(10-10^{3}\) for the range of CGM number densities. The factor will be lower if the galactic UV intensity is stronger than the average galactic UV radiation.

We did not find a significant correlation of the ionization parameter of the absorption systems with galaxy properties such as stellar mass, SFR, or sSFR. However, it does correlate with the elevation angle (see the right panel of Fig. 14).
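The order-of-magnitude argument above can be written out explicitly; the following sketch simply evaluates \(q_{\rm CGM}/q_{\rm ISM}=(I_{\rm CGM}/I_{\rm ISM})\times(n_{\rm H}^{\rm ISM}/n_{\rm H}^{\rm CGM})\) for the densities quoted in the text.

```python
import numpy as np

n_ism = 1e2                 # cm^-3, typical ISM density quoted above
i_ratio = 1e-2              # UV background relative to the mean galactic field

for n_cgm in (1e-1, 1e-2, 1e-3):        # cm^-3, CGM density range
    q_ratio = i_ratio * n_ism / n_cgm   # q = n_gamma / n_H, so q ~ I_UV / n_H
    print(f"n_H(CGM) = {n_cgm:.0e} cm^-3 -> q_CGM/q_ISM ~ 10^{np.log10(q_ratio):.0f}")
```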
As we showed above quasar sight lines probe gas around galaxies at lower elevation angles and at higher impact parameters, than AGN sight lines (1-385099 and 1-71974), and for the former, we found lower ionization parameters. This is consistent with the picture that gas at small elevation angles corresponds to galactic disks (or inflowing gas) and has lower metallicity and lower ionization parameters than the gas observed in absorption at larger elevation angles (which may arise in outflows with higher ionization fractions and higher metallicity). For the quasar sight line J0950+4309, which probes the gas around the galaxy 1-166736 at a moderate elevation angle, we found a large difference in the ionization parameter between the components seen in the Si ii and Si iii absorption lines. The low-ionization gas traced by Si ii may correspond to the galactic disk, while the highly ionized gas observed only in Si iii may correspond to the CGM. The higher ionization component in 1-166736 is consistent with the trend observed between the ionization parameter and the elevation angle. Figure 12: The comparison of the gradient of IZI metallicity derived from fitting to MaNGA emission-line maps and metallicity measured in absorption systems. Left panels show the comparison for absorption systems in quasar sight lines with non-zero impact parameter. Right panels show the comparison for absorption systems along galaxy/AGN sight lines with zero impact parameter. Red and pink circles represent the average metallicity in absorption systems, and the metallicity in individual components, respectively. The black solid line and black shaded area in each panel show the linear gradient of IZI metallicity averaged over the elevation angle. The pink solid line and pink shaded area in left panels show the linear gradient of IZI metallicity in the direction to the quasar sight line. The orange, blue, green and red dashed curves show the model distribution of metallicity in the CGM from the TNG50 simulation (see Fig. 5 in Péroux et al., 2020), derived at different azimuthal angles 0, 30, 60, 90 deg, respectively, and four values of impact parameter (25, 50, 100, 200 kpc). The model is adopted to \(z=0\) and the galaxy stellar mass. For two galaxies (1-180522 and 1-166736) quasar sight lines more likely probe gas near the galactic plane. The galaxy 1-44487 is an interacting galaxy, and this activity may result a high metallicity measured in the absorption system at high impact parameter. AGN sightlines likely probe a high-metallicity central outflow (1-71974) and gas in ”satellite” galaxy (12-192116), see text. We also compare our data with measurements of the ionization parameter in the COS-Halos survey (Werk et al., 2014). Our results are consistent with their results and cover the same range of ionization parameter. However, we do not confirm the trend \(\log q=-2.2\pm 0.3+(0.8\pm 0.3)\times\log(R/R_{\rm vir})\) reported by (Werk et al., 2014) (based on the points at very low and high impact parameters). Combining our sample with the COS-Halos sample, we find that the correlation of ionization parameter with impact parameter is not statistically significant with a Spearman rank order correlation coefficient of 0.2 and \(p\)-value 0.16. Indeed, high ionization parameters, on average, correspond to a higher impact parameter. However, there is a large scatter of \(q\) parameters which likely reflects the inhomogeneity of physical conditions in the CGM of different galaxies. 
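The significance test used for the joint sample is a standard Spearman rank-order correlation. The following is a minimal sketch of that test; the arrays are placeholders standing in for the measured impact parameters and ionization parameters, so the printed numbers are not the values reported above.

```python
# Minimal sketch of the Spearman rank-order correlation test applied to the joint
# sample (our measurements plus COS-Halos). The arrays below are placeholders.
import numpy as np
from scipy.stats import spearmanr

impact_kpc = np.array([23.0, 40.0, 55.0, 70.0, 90.0, 110.0, 137.0])   # hypothetical
log_q = np.array([-3.4, -3.1, -2.6, -3.0, -2.2, -2.8, -2.4])          # hypothetical

coeff, p_value = spearmanr(impact_kpc, log_q)
print(f"Spearman coefficient = {coeff:.2f}, p-value = {p_value:.2f}")
# A p-value well above 0.05 (for the real joint sample the text reports p = 0.16)
# means the trend of q with impact parameter is not statistically significant.
```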
## 5 Conclusions We have measured the CGM properties out to 25 effective radii using HST/COS spectroscopy of quasars and AGNs about a sample of low-redshift galaxies with exquisite data from the MaNGA survey. We detected the associated absorption for 11 of 14 galaxies in our sample. In three cases, the absorption was detected in the sight lines toward the bright source near the galactic center, in other cases the absorption was detected in background quasar sight lines at impact parameter from 23 to 137 kpc. For the AGN sight lines we detected a strong H i absorption (\(N(\rm HI)\simeq 10^{20.2}\,\rm cm^{-2}\)) only in one case, in two other cases we found weak H i absorption (\(N(\rm HI)\simeq 10^{13}\,\rm cm^{-2}\)) which may be related to high-metallicity and high-ionization gas in the central outflow. Our quasar sight lines show H i absorption with a wide range of \(N(\rm HI)\simeq 10^{13}-10^{19}\,\rm cm^{-2}\). To summarize, our main results are as follows: 1. The H i column density vs. impact parameter measurements for quasar sight lines correspond generally well with the radial H i column density profile predicted from galaxy simulations (van de Voort et al., 2019; Nelson et al., 2020). 2. Our data also agree well with other spectroscopic studies of halos of galaxies at low redshift \(z<0.3\) (COS-Halos by Tumlinson et al., 2013, COS-Weak by Muzahid et al., 2018) and of the gas in "galaxies on top of quasars" (in the close vicinity of low-\(z\) galaxies within impact parameters \(\sim 1-7\) kpc, Kulkarni et al., 2022). 3. We confirm the anticorrelation between the H i column density and the galaxy stellar mass that was previously reported by Kulkarni et al. (2022). 4. We report a strong dependence of \(N(\rm HI)\) decreasing with increasing \(D_{n}(4000)\) index of the host galaxies for quasar sight lines, that may be Figure 13: The comparison of metallicity of absorption systems against the impact parameter (left panel), impact parameter in units of effective radii (middle panel) and elevation angle (right panel). Circles and diamonds represent the metallicity in individual velocity components for the quasar and AGN absorbers, respectively, from our sample. Transparent squares show data from the COS-Halos survey (Tumlinson et al., 2013). The color of symbols denotes the galaxy stellar mass. The solid and dotted curves in the left and middle panel show the metallicity gradient in MaNGA observations and its extrapolation to 20 effective radii (as in Fig. 12). The dashed lines in the left and right panels show the gradient of metallicity with distance (left) and with elevation angle (right) from galaxy formation simulations (Péroux et al., 2020). The radial gradient was derived at \(z=0\), for an elevation angle of 0 degrees, and for galaxy stellar masses of \(10^{9}\,M_{\odot}\) and \(10^{11}\,M_{\odot}\). The gradient with the elevation angle was derived at \(z=0\), for \(b=25-100\) kpc, and for galaxy stellar masses of \(10^{9}\,M_{\odot}\) and \(10^{11}\,M_{\odot}\). a result of past star-formation activity having consumed or blown out cool gas from the CGM. 5. A comparison of absorption velocities with radial velocity maps of ionized gas line emission in galaxies shows consistency with corotation of the strong H i-absorption component with the disk out to \(\sim\)10 effective radii (within \(\pm\)50 km s\({}^{-1}\)) and \(\sim\) 25 effective radii (within \(\pm\)1 galactic disk rotational velocity). 
The components with only H i absorption (without associated metal lines) are likely to have a high velocity shift and in some cases may be unbound to the galaxy. 6. Comparing the observed CGM properties with the galaxy properties from MaNGA maps, we estimate the gradients in metallicity and ionization parameters. The measurements of absorption metallicities in individual quasar sight lines correspond well with the gradient of metallicity in the galactic disk derived from MaNGA observations. Overall, from our sample and previous studies we find a lower metallicity in the quasar sight lines with respect to the AGN sight lines. The difference is consistent with the predictions of the CGM metallicity from TNG50 simulations (Peroux et al., 2020). The ionization parameter in absorption systems is on average one order of magnitude higher than the galactic value (\(q\sim 10^{-3.5}\)). The measurements in our sample and previous studies do not show a statistically significant gradient of the ionization parameter with distance from the galaxy. This indicates a strong inhomogeneity of the physical conditions in the CGM (number density and intensity of H i-ionized radiation). However, the data are consistent with an increasing ionization parameter with increasing elevation angle. Our data offer the first detailed comparisons of CGM properties with extrapolations of detailed galaxy maps. While our data offer a number of interesting insights into the exchange of gas and metals between galaxies and their CGM, our current sample is still small. Observations of the CGM of many more galaxies mapped with integral field spectroscopy are essential to more fully understand how galaxies interact with their CGM. ## 6 Data Availability Data directly related to this publication and its figures can be requested from the authors. The MaNGA data used in this paper can be downloaded from the MaNGA public archives. The _HST_ data used in this paper can be found in MAST: 10.17909/zpy3-w565. ## Acknowledgements This work is supported by a grant from the Space Telescope Science Institute for GO program 16242 (PI V. Kulkarni). Additional partial support is also gratefully acknowledged from the US National Science Foundation grant AST/2007538 and NASA grant 80NSSC20K0887 (PI V. Kulkarni). We would like to thank Francesco Belfiore and Kyle Westfall for helpful advice on analysing the the MaNGA Figure 14: The comparison of gas ionization parameter of absorption systems against the impact parameter (left panel), impact parameter to effective radius (middle panel) and the elevation angle (right panel). The symbols are the same as in Fig. 13: circles and diamonds - our sample, transparent squares - from COS-Halos survey. The color of the symbols denotes the galaxy stellar mass. The solid and dotted curves in the left and middle panel show the ionization parameter gradient in MaNGA observations and its approximation to 20 effective radii (as in Fig. 12). emission line ratios. We thank Sergei Balashev for sharing a version of the Spectro Voigt profile fitting code, before its official release. We are grateful to an anonymous referee for careful reading and constructive suggestions that have helped to improve this paper. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. 
The SDSS website is www.sdss4.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics -- Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.

## Appendix A Flux Uncertainty Estimate

The flux errors of the HST/COS spectra originate from three sources: the errors associated with the flat-field response, the Poisson error in the counts from the object flux (galaxy/quasar), and the Poisson error in the counts from the background flux (e.g., Johnson et al., 2021). In our case the first and third contributions are much smaller than the second. However, for faint sources, the object counts are low. Therefore, the problem is to estimate the upper and lower flux errors for the Poisson distribution in the case of a small number of counts. The standard CalCOS pipeline uses an asymmetric uncertainty based on the frequentist-confidence method (see Gehrels, 1986), described by \[\sigma_{N;\mathrm{upper}}=1+\sqrt{N+\frac{3}{4}}\] (A1) and \[\sigma_{N;\mathrm{lower}}=N-\left[N\left(1-\frac{1}{9N}-\frac{1}{3\sqrt{N}}\right)^{3}\right],\] (A2) where \(N\) is the number of observed counts. We found that for low count numbers (\(<10\)), these uncertainties are overestimated by the standard CalCOS pipeline. Therefore we reevaluated the uncertainties by the maximum probability estimate (MPE) method. A comparison of the uncertainties estimated using the two methods is shown for the case \(N=3\) in Fig. 15. The lower uncertainties are similar, while the upper uncertainty derived by the MPE method is about a factor of two lower than that derived by the frequentist-confidence method. We note that both estimates correspond to the 68% confidence interval (\(1\sigma\)); however, the frequentist-confidence estimate is shifted to higher values. The relative difference between the uncertainty estimates decreases with an increase in the number of counts (\(N\)), and is small for \(N>20\). The top panel of Fig. 16 shows the upper and lower uncertainty estimates from the two methods.
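For reference, a minimal implementation of the frequentist-confidence uncertainties of Eqs. (A1) and (A2) is given below; the listed values of \(N\) are examples only.

```python
# Frequentist-confidence (Gehrels 1986) 1-sigma Poisson uncertainties of Eqs. (A1)
# and (A2), as used by the standard CalCOS pipeline.
import numpy as np

def gehrels_uncertainties(N):
    """Return (sigma_upper, sigma_lower) for N observed counts (N >= 1)."""
    sigma_upper = 1.0 + np.sqrt(N + 0.75)                                           # Eq. (A1)
    sigma_lower = N - N * (1.0 - 1.0 / (9.0 * N) - 1.0 / (3.0 * np.sqrt(N))) ** 3   # Eq. (A2)
    return sigma_upper, sigma_lower

for N in (1, 3, 10, 20):
    up, lo = gehrels_uncertainties(N)
    print(f"N = {N:2d}:  +{up:.2f} / -{lo:.2f}   (sqrt(N) = {np.sqrt(N):.2f})")
# For small N the upper error is well above sqrt(N); the text argues that it is
# overestimated there and adopts a maximum probability estimate (MPE) instead.
```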
We adopt the uncertainty estimates in integral number of counts, since this seems more physical. The bottom panel shows the comparison of the ratio of uncertainties to the standard deviation for the Poisson distribution. The standard deviation was calculated independently for fluxes above and below the mean value (\(N\)). The estimates correspond well to the standard deviation over the entire range of \(N\). ## Appendix B Absorption-line analysis details In this section we present the detailed results of analysis of each absorption system individually. Fig. 17 presents the posterior PDF of fitting parameters for systems shown in Fig. 3. Figs. 18-23 show results for quasar-galaxy pairs, Figs. 24-26 show results for AGNs. In these figures we compare radial velocity, metallicity and ionization parameter derived from the fitting to MaNGA maps of the galaxies and ones measured from fitting to absorption lines in the absorption system in our HST/COS spectra. Below we describe the panels in these figures. The panels in the top row show the HST COS data for the H i, Si iii, Si ii, C ii absorption lines and our best fits to these lines. The synthetic profile is shown in red and the contribution from each component is shown in green, blue, purple and orange. The panels in the second row show the comparison of radial velocities. In each case, the left panel shows the MaNGA velocity field (determined from the H\({}_{\alpha}\)-line emission or the stellar continuum). The black line represents the positional angle (\(PA\)), the pink line and pink shaded area represent the the direction to the quasar sight line within opening angle \(15^{\circ}\), the black cross represents the position of the center of the disk, and the orange cross represents the position of the AGN (only for AGN sight lines). The second panel shows the galaxy rotation velocity curve, reconstructed using the best fit to the radial velocity map. The circles show measurements from the MaNGA spaxels, and the pink line shows the model. The right panel compares the model of radial velocity in the direction to the quasar (\(QA\)) and the velocities of the absorption components. The dashed vertical line represents the value of the impact parameter. The panels in the third row show the orientation of the quasar sight line with respect to the disk plane. The left panel shows a 3d plot: quasar sight line is shown by the black line (with the black star denoting the quasar), the observer is located at the top of the panel. The color of points in the galactic disk corresponds to the value of the radial velocity measured by the observer (same as in the MaNGA velocity map). The pink shaded area shows the range of the elevation angles corresponding to our probability estimate of the position of the absorption system along the quasar sight line (see Section 4.3.1). The dashed and solid pink lines represent the maximum probability value Figure 15: The comparison of the estimates of positive and negative uncertainties for the Poisson distribution in the case of a low number of counts (\(N=3\)). The top and bottom panels show the probability distribution function (PDF) and the cumulative distribution function (CDF), respectively. the vertical red dashed lines show the mean value. The green lines and green dashed area represent the confidence interval derived by the CalCOS pipeline. The yellow lines and yellow dashed area represent the confidence interval derived by the maximum probability estimate (MPE) method. 
(\(\overline{\phi}\)) and its \(1\,\sigma\) uncertainty. The middle and right panels show the \(Y-Z\) and \(X-Y\) projections of the 3-d plot, respectively. The panels in the fourth row show the comparison of the metallicity of the ionized gas measured from the IZI modelling of MaNGA maps of emission lines and the metallicity of the cool gas from the CLOUDY fitting to metal column densities in the absorption system. The left panel shows the MaNGA maps, the lines and symbols are the same as in the first panel in the second row (radial velocity map). The middle panel shows the radial profile of the IZI metallicity (circles with errorbars) and the best fit to IZI metallicity gradients by a linear model (black line). The pink line corresponds to the IZI metallicity gradients in the direction to the quasar (\(QA\)) within a \(15^{\circ}\) opening angle. Shaded areas represent \(1\sigma\) uncertainty. The right panel show the comparison of the IZI metallicity gradient with metallicity measured in the absorption system (small circles represent values for individual components, red circle shows the total value). The panels in the fifth row are similar to the panels in the fourth row, but for the ionization parameter. Figs. 27-35 show the best fits to absorption lines, and Fig. 36 shows the comparison of measured total column densities of metals with the values predicted by cloudy photo-ionization models. Figure 16: Top panel shows the comparison of of positive and negative uncertainties for the Poisson distribution for different values of \(N\). Dashed and solid yellow curves show our estimate, which is represented by a fraction in an integer number of samples. Green line represents the CalCOS estimate. Bottom panel shows the comparison of upper and lower estimates with the standard deviations for the Poisson distribution, calculated independently for the upper and lower outliers. ### Comments to fit to absorption systems #### b.1.1 J0755+3911 We measured the H i and metal species column densities of the absorption system at the redshift of the AGN 1-71974 (\(z_{\rm gal}=0.0336\)). The absorption system consists of at least four velocity components detected in H i Ly\(\alpha\) and Si iii 1206A transitions. The H i absorption consists of two weak components with column densities \(\sim 10^{12.8}\) and \(10^{13.3}\) cm\({}^{-2}\), which are blue-shifted by \(-62\) and \(-228\) km/s with respect to the galaxy redshift. The Si iii 1206 absorption is detected in two components at \(-130\) and \(-200\) km/s, which are shifted with respect to H i absorption lines. We fitted the absorption system with four velocity components with the redshifts tied to the redshift of the prominent H i and Si iii absorptions. The result of the fit is given in Table 5 and line profiles are shown in Fig. 32. #### b.1.2 J0758+4219 We detected the absorption system associated with the CGM of the galaxy 1-44487, which consists of four velocity components detected in H i, Si ii, Si iii, S ii absorption lines. The velocity components of metal absorption lines well correspond to the position of the H i velocity components. Therefore we fitted this system with four components. One of the H i components is blended with the Milky Way S ii 1253A absorption line, and we fitted the MW S ii absorptions consistently with the fit to the galaxy absorption lines. The result of the fit is given in Table 5 and line profiles are shown in Fig. 30. 
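The velocity components described in this appendix are modeled with Voigt profiles (fitted with the Spectro code mentioned in the acknowledgements). The sketch below evaluates a schematic normalized absorption spectrum with two components at the velocities quoted for the J0755+3911 system in b.1.1; the Doppler parameters, damping parameter, and optical depths are placeholder values, and all physical normalizations (rest wavelength, oscillator strength, column density) are folded into the amplitude \(\tau_{0}\).

```python
# Schematic one-dimensional Voigt absorption profile of the kind fitted to the H i
# and metal lines in this appendix. Placeholder parameters; tau0 absorbs all
# physical normalizations.
import numpy as np
from scipy.special import wofz

def voigt_H(a, u):
    """Voigt-Hjerting function H(a, u) = Re[w(u + i a)], w = Faddeeva function."""
    return wofz(u + 1j * a).real

def component(v, v0, b_doppler, tau0, a=1e-3):
    """Normalized flux of a single velocity component centred at v0 (km/s)."""
    return np.exp(-tau0 * voigt_H(a, (v - v0) / b_doppler))

v = np.linspace(-400.0, 100.0, 1001)                 # velocity grid, km/s
# two hypothetical components at the b.1.1 velocities of -62 and -228 km/s
flux = component(v, -62.0, 25.0, 0.8) * component(v, -228.0, 20.0, 0.4)
print("minimum normalized flux:", flux.min())
```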
#### b.1.3 J0838+2453 In the spectrum of the AGN J0838+2453 we detected two H i absorptions at \(-34\) and \(-750\) km/s with respect to the redshift of the host galaxy. Because of low signal to noise ratio for this spectrum, the associated metal absorption lines are detected only in Si iii 1206 A transition, which is located close to the H i Ly\(\alpha\) quasar emission line. The Si iii absorption has five velocity components. Two velocity components well correspond to the position of the H i absorption components, whereas other three component are detected only in Si iii. Therefore we fitted the absorption system with five velocity components. The result of the fit is given in Table 5 and line profiles are shown in Fig. 33. #### b.1.4 J0950+4309 The absorption system at the redshift of the galaxy 1-166736 is well detected in H i, Si ii, Si iii, C ii transitions. The H i absorption line is saturated, therefore we derived the velocity structure from the fit to metal absorption lines. We found that Si iiand C ii absorptions can be well fitted by one component which is red-shifted relative to the center of the strong Si iii absorption line. The additional blue component seen in only the Si iii absorption line also has a higher Doppler parameter, that indicates the difference in physical conditions between two components. The redshift of blue component has been chosen to well fit the blue wing of H i Ly\(\alpha\) absorption. Additionally we detected the third weak component at \(-247\) km/s with respect to the galaxy redshift, which is detected only in H i profile. The result of the fit is given in Table 5 and line profiles are shown in Fig. 27. #### b.1.5 J1237+4447 The absorption system at the redshift of the galaxy is detected in H i Ly\(\alpha\) and Ly\(\beta\) and Si iii absorption lines. Si ii absorption line is blended by the saturated C ii and C ii\({}^{*}\) absorption lines of the MW. Since the H i Ly\(\alpha\) is saturated, we derived the velocity structure from the fit to Si iii absorption line, which is well fitted by two velocity components. The fit to the red-shifted component in the H i line profile is degenerate in the parameter space \(N({\rm HI})-b\) and has two solutions with low and high \(N({\rm HI})\). We consider both solutions named by them (case A) and (case B). The likelihood functions are shown in fig. 17. The result of the fit is given in Table 5 and line profiles are shown in Fig. 28. #### b.1.6 J1338+0311 The absorption system associated with the host galaxy of the AGN 12-192116 has the strong damped H i Ly\(\alpha\) absorption (\(N=10^{20.2}\) cm\({}^{-2}\)). We described the fit to the H i Ly\(\alpha\) line in Section 2.2.2. The associated metal absorption lines are fitted by four velocity components. The result of the fit is given in Table 5 and line profiles are shown in Fig. 31. #### b.1.7 J2106+0909 We detected only one weak absorption H i Ly\(\alpha\) line (\(N=10^{13.7}\) cm\({}^{-2}\)) at the redshift of the galaxy 1-113242. The associated Si iii absorption line is blended with the MW S ii absorption lines, therefore we can set only upper limit to Si iii column density. The result of the fit is given in Table 5 and line profiles are shown in Fig. 34. #### b.1.8 J2130-0025 We detected strong saturated absorption lines of H i and metal species at the redshift of the galaxy 1-180522. The absorption lines are well fitted with one-component model. The result of the fit is given in Table 5 and line profiles are shown in Fig. 29. 
#### b.1.9 The case of nondetections

In three cases (J1653+3945, J1709+3421 and J1629+4007), we do not detect any absorption (in H i or in any of the metal ions) within the range of \(\pm 800\) km s\({}^{-1}\) relative to the galaxy redshifts (1-594755, 1-561034, 1-564490). Fig. 35 shows the parts of the quasar spectra near the expected positions of the H i, Si ii, Si iii, and N v lines. We set upper limits on \(N(\mathrm{HI})\) of \(\sim 10^{13}\) cm\({}^{-2}\).
2302.13218
Schrödinger equation with finitely many $δ$-interactions: closed form, integral and series representations for solutions
A closed form solution for the one-dimensional Schr\"{o}dinger equation with a finite number of $\delta$-interactions \[ \mathbf{L}_{q,\mathfrak{I}_{N}}y:=-y^{\prime\prime}+\left( q(x)+\sum _{k=1}^{N}\alpha_{k}\delta(x-x_{k})\right) y=\lambda y,\quad0<x<b,\;\lambda \in\mathbb{C}% \] is presented in terms of the solution of the unperturbed equation \[ \mathbf{L}_{q}y:=-y^{\prime\prime}+q(x)y=\lambda y,\quad0<x<b,\;\lambda \in\mathbb{C}% \] and a corresponding transmutation operator $\mathbf{T}_{\mathfrak{I}_{N}}^{f}$ is obtained in the form of a Volterra integral operator. With the aid of the spectral parameter power series method, a practical construction of the image of the transmutation operator on a dense set is presented, and it is proved that the operator $\mathbf{T}_{\mathfrak{I}_{N}}^{f}$ transmutes the second derivative into the Schr\"{o}dinger operator $\mathbf{L}_{q,\mathfrak{I}_{N}}$ on a Sobolev space $H^{2}$. A Fourier-Legendre series representation for the integral transmutation kernel is developed, from which a new representation for the solutions and their derivatives, in the form of a Neumann series of Bessel functions, is derived.
Vladislav V. Kravchenko, Víctor A. Vicente-Benítez
2023-02-26T03:02:26Z
http://arxiv.org/abs/2302.13218v2
Closed form solution and transmutation operators for Schrodinger equations with finitely many \(\delta\)-interactions ###### Abstract A closed form solution for the one-dimensional Schrodinger equation with a finite number of \(\delta\)-interactions \[{\bf L}_{q,{\mathfrak{I}}_{N}}y:=-y^{\prime\prime}+\left(q(x)+\sum_{k=1}^{N} \alpha_{k}\delta(x-x_{k})\right)y=\lambda y,\quad 0<x<b,\;\lambda\in{\mathbb{C}}\] is presented in terms of the solution of the unperturbed equation \[{\bf L}_{q}y:=-y^{\prime\prime}+q(x)y=\lambda y,\quad 0<x<b,\;\lambda\in{ \mathbb{C}}\] and a corresponding transmutation operator \({\bf T}_{{\mathfrak{I}}_{N}}^{f}\) is obtained in the form of a Volterra integral operator. With the aid of the spectral parameter power series method, a practical construction of the image of the transmutation operator on a dense set is presented, and it is proved that the operator \({\bf T}_{{\mathfrak{I}}_{N}}^{f}\) transmutes the second derivative into the Schrodinger operator \({\bf L}_{q,{\mathfrak{I}}_{N}}\) on a Sobolev space \(H^{2}\). A Fourier-Legendre series representation for the integral transmutation kernel is developed, from which a new representation for the solutions and their derivatives, in the form of a Neumann series of Bessel functions, is derived. **Keywords:** One-dimensional Schrodinger equation, point interactions, transmutation operator, Fourier-Legendre series, Neumann series of Bessel functions. **MSC Classification:** 34A25; 34A45; 46F10; 47G10; 81Q05. ## 1 Introduction We consider the one-dimensional Schrodinger equation with a finite number of \(\delta\)-interactions \[-y^{\prime\prime}+\left(q(x)+\sum_{k=1}^{N}\alpha_{k}\delta(x-x_{k})\right)y= \lambda y,\quad 0<x<b,\;\lambda\in{\mathbb{C}}, \tag{1}\] where \(q\in L_{2}(0,b)\) is a complex valued function, \(\delta(x)\) is the Dirac delta distribution, \(0<x_{1}<x_{2}<\cdots<x_{N}<b\) and \(\alpha_{1},\ldots,\alpha_{N}\in\mathbb{C}\setminus\{0\}\). Schrodinger equations with distributional coefficients supported on a set of measure zero naturally appear in various problems of mathematical physics [3, 4, 5, 6, 16, 44] and have been studied in a considerable number of publications and from different perspectives. In general terms, Eq. (1) can be interpreted as a regular equation, i.e., with the regular potential \(q\in L_{2}(0,b)\), whose solutions are continuous and such that their first derivatives satisfy the jump condition \(y^{\prime}(x_{k}+)-y^{\prime}(x_{k}-)=\alpha_{k}y(x_{k})\) at special points [25, 26]. Another approach consists in considering the interval \([0,b]\) as a quantum graph whose edges are the segments \([x_{k},x_{k+1}]\), \(k=0,\ldots,N\), (setting \(x_{0}=0\), \(x_{N+1}=b\)), and the Schrodinger operator with the regular potential \(q\) as an unbounded operator on the direct sum \(\bigoplus_{k=0}^{N}H^{2}(x_{k},x_{k+1})\), with the domain given by the families \((y_{k})_{k=0}^{N}\) that satisfy the condition of continuity \(y_{k}(x_{k}-)=y_{k+1}(x_{k}+)\) and the jump condition for the derivative \(y^{\prime}_{k+1}(x_{k}+)-y^{\prime}_{k}(x_{k}-)=\alpha_{k}y_{k}(x_{k})\) for \(k=1,\ldots N\) (see, e.g., [18, 34, 35]). This condition for the derivative is known in the bibliography of quantum graphs as the \(\delta\)-type condition [9]. Yet another approach implies a regularization of the Schrodinger operator with point interactions, that is, finding a subdomain of the Hilbert space \(L_{2}(0,b)\), where the operator defines a function in \(L_{2}(0,b)\). 
For this, note that the potential \(q(x)+\sum_{k=1}^{N}\alpha_{k}\delta(x-x_{k})\) defines a functional that belongs to the Sobolev space \(H^{-1}(0,b)\). In [11, 20, 23, 42] these forms of regularization have been studied, rewriting the operator by means of a factorization that involves a primitive \(\sigma\) of the potential. Theory of transmutation operators, also called transformation operators, is a widely used tool in studying differential equations and spectral problems (see, e.g., [8, 29, 36, 39, 43]), and it is especially well developed for Schrodinger equations with regular potentials. It is known that under certain general conditions on the potential \(q\) the transmutation operator transmuting the second derivative into the Schrodinger operator can be realized in the form of a Volterra integral operator of the second kind, whose kernel can be obtained by solving a Goursat problem for the Klein-Gordon equation with a variable coefficient [14, 36, 39]. Furthermore, functional series representations of the transmutation kernel have been constructed and used for solving direct and inverse Sturm-Liouville problems [29, 30]. For Schrodinger equations with \(\delta\)-point interactions, there exist results about equations with a single point interaction and discontinuous conditions \(y(x_{1}+)=ay(x_{1}-)\), \(y^{\prime}(x_{1}+)=\frac{1}{a}y^{\prime}(x_{1}-)+dy(x_{1}-)\), \(a,b>0\) (see [22, 46]), and for equations in which the spectral parameter is also present in the jump condition (see [1, 37, 38]). Transmutation operators have also been studied for equations with distributional coefficients belonging to the \(H^{-1}\)-Sobolev space in [11, 23, 42]. In [14], the possibility of extending the action of the transmutation operator for an \(L_{1}\)-potential to the space of generalized functions \(\mathscr{D}^{\prime}\), was studied. The aim of this work is to present a construction of a transmutation operator for the Schrodinger equation with a finite number of point interactions. The transmutation operator appears in the form of a Volterra integral operator, and with its aid we derive analytical series representations for solutions of (1). For this purpose, we obtain a closed form of the general solution of (1). From it, the construction of the transmutation operator is deduced, where the transmutation kernel is ensembled from the convolutions of the kernels of certain solutions of the regular equation (with the potential \(q\)), in a finite number of steps. Next, the spectral parameter power series (SPPS) method is developed for Eq. (1). The SPPS method was developed for continuous ([27, 31]) and \(L_{1}\)-potentials ([10]), and it has been used in a piecewise manner for solving spectral problems for equations with a finite number of point interactions in [6, 7, 41]. Following [15], we use the SPPS method to obtain an explicit construction of the image of the transmutation operator acting on polynomials. Similarly to the case of a regular potential [30], we obtain a representation of the transmutation kernel as a Fourier series in terms of Legendre polynomials and as a corollary, a representation for the solutions of equation (1) in terms of a Neumann series of Bessel functions. Similar representations are obtained for the derivatives of the solutions. 
It is worth mentioning that the methods based on Fourier-Legendre representations and Neumann series of Bessel functions have shown to be an effective tool in solving direct and inverse spectral problems for equations with regular potentials, see, e.g., [29, 30, 33]. In Section 2, basic properties of the solutions of (1) are compiled, studying the equation as a distributional sense in \(\mathscr{D}^{\prime}(0,b)\) and deducing properties of its regular solutions. Section 3 presents the construction of the closed form solution of (1). In Section 4, the construction of the transmutation operator and the main properties of the transmutation kernel are developed. In Section 5, the SPPS method is presented, with the mapping and transmutation properties of the transmutation operator. Section 6 presents the Fourier-Legendre series representations for the transmutation kernels and the Neumann series of Bessel functions representations for solutions of (1), and a recursive integral relation for the Fourier-Legendre coefficients is obtained. Finally, in Section 7, integral and Neumann series of Bessel functions representations for the derivatives of the solutions are presented. ## 2 Problem setting and properties of the solutions We use the standard notation \(W^{k,p}(0,b)\) (\(b>0\)) for the Sobolev space of functions in \(L_{p}(0,b)\) that have their first \(k\) weak derivatives in \(L_{p}(0,b)\), \(1\leqslant p\leqslant\infty\) and \(k\in\mathbb{N}\). When \(p=2\), we denote \(W^{k,2}(0,b)=H^{k}(0,b)\). We have that \(W^{1,1}(0,b)=AC[0,b]\), and \(W^{1,\infty}(0,b)\) is precisely the class of Lipschitz continuous functions in \([0,b]\) (see [12, Ch. 8]). The class of smooth functions with compact support in \((0,b)\) is denoted by \(C_{0}^{\infty}(0,b)\), then we define \(W_{0}^{1,p}(0,b)=\overline{C_{0}^{\infty}(0,b)}^{W^{1,p}}\) and \(H_{0}^{1}(0,b)=W_{0}^{1,2}(0,b)\). Denote the dual space of \(H_{0}^{1}(0,b)\) by \(H^{-1}(0,b)\). By \(L_{2,loc}(0,b)\) we denote the class of measurable functions \(f:(0,b)\to\mathbb{C}\) such that \(\int_{\alpha}^{\beta}|f(x)|^{2}dx<\infty\) for all subintervals \([\alpha,\beta]\subset(0,b)\). The characteristic function of an interval \([A,B]\subset\mathbb{R}\) is denoted by \(\chi_{[A,B]}(t)\). In order to simplify the notation, for the case of a symmetric interval \([-A,A]\), we simply write \(\chi_{A}\). The Heaviside function is given by \(H(t)=\chi_{(0,\infty)}(t)\). The lateral limits of the function \(f\) at the point \(\xi\) are denoted by \(f(\xi\pm)=\lim_{x\to\xi\pm}f(x)\). We use the notation \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). The space of distributions (generalized functions) over \(C_{0}^{\infty}(0,b)\) is denoted by \(\mathscr{D}^{\prime}(0,b)\), and the value of a distribution \(f\in\mathscr{D}^{\prime}(0,b)\) at \(\phi\in C_{0}^{\infty}(0,b)\) is denoted by \((f,\phi)_{C_{0}^{\infty}(0,b)}\). Let \(N\in\mathbb{N}\) and consider a partition \(0<x_{1}<\cdots<x_{N}<b\) and the numbers \(\alpha_{1},\ldots,\alpha_{N}\in\mathbb{C}\setminus\{0\}\). The set \(\mathfrak{I}_{N}=\{(x_{j},\alpha_{j})\}_{j=1}^{N}\) contains the information about the point interactions of Eq. (1). 
Denote \[q_{\delta,\mathfrak{I}_{N}}(x):=\sum_{k=1}^{N}\alpha_{k}\delta(x-x_{k}),\quad \mathbf{L}_{q}:=-\frac{d^{2}}{dx^{2}}+q(x),\quad\mathbf{L}_{q,\mathfrak{I}_{ N}}:=\mathbf{L}_{q}+q_{\delta,\mathfrak{I}_{N}}(x).\] For \(u\in L_{2,loc}(0,b)\), \({\bf L}_{q,{\mathfrak{I}}_{N}}u\) defines a distribution in \({\mathscr{D}}^{\prime}(0,b)\) as follows \[({\bf L}_{q,{\mathfrak{I}}_{N}}u,\phi)_{C^{\infty}_{0}(0,b)}:=\int_{0}^{b}u(x){ \bf L}_{q}\phi(x)dx+\sum_{k=1}^{N}\alpha_{k}u(x_{k})\phi(x_{k})\quad\mbox{for} \;\;\phi\in C^{\infty}_{0}(0,b).\] Note that the function \(u\) must be well defined at the points \(x_{k}\), \(k=1,\ldots,N\). Actually, for a function \(u\in H^{1}(0,b)\), the distribution \({\bf L}_{q,{\mathfrak{I}}_{N}}u\) can be extended to a functional in \(H^{-1}(0,b)\) as follows \[({\bf L}_{q,{\mathfrak{I}}_{N}}u,v)_{H^{1}_{0}(0,b)}:=\int_{0}^{b}\{u^{\prime} (x)v^{\prime}(x)+q(x)u(x)v(x)\}dx+\sum_{k=1}^{N}\alpha_{k}u(x_{k})v(x_{k}) \quad\mbox{for}\;\;v\in H^{1}_{0}(0,b).\] We say that a distribution \(F\in{\mathscr{D}}^{\prime}(0,b)\) is \(L_{2}\)-regular, if there exists a function \(g\in L_{2}(0,b)\) such that \((F,\phi)_{C^{\infty}_{0}(0,b)}=(g,\phi)_{C^{\infty}_{0}(0,b)}:=\int_{0}^{b}g( x)\phi(x)dx\) for all \(\phi\in C^{\infty}_{0}(0,b)\). Denote \(x_{0}=0\), \(x_{N+1}=b\). We recall the following characterization of functions \(u\in L_{2,loc}(0,b)\) for which \({\bf L}_{q,{\mathfrak{I}}_{N}}u\) is \(L_{2}\)-regular. **Proposition 1**: _If \(u\in L_{2,loc}(0,b)\), then the distribution \({\bf L}_{q,{\mathfrak{I}}_{N}}u\) is \(L_{2}\)-regular iff the following conditions hold._ 1. _For each_ \(k=0,\ldots,N\)_,_ \(u|_{(x_{k},x_{k+1})}\in H^{2}(x_{k},x_{k+1})\)_._ 2. \(u\in AC[0,b]\)_._ 3. _The discontinuities of the derivative_ \(u^{\prime}\) _are located at the points_ \(x_{k}\)_,_ \(k=1,\ldots,N\)_, and the jumps are given by_ \[u^{\prime}(x_{k}+)-u^{\prime}(x_{k}-)=\alpha_{j}u(x_{k})\quad\mbox{for}\;k=1, \cdots,N.\] (2) _In such case,_ \[({\bf L}_{q,{\mathfrak{I}}_{N}}u,\phi)_{C^{\infty}_{0}(0,b)}=({\bf L}_{q}u, \phi)_{C^{\infty}_{0}(0,b)}\quad\mbox{for all}\;\phi\in C^{\infty}_{0}(0,b). \tag{3}\] **Proof.** Suppose that \({\bf L}_{q,{\mathfrak{I}}_{N}}u\) is \(L_{2}\)-regular. Then there exists \(g\in L_{2}(0,b)\) such that \[({\bf L}_{q,{\mathfrak{I}}_{N}}u,\phi)_{C^{\infty}_{0}(0,b)}=(g,\phi)_{C^{ \infty}_{0}(0,b)}\quad\mbox{for all}\;\phi\in C^{\infty}_{0}(0,b). \tag{4}\] 1. Fix \(k\in\{1,\ldots,N-1\}\). Take a test function \(\phi\in C^{\infty}_{0}(0,b)\) with \(\mbox{Supp}(\phi)\subset(x_{k},x_{k+1})\). Hence \[\int_{x_{k}}^{x_{k+1}}g(x)\phi(x)dx=({\bf L}_{q,{\mathfrak{I}}_{N}}u,\phi)_{C^{ \infty}_{0}(0,b)}=\int_{x_{k}}^{x_{k+1}}u(x){\bf L}_{q}\phi(x)dx,\] (5) because \(\phi(x_{j})=0\) for \(j=1,\ldots,N\). From (5) we obtain \[\int_{x_{k}}^{x_{k+1}}u(x)\phi^{\prime\prime}(x)dx=\int_{x_{k}}^{x_{k+1}}\{q( x)u(x)-g(x)\}\phi(x)dx.\] Set \(v(x)=\int_{0}^{x}\int_{0}^{t}\{q(s)u(s)-g(s)\}dsdt\). Hence \(v\in W^{2,1}(x_{j},x_{j+1})\), \(v^{\prime\prime}(x)=q(x)u(x)-g(x)\) a.e. \(x\in(x_{j},x_{j+1})\), and we get the equality \[\int_{x_{k}}^{x_{k+1}}(u(x)-v(x))\phi^{\prime\prime}(x)dx=0\quad\forall\phi\in C _{0}^{\infty}(x_{k},x_{k+1}).\] (6) Equality (6) implies that \(u(x)=v(x)+Ax+B\) a.e. \(x\in(x_{k},x_{k+1})\) for some constants \(A\) and \(B\) ([45, pp. 85]). In consequence \(u\in W^{2,1}(x_{k},x_{k+1})\) and \[-u^{\prime\prime}(x)+q(x)u(x)=g(x)\quad\text{a.e. 
}x\in(x_{k},x_{k+1}).\] (7) Furthermore, \(u\in C[x_{k},x_{k+1}]\), hence \(qu\in L_{2}(x_{k},x_{k+1})\) and then \(u^{\prime\prime}=qu-g\in L_{2}(x_{k},x_{k+1})\). In this way \(u|_{(x_{k},x_{k+1})}\in H^{2}(x_{k},x_{k+1})\). Now take \(\varepsilon>0\) and an arbitrary \(\phi\in C_{0}^{\infty}(\varepsilon,x_{1})\). We have that \[(\mathbf{L}_{q,\mathcal{I}_{N}}u,\phi)_{C_{0}^{\infty}(0,b)}=\int_{ \varepsilon}^{x_{1}}\{-u(x)\phi^{\prime\prime}(x)+q(x)u(x)\phi(x)\}dx=\int_{ \varepsilon}^{x_{1}}g(x)\phi(x)dx.\] Applying the same procedure as in the previous case we obtain that \(u\in H^{2}(\varepsilon,x_{1})\) and satisfies Eq. (7) in the interval \((\varepsilon,x_{1})\). Since \(\varepsilon\) is arbitrary, we conclude that \(u\) satisfies (7) for a.e. \(x\in(0,x_{1})\). Since \(q,g\in L_{2}(0,x_{1})\), then \(u|_{(0,x_{1})}\in H^{2}(0,x_{1})\) (see [47, Th. 3.4]). The proof for the interval \((x_{N},b)\) is analogous. Since \(u\in C^{1}[x_{k},x_{k+1}]\), \(k=0,\ldots,N\), the following equality is valid (see formula (6) from [24, pp. 100]) \[\int_{0}^{b}u(x)\phi^{\prime\prime}(x)dx =\sum_{k=1}^{N}\left\{u^{\prime}(x_{k}+)-u^{\prime}(x_{k}-) \right\}\phi(x_{k})\] (8) \[-\sum_{k=1}^{N}\left\{u(x_{k}+)-u(x_{k}-)\right\}\phi^{\prime}(x_{k})+ \int_{0}^{b}u^{\prime\prime}(x)\phi(x)dx,\qquad\forall\phi\in C_{0}^{\infty}( 0,b).\] Fix \(k\in\{1,\cdots,N\}\) arbitrary and take \(\varepsilon>0\) small enough such that \((x_{k}-\varepsilon,x_{k}+\varepsilon)\subset(x_{k-1},x_{k+1})\). Choose a cut-off function \(\psi\in C_{0}^{\infty}(x_{k}-\varepsilon,x_{k}+\varepsilon)\) satisfying \(0\leqslant\psi\leqslant 1\) on \((x_{k}-\varepsilon,x_{k}+\varepsilon)\) and \(\psi(x)=1\) for \(x\in(x_{k}-\frac{\varepsilon}{3},x_{k}+\frac{\varepsilon}{3})\). 2. By statement 1, it is enough to show that \(u(x_{k}+)=u(x_{k}-)\). Set \(\phi(x)=(x-x_{k})\psi(x)\), in such a way that \(\phi(x_{k})=0\) and \(\phi^{\prime}(x_{k})=1\). Hence \[(\mathbf{L}_{q,\mathcal{I}_{N}}u,\phi)_{C_{0}^{\infty}(0,b)}=\int_{x_{k}- \varepsilon}^{x_{k}+\varepsilon}u(x)\mathbf{L}_{q}\phi(x)dx.\] By (8) we have \[\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}u(x)\phi^{\prime\prime}(x)dx=u(x_{ k}-)-u(x_{k}+)+\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}u^{\prime\prime}(x) \phi(x)dx,\] because \(\phi(x_{k})=0\) and \(\phi^{\prime}(x_{k})=1\). Since \(u\) satisfies (4), we have \[\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}(\mathbf{L}_{q}u(x)-g(x))\phi(x)dx +u(x_{k}+)-u(x_{k}-)=0.\] By statement 1, \(\mathbf{L}_{q}u=g\) on both intervals \((x_{k-1},x_{k})\), \((x_{k},x_{k+1})\). Then we obtain that \(u(x_{k}+)-u(x_{k}-)\)=0. 3. Now take \(\psi\) as the test function. Hence \[(\mathbf{L}_{q,\mathfrak{I}_{N}}u,\psi)_{C_{0}^{\infty}(0,b)}=\int_{x_{k}-\varepsilon }^{x_{k}+\varepsilon}u(x)\mathbf{L}_{q}\psi(x)dx+\alpha_{k}u(x_{k}),\] because \(\mathrm{Supp}(\psi)\subset(x_{k}-\varepsilon,x_{k}+\varepsilon)\) and \(\psi\equiv 1\) on \((x_{k}-\frac{\varepsilon}{3},x_{k}+\frac{\varepsilon}{3})\). On the other hand, by (8) we obtain \[\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}u(x)\psi^{\prime\prime}(x)dx=u^{ \prime}(x_{k}+)-u^{\prime}(x_{k}-)+\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon }u^{\prime\prime}(x)\psi(x)dx,\] because \(\psi^{\prime}(x_{k})=0\). Thus, by (4) we have \[\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}(\mathbf{L}_{q}u(x)-g(x))\psi(x) dx+u^{\prime}(x_{k}-)-u^{\prime}(x_{k}+)+\alpha_{k}u(x_{k})=0.\] Again, by statement 1, we obtain (2). Reciprocally, if \(u\) satisfies conditions 1,2 and 3, equality (8) implies (3). 
By condition 1, \(\mathbf{L}_{q,\mathfrak{I}_{N}}u\) is \(L_{2}\)-regular. **Definition 2**: _The \(L_{2}\)**-regularization domain** of \(\mathbf{L}_{q,\mathfrak{I}_{N}}\), denoted by \(\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\), is the set of all functions \(u\in L_{2,loc}(0,b)\) satisfying conditions 1,2 and 3 of Proposition 1._ If \(u\in L_{2,loc}(0,b)\) is a solution of (1), then \(\mathbf{L}_{q-\lambda,\mathfrak{I}_{N}}u\) equals the regular distribution zero. Then we have the next characterization. **Corollary 3**: _A function \(u\in L_{2,loc}(0,b)\) is a solution of Eq. (1) iff \(u\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) and for each \(k=0,\ldots,N\), the restriction \(u|_{(x_{k},x_{k+1})}\) is a solution of the regular Schrodinger equation_ \[-y^{\prime\prime}(x)+q(x)y(x)=\lambda y(x)\quad\text{for }x_{k}<x<x_{k+1}. \tag{9}\] **Remark 4**: _Let \(f\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\). Given \(g\in C^{1}[0,b]\), we have_ \[(fg)^{\prime}(x_{k}+)-(fg)^{\prime}(x_{k}-) =f^{\prime}(x_{k}+)g(x_{k})+f(x_{k})g^{\prime}(x_{k}+)-f^{\prime} (x_{k}-)g(x_{k})-f(x_{k})g^{\prime}(x_{k}-)\] \[=[f^{\prime}(x_{k}+)-f^{\prime}(x_{k}-)]\,g(x_{k})=\alpha_{k}f(x_ {k})g(x_{k})\] _for \(k=1,\ldots,N\). In particular, \(fg\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) for \(g\in H^{2}(0,b)\)._ **Remark 5**: _Let \(u_{0},u_{1}\in\mathbb{C}\). Consider the Cauchy problem_ \[\begin{cases}\mathbf{L}_{q,\mathfrak{I}_{N}}u(x)=\lambda u(x),\quad 0<x<b,\\ u(0)=u_{0},\ u^{\prime}(0)=u_{1}.\end{cases} \tag{10}\] _If the solution of the problem exists, it must be unique. It is enough to show the assertion for \(u_{0}=u_{1}=0\). Indeed, if \(w\) is a solution of such problem, by Corollary 3, \(w\) is a solution of (9) on \((0,x_{1})\) satisfying \(w(0)=w^{\prime}(0)=0\). Hence \(w\equiv 0\) on \([0,x_{1}]\). By the continuity of \(w\) and condition (2), we have \(w(x_{1})=w^{\prime}(x_{1}-)=0\). Hence \(w\) is a solution of (9) satisfying these homogeneous conditions. Thus, \(w\equiv 0\) on \([x_{1},x_{2}]\). By continuing the process until the points \(x_{k}\) are exhausted, we arrive at the solution \(w\equiv 0\) on the whole segment \([0,b]\)._ _The uniqueness of the Cauchy problem with conditions \(u(b)=u_{0}\), \(u^{\prime}(b)=u_{1}\) is proved in a similar way._ **Remark 6**: _Suppose that \(u_{0}=u_{0}(\lambda)\) and \(u_{1}=u_{1}(\lambda)\) are entire functions of \(\lambda\) and denote by \(u(\lambda,x)\) the corresponding unique solution of (10). Since \(u\) is the solution of the Cauchy problem \({\bf L}_{q}u=\lambda u\) on \((0,x_{1})\) with the initial conditions \(u(\lambda,0)=u_{1}(\lambda)\), \(u^{\prime}(\lambda,0)=u_{1}(\lambda)\), both \(u(\lambda,x)\) and \(u^{\prime}(\lambda,x+)\) are entire functions for any \(x\in[0,x_{1}]\) (this is a consequence of [47, Th. 3.9] and [10, Th. 7]). Hence \(u^{\prime}(\lambda,x_{1}-)=u^{\prime}(\lambda,x_{1}+)-\alpha_{1}u(\lambda,x_{ 1})\) is entire in \(\lambda\). Since \(u\) is the solution of the Cauchy problem \({\bf L}_{q}u=\lambda u\) on \((x_{1},x_{2})\) with initial conditions \(u(\lambda,x_{1})\) and \(u^{\prime}(\lambda,x_{1}+)\), we have that \(u(\lambda,x)\) and \(u^{\prime}(\lambda,x+)\) are entire functions for \(x\in[x_{1},x_{2}]\). By continuing the process we prove this assertion for all \(x\in[0,b]\)._ ## 3 Closed form solution In what follows, denote the square root of \(\lambda\) by \(\rho\), so \(\lambda=\rho^{2}\), \(\rho\in\mathbb{C}\). 
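Before the closed form is derived, the piecewise procedure described in Remark 5 can be illustrated numerically: one integrates the regular equation on each subinterval and applies the jump condition (2) at every interaction point. The following sketch is a minimal illustration, assuming \(q\equiv 0\), \(\lambda=4\) and interaction data \(\{(0.25,1),(0.75,2)\}\) chosen only for this example; the result is compared with the closed form of Theorem 7 below specialized to this configuration.

```python
# Piecewise integration of the Cauchy problem (10) following Remark 5: integrate the
# regular equation -y'' + q y = lambda y on each subinterval and apply the jump
# condition (2), y'(x_k+) - y'(x_k-) = alpha_k y(x_k), at each interaction point.
import numpy as np
from scipy.integrate import solve_ivp

b = 1.0
interactions = [(0.25, 1.0), (0.75, 2.0)]        # (x_k, alpha_k), illustrative
lam = 4.0                                        # lambda = rho^2, i.e. rho = 2
q = lambda x: 0.0                                # regular part of the potential

def rhs(x, Y):
    y, dy = Y
    return [dy, (q(x) - lam) * y]

nodes = [0.0] + [xk for xk, _ in interactions] + [b]
Y = np.array([0.0, 1.0])                         # Cauchy data u(0) = 0, u'(0) = 1
for (xl, xr), (_, alpha) in zip(zip(nodes[:-1], nodes[1:]), interactions + [(b, 0.0)]):
    sol = solve_ivp(rhs, (xl, xr), Y, rtol=1e-10, atol=1e-12)
    Y = sol.y[:, -1].copy()
    Y[1] += alpha * Y[0]                         # jump (2); alpha = 0 at the endpoint b

# closed form of Theorem 7 below, specialized to q = 0, N = 2, with u~(x) = sin(rho x)/rho
rho = np.sqrt(lam)
s = lambda t: np.sin(rho * t) / rho
(x1, a1), (x2, a2) = interactions
u_b = s(b) + a1 * s(x1) * s(b - x1) + a2 * s(x2) * s(b - x2) \
      + a1 * a2 * s(x1) * s(x2 - x1) * s(b - x2)
print("piecewise integration:  u(b) =", Y[0])
print("closed form (Thm. 7) :  u(b) =", u_b)
```

The two printed values agree to the solver tolerance, which is the content of Theorem 7 in this simple configuration.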
For each \(k\in\{1,\cdots,N\}\) let \(\widehat{s}_{k}(\rho,x)\) be the unique solution of the Cauchy problem \[\begin{cases}-\widehat{s}_{k}^{\prime\prime}(\rho,x)+q(x+x_{k})\widehat{s}_{k} (\rho,x)=\rho^{2}\widehat{s}_{k}(\rho,x)\quad\text{ for }0<x<b-x_{k},\\ \widehat{s}_{k}(\rho,0)=0,\ \widehat{s}_{k}^{\prime}(\rho,0)=1.\end{cases} \tag{11}\] In this way, \(\widehat{s}_{k}(\rho,x-x_{k})\) is a solution of \({\bf L}_{q}u=\rho^{2}u\) on \((x_{k},b)\) with initial conditions \(u(x_{k})=0\), \(u^{\prime}(x_{k})=1\). According to [45, Ch. 3, Sec. 6.3], \(({\bf L}_{q}-\rho^{2})\left(H(x-x_{k})\widehat{s}_{k}(\rho,x-x_{k})\right)=- \delta(x-x_{k})\) for \(x_{k}<x<b\). We denote by \({\cal J}_{N}\) the set of finite sequences \(J=(j_{1},\ldots,j_{l})\) with \(1<l\leqslant N\), \(\{j_{1},\ldots,j_{l}\}\subset\{1,\ldots,N\}\) and \(j_{1}<\cdots<j_{l}\). Given \(J\in{\cal J}_{N}\), the length of \(J\) is denoted by \(|J|\) and we define \(\alpha_{J}:=\alpha_{j_{1}}\cdots\alpha_{j_{|J|}}\). **Theorem 7**: _Given \(u_{0},u_{1}\in\mathbb{C}\), the unique solution \(u_{{\cal J}_{N}}\in{\cal D}_{2}\left({\bf L}_{q,{\cal J}_{N}}\right)\) of the Cauchy problem (10) has the form_ \[u_{{\cal J}_{N}}(\rho,x)=\widetilde{u}(\rho,x)+\sum_{k=1}^{N} \alpha_{k}\widetilde{u}(\rho,x_{k})H(x-x_{k})\widehat{s}_{k}(\rho,x-x_{k})\\ +\sum_{J\in{\cal J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\widetilde{u}( \rho,x_{j_{1}})\left(\prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}(\rho,x_{j_{l+1}}-x _{j_{l}})\right)\widehat{s}_{j_{|J|}}(\rho,x-x_{j_{|J|}}), \tag{12}\] _where \(\widetilde{u}(\rho,x)\) is the unique solution of the regular Schrodinger equation_ \[{\bf L}_{q}\widetilde{u}(\rho,x)=\rho^{2}\widetilde{u}(\rho,x),\quad 0<x<b, \tag{13}\] _satisfying the initial conditions \(\widetilde{u}(\rho,0)=u_{1},\ \widetilde{u}^{\prime}(\rho,0)=u_{1}\)._ **Proof.** The proof is by induction on \(N\). For \(N=1\), the proposed solution has the form \[u_{{\cal J}_{1}}(\rho,x)=\widetilde{u}(\rho,x)+\alpha_{1}H(x-x_{1})\widetilde {u}(\rho,x_{1})\widehat{s}_{1}(\rho,x-x_{1}).\] Note that \(u_{{\cal J}_{1}}(\rho,x)\) is continuous, and \(u_{{\cal J}_{1}}(\rho,x_{1})=\widetilde{u}(\rho,x_{1})\). Hence \[({\bf L}_{q}-\rho^{2})u_{{\cal J}_{1}}(\rho,x)=\alpha_{1}\widetilde{u}(\rho,x_ {1})({\bf L}_{q}-\rho^{2})\left(H(x-x_{1})\widehat{s}_{1}(\rho,x-x_{1})\right)= -\alpha_{1}\widetilde{u}(\rho,x_{1})\delta(x-x_{1}),\] that is, \(u_{\mathfrak{I}_{1}}(\rho,x)\) is a solution of (1) with \(N=1\). Suppose the result is valid for \(N\). Let \(u_{\mathfrak{I}_{N+1}}(\rho,x)\) be the proposed solution given by formula (12). It is clear that \(u_{\mathfrak{I}_{N+1}}(\rho,\cdot)|_{(x_{k},x_{k+1})}\in H^{2}(x_{k},x_{k+1})\), \(k=0,\cdots,N\), \(u_{\mathfrak{I}_{N+1}}(\rho,x)\) is a solution of (9) on each interval \((x_{k},x_{k+1})\), \(k=0,\ldots,N+1\), and \(u_{\mathfrak{I}_{N+1}}^{(j)}(\rho,0)=\widetilde{u}^{(j)}(\rho,0)=u_{j}\), \(j=0,1\). 
Furthermore, we can write \[u_{\mathfrak{I}_{N+1}}(\rho,x)=u_{\mathfrak{I}_{N}}(\rho,x)+H(x-x_{N+1})f_{N}( \rho,x),\] where \(\mathfrak{I}_{N}=\mathfrak{I}_{N+1}\setminus\{(x_{N+1},\alpha_{N+1})\}\), \(u_{\mathfrak{I}_{N}}(\rho,x)\) is the proposed solution for the interactions \(\mathfrak{I}_{N}\), and the function \(f_{N}(\rho,x)\) is given by \[f_{N}(\rho,x) =\alpha_{N+1}\widetilde{u}(\rho,x_{N+1})\widehat{s}_{N+1}(x-x_{ N+1})\] \[\quad+\sum_{J\in\mathcal{J}_{N+1}\atop j_{|J|}=N+1}\alpha_{J} \widetilde{u}(\rho,x_{j_{1}})\left(\prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}( \rho,x_{j_{l+1}}-x_{j_{l}})\right)\widehat{s}_{N+1}(\rho,x-x_{N+1}),\] where the sum is taken over all the sequences \(J=(j_{1},\ldots,j_{|J|})\in\mathcal{J}_{N}\) with \(j_{|J|}=N+1\). From this representation we obtain \(u_{\mathfrak{I}_{N+1}}(\rho,x_{N+1}\pm)=u_{\mathfrak{I}_{N}}(\rho,x_{N+1})\) and hence \(u_{\mathfrak{I}_{N+1}}\in AC[0,b]\). By the induction hypothesis, \(u_{\mathfrak{I}_{N}}(\rho,x)\) is the solution of (1) for \(N\), then in order to show that \(u_{\mathfrak{I}_{N+1}}(\rho,x)\) is the solution for \(N+1\) it is enough to show that \((\mathbf{L}_{q}-\rho^{2})\hat{f}_{N}(\rho,x)=-\alpha_{N}u_{N}(x_{N+1})\delta(x -x_{N+1})\), where \(\hat{f}_{N}(\rho,x)=H(x-x_{N+1})f_{N}(\rho,x)\). Indeed, we have \[(\rho^{2}-\mathbf{L}_{q})\hat{f}_{N}(\rho,x) =\alpha_{N+1}\widetilde{u}(\rho,x_{N+1})\delta(x-x_{N+1})+\] \[\quad+\sum_{J\in\mathcal{J}_{N+1}\atop j_{|J|}=N+1}\alpha_{J} \widetilde{u}(\rho,x_{j_{1}})\left(\prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}( \rho,x_{j_{l+1}}-x_{j_{l}})\right)\delta(x-x_{N+1})\] \[=\alpha_{N+1}\delta(x-x_{N+1})\Bigg{[}\widetilde{u}(\rho,x_{N+1} )+\sum_{k=1}^{N}\alpha_{k}\widetilde{u}(\rho,x_{N+1})\widehat{s}_{k}(\rho,x_{ N+1}-x_{k})\] \[\quad\quad+\sum_{J\in\mathcal{J}_{N}}\alpha_{J}\widetilde{u}(\rho,x_{j_{1}})\left(\prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}(\rho,x_{j_{l+1}}-x_{j _{l}})\right)\widehat{s}_{j_{|J|}}(\rho,x_{N+1}-x_{j_{|J|}})\Bigg{]}\] \[=\alpha_{N+1}u_{\mathfrak{I}_{N}}(\rho,x_{N+1})\delta(x-x_{N+1}) =\alpha_{N+1}u_{\mathfrak{I}_{N+1}}(\rho,x_{N+1})\delta(x-x_{N+1}),\] where the second equality is due to the fact that \[\{J\in\mathcal{J}_{N+1}\,|\,j_{|J|}=N+1\}=\{(J^{\prime},N+1)\,|\,J^{\prime}\in \mathcal{J}_{N}\}\cup\{(j,N+1)\}_{j=1}^{N}.\] Hence \(u_{\mathfrak{I}_{N+1}}(\rho,x)\) is the solution of the Cauchy problem. **Example 8**: _Consider the case \(q\equiv 0\). Denote by \(e^{0}_{\mathfrak{I}_{N}}(\rho,x)\) the unique solution of_ \[-y^{\prime\prime}+\left(\sum_{k=1}^{N}\alpha_{k}\delta(x-x_{k})\right)y=\rho^{ 2}y,\quad 0<x<b, \tag{14}\] _satisfying \(e^{0}_{\mathfrak{I}_{N}}(\rho,0)=1\), \(e^{0}_{\mathfrak{I}_{N}}(\rho,0)=i\rho\). In this case we have \(\widehat{s}_{k}(\rho,x)=\frac{\sin(\rho x)}{\rho}\) for \(k=1,\ldots,N\). Hence, according to Theorem 7, the solution \(e^{0}_{\mathfrak{I}_{N}}(\rho,x)\) has the form_ \[e^{0}_{\mathfrak{I}_{N}}(\rho,x)=e^{i\rho x}+\sum_{k=1}^{N} \alpha_{k}e^{i\rho x_{k}}H(x-x_{k})\frac{\sin(\rho(x-x_{k}))}{\rho}\\ +\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})e^{i\rho x_{ j_{1}}}\left(\prod_{l=1}^{|J|-1}\frac{\sin(\rho(x_{j_{l+1}}-x_{j_{l}}))}{\rho} \right)\frac{\sin(\rho(x-x_{j_{|J|}}))}{\rho}. \tag{15}\] ## 4 Transmutation operators ### Construction of the integral transmutation kernel Let \(h\in\mathbb{C}\). Denote by \(\widetilde{e}_{h}(\rho,x)\) the unique solution of Eq. (13) satisfying \(\widetilde{e}_{h}(\rho,0)=1\), \(\widetilde{e}^{\prime}_{h}(\rho,0)=i\rho+h\). 
Hence the unique solution \(e^{h}_{\mathfrak{I}_{N}}(\rho,x)\) of Eq. (1) satisfying \(e^{h}_{\mathfrak{I}_{N}}(\rho,0)=1\), \((e^{h}_{\mathfrak{I}_{N}})^{\prime}(\rho,0)=i\rho+h\) is given by \[e^{h}_{\mathfrak{I}_{N}}(\rho,x)=\widetilde{e}_{h}(\rho,x)+\sum_ {k=1}^{N}\alpha_{k}\widetilde{e}_{h}(\rho,x_{k})H(x-x_{k})\widehat{s}_{k}(\rho,x-x_{k})\\ +\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\widetilde{e }_{h}(\rho,x_{j_{1}})\left(\prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}(\rho,x_{j_{l +1}}-x_{j_{l}})\right)\widehat{s}_{j_{|J|}}(\rho,x-x_{j_{|J|}}). \tag{16}\] It is known that there exists a kernel \(\widetilde{K}^{h}\in C(\overline{\Omega})\cap H^{1}(\Omega)\), where \(\Omega=\{(x,t)\in\mathbb{R}^{2}\,|\,0<x<b,|t|<x\}\), such that \(\widetilde{K}^{h}(x,x)=\frac{h}{2}+\frac{1}{2}\int_{0}^{x}q(s)ds\), \(\widetilde{K}^{h}(x,-x)=\frac{h}{2}\) and \[\widetilde{e}_{h}(\rho,x)=e^{i\rho x}+\int_{-x}^{x}\widetilde{K}^{h}(x,t)e^{i \rho t}dt \tag{17}\] (see, e.g., [36, 39]). Actually, \(\widetilde{K}^{h}(x,\cdot)\in L_{2}(-x,x)\) and it can be extended (as a function of \(t\)) to a function in \(L_{2}(\mathbb{R})\) with a support in \([-x,x]\). For each \(k\in\{1,\ldots,N\}\) there exists a kernel \(\widehat{H}_{k}\in C(\overline{\Omega_{k}})\cap H^{1}(\Omega_{k})\) with \(\Omega_{k}=\{(x,t)\in\mathbb{R}^{2}\,|\,0<x<b-x_{k},\ |t|\leqslant x\}\), and \(\widehat{H}_{k}(x,x)=\frac{1}{2}\int_{x_{k}}^{x+x_{k}}q(s)ds\), \(\widehat{H}_{k}(x,-x)=0\), such that \[\widehat{s}_{k}(\rho,x)=\frac{\sin(\rho x)}{\rho}+\int_{0}^{x}\widehat{H}_{k} (x,t)\frac{\sin(\rho t)}{\rho}dt \tag{18}\] (see [19, Ch. 1]). From this we obtain the representation \[\widehat{s}_{k}(\rho,x-x_{k})=\frac{\sin(\rho(x-x_{k}))}{\rho}+\int_{0}^{x-x_{ k}}\widehat{H}_{k}(x-x_{k},t)\frac{\sin(\rho t)}{\rho}dt=\int_{-(x-x_{k})}^{x-x_{ k}}\ \widetilde{K}_{k}(x,t)e^{i\rho t}dt, \tag{19}\] where \[\widetilde{K}_{k}(x,t)=\frac{1}{2}\chi_{x-x_{k}}(t)+\frac{1}{2}\int_{|t|}^{x-x _{k}}\widehat{H}_{k}(x-x_{k},s)ds. \tag{20}\] We denote the Fourier transform of a function \(f\in L_{1}(\mathbb{R})\) by \(\mathcal{F}f(\rho)=\int_{\mathbb{R}}f(t)e^{i\rho t}dt\) and the convolution of \(f\) with a function \(g\in L_{1}(\mathbb{R})\) by \(f*g(t)=\int_{\mathbb{R}}f(t-s)g(s)ds\). We recall that \(\mathcal{F}(f*g)(\rho)=\mathcal{F}f(\rho)\cdot\mathcal{F}g(\rho)\). Given \(f_{1},\ldots,f_{M}\in L_{2}(\mathbb{R})\) with compact support, we denote their convolution product by \(\left(\prod_{l=1}^{M}\right)^{*}f_{l}(t):=(f_{1}*\cdots*f_{M})(t)\). For the kernels \(\widetilde{K}^{h}(x,t),\widetilde{K}_{k}(x,t)\), the operations \(\mathcal{F}\) and \(*\) will be applied with respect to the variable \(t\). **Lemma 9**: _Let \(A,B>0\). If \(f\in C[-A,A]\) and \(g\in C[-B,B]\), then \((\chi_{A}f)*(\chi_{B}g)\in C(\mathbb{R})\) with \(\mathrm{Supp}\left((\chi_{A}f)*(\chi_{B}g)\right)\subset[-(A+B),A+B]\)._ **Proof.** The assertion \(\mathrm{Supp}\left((\chi_{A}f)*(\chi_{B}g)\right)\subset[-(A+B),A+B]\) is due to [12, Prop. 4.18]. Since \((\chi_{A}f)\in L_{1}(\mathbb{R})\) and \((\chi_{B}g)\in L_{\infty}(\mathbb{R})\), it follows from [17, Prop. 8.8] that \((\chi_{A}f)*(\chi_{B}g)\in C(\mathbb{R})\). **Theorem 10**: _There exists a kernel \(K^{h}_{\mathfrak{I}_{N}}(x,t)\) defined on \(\Omega\) such that_ \[e^{h}_{\mathfrak{I}_{N}}(\rho,x)=e^{i\rho x}+\int_{-x}^{x}K^{h}_{\mathfrak{I}_ {N}}(x,t)e^{i\rho t}dt. 
\tag{21}\] _For any \(0<x\leqslant b\), \(K^{h}_{\mathfrak{I}_{N}}(x,t)\) is piecewise absolutely continuous with respect to the variable \(t\in[-x,x]\) and satisfies \(K^{h}_{\mathfrak{I}_{N}}(x,\cdot)\in L_{2}(-x,x)\). Furthermore, \(K^{h}_{\mathfrak{I}_{N}}\in L_{\infty}(\Omega)\)._ **Proof.** Ssubtitution of formulas (17) and (19) in (16) leads to the equality \[e^{h}_{\mathfrak{I}_{N}}(\rho,x)=e^{i\rho x}+\int_{-x}^{x} \widetilde{K}^{h}(x,t)e^{i\rho t}dt+\] \[+\sum_{k=1}^{N}\alpha_{k}H(x-x_{k})\left(e^{i\rho x_{k}}+\int \limits_{-x_{k}}^{x_{k}}\widetilde{K}^{h}(x_{k},t)e^{i\rho t}dt\right)\left( \int\limits_{-(x-x_{k})}^{x-x_{k}}\widetilde{K}_{k}(x,t)e^{i\rho t}dt\right)\] \[+\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\Bigg{[}\left( e^{i\rho x_{j_{1}}}+\int\limits_{-x_{j_{1}}}^{x_{j_{1}}}\widetilde{K}^{h}(x_{j_{1}},t )e^{i\rho t}dt\right)\left(\prod_{l=1}^{|J|-1}\int\limits_{-(x_{j_{l+1}}-x_{j_{ l}})}^{x_{j_{l+1}}-x_{j_{l}}}\widetilde{K}_{k}(x_{j_{l+1}},t)e^{i\rho t}dt\right)\] \[\cdot\int\limits_{-(x-x_{j_{|J|}})}^{x-x_{j_{|J|}}}\widetilde{K}_ {k}(x,t)e^{i\rho t}dt\Bigg{]}\] Note that \[\prod_{l=1}^{|J|-1}\int\limits_{-(x_{j_{l+1}}-x_{j_{l}})}^{x_{j_{l+1}}-x_{j_{l} }}\widetilde{K}_{k}(x_{j_{l+1}},t)e^{i\rho t}dt=\mathcal{F}\left\{\left(\prod _{l=1}^{|J|-1}\right)^{*}\left(\chi_{x_{j_{l+1}}-x_{j_{l}}}(t)\widetilde{K}_{ k}(x_{j_{l+1}},t)\right)\right\}.\] In a similar way, if we denote \(I_{A,B}=\left(e^{i\rho A}+\int\limits_{-A}^{A}\widetilde{K}^{h}(A,t)e^{i\rho t}dt \right)\left(\int\limits_{-B}^{B}\widetilde{K}_{k}(B,t)e^{i\rho t}dt\right)\) with \(A,B\in(0,b)\), then \[I_{A,B}= e^{i\rho A}\int\limits_{-B}^{B}\widetilde{K}_{k}(B,t)e^{i\rho t}dt+ \mathcal{F}\left(\chi_{A}(t)\widetilde{K}^{h}(A,t)*\chi_{B}(t)\widetilde{K}_{ k}(B,t)\right)\] \[= \mathcal{F}\left(\chi_{[A-B,B+A]}(t)\widetilde{K}_{k}(B,t-A)+ \chi_{A}(t)\widetilde{K}^{h}(A,t)*\chi_{B}(t)\widetilde{K}_{k}(B,t)\right).\] Set \(R_{N}(\rho,x)=e_{N}(\rho,x)-e^{i\rho x}\). Thus, \[R_{N}(\rho,x)= \mathcal{F}\Bigg{[}\chi_{x}(t)\widetilde{K}^{h}(x,t)\] \[+\sum_{k=1}^{N}\alpha_{k}H(x-x_{k})\left(\chi_{[2x_{k}-x,x]}(t) \widetilde{K}_{k}(x,t-x_{k})+\chi_{x_{k}}(t)\widetilde{K}^{h}(x_{k},t)*\chi_ {x-x_{k}}(t)\widetilde{K}_{k}(x,t)\right)\] \[+\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\left(\prod \limits_{l=1}^{|J|-1}\right)^{*}\left(\chi_{x_{j_{l+1}}-x_{j_{l}}}(t) \widetilde{K}_{k}(x_{j_{l+1}},t)\right)\] \[*\left(\chi_{[x_{j_{|J|}}+x_{j_{1}}-x,x-(x_{j_{|J|}}-x_{j_{1}})] }(t)\widetilde{K}_{j_{|J|}}(x,t-x_{j_{1}})\right.\] \[\qquad\left.+\chi_{x_{j_{1}}}(t)\widetilde{K}^{h}(x_{j_{1}},t)* \chi_{x-x_{j_{|J|}}}(t)\widetilde{K}_{j_{|J|}}(x,t)\right)\right]\] According to Lemma 9, the support of \(\left(\prod\nolimits_{l=1}^{|J|-1}\right)^{*}\left(\chi_{x_{j_{l+1}}-x_{j_{l} }}(t)\widetilde{K}_{k}(x_{j_{l+1}},t)\right)\) lies in \([x_{j_{1}}-x_{j_{|J|}},x_{j_{|J|}}-x_{j_{1}}]\) and \(\chi_{x-(x_{j_{|J|}}-x_{j_{1}})}(t)\widetilde{K}_{j_{|J|}}(x,t-x_{j_{1}})+ \chi_{x_{j_{1}}}(t)\widetilde{K}^{h}(x_{j_{1}},t)*\chi_{x-x_{j_{|J|}}}(t) \widetilde{K}_{j_{|J|}}(x,t)\) has its support in \([x_{j_{|J|}}+x_{j_{1}}-x,x-(x_{j_{|J|}}-x_{j_{1}})]\). Hence the convolution in the second sum of \(R_{N}(\rho,x)\) has its support in \([-x,x]\). On the other hand, \(\chi_{x_{k}}(t)\widetilde{K}^{h}(x_{k},t)*\chi_{x-x_{k}}(t)\widetilde{K}_{k}(x,t)\) has its support in \([-x,x]\), and since \([2x_{k}-x,x]\subset[-x,x]\), we conclude that \(\operatorname{Supp}\left(\mathcal{F}^{-1}R_{N}(\rho,x)\right)\subset[-x,x]\). 
Thus, we obtain (21) with \[K^{h}_{\mathcal{I}_{N}}(x,t)= \chi_{x}(t)\widetilde{K}^{h}(x,t)\] \[+\sum_{k=1}^{n}\alpha_{k}H(x-x_{k})\left(\chi_{[2x_{k}-x,x]}(t) \widetilde{K}_{k}(x,t-x_{k})+\chi_{x_{k}}(t)\widetilde{K}^{h}(x_{k},t)*\chi_{x- x_{k}}(t)\widetilde{K}_{k}(x,t)\right)\] \[+\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\left(\prod \limits_{l=1}^{|J|-1}\right)^{*}\left(\chi_{x_{j_{l+1}}-x_{j_{l}}}(t) \widetilde{K}_{j_{l}}(x_{j_{l+1}},t)\right) \tag{22}\] \[\qquad*\left(\chi_{x-(x_{j_{|J|}}-x_{j_{1}})}(t)\widetilde{K}_{j_{ |J|}}(x,t-x_{j_{1}})+\chi_{x_{j_{1}}}(t)\widetilde{K}^{h}(x_{j_{1}},t)*\chi_{x- x_{j_{|J|}}}(t)\widetilde{K}_{j_{|J|}}(x,t)\right)\!,\] and \(K_{\mathcal{I}_{N}}(x,\cdot)\in L_{2}(x,-x)\). By formula (22) and the definitions of \(\widehat{K}^{h}(x,t)\) and \(\widetilde{K}_{k}(x,t)\), \(K_{\mathcal{I}_{N}}(x,t)\) is piecewise absolutely continuous for \(t\in[-x,x]\). Since \(\widehat{K}^{h},\widetilde{K}_{k}\in L_{\infty}(\Omega)\), is clear that \(K^{f}_{\mathfrak{I}_{N}}\in L_{\infty}(\Omega)\). As a consequence of (21), \(e^{h}_{\mathfrak{I}_{N}}(\rho,x)\) is an entire function of exponential type \(x\) on the spectral parameter \(\rho\). **Example 11**: _Consider (15) with \(N=1\). In this case the solution \(e^{0}_{\mathfrak{I}_{1}}(\rho,x)\) is given by_ \[e^{0}_{\mathfrak{I}_{1}}(\rho,x)=e^{i\rho x}+\alpha_{1}e^{i\rho x_{1}}H(x-x_{1 })\frac{\sin(\rho(x-x_{1}))}{\rho}.\] _We have_ \[e^{i\rho x_{1}}\frac{\sin(\rho(x-x_{1}))}{\rho}=\frac{1}{2}\int_{x_{1}-x}^{x-x_ {1}}e^{i\rho(t+x_{1})}dt=\frac{1}{2}\int_{2x_{1}-x}^{x}e^{i\rho t}dt.\] _Hence_ \[e^{0}_{\mathfrak{I}_{1}}(\rho,x)=e^{i\rho x}+\int_{-x}^{x}K^{0}_{\mathfrak{I} _{1}}(x,t)e^{i\rho t}dt\quad\text{with}\;\;K^{0}_{\mathfrak{I}_{1}}(x,t)= \frac{\alpha_{1}}{2}H(x-x_{1})\chi_{[2x_{1}-x,x]}(t).\] **Example 12**: _Consider again Eq. (15) but now with \(N=2\). In this case the solution \(e^{0}_{\mathfrak{I}_{2}}(\rho,x)\) is given by_ \[e^{0}_{\mathfrak{I}_{2}}(\rho,x)= e^{i\rho x}+\alpha_{1}e^{i\rho x_{1}}H(x-x_{1})\frac{\sin(\rho(x-x_{1}))} {\rho}+\alpha_{2}e^{i\rho x_{2}}H(x-x_{2})\frac{\sin(\rho(x-x_{2}))}{\rho}\] \[+\alpha_{1}\alpha_{2}e^{i\rho x_{1}}H(x-x_{2})\frac{\sin(\rho(x_{ 2}-x_{1}))}{\rho}\frac{\sin(\rho(x-x_{2}))}{\rho},\] _and the transmutation kernel \(K^{0}_{\mathfrak{I}_{2}}(x,t)\) has the form_ \[K^{0}_{\mathfrak{I}_{2}}(x,t) =\frac{\alpha_{1}H(x-x_{1})}{2}\chi_{[2x_{1}-x,x]}(t)+\frac{ \alpha_{2}H(x-x_{2})}{2}\chi_{[2x_{1}-x,x]}(t)\] \[\quad+\frac{\alpha_{1}\alpha_{2}H(x-x_{2})}{4}\left(\chi_{x_{2}-x _{1}}*\chi_{x-x_{2}}\right)(t-x_{1}).\] _Direct computation shows that_ \[\chi_{x_{2}-x_{1}}*\chi_{x-x_{2}}(t-x_{1})=\\ \begin{cases}0,&t\not\in[2x_{1}-x,x],\\ t+x-2x_{1},&2x_{1}-x<t<-|2x_{2}-x-x_{1}|+x_{1},\\ x-x_{1}-|2x_{2}-x-x_{1}|,&-|2x_{2}-x-x_{1}|+x_{1}<t<|2x_{2}-x-x_{1}|+x_{1}\\ x-t,&|2x_{2}-x-x_{1}|+x_{1}<t<x.\end{cases}\] _In Figure 1, we can see some level curves of the kernel \(K^{0}_{\mathfrak{I}_{2}}(x,t)\) (as a function of \(t\)), \(\mathfrak{I}_{2}=\{(0.25,1),(0.75,2)\}\), for some values of \(x\)._ For the general case we have the following representation for the kernel. 
**Proposition 13**: _The transmutation kernel \(K^{0}_{\mathfrak{I}_{N}}(\rho,x)\) for the solution \(e^{0}_{\mathfrak{I}_{N}}(\rho,x)\) of (15) is given by_ \[K^{0}_{\mathfrak{I}_{N}}(x,t) =\sum_{k=0}^{N}\frac{\alpha_{k}H(x-x_{k})}{2}\chi_{[2x_{k}-x,x]}(t)\] \[+\sum_{J\in\mathcal{J}_{N}}\frac{\alpha_{J}H(x-x_{j_{|J|}})}{2^{|J |}}\left(\left(\prod_{l=1}^{|J|-1}\right)^{*}\chi_{x_{j_{l+1}}-x_{j_{l}}}(t) \right)*\chi_{x-x_{j_{|J|}}}(t-x_{j_{1}}) \tag{23}\] **Proof.** In this case \(\widetilde{e}_{0}(\rho,x)=e^{i\rho x}\), \(\widehat{s}_{k}(\rho,x-x_{k})=\frac{\sin(\rho(x-x_{k}))}{\rho}\), hence \(\widetilde{K}^{0}(x,t)\equiv 0\), \(\widetilde{K}_{k}(x,t)=\frac{1}{2}\chi_{x-x_{k}}(t)\). Substituting these expressions into (22) and taking into account that \(\chi_{x_{j_{|J|}}+x_{j_{1}}-x,x-(x_{j_{|J|}}-x_{j_{1}})}(t)=\chi_{x-x_{j_{|J|} }}(t-x_{j_{1}})\) we obtain (23) Let \[\mathbf{T}^{h}_{\mathfrak{I}_{N}}u(x):=u(x)+\int_{-x}^{x}K^{h}_{\mathfrak{I}_ {N}}(x,t)u(t)dt. \tag{24}\] By Theorem 10, \(\mathbf{T}^{f}_{\mathfrak{I}_{N}}\in\mathcal{B}\left(L_{2}(-b,b)\right)\) and \[e^{h}_{\mathfrak{I}_{N}}(\rho,x)=\mathbf{T}^{h}_{\mathfrak{I}_{N}}\left[e^{i \rho x}\right]. \tag{25}\] ### Goursat conditions Let us define the function \[\sigma_{\mathfrak{I}_{N}}(x):=\sum_{k=1}^{N}\alpha_{k}H(x-x_{k}). \tag{26}\] Hence \(\sigma^{\prime}_{\mathfrak{I}_{N}}(x)=q_{\delta,\mathfrak{I}_{n}}(x)\) in the distributional sense ( \((\sigma_{\mathfrak{I}_{N}},\phi)_{C^{\infty}_{0}(0,b)}=-(q_{\delta, \mathfrak{I}_{N}},\phi^{\prime})_{C^{\infty}_{0}(0,b)}\) for all \(\phi\in C^{\infty}_{0}(0,b)\)). Note that in Examples 11 and 12 we have \[K^{0}_{\mathfrak{I}_{N}}(x,x)=\frac{1}{2}\left(\int_{0}^{x}q(s)ds+\sigma_{ \mathfrak{I}_{N}}(x)\right)\ \ \ \text{and}\ \ K^{0}_{\mathfrak{I}_{N}}(x,-x)=0\ \ \ \text{for}\ N=1,2.\] More generally, the following statement is true. **Proposition 14**: _The integral transmutation kernel \(K^{h}_{\mathfrak{I}_{N}}\) satisfies the following Goursat conditions for \(x\in[0,b]\)_ \[K^{h}_{\mathfrak{I}_{N}}(x,x)=\frac{1}{2}\left(h+\int_{0}^{x}q(s)ds+\sigma_{ \mathfrak{I}_{N}}(x)\right)\ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \ K^{h}_{\mathfrak{I}_{N}}(x,-x)=\frac{h}{2}. \tag{27}\] **Proof.** Fix \(x\in[0,b]\) and take \(\xi\in\{-x,x\}\). By formula (22) we can write \[K^{h}_{\mathfrak{I}_{N}}(x,\xi)=\widetilde{K}^{h}(x,\xi)+\sum_{k=1}^{N}\alpha _{k}H(x-x_{k})\chi_{[2x_{k}-x,x]}(\xi)\widetilde{K}_{k}(x,\xi-x_{k})+F(x,\xi),\] where \[F(x,t)= \sum_{k=1}^{n}\alpha_{k}H(x-x_{k})\chi_{x_{k}}(t)\widetilde{K}^{h }(x_{k},t)*\chi_{x-x_{k}}(t)\widetilde{K}_{k}(x,t)\] \[+\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\left(\prod _{l=1}^{|J|-1}\right)^{*}\Big{(}\chi_{x_{j_{l+1}}-x_{j_{l}}}(t)\widetilde{K} _{j_{l}}(x_{j_{l+1}},t)\Big{)}\] \[\ \ \ \ \ \ *\Big{(}\chi_{x-(x_{j_{|J|}}-x_{j_{1}})}(t)\widetilde{K} _{j_{|J|}}(x,t-x_{j_{1}})+\chi_{x_{j_{1}}}(t)\widetilde{K}^{h}(x_{j_{1}},t)* \chi_{x-x_{j_{|J|}}}(t)\widetilde{K}_{j_{|J|}}(x,t)\Big{)}.\] In the proof of Theorem 10 we obtain that \(\text{Supp}(F(x,t))\subset[-x,x]\). Since \(\widetilde{K}^{h}(x_{j},t)\) and \(\widetilde{K}_{k}(x_{j},t)\) are continuous with respect to \(t\) in the intervals \([-x_{j},x_{j}]\) and \([x_{k}-x_{j},x_{j}-x_{k}]\) respectively for \(j=1,\ldots,N\), \(k\leqslant j\), by Lemma 9 the function \(F(x,t)\) is continuous for all \(t\in\mathbb{R}\). Thus \(F(x,\xi)=0\). 
For the case \(\xi=x\), we have that \(\widetilde{K}^{h}(x,x)=\frac{h}{2}+\frac{1}{2}\int_{0}^{x}q(s)ds\), \(\chi_{[2x_{k}-x,x]}(x)=1\) and \[\widetilde{K}_{k}(x,x-x_{k})=\frac{1}{2}\chi_{x-x_{k}}(x-x_{k})+\frac{1}{2} \int_{|x-x_{k}|}^{x-x_{k}}\widehat{H}_{k}(x-x_{k},s)ds=\frac{1}{2}\] (we assume that \(x\geqslant x_{k}\) in order to have \(H(x-x_{k})=1\)). Thus \(K^{h}_{\mathfrak{I}_{N}}(x,x)=\frac{1}{2}\left(h+\int_{0}^{x}q(s)ds+\sigma_{ \mathfrak{I}_{N}}(x)\right)\). For the case \(\xi=-x\), \(\widetilde{K}^{h}(x,-x)=\frac{h}{2}\) and \(\chi_{[2x_{k}-x,x]}(-x)=0\). Hence \(K^{h}_{\mathfrak{I}_{N}}(x,x)=\frac{h}{2}\). **Remark 15**: _According to Proposition 14, \(2K^{h}_{\mathfrak{I}_{N}}(x,x)\) is a (distributional) antiderivative of the potential \(q(x)+q_{\delta,\mathfrak{I}_{N}}(x)\)._ ### The transmuted Cosine and Sine solutions Let \(c_{\mathfrak{I}_{N}}^{h}(\rho,x)\) and \(s_{\mathfrak{I}_{N}}(\rho,x)\) be the solutions of Eq. (1) satisfying the initial conditions \[c_{\mathfrak{I}_{N}}^{h}(\rho,0)=1, (c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,0)=h, \tag{28}\] \[s_{\mathfrak{I}_{N}}(\rho,0)=0, s_{\mathfrak{I}_{N}}^{\prime}(\rho,0)=1. \tag{29}\] Note that \(c_{\mathfrak{I}_{N}}^{h}(\rho,x)=\frac{e_{\mathfrak{I}_{N}}^{h}(\rho,x)+e_{ \mathfrak{I}_{N}}^{h}(-\rho,x)}{2}\) and \(s_{\mathfrak{I}_{N}}(\rho,x)=\frac{e_{\mathfrak{I}_{N}}^{h}(\rho,x)-e_{ \mathfrak{I}_{N}}^{h}(-\rho,x)}{2i\rho}\). **Remark 16**: _By Corollary 3, \(c_{\mathfrak{I}_{N}}^{h}(\rho,\cdot),s_{\mathfrak{I}_{N}}(\rho,\cdot)\in AC[0,b]\) and both functions are solutions of Eq. (9) on \([0,x_{1}]\), hence their Wronskian is constant for \(x\in[0,x_{1}]\) and_ \[1 =W\left[c_{\mathfrak{I}_{N}}^{h}(\rho,x),s_{\mathfrak{I}_{N}}( \rho,x)\right](0)=W\left[c_{\mathfrak{I}_{N}}^{h}(\rho,x),s_{\mathfrak{I}_{N}} (\rho,x)\right](x_{1}-)=\begin{vmatrix}c_{\mathfrak{I}_{N}}^{h}(\rho,x_{1})&s _{\mathfrak{I}_{N}}(\rho,x_{1})\\ (c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x_{1}-)&s_{\mathfrak{I}_{N}}^{\prime }(\rho,x_{1}-)\end{vmatrix}\] \[=\begin{vmatrix}c_{\mathfrak{I}_{N}}^{h}(\rho,x_{1})&s_{ \mathfrak{I}_{N}}(\rho,x_{1})\\ (c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x_{1}+)-\alpha_{1}c_{\mathfrak{I}_{N} }^{h}(\rho,x_{1})&s_{\mathfrak{I}_{N}}^{\prime}(\rho,x_{1}+)-\alpha_{1}s_{ \mathfrak{I}_{N}}(\rho,x_{1})\end{vmatrix}\] \[=\begin{vmatrix}c_{\mathfrak{I}_{N}}^{h}(\rho,x_{1})&s_{ \mathfrak{I}_{N}}(\rho,x_{1})\\ (c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x_{1}+)&s_{\mathfrak{I}_{N}}^{\prime }(\rho,x_{1}+)\end{vmatrix}=W\left[c_{\mathfrak{I}_{N}}^{h}(\rho,x),s_{ \mathfrak{I}_{N}}(\rho,x)\right](x_{1}+)\] _(the equality in the second line is due to (2)). Since \(c_{\mathfrak{I}_{N}}^{h}(\rho,x),s_{\mathfrak{I}_{N}}(\rho,x)\) are solutions of (9) on \([x_{1},x_{2}]\), then \(W\left[C_{\mathfrak{I}_{N}}^{h}(\rho,x),s_{\mathfrak{I}_{N}}(\rho,x)\right]\) is constant for \(x\in[x_{1},x_{2}]\). Thus, \(W\left[c_{\mathfrak{I}_{N}}^{h}(\rho,x),s_{\mathfrak{I}_{N}}(\rho,x)\right](x)=1\) for all \(x\in[0,x_{2}]\). Continuing the process we obtain that the Wronskian equals one in the whole segment \([0,b]\). Thus, \(c_{\mathfrak{I}_{N}}^{h}(\rho,x),s_{\mathfrak{I}_{N}}(\rho,x)\) are linearly independent. Finally, if \(u\) is a solution of (1), by Remark 5, \(u\) can be written as \(u(x)=u(0)c_{\mathfrak{I}_{N}}^{h}(\rho,x)+u^{\prime}(0)s_{\mathfrak{I}_{N}}( \rho,x)\). In this way, \(\left\{c_{\mathfrak{I}_{N}}^{h}(\rho,x),s_{\mathfrak{I}_{N}}(\rho,x)\right\}\) is a fundamental set of solutions for (1)._ Similarly to the case of the regular Eq. 
(13) (see [39, Ch. 1]), from (21) we obtain the following representations. **Proposition 17**: _The solutions \(c_{\mathfrak{I}_{N}}^{h}(\rho,x)\) and \(s_{\mathfrak{I}_{N}}(\rho,x)\) admit the following integral representations_ \[c_{\mathfrak{I}_{N}}^{h}(\rho,x) =\cos(\rho x)+\int_{0}^{x}G_{\mathfrak{I}_{N}}^{h}(x,t)\cos(\rho t )dt, \tag{30}\] \[s_{\mathfrak{I}_{N}}(\rho,x) =\frac{\sin(\rho x)}{\rho}+\int_{0}^{x}S_{\mathfrak{I}_{N}}(x,t) \frac{\sin(\rho t)}{\rho}dt, \tag{31}\] _where_ \[G_{\mathfrak{I}_{N}}^{h}(x,t) =K_{\mathfrak{I}_{N}}^{h}(x,t)+K_{\mathfrak{I}_{N}}^{h}(x,-t), \tag{32}\] \[S_{\mathfrak{I}_{N}}(x,t) =K_{\mathfrak{I}_{N}}^{h}(x,t)-K_{\mathfrak{I}_{N}}^{h}(x,-t). \tag{33}\] **Remark 18**: _By Proposition 14, the cosine and sine integral transmutation kernels satisfy the conditions_ \[G_{\mathfrak{I}_{N}}^{h}(x,x)=h+\frac{1}{2}\left(\int_{0}^{x}q(s)ds+\sigma_{ \mathfrak{I}_{N}}(x)\right), \tag{34}\] \[S_{\mathfrak{I}_{N}}(x,x)=\frac{1}{2}\left(\int_{0}^{x}q(s)ds+\sigma_{\mathfrak{I}_ {N}}(x)\right)\quad\text{and}\quad S_{\mathfrak{I}_{N}}(x,0)=0. \tag{35}\] _Introducing the cosine and sine transmutation operators_ \[\mathbf{T}^{C}_{\mathfrak{I}_{N},h}u(x)=u(x)+\int_{0}^{x}G^{h}_{\mathfrak{I}_{ N}}(x,t)u(t)dt,\quad\mathbf{T}^{S}_{\mathfrak{I}_{N}}u(x)=u(x)+\int_{0}^{x}S_{ \mathfrak{I}_{N}}(x,t)u(t)dt \tag{36}\] _we obtain_ \[c^{h}_{\mathfrak{I}_{N}}(\rho,x)=\mathbf{T}^{C}_{\mathfrak{I}_{N},h}\left[ \cos(\rho x)\right],\quad s_{\mathfrak{I}_{N}}(\rho,x)=\mathbf{T}^{S}_{ \mathfrak{I}_{N}}\left[\frac{\sin(\rho x)}{\rho}\right]. \tag{37}\] **Remark 19**: _According to Remark 16, the space of solutions of (1) has dimension 2, and given \(f,g\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) solutions of (1), repeating the same procedure of Remark 16, \(W[f,g]\) is constant in the whole segment \([0,b]\). The solutions \(f,g\) are a fundamental set of solutions iff \(W[f,g]\neq 0\)._ ## 5 The SPPS method and the mapping property ### Spectral parameter powers series As in the case of the regular Schrodinger equation [10, 31], we obtain a representation for the solutions of (1) as a power series in the spectral parameter (SPPS series). Assume that there exists a solution \(f\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) that does not vanish in the whole segment \([0,b]\). **Remark 20**: _Given \(g\in L_{2}(0,b)\), a solution \(u\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) of the non-homogeneous Cauchy problem_ \[\begin{cases}\mathbf{L}_{q,\mathfrak{I}_{N}}u(x)=g(x),\quad 0<x<b\\ u(0)=u_{0},\;u^{\prime}(0)=u_{1}\end{cases} \tag{38}\] _can be obtained by solving the regular equation \(\mathbf{L}_{q}u(x)=g(x)\) a.e. \(x\in(0,b)\) as follows. Consider the Polya factorization \(\mathbf{L}_{q}u=-\frac{1}{f}Df^{2}D\frac{u}{f}\), where \(D=\frac{d}{dx}\). A direct computation shows that \(u\) given by_ \[u(x)=-f(x)\int_{0}^{x}\frac{1}{f^{2}(t)}\int_{0}^{t}f(s)g(s)ds+\frac{u_{0}}{f (0)}f(x)+(f(0)u_{1}-f^{\prime}(0)u_{0})f(x)\int_{0}^{x}\frac{dt}{f^{2}(t)} \tag{39}\] _satisfies (38) (actually, \(f(x)\int_{0}^{x}\frac{1}{f^{2}(t)}dt\) is the second linearly independent solution of \(\mathbf{L}_{q}u=0\) obtained from \(f\) by Abel's formula). By Remark 4, \(u\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) and by Proposition 1 and Remark 5, formula (39) provides the unique solution of (38). 
Actually, if we denote \(\mathcal{I}u(x):=\int_{0}^{x}u(t)dt\) and define \(\mathbf{R}^{f}_{\mathfrak{I}_{N}}:=-f\mathcal{I}f^{2}\mathcal{I}\), then \(\mathbf{R}^{f}_{\mathfrak{I}_{N}}\in\mathcal{B}\left(L_{2}(0,b)\right)\), \(\mathbf{R}^{f}_{\mathfrak{I}_{N}}\left(L_{2}(0,b)\right)\subset\mathcal{D}_{2 }\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) and is a right-inverse for \(\mathbf{L}_{q,\mathfrak{I}_{N}}\), i.e., \(\mathbf{L}_{q,\mathfrak{I}_{N}}\mathbf{R}^{f}_{\mathfrak{I}_{N}}g=g\) for all \(g\in L_{2}(0,b)\)._ Following [31] we define the following recursive integrals: \(\widetilde{X}^{(0)}\equiv X^{(0)}\equiv 1\), and for \(k\in\mathbb{N}\) \[\widetilde{X}^{(k)}(x) :=k\int_{0}^{x}\widetilde{X}^{(k-1)}(s)\left(f^{2}(s)\right)^{(-1)^{k -1}}ds, \tag{40}\] \[X^{(k)}(x) :=k\int_{0}^{x}X^{(k-1)}(s)\left(f^{2}(s)\right)^{(-1)^{k}}ds. \tag{41}\] The functions \(\{\varphi_{f}^{(k)}(x)\}_{k=0}^{\infty}\) defined by \[\varphi_{f}^{(k)}(x):=\begin{cases}f(x)\widetilde{X}^{(k)}(x),&\text{if $k$ even},\\ f(x)X^{(k)}(x),&\text{if $k$ odd}.\end{cases} \tag{42}\] for \(k\in\mathbb{N}_{0}\), are called the _formal powers_ associated to \(f\). Additionally, we introduce the following auxiliary formal powers \(\{\psi_{f}^{(k)}(x)\}_{k=0}^{\infty}\) given by \[\psi_{f}^{(k)}(x):=\begin{cases}\frac{\widetilde{X}^{(k)}(x)}{f(x)},&\text{ if $k$ odd},\\ \frac{X^{(k)}(x)}{f(x)},&\text{if $k$ even}.\end{cases} \tag{43}\] **Remark 21**: _For each \(k\in\mathbb{N}_{0}\), \(\varphi_{f}^{(k)}\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\). Indeed, direct computations show that the following relations hold for all \(k\in\mathbb{N}_{0}\):_ \[D\varphi_{f}^{(k)} =\frac{f^{\prime}}{f}\varphi_{f}^{(k)}+k\psi_{f}^{(k-1)} \tag{44}\] \[D^{2}\varphi_{f}^{(k)} =\frac{f^{\prime\prime}}{f}\varphi_{f}^{(k)}+k(k-1)\varphi_{f}^{ (k-2)} \tag{45}\] _Since \(\varphi_{f}^{(k)},\psi_{f}^{(k)}\in C[0,b]\), using the procedure from Remark 4 and (44) we obtain \(\varphi_{f}^{(k)}\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\)._ **Theorem 22** (SPPS method): _Suppose that \(f\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) is a solution of (1) that does not vanish in the whole segment \([0,b]\). Then the functions_ \[u_{0}(\rho,x)=\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}\varphi_{f}^{(2k)}(x )}{(2k)!},\quad u_{1}(\rho,x)=\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k} \varphi_{f}^{(2k+1)}(x)}{(2k+1)!} \tag{46}\] _belong to \(\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\), and \(\{u_{0}(\rho,x),u_{1}(\rho,x)\}\) is a fundamental set of solutions for (1) satisfying the initial conditions_ \[u_{0}(\rho,0)=f(0), u_{0}^{\prime}(\rho,0) =f^{\prime}(0), \tag{47}\] \[u_{1}(\rho,0)=0, u_{1}^{\prime}(\rho,0) =\frac{1}{f(0)}, \tag{48}\] _The series in (46) converge absolutely and uniformly on \(x\in[0,b]\), the series of the derivatives converge in \(L_{2}(0,b)\) and the series of the second derivatives converge in \(L_{2}(x_{j},x_{j+1})\), \(j=0,\cdots,N\). 
With respect to \(\rho\) the series converge absolutely and uniformly on any compact subset of the complex \(\rho\)-plane._ **Proof.** Since \(f\in C[0,b]\), the following estimates for the recursive integrals \(\{\widetilde{X}^{(k)}(x)\}_{k=0}^{\infty}\) and \(\{X^{(k)}(x)\}_{k=0}^{\infty}\) are known: \[|\widetilde{X}^{(n)}(x)|\leqslant M_{1}^{n}b^{n},\ |X^{(n)}(x)|\leqslant M_{1}^{n}b^{n} \quad\text{for all }x\in[0,b], \tag{49}\] where \(M_{1}=\|f^{2}\|_{C[0,b]}\cdot\left\|\frac{1}{f^{2}}\right\|_{C[0,b]}\) (see the proof of Theorem 1 of [31]). Thus, by the Weierstrass \(M\)-tests, the series in (46) converge absolutely and uniformly on \(x\in[0,b]\), and for \(\rho\) on any compact subset of the complex \(\rho\)-plane. We prove that \(u_{0}(\rho,x)\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) and is a solution of (1) (the proof for \(u_{1}(\rho,x)\) is analogous). By Remark 21, the series of the derivatives of \(u_{0}(\rho,x)\) is given by \(\frac{f^{\prime}}{f}\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}\varphi_{f}^{( 2k)}}{(2k)!}+\sum_{k=1}^{\infty}\frac{(-1)^{k}\rho^{2k}\psi_{f}^{(2k-1)}}{(2k- 1)!}\). By (49), the series involving the formal powers \(\varphi_{f}^{(k)}\) and \(\psi_{f}^{(k)}\) converge absolutely and uniformly on \(x\in[0,b]\). Hence, \(\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{k}D\varphi_{f}^{(2k)}(x)}{(2k)!}\) converges in \(L_{2}(0,b)\). Due to [10, Prop. 3], \(u_{0}(\rho,\cdot)\in AC[0,b]\) and \(u_{0}^{\prime}(\rho,x)=\frac{f^{\prime}(x)}{f(x)}\sum_{k=0}^{\infty}\frac{(- 1)^{k}\rho^{2k}\varphi_{f}^{(2k)}}{(2k)!}+\sum_{k=1}^{\infty}\frac{(-1)^{k} \rho^{2k}\psi_{f}^{(2k-1)}}{(2k-1)!}\) in \(L_{2}(0,b)\). Since the series involving the formal powers defines continuous functions, then \(u_{0}(\rho,x)\) satisfies the jump condition (2). Applying the same reasoning it is shown that \(u_{0}^{\prime\prime}(\rho,x)=\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}D^{2} \varphi_{f}^{(2k)}}{(2k)!}\), the series converges in \(L_{2}(x_{j},x_{j+1})\) and \(u_{0}(\rho,\cdot)|_{(x_{j},x_{j+1})}\in H^{2}(x_{j},x_{j+1})\), \(j=0,\ldots,N\). Since \(\widetilde{X}^{(n)}(0)=0\) for \(n\geqslant 1\), we have (47). Finally, by (45) \[\mathbf{L}_{q}u_{0}(\rho,x) =\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}\mathbf{L}_{q}\varphi _{f}^{(2k)}(x)}{(2k)!}=\sum_{k=2}^{\infty}\frac{(-1)^{k+1}\rho^{2k}\varphi_{ f}^{(2k-2)}(x)}{(2k-2)!}\] \[=\rho^{2}\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}\varphi_{f}^{ (2k)}(x)}{(2k)!}=\rho^{2}u(\rho,x),\] this for a.e. \(x\in(x_{j},x_{j+1})\), \(j=0,\ldots,N\). Using (47) and (48) we obtain \(W[u_{0}(\rho,x),u_{1}(\rho,x)](0)=1\). Since the Wronskian is constant (Remark 19), \(\{u_{0}(\rho,x),u_{1}(\rho,x)\}\) is a fundamental set of solutions. ### Existence and construction of the non-vanishing solution The existence of a non-vanishing solution is well known for the case of a regular Schrodinger equation with continuous potential (see [31, Remark 5] and [13, Cor. 2.3]). The following proof adapts the one presented in [21, Prop. 2.9] for the Dirac system. **Proposition 23** (Existence of non-vanishing solutions): _Let \(\{u,v\}\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) be a fundamental set of solutions for (1). Then there exist constants \(c_{1},c_{2}\in\mathbb{C}\) such that the solution \(f=c_{1}u+c_{2}v\) does not vanish in the whole segment \([0,b]\)._ **Proof.** Let \(\{u,v\}\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) be a fundamental set of solutions for (1). 
Then \(u\) and \(v\) cannot have common zeros in \([0,b]\). Indeed, if \(u(\xi)=v(\xi)=0\) for some \(\xi\in[0,b]\), then \(W[u,v](\xi+)=u(\xi)v^{\prime}(\xi+)-v(\xi)u^{\prime}(\xi+)=0\). Since \(W[u,v]\) is constant in \([0,b]\), this contradicts that \(\{u,v\}\) is a fundamental system. This implies that in each interval \([x_{j},x_{j+1}]\), \(j=0,\cdots,N\), the map \(F_{j}:[x_{j},x_{j+1}]\to\mathbb{CP}^{1}\), \(F_{j}(x):=\left[u|_{[x_{j},x_{j+1}]}(x):v|_{[x_{j},x_{j+1}]}(x)\right]\) (where \(\mathbb{CP}^{1}\) is the complex projective line, i.e., the quotient of \(\mathbb{C}^{2}\setminus\{(0,0)\}\) under the action of \(\mathbb{C}^{*}\), and \([a:b]\) denotes the equivalent class of the pair \((a,b)\)) is well defined and differentiable. In [13, Prop. 2.2] it was established that a differentiable function \(f:I\to\mathbb{CP}^{1}\), where \(I\subset\mathbb{R}\) is an interval, is never surjective, using that Sard's theorem implies that \(f(I)\) has measure zero. Suppose that \((\alpha,\beta)\in\mathbb{C}^{2}\setminus\{(0,0)\}\) is such that \(\alpha u(\xi)-\beta v(\xi)=0\) for some \(\xi\in[0,b]\). Hence \(\begin{vmatrix}u(\xi)&\beta\\ v(\xi)&\alpha\end{vmatrix}=0\), that is, \((u(\xi),v(\xi))\) and \((\alpha,\beta)\) are proportional. Since \(\xi\in[x_{j},x_{j+1}]\) for some \(j\in\{0,\cdots,N\}\), hence \([\alpha:-\beta]\in F_{j}\left([x_{j},x_{j+1}]\right)\). Thus, the set \(C:=\left\{[\alpha:\beta]\in\mathbb{CP}^{1}\,|\,\exists\xi\in[0,b]\,:\,\alpha u (\xi)+\beta v(\xi)=0\right\}\) is contained in \(\cup_{j=0}^{N}F_{j}\left([x_{j},x_{j+1}]\right)\), and then \(C\) has measure zero. Hence we can obtain a pair of constants \((c_{1},c_{2})\in\mathbb{C}^{2}\setminus\{(0,0)\}\) with \([c_{1}:-c_{2}]\in\mathbb{CP}^{1}\setminus C\) and \(f=c_{1}u+c_{2}v\) does not vanish in the whole segment \([0,b]\). **Remark 24**: _If \(q\) is real valued and \(\alpha_{1},\cdots,\alpha_{N}\in\mathbb{R}\setminus\{0\}\), taking a real-valued fundamental system of solutions for the regular equation \(\mathbf{L}_{q}y=0\) and using formula (12), we can obtain a real-valued fundamental set of solutions \(\{u,v\}\) for \(\mathbf{L}_{q,\mathfrak{I}_{N}}y=0\). In the proof of Proposition 23 we obtain that \(u\) and \(v\) have no common zeros. Hence \(f=u+iv\) is a non vanishing solution._ _For the complex case, we can choose randomly a pair of constants \((c_{1},c_{2})\in\mathbb{C}^{2}\setminus\{(0,0)\}\) and verify if the linear combination \(c_{1}u+c_{2}v\) has no zero. If there is a zero, we repeat the process until we find the non vanishing solution. Since the set \(C\) (from the proof of Proposition 23) has measure zero, is almost sure to find the coefficients \(c_{1},c_{2}\) in the first few tries._ By Proposition 23, there exists a pair of constants \((c_{1},c_{2})\in\mathbb{C}^{2}\setminus\{(0,0)\}\) such that \[y_{0}(x) =c_{1}+c_{2}x+\sum_{k=1}^{N}\alpha_{k}(c_{1}+c_{2}x_{k})H(x-x_{k} )(x-x_{k})\] \[\quad+\sum_{J\in\mathcal{J}_{N}}\alpha_{J}(c_{1}+c_{2}x_{j_{1}})H (x-x_{j_{j}|J})\left(\prod_{l=1}^{|J|-1}(x_{j_{l+1}}-x_{j_{1}})\right)(x-x_{j _{|J|}}) \tag{50}\] is a non-vanishing solution of (1) for \(\rho=0\) (if \(\alpha_{1},\ldots,\alpha_{k}\in(0,\infty)\), it is enough with take \(c_{1}=1\), \(c_{2}=0\)). Below we give a procedure based on the SPPS method ([10, 31]) to obtain the non-vanishing solution \(f\) from \(y_{0}\). 
**Theorem 25**: _Define the recursive integrals \(\{Y^{(k)}\}_{k=0}^{\infty}\) and \(\{\tilde{Y}^{(k)}\}_{k=0}^{\infty}\) as follows: \(Y^{(0)}\equiv\tilde{Y}^{(0)}\equiv 1\), and for \(k\geqslant 1\)_ \[Y^{(k)}(x) =\begin{cases}\int_{0}^{x}Y^{(k)}(s)q(s)y_{0}^{2}(s)ds,&\text{ if $k$ is even},\\ \int_{0}^{x}\frac{Y^{(k)}(s)}{y_{0}^{2}(s)}ds,&\text{ if $k$ is odd},\end{cases} \tag{51}\] \[\tilde{Y}^{(k)}(x) =\begin{cases}\int_{0}^{x}\tilde{Y}^{(k)}(s)q(s)y_{0}^{2}(s)ds,& \text{ if $k$ is odd},\\ \int_{0}^{x}\frac{\tilde{Y}^{(k)}(s)}{y_{0}^{2}(s)}ds,&\text{ if $k$ is even}.\end{cases} \tag{52}\] _Define_ \[f_{0}(x)=y_{0}(x)\sum_{k=0}^{\infty}\tilde{Y}^{(2k)}(x),\qquad f_{1}(x)=y_{0}(x) \sum_{k=0}^{\infty}Y^{(2k+1)}(x). \tag{53}\] _Then \(\{f_{0},f_{1}\}\subset\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) is a fundamental set of solution for \(\mathbf{L}_{q,\mathfrak{I}_{N}}u=0\) satisfying the initial conditions \(f_{0}(0)=c_{1}\), \(f_{0}^{\prime}(0)=c_{2}\), \(f_{1}(0)=0\), \(f_{1}^{\prime}(0)=1\). Both series converge uniformly and absolutely on \(x\in[0,b]\). The series of the derivatives converge in \(L_{2}(0,b)\), and on each interval \([x_{j},x_{j+1}]\), \(j=0,\ldots,N\), the series of the second derivatives converge in \(L_{2}(x_{j},x_{j+1})\). Hence there exist constants \(C_{1},C_{2}\in\mathbb{C}\) such that \(f=C_{1}f_{0}+C_{2}f_{1}\) is a non-vanishing solution of \(\mathbf{L}_{q,\mathfrak{I}_{N}}u=0\) in \([0,b]\)._ **Proof.** Using the estimates \[|\tilde{Y}^{(2k-j)}(x)|\leqslant\frac{M_{1}^{(n-j)}M_{2}^{n}}{(n-j)!n!},\quad| Y^{(2k-j)}(x)|\leqslant\frac{M_{1}^{n}M_{2}^{(n-j)}}{n!(n-j)!},\quad x\in[0,b], \;j=0,1,\;k\in\mathbb{N},\] where \(M_{1}=\left\|\frac{1}{y_{0}^{2}}\right\|_{L_{1}(0,b)}\) and \(M_{2}=\|qy_{0}^{2}\|_{L_{1}(0,b)}\), from [10, Prop. 5], the series in (53) converge absolutely and uniformly on \([0,b]\). The proof of the convergence of the derivatives and that \(\{f_{0},f_{1}\}\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) is a fundamental set of solutions is analogous to that of Theorem 22 (see also [31, Th. 1]) and [10, Th. 7] for the proof in the regular case). ### The mapping property Take a non vanishing solution \(f\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) normalized at zero, i.e., \(f(0)=1\), and set \(h=f^{\prime}(0)\). Then the corresponding transmutation operator and kernel \(\mathbf{T}_{\mathfrak{I}_{N}}^{h}\) and \(K_{\mathfrak{I}_{N}}^{h}(x,t)\) will be denoted by \(\mathbf{T}_{\mathfrak{I}_{N}}^{f}\) and \(K_{\mathfrak{I}_{N}}^{f}(x,t)\) and called the _canonical_ transmutation operator and kernel associated to \(f\), respectively (same notations are used for the cosine and sine transmutations). **Theorem 26**: _The canonical transmutation operator \(\mathbf{T}_{\mathfrak{I}_{N}}^{f}\) satisfies the following relations_ \[\mathbf{T}_{\mathfrak{I}_{N}}^{f}\left[x^{k}\right]=\varphi_{f}^{(k)}(x)\qquad \forall k\in\mathbb{N}_{0}. \tag{54}\] _The canonical cosine and sine transmutation operators satisfy the relations_ \[\mathbf{T}_{\mathfrak{I}_{N},f}^{C}\left[x^{2k}\right] =\varphi_{f}^{(2k)}(x)\qquad\forall k\in\mathbb{N}_{0}. \tag{55}\] \[\mathbf{T}_{\mathfrak{I}_{N}}^{S}\left[x^{2k+1}\right] =\varphi_{f}^{(2k+1)}(x)\qquad\forall k\in\mathbb{N}_{0}. \tag{56}\] **Proof.** Consider the solution \(e_{\mathfrak{I}_{N}}^{h}(\rho,x)\) with \(h=f^{\prime}(0)\). 
By the conditions (47) and (48), solution \(e^{h}_{\mathcal{I}_{N}}(\rho,x)\) can be written in the form \[e^{h}_{\mathcal{I}_{N}}(\rho,x) =u_{0}(\rho,x)+i\rho u_{1}(\rho,x)\] \[=\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}\varphi_{f}^{(2k)}(x)}{ (2k)!}+\sum_{k=0}^{\infty}\frac{i(-1)^{k}\rho^{2k+1}\varphi_{f}^{(2k+1)}(x)}{(2 k+1)!}\] \[=\sum_{k=0}^{\infty}\frac{(i\rho)^{2k}\varphi_{f}^{(2k)}(x)}{(2k)!}+\sum_{k=0}^{\infty}\frac{(i\rho)^{2k+1}\varphi_{f}^{(2k+1)}(x)}{(2k+1)!}\] \[=\sum_{k=0}^{\infty}\frac{(i\rho)^{k}\varphi_{f}^{(k)}(x)}{k!} \tag{57}\] (The rearrangement of the series is due to absolute and uniform convergence, Theorem 22). On the other hand \[e^{h}_{\mathcal{I}_{N}}(\rho,x)=\mathbf{T}^{f}_{\mathcal{I}_{N}}\left[e^{i \rho x}\right]=\mathbf{T}^{f}_{\mathcal{I}_{N}}\left[\sum_{k=0}^{\infty}\frac {(i\rho)^{k}x^{k}}{k!}\right]\] Note that \(\int_{-x}^{x}K^{f}_{\mathcal{I}_{N}}(x,t)\left(\sum_{k=0}^{\infty} \frac{(i\rho)^{k}t^{k}}{k!}\right)dt=\sum_{k=0}^{\infty}\frac{(i\rho)^{k}}{k! }\int_{-x}^{x}K^{f}_{\mathcal{I}_{N}}(x,t)t^{k}dt\), due to the uniform convergence of the exponential series in the variable \(t\in[-x,x]\). Thus, \[e^{h}_{\mathcal{I}_{N}}(\rho,x)=\sum_{k=0}^{\infty}\frac{(i\rho)^{k}\mathbf{T }^{f}_{\mathcal{I}_{N}}\left[x^{k}\right]}{k!}. \tag{58}\] Comparing (58) and (57) as Taylor series in the complex variable \(\rho\) we obtain (54). Relations (55) and (56) follows from (54), (32), (33) and the fact that \(G^{f}_{\mathcal{I}_{N}}(x,t)\) and \(S_{\mathcal{I}_{N}}(x,t)\) are even and odd in the variable \(t\), respectively. **Remark 27**: _The formal powers \(\{\varphi_{f}^{(k)}(x)\}_{k=0}^{\infty}\) satisfy the asymptotic relation \(\varphi_{f}^{(k)}(x)=x^{k}(1+o(1))\), \(x\to 0^{+}\), \(\forall k\in\mathbb{N}\)._ _Indeed, by Theorem 26 and the Cauchy-Bunyakovsky-Schwarz inequality we have_ \[|\varphi_{f}^{(k)}(x)-x^{k}| =\left|\int_{-x}^{x}K^{f}_{\mathcal{I}_{\mathcal{I}_{N}}}(x,t)t^{ k}dt\right|\leqslant\left(\int_{-x}^{x}\left|K^{f}_{\mathcal{I}_{N}}(x,t) \right|^{2}dt\right)^{\frac{1}{2}}\left(\int_{-x}^{x}|t|^{2k}dt\right)^{\frac{ 1}{2}}\] \[\leqslant\sqrt{2b}\left\|K_{\mathcal{I}^{f}_{N}}\right\|_{L_{ \infty}(\Omega)}\sqrt{\frac{2}{2k+1}}x^{k+\frac{1}{2}}\] _(because \(K^{f}_{\mathcal{I}_{N}}\in L_{\infty}(\Omega)\) by Theorem 10). Hence_ \[\left|\frac{\varphi_{f}^{(k)}(x)}{x^{k}}-1\right|\leqslant\sqrt{2b}\left\|K_{ \mathcal{I}^{f}_{N}}\right\|_{L_{\infty}(\Omega)}\sqrt{\frac{2}{2k+}}x^{\frac {1}{2}}\to 0,\qquad x\to 0^{+}.\] **Remark 28**: _Denote \(\mathcal{P}(\mathbb{R})=\text{Span}\{x^{k}\}_{k=0}^{\infty}\). According to Remark 21 and Proposition 1 we have that \(\mathbf{T}^{f}_{\mathcal{I}_{N}}\left(\mathcal{P}(\mathbb{R})\right)=\text{ Span}\left\{\varphi_{f}^{(k)}(x)\right\}_{k=0}^{\infty}\), and by (45) we have the relation_ \[{\bf L}_{q,{\mathfrak{I}}_{N}}{\bf T}^{f}_{{\mathfrak{I}}_{N}}p=-{\bf T}^{f}_{{ \mathfrak{I}}_{N}}D^{2}p\qquad\forall p\in{\cal P}({\mathbb{R}}). \tag{59}\] _According to [14], \({\bf T}^{f}_{q,{\mathfrak{I}}_{N}}\) is a transmutation operator for the pair \({\bf L}_{q,{\mathfrak{I}}_{N}}\), \(-D^{2}\) in the subspace \({\cal P}({\mathbb{R}})\), and \(\{\varphi^{(k)}_{f}(x)\}_{k=0}^{\infty}\) is an \({\bf L}_{q,{\mathfrak{I}}_{N}}\)-basis. Since \(\varphi^{(K)}_{f}(0)=D\varphi^{(k)}_{f}(0)=0\) for \(k\geqslant 2\), \(\{\varphi^{(k)}_{f}(x)\}_{k=0}^{\infty}\) is called a **standard**\({\bf L}_{q,{\mathfrak{I}}_{N}}\)-basis, and \({\bf T}^{f}_{{\mathfrak{I}}_{N}}\) a standard transmutation operator. 
By Remark 20 we can recover \(\varphi^{(k)}_{f}\) for \(k\geqslant 2\) from \(\varphi^{(0)}_{f}\) and \(\varphi^{(0)}_{f}\) by the formula_ \[\varphi^{(k)}_{f}(x)=-k(k-1){\bf R}^{f}_{{\mathfrak{I}}_{N}}\varphi^{(k)}_{f} (x)=k(k-1)f(x)\int_{0}^{x}\frac{1}{f^{2}(t)}\int_{0}^{t}f(s)\varphi^{(k-2)}_{f }(s)ds \tag{60}\] _(compare this formula with [14, Formula (8), Remark 9])._ The following result adapts Theorem 10 from [14], proved for the case of an \(L_{1}\)-regular potential. **Theorem 29**: _The operator \({\bf T}^{f}_{{\mathfrak{I}}_{N}}\) is a transmutation operator for the pair \({\bf L}_{q,{\mathfrak{I}}_{N}}\), \(-D^{2}\) in \(H^{2}(-b,b)\), that is, \({\bf T}^{f}_{{\mathfrak{I}}_{N}}\left(H^{2}(-b,b)\right)\subset{\cal D}_{2} \left({\bf L}_{q,{\mathfrak{I}}_{N}}\right)\) and_ \[{\bf L}_{q,{\mathfrak{I}}_{N}}{\bf T}_{{\mathfrak{I}}_{N}}u=-{\bf T}_{{ \mathfrak{I}}_{N}}D^{2}u\qquad\forall u\in H^{2}(-b,b) \tag{61}\] **Proof.** We show that \[{\bf T}_{{\mathfrak{I}}_{N}}u(x)=u(0)\varphi^{(0)}_{f}(x)+u^{\prime}(0)\varphi ^{(1)}_{f}(x)-{\bf R}^{f}_{{\mathfrak{I}}_{N}}{\bf T}^{f}_{{\mathfrak{I}}_{N} }u^{\prime\prime 2}(-b,b). \tag{62}\] Let us first see that (62) is valid for \(p\in{\cal P}({\mathbb{R}})\). Indeed, set \(p(x)=\sum_{k=0}^{M}c_{k}x^{k}\). By the linearity of \({\bf T}^{f}_{{\mathfrak{I}}_{N}}\), Theorem 26 and (60) we have \[{\bf T}^{f}_{{\mathfrak{I}}_{N}}p(x) =c_{0}\varphi^{(0)}_{f}+c_{1}\varphi^{(1)}_{f}(x)+\sum_{k=2}^{M}c _{k}\varphi^{(k)}_{f}(x)\] \[=p(0)\varphi^{(0)}_{f}+p^{\prime}(0)\varphi^{(1)}_{f}(x)-\sum_{k=2 }^{M}c_{k}k(k-1){\bf R}^{f}_{{\mathfrak{I}}_{N}}{\bf T}^{f}_{{\mathfrak{I}}_{N }}\left[x^{k-2}\right]\] \[=p(0)\varphi^{(0)}_{f}+p^{\prime}(0)\varphi^{(1)}_{f}(x)-\mathbf{ R}^{f}_{{\mathfrak{I}}_{N}}{\bf T}^{f}_{{\mathfrak{I}}_{N}}p^{\prime\prime}(x)\] This establishes (62) for \(p\in{\cal P}({\mathbb{R}})\). Take \(u\in H^{2}(-b,b)\) arbitrary. There exists a sequence \(\{p_{n}\}\subset{\cal P}({\mathbb{R}})\) such that \(p^{(j)}_{n}\stackrel{{[-b,b]}}{{\rightarrow}}u^{(j)}\), \(j=0,1\), and \(p^{\prime\prime}_{n}\to u\) in \(L_{2}(-b,b)\), when \(n\rightarrow\infty\) (see [14, Prop. 4]). Since \({\bf R}^{f}_{{\mathfrak{I}}_{N}}{\bf T}^{f}_{{\mathfrak{I}}_{N}}\in{\cal B} \left(L_{2}(-b,b),L_{2}(0,b)\right)\) we have \[{\bf T}^{f}_{{\mathfrak{I}}_{N}}u(x) =\lim_{n\rightarrow\infty}{\bf T}^{f}_{{\mathfrak{I}}_{N}}p_{n}(x )=\lim_{n\rightarrow\infty}\left[p_{n}(0)\varphi^{(0)}_{f}+p^{\prime}_{n}(0) \varphi^{(1)}_{f}(x)-{\bf R}^{f}_{{\mathfrak{I}}_{N}}{\bf T}^{f}_{{\mathfrak{I} }_{N}}p^{\prime\prime}_{n}(x)\right]\] \[=u(0)\varphi^{(0)}_{f}(x)+u^{\prime}(0)\varphi^{(1)}_{f}(x)-{\bf R }^{f}_{{\mathfrak{I}}_{N}}{\bf T}^{f}_{{\mathfrak{I}}_{N}}u^{\prime\prime}(x)\] and we obtain (62). Hence, by Remark 20, \({\bf T}^{f}_{{\mathfrak{I}}_{N}}\left(H^{2}(-b,b)\right)\subset{\cal D}_{2} \left({\bf L}_{q,{\mathfrak{I}}_{N}}\right)\), and since \({\bf L}_{q,{\mathfrak{I}}_{N}}\varphi^{(k)}_{f}=0\) for \(k=0,1\), applying \({\bf L}_{q,{\mathfrak{I}}_{N}}\) in both sides of (62) we have (61). Fourier-Legendre and Neumann series of Bessel functions expansions ### Fourier-Legendre series expansion of the transmutation kernel Fix \(x\in(0,b]\). Theorem 10 establishes that \(K^{h}_{\mathfrak{I}_{N}}(x,\cdot)\in L_{2}(-x,x)\), then \(K^{h}_{\mathfrak{I}_{N}}(x,t)\) admits a Fourier series in terms of an orthogonal basis of \(L_{2}(-x,x)\). Following [30], we choose the orthogonal basis of \(L_{2}(-1,1)\) given by the Legendre polynomials \(\{P_{n}(z)\}_{n=0}^{\infty}\). 
Thus, \[K^{h}_{\mathfrak{I}_{N}}(x,t)=\sum_{n=0}^{\infty}\frac{a_{n}(x)}{x}P_{n}\left( \frac{t}{x}\right) \tag{63}\] where \[a_{n}(x)=\left(n+\frac{1}{2}\right)\int_{-x}^{x}K^{h}_{\mathfrak{I}_{N}}(x,t) P_{n}\left(\frac{t}{x}\right)dt\qquad\forall n\in\mathbb{N}_{0}. \tag{64}\] The series (63) converges with respect to \(t\) in the norm of \(L_{2}(-x,x)\). Formula (64) is obtained multiplying (63) by \(P_{n}\left(\frac{t}{x}\right)\), using the general Parseval's identity [2, pp. 16] and taking into account that \(\|P_{n}\|_{L_{2}(-1,1)}^{2}=\frac{2}{2n+1}\), \(n\in\mathbb{N}_{0}\). **Example 30**: _Consider the kernel \(K^{0}_{\mathfrak{I}_{1}}(x,t)=\frac{\alpha_{1}}{2}H(x-x_{1})\chi_{[2x_{1}-x, x]}\) from Example 11. In this case, the Fourier-Legendre coefficients has the form_ \[a_{n}(x)=\frac{\alpha_{1}}{2}\left(n+\frac{1}{2}\right)H(x-x_{1})\int_{2x_{1} -x}^{x}P_{n}(t)dt=\frac{\alpha_{1}}{2}\left(n+\frac{1}{2}\right)xH(x-x_{1}) \int_{2\frac{x_{1}}{x}-1}^{1}P_{n}(t)dt.\] _From this we obtain \(a_{0}(x)=\frac{\alpha_{1}}{2}H(x-x_{1})(x-x_{1})\). Using formula \(P_{n}(t)=\frac{1}{2n+1}\frac{d}{dt}\left(P_{n+1}(t)-P_{n-1}(t)\right)\) for \(n\in\mathbb{N}\), and that \(P_{n}(1)=0\) for all \(n\in\mathbb{N}\), we have_ \[a_{n}(x)=\frac{\alpha_{1}}{4}xH(x-x_{1})\left[P_{n-1}\left(\frac{2x_{1}}{x}-1 \right)-P_{n+1}\left(\frac{2x_{1}}{x}-1\right)\right]\] **Remark 31**: _From (64) we obtain that the first coefficient \(a_{0}(x)\) is given by_ \[a_{0}(x) =\frac{1}{2}\int_{-x}^{x}K^{h}_{\mathfrak{I}_{N}}(x,t)P_{0}\left( \frac{t}{x}\right)dt=\frac{1}{2}\int_{-x}^{x}K^{h}_{\mathfrak{I}_{N}}(x,t)dt\] \[=\frac{1}{2}\mathbf{T}^{h}_{\mathfrak{I}_{N}}[1]-\frac{1}{2}= \frac{1}{2}(e^{h}_{\mathfrak{I}_{N}}(0,x)-1).\] _Thus, we obtain the relations_ \[a_{0}(x)=\frac{1}{2}(e^{h}_{\mathfrak{I}_{N}}(0,x)-1),\qquad e^{h}_{ \mathfrak{I}_{N}}(0,x)=2a_{n}(x)+1. \tag{65}\] For the kernels \(G^{h}_{\mathfrak{I}_{N}}(x,t)\) and \(S_{\mathfrak{I}_{N}}(x,t)\) we obtain the series representations in terms of the even and odd Legendre polynomials, respectively, \[G_{\mathfrak{I}_{N}}^{h}(x,t) =\sum_{n=0}^{\infty}\frac{g_{n}(x)}{x}P_{2n}\left(\frac{t}{x}\right), \tag{66}\] \[S_{\mathfrak{I}_{N}}(x,t) =\sum_{n=0}^{\infty}\frac{s_{n}(x)}{x}P_{2n+1}\left(\frac{t}{x} \right), \tag{67}\] where the coefficients are given by \[g_{n}(x) =2a_{2n}(x)=(4n+1)\int_{0}^{x}G_{\mathfrak{I}_{N}}^{h}(x,t)P_{2n} \left(\frac{t}{x}\right)dt, \tag{68}\] \[s_{n}(x) =2a_{2n+1}(4n+3)\int_{0}^{x}S_{\mathfrak{I}_{N}}(x,t)P_{2n+1} \left(\frac{t}{x}\right)dt. \tag{69}\] The proof of these facts is analogous to that in the case of Eq. (9), see [30] or [29, Ch. 9]. **Remark 32**: _Since \(g_{0}(x)=2a_{0}(x)\), then \(g_{0}(x)=e_{\mathfrak{I}_{N}}^{h}(0,x)-1\). Since \(e_{\mathfrak{I}_{N}}^{h}(0,x)\) is the solution of (1) with \(\rho=0\) satisfying \(e_{\mathfrak{I}_{N}}^{h}(0,0)=1\), \((e_{\mathfrak{I}_{N}}^{h})^{\prime}(0,0)=h\), hence by Remark 5, \(e_{\mathfrak{I}_{N}}^{h}(0,x)=c_{\mathfrak{I}_{N}}^{h}(0,x)\) and_ \[g_{0}(x)=c_{\mathfrak{I}_{N}}^{h}(0,x)-1. \tag{70}\] _On the other hand, for the coefficient \(s_{0}(x)\) we have the relation_ \[s_{0}(x)=3\int_{0}^{x}H_{\mathfrak{I}_{N}}(x,t)P_{1}\left(\frac{t}{x}\right)dt =\frac{3}{x}\int_{0}^{x}H_{\mathfrak{I}_{N}}(x,t)tdt.\] _Since \(\frac{\sin(\rho x)}{\rho}\big{|}_{x=0}=x\), from (31) we have_ \[s_{0}(x)=3\left(\frac{s_{\mathfrak{I}_{N}}(0,x)}{x}-1\right). 
\tag{71}\] For every \(n\in\mathbb{N}_{0}\) we write the Legendre polynomial \(P_{n}(z)\) in the form \(P_{n}(z)=\sum_{k=0}^{n}l_{k,n}z^{k}\). Note that if \(n\) is even, \(l_{k,n}=0\) for odd \(k\), and \(P_{2n}(z)=\sum_{k=0}^{n}\tilde{l}_{k,n}z^{2k}\) with \(\tilde{l}_{k,n}=l_{2k,2n}\). Similarly \(P_{2n+1}(z)=\sum_{k=0}^{n}\hat{l}_{k,n}z^{2k+1}\) with \(\hat{l}_{k,n}=l_{2k+1,2n+1}\). With this notation we write an explicit formula for the coefficients (64) of the canonical transmutation kernel \(K_{\mathfrak{I}_{N}}^{f}(x,t)\). **Proposition 33**: _The coefficients \(\{a_{n}(x)\}_{n=0}^{\infty}\) of the Fourier-Legendre expansion (63) of the canonical transmutation kernel \(K_{\mathfrak{I}_{N}}^{f}(x,t)\) are given by_ \[a_{n}(x)=\left(n+\frac{1}{2}\right)\left(\sum_{k=0}^{n}l_{k,n}\frac{\varphi_{ f}^{(k)}(x)}{x^{k}}-1\right). \tag{72}\] _The coefficients of the canonical cosine and sine kernels satisfy the following relations for all \(n\in\mathbb{N}_{0}\)_ \[g_{n}(x) =(4n+1)\left(\sum_{k=0}^{n}\tilde{l}_{k,n}\frac{\varphi_{f}^{(2k) }(x)}{x^{2k}}-1\right), \tag{73}\] \[s_{n}(x) =(4n+3)\left(\sum_{k=0}^{n}\hat{l}_{k,n}\frac{\varphi_{f}^{(2k+1) }(x)}{x^{2k+1}}-1\right), \tag{74}\] **Proof.** From (64) we have \[a_{n}(x) =\left(n+\frac{1}{2}\right)\int_{-x}^{x}K^{f}_{\mathfrak{I}_{N}}(x,t )\left(\sum_{k=0}^{n}l_{k,n}\left(\frac{t}{x}\right)^{k}\right)dt\] \[=\left(n+\frac{1}{2}\right)\sum_{k=0}^{n}\frac{l_{k,n}}{x^{k}} \int_{0}^{x}K^{f}_{\mathfrak{I}_{N}}(x,t)t^{k}dt\] \[=\left(n+\frac{1}{2}\right)\sum_{k=0}^{n}\frac{l_{k,n}}{x^{k}} \left(\mathbf{T}^{f}_{\mathfrak{I}_{N}}\left[x^{k}\right]-x^{k}\right).\] Hence (72) follows from Theorem 26 and that \(P_{n}(z)=1\). Since \(g_{n}(x)=2a_{2n}(x)\), \(s_{n}(x)=2a_{2n+1}(x)\), \(l_{2k+1,2n}=0\), \(l_{2k,2n+1}=0\) and \(l_{2k,2n}=\tilde{l}_{k,n}\),\(l_{2k+1,2n+1}=\tilde{l}_{k,n}\), we obtain (73) and (74). **Remark 34**: _By Remark 27, formula (72) is well defined at \(x=0\). Note that \(x^{n}a_{n}(x)\) belongs to \(\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) for all \(n\in\mathbb{N}_{0}\)._ ### Representation of the solutions as Neumann series of Bessel functions Similarly to the case of the regular Eq. (13) [30], we obtain a representation for the solutions in terms of Neumann series of Bessel functions (NSBF). For \(M\in\mathbb{N}\) we define \[K^{h}_{\mathfrak{I}_{N},M}(x,t):=\sum_{n=0}^{M}\frac{a_{n}(x)}{x}P_{n}\left( \frac{t}{x}\right),\] that is, the \(M\)-partial sum of (63). **Theorem 35**: _The solutions \(c^{h}_{\mathfrak{I}_{N}}(\rho,x)\) and \(s_{\mathfrak{I}_{N}}(\rho,x)\) admit the following NSBF representations_ \[c^{h}_{\mathfrak{I}_{N}}(\rho,x)=\cos(\rho x)+\sum_{n=0}^{\infty }(-1)^{n}g_{n}(x)j_{2n}(\rho x), \tag{75}\] \[s_{\mathfrak{I}_{N}}(\rho,x)=\frac{\sin(\rho x)}{\rho}+\frac{1} {\rho}\sum_{n=0}^{\infty}(-1)^{n}s_{n}(x)j_{2n+1}(\rho x), \tag{76}\] _where \(j_{\nu}\) stands for the spherical Bessel function \(j_{\nu}(z)=\sqrt{\frac{\pi}{2z}}J_{\nu+\frac{1}{2}}(z)\) (and \(J_{\nu}\) stands for the Bessel function of order \(\nu\)). The series converge pointwise with respect to \(x\) in \((0,b]\) and uniformly with respect to \(\rho\) on any compact subset of the complex \(\rho\)-plane. 
Moreover, for \(M\in\mathbb{N}\) the functions_ \[c^{h}_{\mathfrak{I}_{N},M}(\rho,x)=\cos(\rho x)+\sum_{n=0}^{M}(- 1)^{n}g_{n}(x)j_{2n}(\rho x), \tag{77}\] \[s_{\mathfrak{I}_{N},M}(\rho,x)=\frac{\sin(\rho x)}{\rho}+\frac{1 }{\rho}\sum_{n=0}^{M}(-1)^{n}s_{n}(x)j_{2n+1}(\rho x), \tag{78}\] obey the estimates_ \[|c^{h}_{\mathfrak{I}_{N}}(\rho,x)-c^{h}_{\mathfrak{I}_{N},M}(\rho,x)| \leqslant 2\epsilon_{2M}(x)\sqrt{\frac{\sinh(2bC)}{C}}, \tag{79}\] \[|\rho s_{\mathfrak{I}_{N}}(\rho,x)-\rho s_{\mathfrak{I}_{N},M}( \rho,x)| \leqslant 2\epsilon_{2M+1}(x)\sqrt{\frac{\sinh(2bC)}{C}}, \tag{80}\] _for any \(\rho\in\mathbb{C}\) belonging to the strip \(|\operatorname{Im}\rho|\leqslant C\), \(C>0\), and where \(\epsilon_{M}(x)=\|K^{h}_{\mathfrak{I}_{N}}(x,\cdot)-K^{h}_{\mathfrak{I}_{N},2M }(x,\cdot)\|_{L_{2}(-x,x)}\)._ **Proof.** We show the results for the solution \(c^{h}_{\mathfrak{I}_{N}}(\rho,x)\) (the proof for \(s_{\mathfrak{I}_{N}}(\rho,x)\) is similar). Substitution of the Fourier-Legendre series (66) in (30) leads us to \[c^{h}_{\mathfrak{I}_{N}}(\rho,x) =\cos(\rho x)+\int_{0}^{x}\left(\sum_{n=0}^{\infty}\frac{g_{n}(x) }{x}P_{2n}\left(\frac{t}{x}\right)\right)\cos(\rho t)dt\] \[=\cos(\rho x)+\sum_{n=0}^{\infty}\frac{g_{n}(x)}{x}\int_{0}^{x}P_ {2n}\left(\frac{t}{x}\right)\cos(\rho t)dt\] (the exchange of the integral with the summation is due to the fact that the integral is nothing but the inner product of the series with the function \(\overline{\cos(\rho t)}\) and the series converges in \(L_{2}(0,x)\)). Using formula 2.17.7 in [40, pp. 433] \[\int_{0}^{a}\left\{\begin{array}{l}P_{2n+1}\left(\frac{y}{a}\right)\cdot \sin(by)\\ P_{2n}\left(\frac{y}{a}\right)\cdot\cos(by)\end{array}\right\}dy=(-1)^{n}\sqrt {\frac{\pi a}{2b}}J_{2n+\delta+\frac{1}{2}}(ab),\quad\delta=\left\{\begin{array} []{l}1\\ 0\end{array}\right\},\ a>0,\] we obtain the representation (75). Take \(C>0\) and \(\rho\in\mathbb{C}\) with \(|\operatorname{Im}\rho|\leqslant C\). For \(M\in\mathbb{N}\) define \(G^{h}_{\mathfrak{I}_{N},M}(x,t):=K^{h}_{\mathfrak{I}_{N},2M}(x,t)-K^{h}_{ \mathfrak{I}_{N},2M}(x,-t)=\sum_{n=0}^{M}\frac{g_{n}(x)}{x}P_{2n}\left(\frac{t }{x}\right)\), the \(M\)-th partial sum of (66). Then \[c^{h}_{\mathfrak{I}_{N},M}(\rho,x)=\cos(\rho x)+\int_{0}^{x}G^{h}_{\mathfrak{I }_{N},M}(x,t)\cos(\rho t)dt.\] Using the Cauchy-Bunyakovsky-Schwarz inequality we obtain \[|c^{h}_{\mathfrak{I}_{N}}(\rho,x)-C^{h}_{\mathfrak{I}_{N},M}( \rho,x)| =\left|\int_{0}^{x}\left(G^{h}_{\mathfrak{I}_{N}}(x,t)-G^{h}_{ \mathfrak{I}_{N},M}(x,t)\right)\cos(\rho t)dt\right|\] \[=\left|\left\langle\overline{G^{h}_{\mathfrak{I}_{N}}(x,t)-G^{h}_ {\mathfrak{I}_{N},M}(x,t)},\cos(\rho t)\right\rangle_{L_{2}(0,x)}\right|\] \[\leqslant\|G^{h}_{\mathfrak{I}_{N}}(x,\cdot)-G^{h}_{\mathfrak{I} _{N},M}(x,\cdot)\|_{L_{2}(0,x)}\|\cos(\rho t)\|_{L_{2}(0,x)}.\] Since \(\|K^{h}_{\mathfrak{I}_{N}}(x,\cdot)-K^{h}_{\mathfrak{I}_{N},2M}(x,\cdot)\|_{L _{2}(-x,x)}=\frac{1}{2}\|G^{h}_{\mathfrak{I}_{N}}(x,\cdot)-G^{h}_{M,n}(x, \cdot)\|_{L_{2}(0,x)}\), \[\int_{0}^{x}|\cos(\rho t)|^{2}dt \leqslant\frac{1}{4}\int_{0}^{x}\left(|e^{i\rho t}|+|e^{-i\rho t }|\right)^{2}dt\leqslant\frac{1}{2}\int_{0}^{x}\left(e^{-2t\operatorname{Im} \rho}+e^{2t\operatorname{Im}\rho}\right)dt\] \[=\int_{-x}^{x}e^{-2\operatorname{Im}\rho t}dt=\frac{\sinh(2x \operatorname{Im}\rho)}{\operatorname{Im}\rho}\] and the function \(\frac{\sinh(\xi x)}{\xi}\) is monotonically increasing in both variables when \(\xi,x\geqslant 0\), we obtain (79). 
Given \(H\in\mathbb{C}\), we look for a pair of solutions \(\psi^{H}_{\mathcal{I}_{N}}(\rho,x)\) and \(\vartheta_{\mathcal{I}_{N}}(\rho,x)\) of (1) satisfying the conditions \[\psi^{H}_{\mathcal{I}_{N}}(\rho,b)=1, (\psi^{H}_{\mathcal{I}_{N}})^{\prime}(\rho,b)=-H, \tag{81}\] \[\vartheta_{\mathcal{I}_{N}}(\rho,b)=0, \vartheta^{\prime}_{\mathcal{I}_{N}}(\rho,b)=1. \tag{82}\] **Theorem 36**: _The solutions \(\psi^{H}_{\mathcal{I}_{N}}(\rho,x)\) and \(\vartheta_{\mathcal{I}_{N}}(\rho,x)\) admit the integral representations_ \[\psi^{H}_{\mathcal{I}_{N}}(\rho,x)=\cos(\rho(b-x))+\int_{x}^{b} \widetilde{G}^{H}_{\mathcal{I}_{N}}(x,t)\cos(\rho(b-t))dt, \tag{83}\] \[\vartheta_{\mathcal{I}_{N}}(\rho,x)=\frac{\sin(\rho(b-x))}{\rho }+\int_{x}^{b}\widetilde{S}^{H}_{\mathcal{I}_{N}}(x,t)\frac{\sin(\rho(b-t))}{ \rho}dt, \tag{84}\] _where the kernels \(\widetilde{G}^{H}_{\mathcal{I}_{N}}(x,t)\) and \(\widetilde{S}_{\mathcal{I}_{N}}(x,t)\) are defined in \(\Omega\) and satisfy \(\widetilde{G}^{H}_{\mathcal{I}_{N}}(x,\cdot),\widetilde{S}_{\mathcal{I}_{N}}( x,\cdot)\in L_{2}(0,x)\) for all \(x\in(0,b]\). In consequence, the solutions \(\psi^{H}_{\mathcal{I}_{N}}(\rho,x)\) and \(\vartheta_{\mathcal{I}_{N}}(\rho,x)\) can be written as NSBF_ \[\psi^{H}_{\mathcal{I}_{N}}(\rho,x)=\cos(\rho(b-x))+\sum_{n=0}^{ \infty}(-1)^{n}\tau_{n}(x)j_{2n}(\rho(b-x)), \tag{85}\] \[\vartheta_{\mathcal{I}_{N}}(\rho,x)=\frac{\sin(\rho(b-x))}{\rho }+\sum_{n=0}^{\infty}(-1)^{n}\zeta_{n}(x)j_{2n}(\rho(b-x)), \tag{86}\] _with some coefficients \(\{\tau_{n}(x)\}_{n=0}^{\infty}\) and \(\{\zeta_{n}(x)\}_{n=0}^{\infty}\)._ **Proof.** We prove the results for \(\psi^{H}_{\mathcal{I}_{N}}(\rho,x)\) (the proof for \(\vartheta_{\mathcal{I}_{N}}(\rho,x)\) is similar). Set \(y(\rho,x)=\psi^{H}_{\mathcal{I}_{N}}(\rho,b-x)\). Note that \(y(\rho,0)=1\), \(y^{\prime}(\rho,0)=H\) and for \(\phi\in C_{0}^{\infty}(0,b)\) we have \[(y^{\prime\prime 2}y(x),\phi(x))_{C_{0}^{\infty}(0,b)} =(\psi^{H}_{\mathcal{I}_{N}}(\rho,x),\phi^{\prime\prime 2}\phi(b-x))_{C_{0}^ {\infty}(0,b)}\] \[=(q(x)\psi^{H}_{\mathcal{I}_{N}}(\rho,x),\phi(b-x))_{C_{0}^{ \infty}(0,b)}+\sum_{k=0}^{N}\alpha_{k}\psi^{H}_{\mathcal{I}_{N}}(\rho,x_{k}) \phi(b-x_{k})\] \[=(q(b-x)y(x),\phi(x))_{C_{0}^{\infty}(0,b)}+\sum_{k=0}^{N}\alpha_{ k}y(b-x_{k})\phi(b-x_{k}),\] that is, \(\psi^{H}_{\mathcal{I}_{N}}(\rho,x)\) is a solution of (1) iff \(y(x)=\psi^{H}_{\mathcal{I}_{N}}(\rho,b-x)\) is a solution of \[-y^{\prime\prime}(x)+\left(q(b-x)+\sum_{k=0}^{N}\alpha_{k}\delta(x-(b-x_{k})) \right)y(x)=\rho^{2}y(x). \tag{87}\] Since \(0<b-x_{N}<\cdots<b-x_{0}<b\), hence (87) is of the type (1) with the point interactions \(\mathcal{I}_{N}^{\star}=\{(b-x_{N-j},\alpha_{N-j})\}_{j=0}^{N}\) and \(\psi^{H}_{\mathcal{I}_{N}}(\rho,b-x)\) is the corresponding solution \(c^{H}_{\mathcal{I}_{N}^{\star}}(\rho,x)\) for (87). Hence \[\psi^{H}_{\mathcal{I}_{N}}(\rho,b-x)=\cos(\rho x)+\int_{0}^{x}G^{H}_{\mathcal{I }_{N}}(x,t)\cos(\rho t)dt \tag{88}\] for some kernel \(G^{H}_{\mathfrak{T}_{N}^{\star}}(x,t)\) defined on \(\Omega\) with \(\widetilde{G}^{H}_{\mathfrak{T}_{N}}(x,\cdot)\in L_{2}(0,x)\) for \(x\in(0,b]\). Thus, \[\psi_{\mathfrak{I}_{N}}(\rho,x) =\cos(\rho(b-x))+\int_{0}^{b-x}G^{H}_{\mathfrak{I}_{N}^{\star}}(b -x,t)\cos(\rho t)dt\] \[=\psi_{\mathfrak{I}_{N}}(\rho,x)=\cos(\rho(b-x))+\int_{x}^{b}G^{ H}_{\mathfrak{I}_{N}^{\star}}(b-x,b-t)\cos(\rho(b-t))dt,\] where the change of variables \(x\mapsto b-x\) was used. 
Hence we obtain (83) with \(\widetilde{G}^{H}_{\mathfrak{I}_{N}^{\star}}(x,t)=G^{H}_{\mathfrak{I}_{N}^{ \star}}(b-x,b-t)\) In consequence, by Theorem 35 we obtain (85). **Remark 37**: _As in Remark 32_ \[\tau_{0}(x)=\psi^{H}_{\mathfrak{I}_{N}}(0,x)-1\quad\text{and}\ \ \zeta_{0}(x)=3\left(\frac{\vartheta_{\mathfrak{I}_{N}}(0,x)}{b-x}-1\right). \tag{89}\] **Remark 38**: _Let \(\lambda\in\mathbb{C}\) and \(\lambda=\rho^{2}\)._ 1. _The functions_ \(\widehat{s}_{k}(\rho,x-x_{k})\) _are entire with respect to_ \(\rho\)_. Then from (_12_)_ \(c^{h}_{\mathfrak{I}_{N}}(\rho,x)\)_,_ \(s_{\mathfrak{I}_{N}}(\rho,x)\) _and_ \(\psi^{H}_{\mathfrak{I}_{N}}(\rho,x)\) _are entire as well._ 2. _Suppose that_ \(q\) _is real valued and_ \(\alpha_{0},\ldots,\alpha_{N},u_{0},u_{1}\in\mathbb{R}\)_. If_ \(u(\lambda,x)\) _is a solution of_ \(u^{(k)}(\lambda,0)=u_{k}\)_,_ \(k=0,1\)_, then by the uniqueness of the Cauchy problem_ \(\overline{u(\lambda,x)}=u(\overline{\lambda},x)\)_. In particular, for_ \(\rho,h,H\in\mathbb{R}\)_, the solutions_ \(c^{h}_{\mathfrak{I}_{N}}(\rho,x)\)_,_ \(s_{\mathfrak{I}_{N}}(\rho,x)\) _and_ \(\psi^{H}_{\mathfrak{I}_{N}}(\rho,x)\) _are real valued._ ### A recursive integration procedure for the coefficients \(\{a_{n}(x)\}_{n=0}^{\infty}\) Similarly to the case of the regular Schrodinger equation [29, 30, 32], we derive formally a recursive integration procedure for computing the Fourier-Legendre coefficients \(\{a_{n}(x)\}_{n=0}^{\infty}\) of the canonical transmutation kernel \(K^{f}_{\mathfrak{I}_{N}}(x,t)\). Consider the sequence of functions \(\sigma_{n}(x):=x^{n}a_{n}(x)\) for \(n\in\mathbb{N}_{0}\). According to Remark 34, \(\{\sigma_{n}(x)\}_{n=0}^{\infty}\subset\mathcal{D}_{2}\left(\mathbf{L}_{q, \mathfrak{I}_{N}}\right)\). **Remark 39**: 1. _By Remark_ 32_,_ \[\sigma_{0}(x)=\frac{f(x)-1}{2}.\] (90) 2. _By (_72_),_ \(a_{1}(x)=\frac{3}{2}\left(\frac{\varphi^{(1)}_{f}(x)}{x}-1\right)\)_. Thus, from (_42_) and (_43_) we have_ \[\sigma_{1}(x)=\frac{3}{2}\left(f(x)\int_{0}^{x}\frac{dt}{f^{2}(t)}-x\right).\] (91) 3. _For_ \(n\geqslant 2\)_,_ \(\sigma_{n}(0)=0\)_, and by (_72_) we obtain_ \[D\sigma_{n}(x) =\left(n+\frac{1}{2}\right)\sum_{k=0}^{n}l_{k,n}D\left(x^{n-k} \varphi^{(k)}_{f}(x)\right)\] \[=\left(n+\frac{1}{2}\right)\left(\sum_{k=0}^{n-1}l_{k,n}(n-k)x^{n-k -1}\varphi^{(k)}_{f}(x)+\sum_{k=0}^{n}l_{k,n}x^{n-k}D\varphi^{(k)}_{f}(x) \right).\] _By (_44_) and (_43_),_ \(D\varphi^{(k)}_{f}(0)=0\) _for_ \(k\geqslant 1\)_. Hence,_ \(\sigma^{\prime}_{n}(0)=0\) Denote by \(c^{f}_{\mathfrak{J}_{N}}(\rho,x)\) the solution of (1) satisfying (28) with \(h=f^{\prime}(0)\). On each interval \([x_{k},x_{k+1}]\), \(k=0,\cdots,N\), \(c^{f}_{\mathfrak{J}_{N}}(\rho,x)\) is a solution of the regular equation (9). In [30, Sec. 6] by substituting the Neumann series (75) of \(c^{f}_{\mathfrak{J}_{N}}(\rho,x)\) into Eq. (9) it was proved that the functions \(\{\sigma_{2n}(x)\}_{n=0}^{\infty}\) must satisfy, at least formally, the recursive relations \[\mathbf{L}_{q}\sigma_{2n}(x)=\frac{4n+1}{4n-3}x^{4n-1}\mathbf{L}_{q}\left[ \frac{\sigma_{2n-2}(x)}{x^{4n-3}}\right],\quad x_{k}<x<x_{k} \tag{92}\] for \(k=0,\cdots,N\). Similarly, substitution of the Neumann series (76) of \(s_{\mathfrak{J}_{N}}(\rho,x)\) into (9) leads to the equalities \[\mathbf{L}_{q}\sigma_{2n+1}(x)=\frac{4n+3}{4n+1}x^{4n+3}\mathbf{L}_{q}\left[ \frac{\sigma_{2n+1}(x)}{x^{4n+1}}\right],\quad x_{k}<x<x_{k}. 
\tag{93}\] Taking into account that \(\sigma_{n}\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{J}_{N}}\right)\) and combining (92), by Remark 39(iii) and (93) we obtain that the functions \(\sigma_{n}(x)\), \(n\geqslant 2\), must satisfy (at least formally) the following Cauchy problems \[\begin{cases}\mathbf{L}_{q,\mathfrak{J}_{N}}\sigma_{n}(x)=\frac{2n+1}{2n-3}x^{ 2n-1}\mathbf{L}_{q}\left[\frac{\sigma_{n-2}(x)}{x^{2n-3}}\right],\quad 0<x<b,\\ \sigma_{n}(0)=\sigma^{\prime}_{n}(0)=0.\end{cases} \tag{94}\] **Remark 40**: _If \(g\in\mathcal{D}_{2}\left(L_{q,\mathfrak{J}_{N}}\right)\), then \(\frac{g}{f}\in H^{2}(0,b)\)._ _Indeed, \(\frac{g}{f}\in C[0,b]\), and the jump of the derivative at \(x_{k}\) is given by_ \[\left(\frac{g}{f}\right)^{\prime}(x_{k}+)-\left(\frac{g}{f}\right) ^{\prime}(x_{k}-) =\frac{g^{\prime}(x_{k}+)f(x_{k})-f^{\prime}(x_{k}+)g(x_{k})}{f^{ 2}(x_{k})}-\frac{g^{\prime}(x_{k}-)f(x_{k})-f^{\prime}(x_{k}-)g(x_{k})}{f^{2}( x_{k})}\] \[=\frac{1}{f^{2}(x_{k})}\left[\left(g^{\prime}(x_{k}+)-g^{\prime}( x_{k}-)\right)f(x_{k})-g(x_{k})\left(f^{\prime}(x_{k}+)-f^{\prime}(x_{k}-) \right)\right]\] \[=\frac{1}{f^{2}(x_{k})}\left[\alpha_{k}g(x_{k})f(x_{k})-\alpha_{ k}g(x_{k})f(x_{k})\right]=0.\] _Hence \(\frac{g}{f}\in AC[0,b]\), and then \(\frac{g}{f}\in H^{2}(0,b)\)._ **Proposition 41**: _The sequence \(\{\sigma_{n}(x)\}_{n=0}^{\infty}\) satisfying the recurrence relation (94) for \(n\geqslant 2\), with \(\sigma_{0}(x)=\frac{f(x)-1}{2}\) and \(\sigma_{1}(x)=\frac{3}{2}\left(f(x)\int_{0}^{x}\frac{dt}{f^{2}(t)}-x\right)\), is given by_ \[\sigma_{n}(x)=\frac{2n+1}{2n-3}\left(x^{2}\sigma_{n-2}(x)+2(2n-1)\theta_{n}(x )\right),\quad n\geqslant 2, \tag{95}\] _where_ \[\theta_{n}(x):=\int_{0}^{x}\left(\eta_{n}(t)-tf(t)\sigma_{n-2}(t)\right)\frac{ dt}{f^{2}(t)},\quad n\geqslant 2, \tag{96}\] _and_ \[\eta_{n}(x):=\int_{0}^{x}\left((n-1)f(t)+tf^{\prime}(t)\right)\sigma_{n-2}(t) dt,\quad n\geqslant 2. \tag{97}\] **Proof.** Set \(g\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\) and \(n\geqslant 2\). Consider the Cauchy problem \[\begin{cases}\mathbf{L}_{q,\mathfrak{I}_{N}}u_{n}(x)=\frac{2n+1}{2n-3}x^{2n-1} \mathbf{L}_{q}\left[\frac{g(x)}{x^{2n-3}}\right],\quad 0<x<b,\\ u_{n}(0)=u_{n}^{\prime}(0)=0.\end{cases} \tag{98}\] By formula (39) and the Polya factorization \(\mathbf{L}_{q}=-\frac{1}{f}Df^{2}D\frac{1}{f}\) we obtain that the unique solution of the Cauchy problem (98) is given by \[u_{n}(x)=\frac{2n+1}{2n-3}f(x)\int_{0}^{x}\frac{1}{f^{2}(t)}\left(\int_{0}^{t} s^{2n-1}Df^{2}(s)D\left[\frac{g(s)}{s^{2n-3}f(s)}\right]ds\right)dt.\] Consider an antiderivative \(\int s^{2n-1}Df^{2}(s)D\left[\frac{g(s)}{s^{2n-3}f(s)}\right]ds\). Integration by parts gives \[\int s^{2n-1}Df^{2}(s)D\left[\frac{g(s)}{s^{2n-3}f(s)}\right]ds =s^{2n-1}f^{2}(s)D\left(\frac{g(s)}{s^{2n-3}f(s)}\right)-(2n-1)sf (s)g(s)\] \[\quad+\int\left((2n-1)(2n-2)f(s)+2(2n-1)sf^{\prime}(s)\right)g(s)ds.\] Note that \[s^{2n-1}f^{2}(s)D\left(\frac{g(s)}{s^{2n-3}f(s)}\right) =s^{2n-1}f^{2}(s)\frac{D\left(\frac{g(s)}{f(s)}\right)}{s^{2n-3}} -s^{2n-1}f^{2}(s)\frac{\frac{g(s)}{f(s)}}{s^{4n-6}}(2n-3)s^{2n-4}\] \[=s^{2}f^{2}(s)D\left(\frac{g(s)}{f(s)}\right)-(2n-3)sf(s)g(s).\] Since \(g\in\mathcal{D}_{2}\left(\mathbf{L}_{q,\mathfrak{I}_{N}}\right)\), by Remark 40, \(D\left(\frac{g(s)}{f(s)}\right)\) is continuous in \([0,b]\). 
Thus, \[\int s^{2n-1}Df^{2}(s)D\left[\frac{g(s)}{s^{2n-3}f(s)}\right]ds =s^{2}f^{2}(s)D\left(\frac{g(s)}{f(s)}\right)-(4n-4)sf(s)g(s)\] \[\quad+2(2n-1)\int\left((n-1)f(s)+sf^{\prime}(s)\right)ds\] is well defined at \(s=0\) and is continuous in \([0,b]\). Then we obtain that \[\Phi(t) :=\int_{0}^{t}s^{2n-1}Df^{2}(s)D\left[\frac{g(s)}{s^{2n-3}f(s)} \right]ds\] \[=t^{2}f^{2}(t)D\left(\frac{g(t)}{f(t)}\right)-(4n-4)tf(t)g(t)+2(2 n-1)\Theta_{n}[g](t),\] with \(H_{n}[g](t):=\int_{0}^{t}\left((n-1)f(s)+sf^{\prime}(s)\right)g(s)ds\), is a continuous function in \([0,b]\). Now, \[\int_{0}^{x}\Phi(t)\frac{dt}{f^{2}(t)} =\int_{0}^{x}t^{2}D\left[\frac{g(t)}{f(t)}\right]dt-(4n-4)\int_{0 }^{t}t\frac{g(t)}{f(t)}dt+2(2n-1)H_{n}[g](t)\] \[=x^{2}\frac{g(x)}{f(x)}-2(2n-1)\int_{0}^{x}\left[H_{n}[g](t)-tf( t)g(t)\right]dt.\] Hence \[u_{n}(x)=\frac{2n+1}{2n-3}\left(x^{2}g(x)-2(2n-1)\Theta_{n}[g](x)\right), \tag{99}\] with \(\Theta_{n}[g](x):=\int_{0}^{x}\left[H_{n}[g](t)-tf(t)g(t)\right]dt\). Finally, since \(\sigma_{0},\sigma_{1}\in\mathcal{D}_{2}\left(\mathbf{L}_{q,3_{N}}\right)\), formula (95) is obtained for all \(n\geqslant 2\) by induction, taking \(g=\sigma_{2n-2}\) in (98) and \(\eta_{n}(x)=H_{n}[\sigma_{n-2}](x)\), \(\theta_{n}(x)=\Theta_{n}[\sigma_{n-2}](x)\) in (99). Integral relations of type (95) are effective for the numerical computation of the partial sums (77) and (78), as seen in [30, 32]. ## 7 Integral representation for the derivative Since \(e^{h}_{\mathcal{I}_{N}}(\rho,\cdot)\in AC[0,b]\), it is worthwhile looking for an integral representation of the derivative of \(e^{h}_{\mathcal{I}_{N}}(\rho,x)\). Differentiating (16) we obtain \[(e^{h}_{\mathcal{I}_{N}})^{\prime}(\rho,x) =\widetilde{e}^{\prime}_{h}(\rho,x)+\sum_{k=0}^{N}\alpha_{k} \widetilde{e}_{h}(\rho,x_{k})H(x-x_{k})\widetilde{s}^{\prime}_{k}(\rho,x-x_{ k})\] \[\quad+\sum_{J\in\mathcal{I}_{N}}\alpha_{J}H(x-x_{j_{|J|}}) \widetilde{e}_{h}(\rho,x_{j_{1}})\left(\prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}} (\rho,x_{j_{l+1}}-x_{j_{l}})\right)\widetilde{s}^{\prime}_{j_{|J|}}(\rho,x-x_ {j_{|J|}}).\] Differentiating (18) and using that \(\widehat{H}_{k}(x,x)=\frac{1}{2}\int_{0}^{x}q(t+x_{k})dt\), we obtain \[\widetilde{s}^{\prime}_{k}(\rho,x)=\cos(\rho x)+\frac{1}{2}\frac{\sin(\rho x) }{\rho}\int_{0}^{x}q(t+x_{k})dt+\int_{0}^{x}\partial_{x}\widehat{H}_{k}(x,t) \frac{\sin(\rho t)}{\rho}dt.\] Denote \[w(y,x):=\frac{1}{2}\int_{y}^{x}q(s)ds\quad\text{ for }\;x,y\in[0,b]. \tag{100}\] Hence, the derivative \(\widetilde{s}^{\prime}_{k}(\rho,x-x_{k})\) can be written as \[\widetilde{s}^{\prime}_{k}(\rho,x-x_{k})=\cos(\rho(x-x_{k}))+\int_{-(x-x_{k}) }^{x-x_{k}}\widetilde{K}^{1}_{k}(x,t)e^{i\rho t}dt, \tag{101}\] where \(\widetilde{K}^{1}_{k}(x,t)=w(x_{k},x)+\frac{1}{2}\int_{|t|}^{x-x_{k}} \partial_{x}\widehat{H}_{k}(x,t)dt\). On the other hand, differentiation of (17) and the Goursat conditions for \(\widetilde{K}^{h}(x,t)\) lead to the equality \[\widetilde{e}^{\prime}_{h}(\rho,x)=(i\rho+w(0,x))e^{i\rho x}+h\cos(\rho x)+ \int_{-x}^{x}\partial_{x}\widetilde{K}^{h}(x,t)e^{i\rho t}dt. 
\tag{102}\] Using the fact that \[\cos(\rho A)\int_{-B}^{B}f(t)e^{i\rho t}dt=\int_{-(B+A)}^{B+A}\frac{1}{2} \left(\chi_{[-(B+A),B-A]}(t)f(t-A)+\chi_{[A-B,B+A]}(t)f(t+A)\right)e^{i\rho t}dt\] for \(A,B>0\) and \(f\in L_{2}(\mathbb{R})\) with \(\operatorname{Supp}(f)\subset[-B,B]\), we obtain \[\tilde{e}_{h}(\rho,x_{j})\widehat{S}_{k}^{\prime}(\rho,x-x_{k})=e^{i\rho x_{j}} \cos(\rho(x-x_{k}))+\mathcal{F}\left[\widehat{K}_{x_{j},x_{k}}(x,t)\right],\] where \[\widehat{K}_{x_{j},x_{k}}(x,t) =\chi_{[x_{k}-x-x_{j},x-x_{k}-x_{j}]}(t)\widetilde{K}_{k}^{1}(x,t -x_{j})+\chi_{x_{j}}(t)\widetilde{K}^{h}(x_{j},t)*\chi_{x-x_{k}}(t)\widehat{K }_{k}^{1}(x,t)\] \[\qquad+\frac{1}{2}\chi_{[x_{k}-x_{j}-x,x_{j}-x+x_{k}]}(t)\widehat {K}^{h}(x_{j},t-x+x_{k})\] \[\qquad+\frac{1}{2}\chi_{[x-x_{k}-x_{j},x-x_{k}+x_{j}]}(t)\widehat {K}^{h}(x_{j},t+x-x_{k})\Big{)}.\] By Lemma 9 the support of \(\widehat{K}_{x_{j},x_{k}}(x,t)\) belongs to \([x_{k}-x-x_{j},x-x_{k}+x_{j}]\). Using the equality \[\prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}(\rho,x_{j_{l+1}}-x_{j_{l}})=\mathcal{F }\left\{\left(\prod_{l=1}^{|J|-1}\right)^{*}\Big{(}\chi_{x_{j_{l+1}}-x_{j_{l}} }(t)\widetilde{K}_{k}(x_{j_{l+1}},t)\Big{)}\right\}\] we have \[(e_{\mathcal{I}_{N}}^{h})^{\prime i\rho x}+h\cos(\rho x)+\sum_{k=0}^{N}\alpha _{k}H(x-x_{k})e^{i\rho x_{k}}\cos(\rho(x-x_{k}))+\mathcal{F}\left\{E_{ \mathcal{I}_{N}}^{h}(x,t)\right\}\] where \[E_{\mathcal{I}_{N}}^{h}(x,t) =\chi_{x}(t)\partial_{x}\widetilde{K}^{h}(x,t)+\sum_{k=0}^{N} \alpha_{k}H(x-x_{k})\widehat{K}_{x_{k},x_{k}}(x,t)\] \[\quad+\sum_{J\in\mathcal{I}_{N}}\alpha_{J}H(x-x_{j_{|J|}}) \widehat{K}_{x_{j_{1}},x_{j_{|J|}}}(x,t)*\left(\prod_{l=1}^{|J|-1}\right)^{*} \Big{(}\chi_{x_{j_{l+1}}-x_{j_{l}}}(t)\widetilde{K}_{k}(x_{j_{l+1}},t)\Big{)}\,.\] Again, by Lemma 9 the support of \(E_{\mathcal{I}_{N}}^{h}(x,t)\) belongs to \([-x,x]\). Since \(e^{i\rho x_{k}}\cos(\rho(x-x_{k}))=\frac{1}{2}e^{i\rho x}\left(1+e^{-2i\rho(x -x_{k})}\right)\), we obtain the following representation. 
**Theorem 42**: _The derivative \((e_{\mathcal{I}_{N}}^{h})^{\prime}(\rho,x)\) admits the integral representation_ \[(e_{\mathcal{I}_{N}}^{h})^{\prime}(\rho,x) =\left(i\rho+w(0,x)+\frac{1}{2}\sigma_{\mathcal{I}_{N}}(x)\right) e^{i\rho x}+h\cos(\rho x)\] \[\quad+\sum_{k=0}^{N}\frac{\alpha_{k}}{2}H(x-x_{k})e^{-2i\rho(x-x_ {k})}+\int_{-x}^{x}E_{\mathcal{I}_{N}}^{h}(x,t)e^{i\rho t}dt, \tag{103}\] _where \(E_{\mathcal{I}_{N}}^{h}(x,\cdot)\in L_{2}(-x,x)\) for all \(x\in(0,b]\)._ **Corollary 43**: _The derivatives of the solutions \(c^{h}_{\mathfrak{I}_{N}}(\rho,x)\) and \(s_{\mathfrak{I}_{N}}(\rho,x)\) admit the integral representations_ \[(c^{h}_{\mathfrak{I}_{N}})^{\prime}(\rho,x) =-\rho\sin(\rho x)+\left(h+w(0,x)+\frac{1}{2}\sigma_{\mathfrak{I} _{N}}(x)\right)\cos(\rho x)\] \[\quad+\sum_{k=0}^{N}\frac{\alpha_{k}}{2}H(x-x_{k})\cos(2\rho(x-x_ {k}))+\int_{0}^{x}M^{h}_{\mathfrak{I}_{N}}(x,t)\cos(\rho t)dt, \tag{104}\] \[s^{\prime}_{\mathfrak{I}_{N}}(\rho,x) =\cos(\rho x)+\left(w(0,x)+\frac{1}{2}\sigma_{\mathfrak{I}_{N}}( x)\right)\frac{\sin(\rho x)}{\rho}\] \[\quad-\sum_{k=0}^{N}\alpha_{k}H(x-x_{k})\frac{\sin(2\rho(x-x_{k}) )}{2\rho}+\int_{0}^{x}R_{\mathfrak{I}_{N}}(x,t)\frac{\sin(\rho t)}{\rho}dt, \tag{105}\] _where_ \[N^{h}_{\mathfrak{I}_{N}}(x,t) =E^{h}_{\mathfrak{I}_{N}}(x,t)+E^{h}_{\mathfrak{I}_{N}}(x,-t) \tag{106}\] \[R^{h}_{\mathfrak{I}_{N}}(x,t) =E^{h}_{\mathfrak{I}_{N}}(x,t)-E^{h}_{\mathfrak{I}_{N}}(x,-t), \tag{107}\] _defined for \(x\in[0,b]\) and \(|t|\leqslant x\)._ **Corollary 44**: _The derivatives of the solutions \(c^{h}_{\mathfrak{I}_{N}}(\rho,x)\) and \(s_{\mathfrak{I}_{N}}(\rho,x)\) admit the NSBF representations_ \[(c^{h}_{\mathfrak{I}_{N}})^{\prime}(\rho,x) =-\rho\sin(\rho x)+\left(h+w(0,x)+\frac{1}{2}\sigma_{\mathfrak{I} _{N}}(x)\right)\cos(\rho x)\] \[\quad+\sum_{k=0}^{N}\frac{\alpha_{k}}{2}H(x-x_{k})\cos(2\rho(x-x_ {k}))+\sum_{n=0}^{\infty}(-1)^{n}l_{n}(x)j_{2n}(\rho x), \tag{108}\] \[s^{\prime}_{\mathfrak{I}_{N}}(\rho,x) =\cos(\rho x)+\left(w(0,x)+\frac{1}{2}\sigma_{\mathfrak{I}_{N}} (x)\right)\frac{\sin(\rho x)}{\rho}\] \[\quad-\sum_{k=0}^{N}\alpha_{k}H(x-x_{k})\frac{\sin(2\rho(x-x_{k}) )}{2\rho}+\sum_{n=0}^{\infty}(-1)^{n}r_{n}(x)j_{2n+1}(\rho x), \tag{109}\] _where \(\{l_{n}(x)\}_{n=0}^{\infty}\) and \(\{r_{n}(x)\}_{n=0}^{\infty}\) are the coefficients of the Fourier-Legendre expansion of \(M^{h}_{\mathfrak{I}_{N}}(x,t)\) and \(R_{\mathfrak{I}_{N}}(x,t)\) in terms of the even and odd Legendre polynomials, respectively._ ## 8 Conclusions The construction of a transmutation operator that transmute the solutions of equation \(v^{\prime\prime}+\lambda v=0\) into solutions of (1) is presented. The transmutation operator is obtained from the closed form of the general solution of equation (1). It was shown how to construct the image of the transmutation operator on the set of polynomials, this with the aid of the SPPS method. A Fourier-Legendre series representation for the integral transmutation kernel is obtained, together with a representation for the solutions \(c^{h}_{\mathfrak{I}_{N}}(\rho,x)\), \(s_{\mathfrak{I}_{N}}(\rho,x)\) and their derivatives as Neumann series of Bessel functions, together with integral recursive relations for the construction of the Fourier-Legendre coefficients. The series (75), (76), (108), (109) are useful for solving direct and inverse spectral problems for (1), as shown for the regular case [28, 29, 30, 32]. ## Acknowledgments Research was supported by CONACYT, Mexico via the project 284470.
2302.12106
On tree decompositions whose trees are minors
In 2019, Dvo\v{r}\'{a}k asked whether every connected graph $G$ has a tree decomposition $(T, \mathcal{B})$ so that $T$ is a subgraph of $G$ and the width of $(T, \mathcal{B})$ is bounded by a function of the treewidth of $G$. We prove that this is false, even when $G$ has treewidth $2$ and $T$ is allowed to be a minor of $G$.
Pablo Blanco, Linda Cook, Meike Hatzel, Claire Hilaire, Freddie Illingworth, Rose McCarty
2023-02-23T15:46:23Z
http://arxiv.org/abs/2302.12106v1
# On tree decompositions whose trees are minors ###### Abstract In 2019, Dvorak asked whether every connected graph \(G\) has a tree decomposition \((T,\mathcal{B})\) so that \(T\) is a subgraph of \(G\) and the width of \((T,\mathcal{B})\) is bounded by a function of the treewidth of \(G\). We prove that this is false, even when \(G\) has treewidth \(2\) and \(T\) is allowed to be a minor of \(G\). ## 1 Introduction Suppose that a graph \(G\) has small treewidth, and consider all tree decompositions \((T,\mathcal{B})\) of \(G\) whose width is not too much larger than the optimum. To what extent can we choose or manipulate the "shape" of \(T\)? For graphs with no long path, we can choose \(T\) to also have no long path [11]; this gives rise to the parameter called _treedepth_. Similarly, for graphs of bounded degree, we can choose \(T\) to also have bounded degree [10]; this relates to the parameters of _congestion_ and _dilation_. Moreover, for graphs excluding any tree as a minor, we can choose \(T\) to just be a path [1]; this results in the parameter called _pathwidth_. It would be wonderful if we could unify all such results into a single theorem which relates the shape of \(T\) to \(G\). In 2019, Dvorak suggested one way of accomplishing this goal. In the question below and throughout the paper, we write \(\operatorname{tw}(G)\) for the treewidth of \(G\). **Question 1** ([10]).: _Does there exist a polynomial \(P\) such that every connected graph \(G\) has a tree decomposition \((T,\mathcal{B})\) of width at most \(P(\operatorname{tw}(G))\) such that \(T\) is a subgraph of \(G\)?_ Unfortunately, we prove that the answer to Question 1 is "no" in the following strong sense. **Theorem 2**.: _For every positive integer \(k\), there is a connected graph \(G\) of treewidth \(2\) such that if \((T,\mathcal{B})\) is a tree decomposition of \(G\) and \(T\) is a minor of \(G\), then \((T,\mathcal{B})\) has width at least \(k\)._ Intriguingly, in our proof of Theorem 2, it seems crucial that the constructed graphs contain all trees as minors; perhaps Question 1 could be true when \(\operatorname{tw}(G)\) is replaced by \(\operatorname{pw}(G)\), the pathwidth of \(G\). In other words, perhaps there exists a polynomial (or even just some function) \(P\) so that every connected graph \(G\) has a tree decomposition \((T,\mathcal{B})\) of width at most \(P(\operatorname{pw}(G))\) such that \(T\) is a subgraph of \(G\). We leave this as an open problem. There has been strong interest in obtaining good bounds for treedepth [11, 12, 13, 14], pathwidth [15], and treewidth [16, 17] as a function of the natural obstructions (which are paths, trees, and grids, respectively2). These problems were in large part motivated by the desire to obtain better approximation algorithms and better win-win algorithms based on the obstructions. An affirmative answer to Question 1 would have unified these approaches, but unfortunately Theorem 2 shows that this is not possible. Footnote 2: Formally, a class of graphs has bounded treedepth/pathwidth/treewidth if and only if it does not contain all paths/trees/grids as minors, respectively. See [11], [13], and [14] for the respective proofs. Note that sometimes the obstructions are considered as subgraphs or subdivisions rather than minors. This occurs when the two definitions are equivalent, for instance when considering paths as minors (or equivalently as subgraphs). 
There has also been recent interest in finding the \(2\)-connected obstructions for treedepth [12] and pathwidth [15, 16] in \(2\)-connected graphs. It seems unlikely that requiring \(G\) to be \(2\)-connected would change the answer to Question 1, but the graphs we construct for Theorem 2 are not \(2\)-connected, thus leaving this as an open possibility. We present a self-contained proof of Theorem 2; however, some steps were discovered independently by Hickingbotham [12]. In particular, Hickingbotham [12, Lemma 7.2.1] noticed that it is just as hard to ensure \(T\) is a subgraph of \(G\) in Question 1 as it is to ensure \(T\) is a minor of \(G\). Thus our main contribution is Lemma 3, which essentially shows that we can also force each vertex of \(T\) to be in its own bag. Hickingbotham [12, Theorem 7.5.1] already proved that this stronger condition can blow up the width. Moreover, Hickingbotham proved some positive results, including that the answer to Question 1 is "yes" if \(G\) is an outerplanar graph [12, Theorem 7.3.3]. Note that outerplanar graphs are the graphs with simple treewidth at most \(2\) [13], and so in this sense Theorem 2 is optimal. We outline our approach to proving Theorem 2 in more detail in the next section.

## 2 Preliminaries

We use the following "subtree view" of tree decompositions. Recall that a _subtree_ of a graph \(G\) is any subgraph of \(G\) which is connected and acyclic.

**Definition 2.1**.: _Let \(G\) be a graph, let \(T\) be a tree, and let \(\mathcal{B}=\{B_{x}\colon x\in V(T)\}\) be a family of subsets of \(V(G)\) indexed by the vertices of \(T\). For each vertex \(v\) of \(G\), we define_ \[T_{v}\coloneqq T[\{x\colon v\in B_{x}\}].\] _Then \((T,\mathcal{B})\) is a tree decomposition of \(G\) if and only if the following conditions both hold._

* _Each_ \(T_{v}\) _is a non-empty subtree of_ \(T\)_._
* _If_ \(uv\in E(G)\)_, then_ \(V(T_{u})\cap V(T_{v})\neq\varnothing\)_._

We use this notation \(T_{v}\) throughout the paper. When there is no chance for confusion, we refer to \(T_{v}\) and its vertex set \(V(T_{v})\) interchangeably. The _width_ of \((T,\mathcal{B})\) is then the maximum, over all \(x\in V(T)\), of \(|\{v\in V(G)\colon x\in T_{v}\}|-1\). The _treewidth_ of \(G\) is the minimum width of a tree decomposition of \(G\). Note that, if we are given a tree \(T\) and a collection \((T_{v}\colon v\in V(G))\) of subtrees of \(T\) which satisfy the conditions from Definition 2.1, then we can define a tree decomposition \((T,\mathcal{B})\) of \(G\) by setting \(B_{x}\coloneqq\{v\in V(G)\colon x\in T_{v}\}\) for each \(x\in V(T)\).

We now outline our overall strategy for proving Theorem 2. This theorem equivalently says that Conjecture 2.2 below is false, even for connected graphs of treewidth \(2\). We disprove Conjecture 2.2 by reducing each of the three conjectures below to the next one, and then disproving the final conjecture. Afterwards, we evaluate the treewidth of the constructed counterexamples more carefully. Note that in Conjecture 2.4, the condition "for every vertex \(v\) of \(G\), \(v\in T_{v}\)" is equivalent to "for every vertex \(x\) of \(T\), \(x\in B_{x}\)".
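Both conditions of Definition 2.1, and the width of a given decomposition, can be checked mechanically. The following is a minimal sketch (ours, not part of the paper's argument); it assumes the supplied \(T\) really is a tree and that \(G\) has no isolated vertices, and the function name and input format are our own choices.

```python
from collections import defaultdict

def check_tree_decomposition(graph_edges, tree_edges, bags):
    """Verify Definition 2.1 for a candidate tree decomposition; return (is_valid, width).

    graph_edges : iterable of pairs (u, v) -- the edges of G
    tree_edges  : iterable of pairs (x, y) -- the edges of T
    bags        : dict mapping each vertex x of T to the set B_x of vertices of G
    """
    t_adj = defaultdict(set)
    for x, y in tree_edges:
        t_adj[x].add(y)
        t_adj[y].add(x)

    # T_v = set of tree vertices whose bag contains v
    t_v = defaultdict(set)
    for x, bag in bags.items():
        for v in bag:
            t_v[v].add(x)

    def induces_subtree(nodes):
        # non-empty and connected inside T (acyclicity is inherited from T)
        if not nodes:
            return False
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            x = stack.pop()
            for y in t_adj[x]:
                if y in nodes and y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen == nodes

    vertices_of_g = {u for edge in graph_edges for u in edge}
    condition_1 = all(induces_subtree(t_v[v]) for v in vertices_of_g)
    condition_2 = all(t_v[u] & t_v[v] for u, v in graph_edges)
    width = max(len(bag) for bag in bags.values()) - 1
    return condition_1 and condition_2, width

# Example: the 4-cycle with a two-bag decomposition on a single tree edge.
c4 = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(check_tree_decomposition(c4, [("a", "b")],
                               {"a": {1, 2, 4}, "b": {2, 3, 4}}))  # (True, 2)
```

On the 4-cycle example in the last lines, both conditions hold and the reported width is \(2\), matching the hand computation.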
**Conjecture 2.2**.: _There is a function \(f\) such that every connected graph \(G\) has a tree decomposition \((T,\mathcal{B})\) of width at most \(f(\operatorname{tw}(G))\) such that \(T\) is a minor of \(G\)._

**Conjecture 2.3**.: _There is a function \(f\) such that every connected graph \(G\) has a tree decomposition \((T,\mathcal{B})\) of width at most \(f(\operatorname{tw}(G))\) such that \(T\) is a spanning tree of \(G\)._

**Conjecture 2.4**.: _There is a function \(f\) such that every connected graph \(G\) has a tree decomposition \((T,\mathcal{B})\) of width at most \(f(\operatorname{tw}(G))\) such that \(T\) is a spanning tree of \(G\) and, for every vertex \(v\) of \(G\), we have \(v\in T_{v}\)._

Hickingbotham proved that Conjecture 2.2 implies Conjecture 2.3 in [12, Lemma 7.2.1]. In Section 3, we show that Conjecture 2.3 implies Conjecture 2.4; this crucial new step is our main contribution. Finally, in Section 4, we construct a graph that does not satisfy Conjecture 2.4. Hickingbotham [12, Theorem 7.5.1] independently discovered a counterexample to Conjecture 2.4 which actually contains our counterexample. However, we include ours since it is slightly simpler and makes the paper self-contained. We now conclude this section by providing a short proof that Conjecture 2.2 (where \(T\) is a minor) implies Conjecture 2.3 (where \(T\) is a spanning tree), for the sake of completeness.

**Lemma 2.5**.: _If \(G\) is a connected graph with a tree decomposition \((T,\mathcal{B})\) of width \(k\) such that \(T\) is a minor of \(G\), then there exists a tree decomposition \((T^{\prime},\mathcal{B}^{\prime})\) of \(G\) of width \(k\) such that \(T^{\prime}\) is a spanning tree of \(G\)._

Proof.: Since \(T\) is a minor of \(G\), there exists a collection \((Q_{x}\colon x\in V(T))\) of pairwise disjoint non-empty subtrees of \(G\) such that, for each edge \(xy\in E(T)\), there exists an edge \(e_{xy}\in E(G)\) with one end in \(V(Q_{x})\) and the other end in \(V(Q_{y})\). Since \(G\) is connected, we may assume that \(V(G)=\cup_{x\in V(T)}V(Q_{x})\). Now, let \(T^{\prime}\) be the spanning tree of \(G\) which is obtained from \(\cup_{x\in V(T)}Q_{x}\) by adding the edge \(e_{xy}\) for all \(xy\in E(T)\). For each \(v\in V(G)\), let \(T^{\prime}_{v}\) be the subtree of \(T^{\prime}\) which is induced by the union of all sets \(V(Q_{x})\) such that \(x\in T_{v}\). This collection of subtrees of \(T^{\prime}\) satisfies the conditions of Definition 2.1 and therefore yields a tree decomposition \((T^{\prime},\mathcal{B}^{\prime})\). Furthermore, this tree decomposition has the same width as \((T,\mathcal{B})\), which completes the proof.

## 3 Reduction to Conjecture 2.4

In this section we show that Conjecture 2.3 (where \(T\) is a spanning tree) implies Conjecture 2.4 (where, additionally, for every vertex \(v\) of \(G\), we have \(v\in T_{v}\)). We use the following well-known fact about tree decompositions of paths. We include a proof for the sake of completeness. The bounds are not optimal; we aim for simplicity instead.

**Lemma 3.1**.: _For any positive integers \(h\) and \(k\), if \(P\) is a path with at least \((k+2)^{h}\) vertices and \((T,\mathcal{B})\) is a tree decomposition of \(P\) of width at most \(k\), then \(T\) contains a path of length \(h\)._

Proof.: We consider a tree decomposition \((T,\mathcal{B})\) where \(T\) is rooted at an arbitrary vertex \(r\in V(T)\). The _height_ of \(T\) is then the maximum length of a path which has \(r\) as one of its ends.
For fixed \(k\), we prove by induction on \(h\) that, under the same hypothesis, we actually obtain the following stronger conclusion: that the height of \(T\) is at least \(h\). The base case of \(h=1\) holds since \(P\) has more than \(k+1\) vertices and therefore \((T,\mathcal{B})\) has more than one bag. So we may assume that \(h>1\) and the claim holds for \(h-1\). Observe that we can partition \(V(P)\) into \(k+2\) sets, each of which induces in \(P\) a path with at least \((k+2)^{h-1}\) vertices. Since \((T,\mathcal{B})\) has width at most \(k\), one of these sets is disjoint from the root bag \(B_{r}\). Thus, by the inductive hypothesis, one of the components of \(T-\{r\}\), when rooted at its neighbour of \(r\), has height at least \(h-1\). So \(T\) has height at least \(h\), as desired. We use the following construction to show that Conjecture 2.3 implies Conjecture 2.4. Given a positive integer \(k\), a graph \(G\), and an arbitrary ordering of the vertices of \(G\), we define a new graph denoted \(\widetilde{G}\). This graph \(\widetilde{G}\) is obtained from \(G\) by attaching one rooted tree to each vertex of \(G\); so \(\widetilde{G}\) has the same treewidth as \(G\) (unless \(E(G)=\varnothing\)). Moreover, in Lemma 3.3, we prove that if \(\widetilde{G}\) satisfies Conjecture 2.3 with a tree decomposition of width \(k\), then \(G\) satisfies Conjecture 2.4 with a tree decomposition of width \(k+1\). The attached trees are chosen such that no two have "comparable" tree decompositions. More formally, given two trees \(T_{1}\) and \(T_{2}\), there is no tree decomposition \((T^{\prime}_{2},\mathcal{B}^{\prime}_{2})\) of \(T_{2}\) of width \(k\) such that \(\mathcal{T}^{\prime}_{2}\) is a subgraph of \(T_{1}\), and likewise with the roles of \(T_{1}\) and \(T_{2}\) reversed. We do not frame the argument in this way, but it is the underlying reason our proof works. We accomplish this condition by, up to symmetry between \(T_{1}\) and \(T_{2}\), making height of \(T_{2}\) much larger than the height of \(T_{1}\), and the width of \(T_{1}\) much larger than \(|V(T_{2})|\). See Figure 1 for a depiction. With this intuition, we are ready to state the main definition. **Definition 3.2**.: _Fix a positive integer \(k\), a graph \(G\), and an arbitrary ordering \(a_{1},\ldots,a_{n}\) of the vertices of \(G\). Then let \(\widetilde{G}\) be the graph which is constructed from \(G\) as follows._ * _First define integers_ \(2=h_{1}\ll h_{2}\ll\cdots\ll h_{n}\) _as follows. Given_ \(h_{j-1}\)_, we define_ \(h_{j}\coloneqq(k+2)^{2h_{j-1}}+1\)_. Thus, by Lemma_ 3.1_, if_ \(P\) _is a path on at least_ \(h_{j}-1\) _vertices and_ \((T,\mathcal{B})\) _is a tree decomposition of_ \(P\) _of width at most_ \(k\)_, then_ \(T\) _contains a path of length_ \(2h_{j-1}\)_._ * _Next define integers_ \((k+1)n+1=w_{n}\ll w_{n-1}\ll\cdots\ll w_{1}\) _and corresponding rooted trees_ \(S_{n},S_{n-1},\ldots,S_{1}\) _as follows. Given_ \(w_{j}\)_, define_ \(S_{j}\) _to be the complete rooted_ \(w_{j}\)_-ary tree of height_ \(h_{j}\)_. 
Then, given_ \(w_{n},w_{n-1},\ldots,w_{j+1}\) _and_ \(S_{n},S_{n-1},\ldots,S_{j+1}\)_, define_ \[w_{j}\coloneqq(k+1)\bigg{(}n+\sum_{i=j+1}^{n}|V(S_{i})|\bigg{)}+1.\] _Finally, let \(\widetilde{G}\) be the graph which is obtained from the disjoint union of \(G,S_{1},S_{2},\ldots,S_{n}\) by, for each \(j\in\{1,2,\ldots,n\}\), identifying \(a_{j}\) with the root of \(S_{j}\)._ Note that the graph \(\widetilde{G}\) from Definition 3.2 can be obtained from \(G\) by adding pendant vertices one at a time. It follows that \(\operatorname{tw}(\widetilde{G})=\max{(\operatorname{tw}(G),1)}\). The next key lemma therefore completes the reduction from Conjecture 2.3 to Conjecture 2.4. **Lemma 3.3**.: _Let \(k\) be a positive integer, let \(G\) be a connected graph, let \(a_{1},\ldots,a_{n}\) be an ordering of the vertices of \(G\), and let \(\widetilde{G}\) be the resulting graph constructed using Definition 3.2. Suppose that \(\widetilde{G}\) has a tree decomposition \((T,\mathcal{B})\) of width at most \(k\) such that \(T\) is a spanning tree of \(\widetilde{G}\)._ _Then there exists a tree decomposition \((T^{\prime},\mathcal{B}^{\prime})\) of \(G\) of width at most \(k+1\) such that \(T^{\prime}\) is a spanning tree of \(G\) and for every \(v\in V(G)\), we have \(v\in T^{\prime}_{v}\)._ Proof.: We use the notation introduced in Definition 3.2, except that we view each tree \(S_{j}\) as an induced subgraph of \(\widetilde{G}\) which is rooted at the vertex \(a_{j}\in V(G)\). For the sake of convenience, we do not distinguish between \(S_{j}\) and its vertex set. In the first few claims we deduce roughly where each subtree \(T_{v}\) (as defined in Definition 2.1) lies. We say that two sets _meet_ if their intersection is non-empty. **Claim 3.3.1**.: _For every \(j\in\{1,2,\ldots,n\}\) and every non-leaf vertex \(v\) of \(S_{j}\), the set \(T_{v}\) meets \((S_{1}\cup\cdots\cup S_{j})\setminus V(G)\)._ Proof.: Consider the union of the bags of \((T,\mathcal{B})\) that contain \(v\). Each bag has size at most \(k+1\), so this union has size at most \((k+1)|V(T_{v})|\). On the other hand, this union contains every neighbour of \(v\) in \(\widetilde{G}\) and so, by the choice of \(w_{j}\), \[(k+1)|V(T_{v})|\geqslant\deg_{\widetilde{G}}(v)\geqslant\deg_{S_{j}}(v) \geqslant w_{j}>(k+1)\bigg{(}|V(G)|+\sum_{i=j+1}^{n}|V(S_{i})|\bigg{)}.\] In particular, \(T_{v}\) is not a subgraph of \(S_{j+1}\cup\cdots\cup S_{n}\cup G\). The claim follows. We say that a vertex \(v\) of \(\widetilde{G}\) is _free_ if \(T_{v}\) meets \(V(G)\) or, equivalently, if \(v\in B_{a_{1}}\cup\cdots\cup B_{a_{n}}\). Otherwise, we call \(v\)_constrained_. Note that if \(v\) is constrained, then \(T_{v}\) is a subgraph of some \(S_{j}-a_{j}\) since \(T_{v}\) is a subtree of \(\widetilde{G}\). The number of free vertices is at most \[|B_{a_{1}}\cup\cdots\cup B_{a_{n}}|\leqslant(k+1)n,\] and so almost all vertices are constrained. Figure 1: The graph \(\widetilde{G}\) which is obtained from \(G\) by attaching the complete \(w_{j}\)-ary tree \(S_{j}\) of height \(h_{j}\) to each vertex \(a_{j}\in V(G)\). **Claim 3.3.2**.: _For every \(j\in\{1,2,\ldots,n\}\), the vertex \(a_{j}\) has a child \(b_{j}\) in \(S_{j}\) such that \(T_{b_{j}}\) is a subgraph of \(S_{j}-a_{j}\)._ Proof.: As \(S_{j}\) is a complete \(w_{j}\)-ary tree of height \(h_{j}\), there are \(w_{j}\) vertex-disjoint paths that start at the children of \(a_{j}\) and end at parents of leaves of \(S_{j}\). 
Since \(w_{j}>(k+1)n\), at least one of these paths contains no free vertices - call this path \(P\). Let \(T_{P}=\cup_{v\in V(P)}T_{v}\). So \(T_{P}\) is a subtree of \(\widetilde{G}\). Each \(v\in V(P)\) is constrained, and so for each \(v\) there is an \(i\) such that \(T_{v}\) is a subgraph of \(S_{i}-a_{i}\). But, as \(T_{P}\) is connected and there is no edge between different \(S_{i}-a_{i}\), this \(i\) must be the same for all \(v\in V(P)\). That is, there is some \(i\) such that \(T_{P}\) is a subgraph of \(S_{i}-a_{i}\). Let \(b_{j}\) be the child of \(a_{j}\) that is a vertex of \(P\) (since \(h_{1}\geqslant 2\), such a \(b_{j}\) exists). By Claim 3.3.1, we have that \(T_{b_{j}}\) meets \((S_{1}-a_{1})\cup\cdots\cup(S_{j}-a_{j})\). Since \(T_{b_{j}}\) is a subgraph of \(T_{P}\), we have \(i\leqslant j\). Next focus on the tree decomposition \((T_{P},\mathcal{B}_{P})\) of \(P\) where \(\mathcal{B}_{P}\) is \(\mathcal{B}\) restricted to the vertices of \(P\). This tree decomposition has width at most \(k\). The path \(P\) contains \(h_{j}-1\) vertices and so, by the choice of \(h_{j}\), the tree \(T_{P}\) must contain a path of length at least \(2h_{j-1}\). However, \(T_{P}\) is a subgraph of \(S_{i}-a_{i}\) whose longest paths have length less than \(2h_{i}\). In particular, this implies that \(h_{i}>h_{j-1}\) and so \(i\geqslant j\). Thus \(i=j\) and \(b_{j}\) is as required. We say that a vertex \(a_{j}\in V(G)\) is _grounded_ if \(T_{a_{j}}\) contains \(a_{j}\). **Claim 3.3.3**.: _If a vertex \(a_{j}\in V(G)\) is not grounded, then \(T_{a_{j}}\) is a subgraph of \(S_{j}-a_{j}\) and every neighbour \(a_{i}\in V(G)\) of \(a_{j}\) is both grounded and satisfies \(a_{j}\in T_{a_{i}}\)._ Proof.: Suppose that \(a_{j}\in V(G)\) is not grounded. Then \(a_{j}\notin T_{a_{j}}\). Let \(b_{j}\) be the child of \(a_{j}\) given by Claim 3.3.2. As \(a_{j}\) and \(b_{j}\) are adjacent, \(T_{a_{j}}\) meets \(T_{b_{j}}\). But \(T_{b_{j}}\) is a subgraph of \(S_{j}-a_{j}\), and so \(T_{a_{j}}\) meets \(S_{j}-a_{j}\). However, \(T_{a_{j}}\) is connected and does not contain \(a_{j}\), so \(T_{a_{j}}\) must be a subgraph of \(S_{j}-a_{j}\). Let \(a_{i}\in V(G)\) be a neighbour of \(a_{j}\). If \(a_{i}\) is not grounded, then \(T_{a_{i}}\) is a subgraph of \(S_{i}-a_{i}\). But then \(T_{a_{i}}\) and \(T_{a_{j}}\) do not meet, which is impossible as \(a_{i}\) and \(a_{j}\) are adjacent. Thus \(a_{i}\) is grounded. Now \(T_{a_{i}}\) and \(T_{a_{j}}\) meet and so \(T_{a_{i}}\) meets \(S_{j}-a_{j}\). So, since \(T_{a_{i}}\) is connected, \(T_{a_{i}}\) contains \(a_{j}\). We now define a tree decomposition of \(G\) which satisfies Lemma 3.3. First, let \(T^{\prime}\) be the subgraph of \(T\) induced by \(V(G)\); notice that \(T^{\prime}\) is a spanning tree of \(G\) since \(T\) is a spanning tree of \(\widetilde{G}\). Next, delete all bags \(B_{x}\) where \(x\notin V(T^{\prime})\) and delete all vertices of \(\widetilde{G}\) that are not vertices of \(G\). Finally, if a vertex \(a_{j}\) is not grounded, then add \(a_{j}\) to the bag \(B_{a_{j}}\). Call the resulting collection of bags \(\mathcal{B}^{\prime}=(B^{\prime}_{a_{j}})_{1\leqslant j\leqslant n}\). We claim that \((T^{\prime},\mathcal{B}^{\prime})\) is a tree decomposition of \(G\). This completes the proof of Lemma 3.3 since it is clear that \((T^{\prime},\mathcal{B}^{\prime})\) has width at most \(k+1\), that \(T^{\prime}\) is a spanning tree of \(G\), and that for every \(a_{j}\in V(G)\), we have \(a_{j}\in T^{\prime}_{a_{j}}\). 
Notice that if a vertex \(a_{j}\in V(G)\) is grounded, then \(T^{\prime}_{a_{j}}\) is just the induced subgraph of \(T_{a_{j}}\) restricted to \(V(G)\); so \(T^{\prime}_{a_{j}}\) is still connected. Likewise, if \(a_{j}\) is not grounded, then by Claim 3.3.3, \(T^{\prime}_{a_{j}}=\{a_{j}\}\) is connected. We are left to check that for every edge \(a_{i}a_{j}\in E(G)\), the subtrees \(T^{\prime}_{a_{i}}\) and \(T^{\prime}_{a_{j}}\) meet. First suppose that \(a_{j}\) is not grounded. Then, by Claim 3.3.3, \(T^{\prime}_{a_{i}}\) and \(T^{\prime}_{a_{j}}\) both contain the vertex \(a_{j}\). The case that \(a_{i}\) is not grounded is symmetric, so we may assume that both \(a_{i}\) and \(a_{j}\) are grounded. As \(a_{i}\) and \(a_{j}\) are adjacent in \(\widetilde{G}\), the trees \(T_{a_{i}}\) and \(T_{a_{j}}\) meet in \(\widetilde{G}\). Let \(\ell\in\{1,2,\ldots,n\}\) be such that they meet in \(S_{\ell}\). Now \(T_{a_{i}}\) contains \(a_{i}\) and is connected and \(T_{a_{j}}\) contains \(a_{j}\) and is connected, so \(T_{a_{i}}\) and \(T_{a_{j}}\) both contain \(a_{\ell}\). So both \(T^{\prime}_{a_{i}}\) and \(T^{\prime}_{a_{j}}\) contain \(a_{\ell}\), as required. This completes the proof of Lemma 3.3. ## 4 Construction In this section we disprove Conjecture 2.4 and then combine the previous reductions in order to prove Theorem 2. We begin by defining the relevant graphs. Then we prove that they are counterexamples in Lemmas 4.2 and 4.3. **Definition 4.1**.: _The first reflected-tree, which we denote by \(G_{1}\), is the singleton graph with exactly one vertex and no edges. We call the vertex of \(G_{1}\) its root vertex. Then, for any positive integer \(r\geqslant 2\), the \(r\)-th reflected-tree \(G_{r}\) is constructed recursively as follows:_ * _Let_ \(H\) _and_ \(H^{\prime}\) _be two disjoint copies of_ \(G_{r-1}\)_, and let_ \(u\) _and_ \(v\) _be two new vertices, which we call the_ root _vertices of_ \(G_{r}\)_. To construct_ \(G_{r}\)_, we start with_ \(H\) _and_ \(H^{\prime}\)_, then make_ \(u\) _adjacent to a root vertex of_ \(H\) _and a root vertex of_ \(H^{\prime}\)_. Finally, we make_ \(v\) _adjacent to the remaining root vertex of_ \(H\) _and the remaining root vertex of_ \(H^{\prime}\)_. See Figure_ 2 _for a depiction._ Now we prove a lemma about the spanning trees of the reflected-tree. Whenever \(T\) is a spanning tree of a graph \(G\), we denote the fundamental cycle of an edge \(e\in E(G)\setminus E(T)\) with respect to \(T\) by \(C_{T}^{e}\); thus \(C_{T}^{e}\) is the unique cycle in the graph obtained from \(T\) by adding \(e\). **Lemma 4.2**.: _For any integer \(r\geqslant 2\) and any spanning tree \(T\) of \(G_{r}\), there is a matching \(M\subseteq E(G_{r})\setminus E(T)\) of size \(r-1\) such that_ \[\bigcap_{e\in M}V\left(C_{T}^{e}\right)\neq\varnothing.\] Proof.: Let \(u\) and \(v\) be the root vertices of \(G_{r}\), and denote the path between them in \(T\) by \(P_{uv}\). Under the same conditions, we prove the following stronger outcome holds by induction: \[\bigcap_{e\in M}E\left(C_{T}^{e}\right)\cap E(P_{uv})\neq\varnothing,\] for some matching \(M\subseteq E(G_{r})\setminus E(T)\) of size \(r-1\). For the base case of \(r=2\), the graph \(G_{r}\) is a cycle on four vertices; then any spanning tree \(T\) of \(G_{2}\) is a path on four vertices, and we can take \(M\) to be the \(1\)-edge matching \(E(G_{2})\setminus E(T)\). Next, we may assume that \(r>2\) and the claim holds for \(r-1\). 
By definition, \(G_{r}-\{u,v\}\) has exactly two connected components both of which are isomorphic to \(G_{r-1}\). We denote these Figure 2: The \(4\)th reflected-tree \(G_{4}\) (right, with root vertices larger and in red) being constructed from the \(3\)rd reflected-tree \(G_{3}\) (left). components by \(H\) and \(H^{\prime}\). Exactly one of \(T_{H}\coloneqq T[V(H)\cup\{u,v\}]\) and \(T_{H^{\prime}}\coloneqq T[V(H^{\prime})\cup\{u,v\}]\) is connected in \(T\); without loss of generality, we assume that \(T_{H}\) is connected in \(T\). We can apply the inductive hypothesis on \(T[V(H)]\), which is a spanning tree of \(H\), to find a matching \(M_{H}\subseteq E(H)\setminus E(T[V(H)])\) of size \(r-2\) with \(\bigcap_{e\in M_{H}}E\left(C_{T}^{e}\right)\cap E(P_{uv}-\{u,v\})\neq\varnothing\). The other subgraph \(T_{H^{\prime}}\) is not connected. In fact, it contains exactly two components: one containing \(u\), and the other containing \(v\). Thus there exists an edge \(e^{\prime}\in E(G_{r})\setminus E(T)\) with one end in each of these two components of \(T_{H^{\prime}}\). Observe that \(e^{\prime}\) lies in \(G[V(H^{\prime})\cup\{u,v\}]\) which is vertex-disjoint from \(H\). Thus \(M_{H}\cup\{e^{\prime}\}\) is a matching since \(M_{H}\subseteq H\). For convenience, let us define \(M\coloneqq M_{H}\cup\{e^{\prime}\}\). \(M\) is a matching of size \(r-1\), and we have \(E(P_{uv})\subseteq E(C_{T}^{e^{\prime}})\). From here, it follows that \[\bigcap_{e\in M}E\left(C_{T}^{e}\right)\cap E(P_{uv})\neq\varnothing.\] Thus \(M\) is our desired matching. We are now ready to prove the following lemma, which shows that reflected-trees are a counterexample to Conjecture 2.4. **Lemma 4.3**.: _For every \(k\in\mathbb{N}\), if \((T,\mathcal{B})\) is a tree decomposition of \(G_{k+2}\) such that \(T\) is a spanning tree of \(G_{k+2}\) and, for every \(v\in V(G_{k+2})\), we have \(v\in T_{v}\), then the width of \((T,\mathcal{B})\) is at least \(k\)._ Proof.: We begin by finding a matching \(M\coloneqq\{u_{1}v_{1},\ldots,u_{k+1}v_{k+1}\}\subseteq E(G_{k+2})\setminus E(T)\) of size \(k+1\) satisfying the properties in Lemma 4.2. Let \(x\in\bigcap_{e\in M}V\left(C_{T}^{e}\right)\). By construction, \(x\) is in the path \(P_{u_{i}v_{i}}\) between \(u_{i}\) and \(v_{i}\) in \(T\) for every \(i\in\{1,\ldots,k+1\}\). From Definition 2.1, the trees \(T_{u_{i}}\) and \(T_{v_{i}}\) intersect; furthermore, since \(T_{u_{i}}\) and \(T_{v_{i}}\) are connected in \(T\), with \(u_{i}\in V(T_{u_{i}})\) and \(v_{i}\in V(T_{v_{i}})\), we find that every vertex of \(P_{u_{i}v_{i}}\) is in \(V(T_{u_{i}})\cup V(T_{v_{i}})\). As a result, \(x\in V(T_{u_{i}})\cup V(T_{v_{i}})\). That is, \(u_{i}\in B_{x}\) or \(v_{i}\in B_{x}\) for all \(i\in\{1,\ldots,k+1\}\). Since \(M\) is a matching, we have that \(|B_{x}|\geqslant k+1\) and the width of \((T,\mathcal{B})\) is at least \(k\). We are now ready to prove the main theorem, which is restated below for convenience. **Theorem 2**.: _For every positive integer \(k\), there is a connected graph \(G\) of treewidth \(2\) such that if \((T,\mathcal{B})\) is a tree decomposition of \(G\) and \(T\) is a minor of \(G\), then \((T,\mathcal{B})\) has width at least \(k\)._ Proof.: For convenience, we fix an integer \(k\geqslant 2\). Now consider the \((k+3)\)-rd reflected-tree \(G_{k+3}\). 
Let \(a_{1},\ldots,a_{n}\) be an arbitrary ordering of \(V(G_{k+3})\), and let \(\widetilde{G}_{k+3}\) be the graph obtained from the integer \(k-1\), the graph \(G_{k+3}\), and the ordering \(a_{1},\ldots,a_{n}\) by applying Definition 3.2. We now prove that \(\widetilde{G}_{k+3}\) satisfies the conditions of Theorem 2. First, recall that \(\widetilde{G}_{k+3}\) has treewidth equal to \(\max(\operatorname{tw}(G_{k+3}),1)\). Moreover, \(\operatorname{tw}(G_{k+3})=2\) since \(G_{k+3}\) is series parallel and not a tree. Thus \(\widetilde{G}_{k+3}\) is a connected graph of treewidth \(2\), as desired. Next, suppose towards a contradiction that \(\widetilde{G}_{k+3}\) has a tree decomposition \((T,\mathcal{B})\) of width at most \(k-1\) such that \(T\) is a minor of \(\widetilde{G}_{k+3}\). Since \(\widetilde{G}_{k+3}\) is connected, Lemma 2.5 says that \(\widetilde{G}_{k+3}\) has a tree decomposition \((T^{\prime},\mathcal{B}^{\prime})\) of width at most \(k-1\) such that \(T^{\prime}\) is a spanning tree of \(\widetilde{G}_{k+3}\). Thus, since \(G_{k+3}\) is connected, Lemma 3.3 says that \(G_{k+3}\) has a tree decomposition \((T^{\prime\prime},\mathcal{B}^{\prime\prime})\) of width at most \(k\) such that \(T^{\prime\prime}\) is a spanning tree of \(G_{k+3}\) and for every \(v\in V(G_{k+3})\), we have \(v\in T^{\prime\prime}_{v}\). However, Lemma 4.3 also says that \((T^{\prime\prime},\mathcal{B}^{\prime\prime})\) has width at least \(k+1\). This contradiction completes the proof of Theorem 2. ## Acknowledgements The authors would like to thank Sang-il Oum for his suggestions about where to look for a counterexample to Conjecture 2.4; Zdenek Dvorak for sharing the original problem and helpful comments which improved the paper; David Wood for pointing out all of the progress in [10], and Sophie Spirkl for her helpful feedback on our proof that Conjecture 2.4 is false. Aristotelis Chaniotis, Linda Cook, Sepehr Hajebi, and Sophie Spirkl asked about a counterexample to a strengthening of Conjecture 2.4 at the Barbados Graph Theory Workshop in December 2022, which started this whole work. As such, the authors would like to thank Aristotelis Chaniotis, Sepehr Hajebi, Sophie Spirkl, and the organizers of the Barbados Graph Theory Workshop - Sergey Norin, Paul Seymour, and David Wood.
2308.07236
Temperature Evolution of Magnon Propagation Length in Tm$_3$Fe$_5$O$_{12}$ Thin Films: Roles of Magnetic Anisotropy and Gilbert Damping
The magnon propagation length ($\langle\xi\rangle$) of a ferro/ferrimagnet (FM) is one of the key factors that controls the generation and propagation of thermally-driven spin current in FM/heavy metal (HM) bilayer based spincaloritronic devices. Theory predicts that for the FM layer, $\langle\xi\rangle$ is inversely proportional to the Gilbert damping ($\alpha$) and the square root of the effective magnetic anisotropy constant ($K_{\rm eff}$). However, direct experimental evidence of this relationship is lacking. To experimentally confirm this prediction, we employ a combination of longitudinal spin Seebeck effect (LSSE), transverse susceptibility, and ferromagnetic resonance experiments to investigate the temperature evolution of $\langle\xi\rangle$ and establish its correlation with the effective magnetic anisotropy field, $H_K^{\rm eff}$ ($\propto K_{\rm eff}$) and $\alpha$ in Tm$_3$Fe$_5$O$_{12}$ (TmIG)/Pt bilayers. We observe concurrent drops in the LSSE voltage and $\langle\xi\rangle$ below 200$^\circ$K in TmIG/Pt bilayers regardless of TmIG film thickness and substrate choice and attribute it to the noticeable increases in $H_K^{\rm eff}$ and $\alpha$ that occur within the same temperature range. From the TmIG thickness dependence of the LSSE voltage, we determined the temperature dependence of $\langle\xi\rangle$ and highlighted its correlation with the temperature-dependent $H_K^{\rm eff}$ and $\alpha$ in TmIG/Pt bilayers, which will be beneficial for the development of rare-earth iron garnet-based efficient spincaloritronic nanodevices.
Amit Chanda, Christian Holzmann, Noah Schulz, Aladin Ullrich, Manfred Albrecht, Miela J. Gross, Caroline A. Ross, Dario. A. Arena, Manh-Huong Phan, Hariharan Srikanth
2023-08-14T16:17:35Z
http://arxiv.org/abs/2308.07236v3
# Controlling Magnonic Spin Current through Magnetic Anisotropy and Gilbert Damping

###### Abstract

**Keywords:** Longitudinal spin Seebeck effect, Inverse spin Hall effect, Magnon propagation length, Gilbert damping, Magnetic anisotropy

The magnon propagation length, \(\langle\xi\rangle\), of a ferro-/ferrimagnet (FM) is one of the key factors that controls the generation and propagation of thermally-driven spin current in FM/heavy metal (HM) bilayer based spincaloritronic devices. Theory predicts that for the FM layer, \(\langle\xi\rangle\) is inversely proportional to the Gilbert damping (\(\alpha\)) and the square root of the effective magnetic anisotropy constant (\(K_{eff}\)). However, direct experimental evidence of this relationship is lacking. To experimentally confirm this prediction, we employ a combination of longitudinal spin Seebeck effect (LSSE), transverse susceptibility, and ferromagnetic resonance experiments to investigate the temperature evolution of \(\langle\xi\rangle\) and establish its correlation with the effective magnetic anisotropy field, \(H_{K}^{eff}\) (\(\propto K_{eff}\)), and \(\alpha\) in Tm\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\) (TmIG)/Pt bilayers. We observe concurrent drops in the LSSE voltage and \(\langle\xi\rangle\) below 200K in TmIG/Pt bilayers regardless of TmIG film thickness and substrate choice and attribute it to the noticeable increases in \(H_{K}^{eff}\) and \(\alpha\) that occur within the same temperature range. From the TmIG thickness dependence of the LSSE voltage, we determined the temperature dependence of \(\langle\xi\rangle\) and highlighted its correlation with the temperature-dependent \(H_{K}^{eff}\) and \(\alpha\) in TmIG/Pt bilayers, which will be beneficial for the development of rare-earth iron garnet-based efficient spincaloritronic nanodevices.
## 1 Introduction

Bilayers comprised of insulating rare-earth iron garnet (REIG) and heavy metal (HM) form the most appealing platform to generate, transmit, and detect pure spin currents in the field of spin-based electronics[1, 2, 3]. The interplay of damping and magnon propagation length (\(\langle\xi\rangle\)) of the REIG layer and spin-orbit coupling (SOC) of the HM layer leads to a range of emergent spintronic phenomena in this fascinating class of heterostructures, including the spin Hall effect[4], spin-orbit torque[5, 6], spin-pumping effect (SPE)[7], and the longitudinal spin Seebeck effect (LSSE)[8, 9, 10]. The discovery of the SSE[11] instigated a new generation of spintronic nanodevices facilitating electrical energy harvesting from renewable thermal energy wherein a magnonic spin current is thermally generated and electrically detected by applying a temperature gradient across a magnetic insulator (MI)/HM bilayer[12]. Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\) (YIG) has been a widely explored MI for generating and transmitting pure spin currents due to its ultra-low damping (\(\sim\)10\({}^{-4}\)-10\({}^{-5}\)) and large \(\langle\xi\rangle\) (\(\sim\)100-200nm)[8, 10, 13]. This has led to a drastic increase in research over the last few decades, aimed at enhancing the spin current injection efficiency across the MI/HM interface by reducing the conductivity mismatch between the MI and HM layers by introducing atomically thin semiconducting interlayers[14, 15, 16, 17, 18, 19, 20] and enhancing the interfacial spin-mixing conductance[21, 22, 23]. Furthermore, our group has explored the roles of bulk and surface magnetic anisotropies in LSSE in different REIG-based MI/HM bilayers[9, 10], whereas a recent study demonstrates the influence of damping on SPE and LSSE in a compensated ferrimagnetic insulator[24]. All these studies not only highlight the important roles of magnetic anisotropy and Gilbert damping in the LSSE, but also pose a fundamentally important question about the functional relationship between \(\langle\xi\rangle\), the effective magnetic anisotropy constant (\(K_{eff}\)), and the Gilbert damping parameter (\(\alpha\)), which has been largely overlooked so far. Unlike magnetostatic spin waves with millimeter-range propagation lengths, \(\langle\xi\rangle\) for thermally generated magnons is significantly smaller, a few hundreds of nanometers[25].
In the framework of an atomistic spin model based on linear spin-wave theory, it was theoretically shown[13, 26] that thermally generated magnons have a broad frequency (\(f\)) distribution with \(f_{minimum}=2K_{eff}/[h(1+\alpha^{2})]\) and \(f_{maximum}=4K_{eff}/[h(1+\alpha^{2})]\), where \(h\) is the Planck constant. While the high-\(f\) magnons experience stronger damping, low-\(f\) magnons possess a very low group velocity, and hence, the majority of the thermally generated magnons become damped on shorter length-scales[13, 26]. Therefore, only a narrow \(f\)-distribution of thermally-generated magnons propagates over long distances and contributes towards the LSSE signal. Within this hypothesis,[13, 26] it is predicted that \(\langle\xi\rangle\) is inversely proportional to both \(\alpha\) and \(\sqrt{K_{eff}}\). An experimental confirmation of this correlation in MI/HM bilayers is of the utmost importance in the development of efficient spincaloritronic nanodevices by tuning these fundamental parameters. To experimentally validate this hypothesis and demonstrate the possibility of controlling thermally-driven spin current by magnetic anisotropy and Gilbert damping, we performed LSSE, radio frequency (RF) transverse susceptibility (TS), and broadband ferromagnetic resonance (FMR) measurements on Tm\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\)(TmIG)/Pt bilayers grown on different substrates. TmIG was chosen because of its negative magnetostriction coefficient \(\lambda_{111}\), which contributes to the perpendicular magnetic anisotropy (PMA) for TmIG films grown epitaxially in tension on various (111)-oriented garnet substrates[27, 28, 3]. Furthermore, TmIG has a much higher damping parameter (\(\approx 10^{-2}\))[3] compared to YIG, permitting a probe of the temperature evolutions of \(\alpha\) and magnetic anisotropy, as well as their relative contributions to \(\langle\xi\rangle\) and hence the LSSE. ## 2 Results and Discussion Single-crystalline TmIG films with different thicknesses were grown on (111)-oriented Gd\({}_{3}\)Sc\({}_{2}\)Ga\({}_{3}\)O\({}_{12}\)(GSGG) and Gd\({}_{3}\)Ga\({}_{5}\)O\({}_{12}\)(GGG) substrates by pulsed laser deposition technique (**see Experimental Section/Methods**). The high crystalline quality of the TmIG films was confirmed by X-ray diffraction (XRD). **Figure 1**(a) shows the \(\theta-2\theta\) X-ray diffractograms of the GSGG/TmIG(\(t\)) films with different TmIG film thickness \(t\) (\(t\) = 236, 150, 89, 73, 46 and 28 nm) and **Fig. 1**(b) exhibits the same for the GGG/TmIG(\(t\)) films with \(t\) = 142 and 44 nm. The substrate choice and the TmIG film thickness clearly have a robust impact on the film structure. Evidently, for the TmIG films grown on GSGG, the Bragg (444) peaks associated with the TmIG films are visible at lower angles than the bulk (444) reflection for films with \(t\) \(\leq\) 73 nm indicating tensile out-of-plane strain with in-plane easy axis in these films. For the \(t\) = 46 nm TmIG film, the Bragg (444) peak appears at a slightly higher angle than the bulk (444) reflection. However, the Bragg (444) peak associated with the \(t\) = 28 nm film occurs at a much higher angle than the bulk reflection, suggesting out-of-plane compressive strain in this film, which is required to achieve perpendicular magnetic anisotropy (PMA). For clarity, the bulk (444) reflection is indicated by a dashed line. 
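As an aside on the strain analysis above, the conversion from a measured (444) Bragg angle to an out-of-plane lattice parameter, and hence to the sign of the out-of-plane strain, is a one-line application of Bragg's law. The sketch below is ours and purely illustrative: Cu K\(\alpha\) radiation and the film peak position used in it are assumed placeholder inputs, not values reported in this work; only the bulk TmIG lattice parameter (12.32 Å, quoted later in the text) is taken from the paper.

```python
import math

WAVELENGTH = 1.5406      # Cu K-alpha wavelength in Angstrom (assumed radiation)
A_BULK_TMIG = 12.32      # bulk TmIG lattice parameter in Angstrom (quoted later in the text)

def oop_lattice_parameter(two_theta_deg, hkl=(4, 4, 4), wavelength=WAVELENGTH):
    """Bragg's law lambda = 2 d sin(theta), with d = a / sqrt(h^2 + k^2 + l^2) for a cubic cell."""
    h, k, l = hkl
    theta = math.radians(two_theta_deg / 2.0)
    d_hkl = wavelength / (2.0 * math.sin(theta))
    return d_hkl * math.sqrt(h * h + k * k + l * l)

two_theta_film = 51.6    # hypothetical (444) film peak position in degrees, illustration only
a_oop = oop_lattice_parameter(two_theta_film)
strain = (a_oop - A_BULK_TMIG) / A_BULK_TMIG
print(f"a_oop = {a_oop:.3f} A, out-of-plane strain = {strain:+.2%}")
# A film peak at a higher angle than the bulk (444) reflection yields a_oop < a_bulk,
# i.e. out-of-plane compression, as discussed for the 28 nm film above.
```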
In case of the GGG/TmIG(\(t\)) films, while the Bragg (444) peak associated with the \(t\) =142 nm TmIG film nearly merges with the bulk reflection, that for the \(t\) = 44 nm TmIG film occurs at a slightly higher angle than the bulk reflection, as observed in case of the GGG/TmIG(46 nm) film. Note that all the films show a smooth surface morphology with a low root-mean-square roughness below 0.5 nm, as visible in atomic force microscopy (AFM) images for the GSGG/TmIG(28nm) and GGG/TmIG(44nm) films shown in **Figs. 1**(c) and (d), respectively. Furthermore, we have recorded the reciprocal space maps in the vicinity of the (642) reflection for the GSGG/TmIG(30 nm) and GSGG/TmIG(205 nm) films as demonstrated in **Figs. **Wiley-VCH** **1**(e) and (f), respectively. For the thinner film (30nm), the TmIG film peak matches the IP lattice constant of the GSGG substrate, and hence, it is shifted towards higher \(q_{Z}\) in the OOP direction. But for the thicker film (205nm), the TmIG film peak is shifted towards the bulk position as the film is more relaxed. To gain further insight into the structure of the thin films, a cross-section of an \(\approx 220\) nm thick TmIG film on GSGG substrate, covered with a 5 nm Pt layer, was prepared, and analyzed by scanning transmission electron microscopy (STEM). **Fig. 2**(a) shows a low magnification STEM image of the whole layer stack. An annular detector with a rather small collector angle (24-48 mrad) was used to highlight strain (Bragg) contrast over mass (Z) contrast [29]. Although the Pt layer, the TmIG film and the GSGG substrate are clearly distinguishable in the STEM image, more detailed and distinctive contrast features appear within the TmIG layer. As shown by the XRD analysis, the TmIG film shows a strong lattice mismatch at the substrate interface and subsequently relaxes with increasing thickness, which causes a strain contrast in the STEM image. Additionally, the incoming ions during Pt sputtering (as well as during the cross-section preparation) can damage the film and cause an STEM contrast as well. However, an atomically resolved STEM image at the TmIG film-Pt interface (**Fig. 2**(b)) reveals a largely undamaged high quality single crystalline TmIG film, next to the polycrystalline Pt layer. From the Z contrast of our atomically resolved STEM image, the bright atoms are identified as the Tm and Fe atom columns. Upon further analysis, an STEM image of an area within the TmIG film (and close to the Pt interface) shows the presence of double twin boundaries[30], highlighted by colored lines in **Fig. 2**(c). **Fig. 3**(a) demonstrates the schematic illustration of our LSSE measurement configuration. Simultaneous application of a vertical (\(+z\)-axis) \(T\)-gradient (\(\overline{\mathbf{\nabla}\mathbf{T}}\)) and an in-plane (\(x\)-axis) DC magnetic field (\(\overline{\mathbf{\mu_{0}}\mathbf{H}}\)) across the TmIG film causes diffusion of thermally-excited magnons and develops a spatial gradient of magnon accumulation along the direction of \(\overline{\mathbf{\nabla}\mathbf{T}}\).[31] The accumulated magnons close to the TmIG/Pt interface transfer spin angular momenta to the electrons of the adjacent Pt layer[31]. The injected spin current density is, \(\overline{\mathbf{J_{S}}}\propto-S_{LSSE}\overline{\mathbf{\nabla}\mathbf{T}}\), where \(S_{LSSE}\) is the LSSE coefficient[31, 32]. 
The spin current injected into the Pt layer along the \(z\)-direction is converted into a charge current, \(\overline{\mathbf{J_{C}}}=\left(\frac{2e}{h}\right)\theta_{SH}^{Pt}\big{(} \overline{\mathbf{J_{S}}}\times\overline{\mathbf{\sigma_{S}}}\big{)}\) along the \(y\)-direction via the inverse spin Hall effect (ISHE), where \(e\), \(\hbar\), \(\theta_{SH}^{Pt}\), and \(\overline{\mathbf{\sigma_{S}}}\) are the electronic charge, the reduced Planck's constant, the spin Hall angle of Pt, and the spin-polarization vector, respectively. The corresponding LSSE voltage is[31, 33, 34] \[V_{LSSE}=R_{y}L_{y}D_{Pt}\left(\frac{2e}{h}\right)\theta_{SH}^{Pt}\big{|}J_{S }\big{|}\tanh\Big{(}\frac{t_{Pt}}{2D_{Pt}}\Big{)}, \tag{1}\] where, \(R_{y},L_{y},D_{Pt},\text{and}\,t_{Pt}\) represent the electrical resistance between the contact-leads, the distance between the contact-leads, the spin diffusion length of Pt, and the Pt layer thickness, respectively. **Fig. 3(b)** shows the magnetic field (\(H\)) dependent ISHE voltage, \(V_{ISHE}(H)\) for Gd\({}_{3}\)Sc\({}_{2}\)Ga\({}_{3}\)O\({}_{12}\)(GSGG)/TmIG(236nm)/Pt(5nm) for different values of the temperature difference between the hot (\(T_{hot}\)) and cold (\(T_{cold}\)) blocks, \(\Delta T=(T_{hot}-T_{cold})\), at a fixed average sample temperature \(T=\frac{T_{hot}+T_{cold}}{2}=295\)K. For all \(\Delta T\), \(V_{ISHE}(H)\) exhibits a nearly square-shaped hysteresis-loop. The inset of **Fig. 3(b)** plots the \(\Delta T\)-dependence of the background-corrected LSSE voltage, \(V_{LSSE}(\Delta T)=\left[\frac{V_{ISHE}(+\mu_{0}H_{sat}.\Delta T)-V_{ISHE}(- \mu_{0}H_{sat}.\Delta T)}{2}\right]\), where \(\mu_{0}H_{sat}\) is the saturation field. Clearly, \(V_{LSSE}\) increases linearly with \(\Delta T\) as expected from **Eqn. 1**.[9]**Fig. 3(c)** shows the \(V_{ISHE}(H)\) hysteresis-loops for GSGG/TmIG(236nm)/Pt(5nm) measured at selected temperatures for \(+\)10K. Clearly, \(|V_{ISHE}(\mu_{0}H_{sat})|\) significantly decreases, and the hysteresis-loop broadens at low-\(T\), especially below 200K. For a clearer insight, we display the two-dimensional \(H\)-\(T\) phase-diagram of \(V_{ISHE}\) for the \(+\mu_{0}H_{sat}\rightarrow-\mu_{0}H_{sat}\) sweep for GSGG/TmIG(236nm)/Pt(5nm) in Fig. 3(e). Evidently, \(V_{ISHE}\) drops below the \(T\)-window: 180-200K. To correlate the thermo-spin transport with magnetism, in Fig. 3(d), we show the \(H\)-dependence of magnetization, \(M(H)\) at selected temperatures for GSGG/TmIG(236nm)/Pt(5nm) measured while scanning an in-plane (IP) magnetic field. The coercivity (\(H_{\mathcal{C}}\)) increases at low-\(T\) consistent with an increase in the magnetic anisotropy. Furthermore, it is apparent from the \(H\)-\(T\) phase diagram of \(M\) (see Fig. 3(f)) that the saturation magnetization, \(M_{S}\), decreases at low-\(T\), especially below 200 K. This observation is also in agreement with the \(T\)-dependent magnetic-force-microscopy (MFM) results shown in **Supplementary Figure 1**, which clearly reveals that the root mean square (RMS) value of the phase shift, \(\Delta\phi_{RMS}\) decreases significantly between 300 and 150 K indicating changes in the magnetic domain structure at low-\(T\). The decrease in \(M_{S}\) at low-\(T\) is well-known in TmIG[35, 36] and is a result of the increasing moment of the Tm\({}^{3+}\) ion at low-\(T\), which competes with the net moment of the Fe\({}^{3+}\) ions (_i.e._, the dodecahedral Tm\({}^{3+}\) moment opposes the net moment of the tetrahedral and octahedral Fe\({}^{3+}\) moments). 
Based on the molecular-field-coefficient theory developed by Dionne[37], we have performed molecular-field simulations[38, 39] to determine \(M_{S}(T)\) for TmIG (see **Supplementary Figure 2(n)**), which is consistent with our experimental observation of the decrease in \(M_{S}\) at low-\(T\). The magnetometry and LSSE measurements were repeated on the GSGG/TmIG(\(t\))/Pt(5nm) sample series with different TmIG film thicknesses (28nm \(\leq t\leq\) 236nm), where films with 46nm \(\leq t\leq\) 236nm possess IP easy-axes while the 28nm film has an OOP easy-axis, which was confirmed via IP-magnetometry and OOP \(p\)-MOKE measurements (see **Supplementary Figure 2(e)**). The total magnetic anisotropy of a (111)-oriented TmIG film has contributions from shape anisotropy (\(K_{shape}\)), cubic magneto-crystalline anisotropy (\(K_{mc}\)), and magnetoelastic anisotropy (\(K_{me}\)) [27, 28, 29], _i.e._, \(K_{eff}=K_{shape}+K_{mc}+K_{me}=-\frac{1}{2}\mu_{0}M_{S}^{2}-\frac{K_{1}}{12}-\frac{9}{4}\lambda_{111}c_{44}\left(\frac{\pi}{2}-\beta\right)\), where \(K_{1}\) is the magnetocrystalline anisotropy coefficient, \(\lambda_{111}\) is the magnetostriction along the [111] direction, \(c_{44}\) is the shear modulus and \(\beta\) is the corner angle of the rhombohedrally-distorted unit cell. For a negative magnetostriction (\(\lambda_{111}=-5.2\times 10^{-6}\) for bulk TmIG [27]), the tensile IP-strain, which results from the difference in lattice parameters (\(a_{GSGG}=12.57\ \text{\AA}\) and \(a_{TmIG}=12.32\ \text{\AA}\)), promotes PMA (\(K_{eff}>0\)) [28, 29, 40]. PMA is expected for fully-strained films (28nm), but strain-relaxation in thicker films reduces the magnetoelastic contribution, and the easy-axis reorients to IP [28]. **Figs. 4**(a)-(c) depict the \(V_{ISHE}(H)\) loop on the left \(y\)-scale and the corresponding \(M(H)\) loop on the right \(y\)-scale at 295K for \(t=28\), 46 and 150nm, respectively. The \(V_{ISHE}(H)\) hysteresis-loops for all the thicknesses mimic the corresponding \(M(H)\) loops. In **Figs. 4**(d) and (e), we demonstrate the \(T\)-dependence of the background-corrected LSSE voltage, \(V_{LSSE}(T)=\frac{V_{ISHE}(T,+\mu_{0}H_{sat})-V_{ISHE}(T,-\mu_{0}H_{sat})}{2}\), for \(\Delta T\)=+10K on the left \(y\)-scale and the corresponding \(M_{S}(T)\) on the right \(y\)-scale for GSGG/TmIG(28nm)/Pt(5nm) and GSGG/TmIG(150nm)/Pt(5nm), respectively. Interestingly, \(V_{LSSE}(T)\) and \(M_{S}(T)\) for both films drop considerably below the \(T\)-window of 180-200K. We observed a similar trend in \(V_{LSSE}(T)\) and \(M_{S}(T)\) for all GSGG/TmIG(\(t\))/Pt(5nm) films with other thicknesses (see **Supplementary Figures 2 and 3**). The drop in \(V_{LSSE}(T)\) and \(M_{S}(T)\) below the \(T\)-window of 180-200K is evidently independent of film-thickness. We also performed spin Hall-anomalous Hall effect measurements on the 28nm film and found that the \(T\)-dependence of the anomalous Hall resistance, \(R_{xy}^{AHE}\), is similar to that of \(V_{LSSE}(T)\) and \(M_{S}(T)\) (see Fig. 4(d)). To ascertain the origin of the decrease in \(V_{LSSE}\) below 180-200K in the TmIG films, it is essential to determine the \(T\)-evolution of the average magnon propagation length, \(\langle\xi\rangle\), which signifies the critical length-scale for thermally-generated magnons to develop a spatial gradient of magnon accumulation inside a magnetic film[25, 26, 41].
Assuming negligible \(T\)-drops in the Pt layer, and at the TmIG/GSGG interface, the total LSSE voltage across Pt/TmIG/GSGG can be expressed as[42], \[\frac{V_{LSSE}(t_{TmIG})}{\Delta T}=\frac{V_{LSSE}^{int}(t_{TmIG})+V_{LSSE}^{Bulk}(t_{TmIG})}{\Delta T}=\left[S_{int}\left\{\frac{(\kappa_{GSGG}\kappa_{TmIG})R_{int}}{(\kappa_{TmIG}t_{GSGG}+\kappa_{GSGG}t_{TmIG})}\right\}L_{y}+A\left\{\frac{\cosh\left(\frac{t_{TmIG}}{\langle\xi\rangle}\right)-1}{\sinh\left(\frac{t_{TmIG}}{\langle\xi\rangle}\right)}\right\}\left\{\frac{\kappa_{GSGG}}{(\kappa_{TmIG}t_{GSGG}+\kappa_{GSGG}t_{TmIG})}\right\}L_{y}\right] \tag{2}\] Here, the first (second) term represents the interfacial (bulk) contribution to the total LSSE voltage, and \(S_{int}\) denotes the interfacial LSSE coefficient for the Pt/TmIG interface[42]. Furthermore, \(t_{TmIG}(t_{GSGG})\) is the thickness of the TmIG film (GSGG substrate), \(\kappa_{Pt}\), \(\kappa_{TmIG}\) and \(\kappa_{GSGG}\) are the thermal conductivities of Pt, TmIG and GSGG, respectively, and \(R_{int}\) is the interfacial thermal-resistance at the Pt/TmIG interface. It has recently been shown[43] that \(M_{S}\) also needs to be considered to evaluate \(\langle\xi\rangle\) from the LSSE voltage, by normalizing the LSSE voltage by \(M_{S}\). In Fig. 4(f), we show the thickness-dependence of the background-corrected modified LSSE voltage, \(\frac{V_{LSSE}(t_{TmIG})}{\Delta T\cdot M_{S}}\), at selected temperatures fitted to Eqn. 2. From the fits, we obtained \(\langle\xi\rangle\)=65 \(\pm\) 5nm for the TmIG film at 295K, which is smaller than that of YIG films (90-140nm)[25], but higher than GdIG (45\(\pm\)8nm)[9]. The correlation between the magnetic and thermo-spin transport properties in TmIG/Pt bilayers can be seen in the \(T\)-evolutions of different physical parameters \((M_{S},V_{LSSE},\left\langle\xi\right\rangle,H_{K}^{eff}\) and \(\alpha)\) for GSGG/TmIG(236nm)/Pt(5nm) (Fig. 5). In addition to the decrease in \(M_{S}\) and increase in \(H_{C}\) below the \(T\)-window of 180-200K, \(V_{LSSE}\) for the 236nm film also shows a remarkable drop (see Fig. 5(a)-(b)) below that \(T\)-window, similar to the other thicknesses. To rule out possible effects of strain on \(V_{LSSE}(T)\), we performed LSSE measurements on TmIG films grown on different substrates (see **Supplementary Figures 4 and 5**). \(M_{S}(T)\) and \(V_{LSSE}(T)\) for the Gd\({}_{3}\)Ga\({}_{5}\)O\({}_{12}\)(GGG)/TmIG(44nm)/Pt(5nm) and (Gd\({}_{2.6}\)Ca\({}_{0.4}\))(Ga\({}_{4.1}\)Mg\({}_{0.25}\)Zr\({}_{0.65}\))O\({}_{12}\)(sGGG)/TmIG(40nm)/Pt(5nm) films exhibit the same trend as GSGG/TmIG(46nm)/Pt(5nm) (see **Supplementary Figure 6**). More specifically, both \(V_{LSSE}(T)\) and \(M_{S}(T)\) drop below 180-200K for all the TmIG films independent of substrate choice. The right \(y\)-scale of Fig. 5(b) demonstrates the \(T\)-dependence of \(\left\langle\xi\right\rangle\) obtained from the fit of \(\frac{V_{LSSE}(t_{TmIG})}{\Delta T\cdot M_{S}}\) for GSGG/TmIG(\(t\))/Pt(5nm). Interestingly, \(\left\langle\xi\right\rangle\) shows a remarkable decrease below 200K, unlike YIG/Pt bilayers[41] for which \(\left\langle\xi\right\rangle\propto T^{-1}\).
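The thickness-dependence fit used above to extract \(\langle\xi\rangle\) can be sketched in a few lines. In the snippet below the interfacial and bulk prefactors of Eqn. 2 are lumped into two free amplitudes, and the thermal conductivities, substrate thickness, and data points are assumed placeholder values rather than the ones measured in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed placeholder constants (not the values used in the paper).
kappa_TmIG, kappa_GSGG = 7.0, 8.0        # W m^-1 K^-1
t_GSGG = 0.5e-3                          # substrate thickness (m)
L_y = 3e-3                               # distance between the contact leads (m)

def eqn2(t_TmIG, c_int, c_bulk, xi):
    """Normalized LSSE signal vs TmIG thickness (Eqn. 2): an interfacial term plus a
    bulk term whose (cosh-1)/sinh factor encodes the magnon-accumulation profile."""
    denom = kappa_TmIG * t_GSGG + kappa_GSGG * t_TmIG
    interfacial = c_int * kappa_GSGG * kappa_TmIG / denom * L_y
    bulk = c_bulk * (np.cosh(t_TmIG / xi) - 1.0) / np.sinh(t_TmIG / xi) * kappa_GSGG / denom * L_y
    return interfacial + bulk

# Placeholder "data": V_LSSE/(Delta_T * M_S) at the six film thicknesses studied.
t = np.array([28, 46, 73, 89, 150, 236]) * 1e-9
rng = np.random.default_rng(1)
y = eqn2(t, 1e-4, 5e-3, 65e-9) * (1.0 + 0.02 * rng.standard_normal(t.size))

popt, _ = curve_fit(eqn2, t, y, p0=(1e-4, 1e-3, 50e-9))
print(f"fitted <xi> = {popt[2] * 1e9:.0f} nm")
```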
To interpret the decrease in \(\left\langle\xi\right\rangle\) at low-\(T\), we recall that \(\left\langle\xi\right\rangle\) of a magnetic material with lattice constant \(a_{0}\) (considering a simple cubic structure) is related to the Gilbert damping parameter (\(\alpha\)), the effective anisotropy constant (\(K_{eff}\)), and the strength of the Heisenberg exchange interaction between nearest neighbors (\(J_{ex}\)) through the relation[13, 26] \(\left\langle\xi\right\rangle=\frac{a_{0}}{2\alpha}\cdot\sqrt{\frac{J_{ex}}{2K_{eff}}}\). As discussed before, \(K_{eff}=K_{me}-\frac{1}{2}\mu_{0}M_{S}^{2}-\frac{K_{1}}{12}\). Therefore, we can express \(\left\langle\xi\right\rangle\) as, \[\left\langle\xi\right\rangle=\frac{a_{0}}{2\alpha}\cdot\sqrt{\frac{J_{ex}}{2\left(K_{me}-\frac{K_{1}}{12}-\frac{1}{2}\mu_{0}M_{S}^{2}\right)}} \tag{3}\] Eqn. 3 indicates that (i) \(\left\langle\xi\right\rangle\propto\left(\frac{1}{\alpha}\right)\), and (ii) a decrease in \(M_{S}\) also suppresses \(\left\langle\xi\right\rangle\). By rewriting Eqn. 3 as \(\left\langle\xi\right\rangle=\frac{a_{0}}{2\alpha}\cdot\sqrt{\frac{J_{ex}}{\mu_{0}M_{S}H_{K}^{eff}}}\), \(\left\langle\xi\right\rangle\) is inversely proportional to the square-root of the effective anisotropy field \(\left(H_{K}^{eff}\right)\). This implies that the \(T\)-evolution of \(\left\langle\xi\right\rangle\) is related to that of \(\alpha\) and \(H_{K}^{eff}\). RF TS measurements were performed to determine the \(T\)-evolution of \(H_{K}^{eff}\) in the TmIG films. The \(H\)-field dependence (\(H_{DC}\)) of the TS, \(\chi_{T}(H_{DC})\), is known to exhibit peaks/cusps at the effective anisotropy fields, \(\pm H_{K}^{eff}\).[44, 45] As shown in Fig. 5(e), the RF magnetic field, \(H_{RF}\), is parallel to the film-surface and \(H_{DC}\) points perpendicular to it. Bipolar field-scans of \(\chi_{T}(H_{DC})\) for the GSGG/TmIG(236nm)/Pt film at 300 and 120K are shown in Fig. 5(f), which clearly indicate an increase in \(H_{K}^{eff}\) at low-\(T\). As shown on the left \(y\)-scale of Fig. 5(c), \(H_{K}^{eff}(T)\) shows a prominent increase below 180-200K, which coincides with the sudden drop in \(V_{LSSE}\). Similar behavior was also observed for other film thicknesses (see **Supplementary Figure 7**). Furthermore, the \(T\)-dependence of the coercivity of the \(V_{LSSE}(H)\) loops \(\left(H_{C}^{LSSE}(T)\right)\) for GSGG/TmIG(236nm)/Pt(5nm) (right \(y\)-scale of Fig. 5(c)) also shows an increase below 200K, in agreement with \(H_{K}^{eff}(T)\). Since the magnon energy-gap, \(\hbar\omega_{M}\propto 2K_{eff}\)[13, 26], an increase in \(H_{K}^{eff}\) (and hence, \(K_{eff}\)) below 200K enhances \(\hbar\omega_{M}\), giving rise to only high-frequency magnon propagation with shorter \(\left\langle\xi\right\rangle\) and thereby reducing \(V_{LSSE}\) below 200K in the TmIG films[9, 10]. This also explains the noticeable decrease in \(\left\langle\xi\right\rangle\) below 200K, as the maximum value of the frequency-dependent propagation length is \(\langle\xi\rangle_{max}\propto\frac{1}{\sqrt{\hbar\omega_{M}^{min}}}\), where \(\hbar\omega_{M}^{min}\) is the minimum value of \(\hbar\omega_{M}\), and \(\hbar\omega_{M}^{min}\propto 2K_{eff}\) [13]. A significant increase in magnetocrystalline anisotropy at low-\(T\) has been reported in various REIGs, which was interpreted in the framework of the single-ion anisotropy model considering the collective influence of the crystal and exchange fields of the REIG on the energy levels of the individual magnetic ions [46].
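To make the scaling content of Eqn. 3 above concrete, the short sketch below evaluates \(\left\langle\xi\right\rangle=\frac{a_{0}}{2\alpha}\sqrt{J_{ex}/(2K_{eff})}\) for assumed inputs and shows how doubling either the damping or the effective anisotropy suppresses \(\left\langle\xi\right\rangle\); every number is an assumed placeholder, chosen only so that the output lands near the fitted room-temperature value of about 65 nm.

```python
def xi_avg(a0, alpha, ratio):
    """<xi> = (a0 / (2*alpha)) * sqrt(J_ex / (2*K_eff)) (Eqn. 3).
    `ratio` stands for J_ex/(2*K_eff) with both quantities in the same per-spin
    units, so it enters here as a single dimensionless number."""
    return a0 / (2.0 * alpha) * ratio ** 0.5

a0 = 1.24e-9          # assumed effective lattice constant (m)
alpha_295K = 0.0103   # room-temperature damping from the FMR fits
ratio_295K = 1.2      # assumed J_ex/(2*K_eff), tuned to give <xi> near 65 nm

print(f"<xi>(295 K)       ~ {xi_avg(a0, alpha_295K, ratio_295K) * 1e9:.0f} nm")
print(f"<xi>, 2x damping  ~ {xi_avg(a0, 2 * alpha_295K, ratio_295K) * 1e9:.0f} nm")
print(f"<xi>, 2x K_eff    ~ {xi_avg(a0, alpha_295K, ratio_295K / 2) * 1e9:.0f} nm")
```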
A similar increase in \(H_{K}^{eff}\) and a corresponding decrease in \(V_{LSSE}\) below 175K were observed in YIG/Pt [10], which was attributed to the single-ion anisotropy of Fe\({}^{2+}\) ions [47]. To gain knowledge on the oxidation state of Fe in our TmIG films, electron energy loss spectroscopy (EELS) was conducted during the cross-sectional TEM study described earlier. **Fig. 2**(d) shows two EELS spectra, recorded at the Fe L3 and L2 edges, and at positions close to the film-substrate and the film-Pt interface. The spectra are fitted following [48, 49], shown by colored lines, using a Gauss profile and a combination of a power-law background and a double-step function (arctangent) with a fixed step-ratio. **Fig. 2**(e) shows the extracted thickness-dependent Fe L3 peak position alongside the corresponding FWHM. While an exact quantification of the Fe oxidation state distribution using the EELS Fe L3 peak position or L3/L2 white-line ratio is challenging, the presence of different oxidation states is qualitatively visible by a shift in the peak position. Thereby, Fe\({}^{2+}\) ions would contribute at slightly lower energies compared to Fe\({}^{3+}\) ions [48, 49, 50, 51]. However, in the measured spectra, a constant peak position at about 710.1 eV and a constant FWHM of about 2.3 eV across the whole film thickness is observed. Our observation strongly hints at the presence of only one Fe oxidation state, namely the Fe\({}^{3+}\) ion, and hence, we can rule out the contribution of the single-ion anisotropy of Fe\({}^{2+}\) ions towards the increased magnetic anisotropy. This is also in agreement with the recent studies on Tb\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\)(TbIG) thin films [38, 39] which reveal the presence of Tb\({}_{\textbf{Fe}}\) antisite defects along with minority populations of Fe\({}^{2+}\) ions and Fe vacancies in TbIG thin films. We have investigated the \(T\)-evolution of \(\alpha\) and its influence on \(\langle\xi\rangle\) through broadband IP FMR measurements. Fig. 5(g) shows the field-derivative of the microwave (MW) power absorption spectra \(\left(\frac{dP}{dH}\right)\) as a function of the IP-DC magnetic field for _f_=12GHz at 295K for GSGG/TmIG(236nm) and GSGG/TmIG(236nm)/Pt(5nm) films, fitted with a linear combination of symmetric and antisymmetric Lorentzian function derivatives[53]. We found that \(H_{res}\approx 464\) and 450 mT at _f_=12GHz for the GSGG/TmIG(236nm) and GSGG/TmIG(236nm)/Pt(5nm) films, respectively. Such a downshift in \(H_{res}\) in the presence of Pt was also observed in YIG/Pt bilayers and attributed to the magnetic-proximity-effect (MPE) induced interfacial static exchange-coupling between the YIG film and the proximitized Pt layer.[54] Furthermore, the FMR linewidths, \(\Delta H\), extracted from the fits to the \(\frac{dP}{dH}\) lineshapes are 36.5 \(\pm\) 0.5 and 57.1 \(\pm\) 0.4 mT for _f_=12GHz at 295K for the GSGG/TmIG(236nm) and GSGG/TmIG(236nm)/Pt(5nm) films, respectively.
This increase in \(\Delta H\) in GSGG/TmIG(236nm)/Pt(5nm) can be associated with the loss of spin angular momentum in the TmIG film due to relaxation of the spin accumulation in the Pt layer as a result of spin-pumping and can be expressed as,[55] \(\Delta H_{TmIG/Pt}-\Delta H_{TmIG}=G_{R}^{\uparrow\downarrow}\left(\frac{g_{eff}\mu_{B}}{2\gamma M_{S}t_{TmIG}}\right)f\), where \(\frac{\gamma}{2\pi}=\frac{g_{eff}\mu_{B}}{h}\) is the gyromagnetic ratio, \(\mu_{B}\) is the Bohr magneton, \(g_{eff}\) is the Landé _g_-factor and \(G_{R}^{\uparrow\downarrow}\) is the real component of the spin-mixing conductance. Furthermore, we fitted the \(\Delta H\)-\(f\) curves at different temperatures using the expression,[56] \(\Delta H=\Delta H_{0}+\frac{4\pi\alpha}{|\gamma|\mu_{0}}f\), where \(\Delta H_{0}\) is the inhomogeneous broadening linewidth. From the fits (see Fig. 5(h)), we obtained \(\alpha_{TmIG}=\) 0.0103\(\pm\)0.002 and \(\alpha_{TmIG/Pt}=\) 0.0151\(\pm\)0.003 at 295K for the GSGG/TmIG(236nm) and GSGG/TmIG(236nm)/Pt(5nm) films, respectively, which are close to the previously reported values of \(\alpha\) (\(\approx\)0.0132-0.0146) for TmIG films[3, 57]. Clearly, \(\alpha_{TmIG/Pt}>\alpha_{TmIG}\), which is caused by additional damping due to the spin-pumping[55]. Furthermore, \(\Delta H_{0}\approx 28\pm 0.3\) and \(44\pm 0.5\) mT for the GSGG/TmIG(236nm) and GSGG/TmIG(236nm)/Pt(5nm) films, respectively, at 295K, indicating an increase in inhomogeneous broadening by Pt insertion. Most importantly, \(\alpha_{TmIG}\) increases gradually with decreasing temperature but shows a remarkable increase below \(\approx\)200K (Fig. 5(d)). A similar increase in \(\alpha\) at low-\(T\) has also been observed in GSGG/TmIG(236nm)/Pt(5nm), GSGG/TmIG(46nm)/Pt(5nm) and GGG/TmIG(44nm)/Pt(5nm) films (see **Supplementary Figures 8 and 9**), indicating that this behavior is independent of TmIG film thickness and substrate choice. Sizeable increases in \(\alpha\) and \(\Delta H\) at low-\(T\) were also observed in YIG and different REIGs[58, 59, 60, 61] including TmIG[62], which were primarily attributed to Fe\({}^{2+}\) and/or RE\({}^{3+}\) impurity relaxation mechanisms. However, our EELS study confirms the absence of Fe\({}^{2+}\) ions, and therefore, we can rule out the possibility of Fe\({}^{2+}\) impurity relaxation in our TmIG films. In these circumstances, we would like to highlight that our STEM analysis shown in Fig. 2(c) indicates the presence of twin boundaries in our TmIG films. Furthermore, recent studies[38, 39] demonstrate the presence of Tb\({}_{\text{Fe}}\) antisite defects and Fe vacancies in TbIG thin films. It is known that grain boundaries, which are also categorized as two-dimensional defects in polycrystalline films, can give rise to enhanced magnetic damping[63]. In a similar way, both antisite defects and twin boundaries in a single-crystalline film could serve as pinning sites and result in increased magnon scattering and, hence, enhanced Gilbert damping[64, 53, 30, 65]. Considering these facts, we suggest that the large damping in our TmIG films at low temperatures is associated with the combined effects of slowly relaxing Tm\({}^{3+}\) ions[58], twin boundaries and Tm\({}_{\text{Fe}}\) antisite defects[39]. In this context, we also recall that doping Ni\({}_{80}\)Fe\({}_{20}\) with RE elements enhances the total damping, and the RE-induced damping was found to be maximum for Tb\({}^{3+}\), which was well explained by the slowly relaxing RE impurity model[66].
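For reference, the linear \(\Delta H\)-\(f\) analysis described above can be reproduced with a few lines of code. The synthetic linewidth points below are generated from the quoted room-temperature parameters of the bare film (they are placeholders, not the measured spectra), and the damping is then recovered from the fitted slope.

```python
import numpy as np

mu_B, hbar = 9.274e-24, 1.0546e-34
g_eff = 1.64                          # Lande g-factor quoted for TmIG in the SI
gamma = g_eff * mu_B / hbar           # |gamma| in rad s^-1 T^-1

# Synthetic linewidth-vs-frequency data (in tesla, i.e. mu0*Delta_H) built from the
# quoted room-temperature parameters of the bare film; placeholders, not raw spectra.
f = np.linspace(6e9, 20e9, 8)                          # Hz
dH = 28e-3 + (4 * np.pi * 0.0103 / gamma) * f          # Delta_H0 ~ 28 mT, alpha ~ 0.0103

# Linear fit Delta_H = Delta_H0 + slope*f, then alpha = slope*|gamma|/(4*pi),
# which is the text's expression with Delta_H written as a flux density.
slope, dH0 = np.polyfit(f, dH, 1)
alpha = slope * gamma / (4 * np.pi)
print(f"Delta_H0 = {dH0 * 1e3:.1f} mT, alpha = {alpha:.4f}")

# The Pt-capped film shows a larger damping; the enhancement
# Delta_alpha = alpha_TmIG/Pt - alpha_TmIG quantifies the spin pumping.
print(f"Delta_alpha ~ {0.0151 - 0.0103:.4f}")
```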
It is known that the contribution of slowly relaxing RE impurity ions towards damping is proportional to the orbital moment (\(L\)) of the RE\({}^{3+}\) ions[66, 67, 68]. Note that Tm\({}^{3+}\) has a higher value of \(L\) compared to Tb\({}^{3+}\) (\(L_{Tm}=5\) and \(L_{Tb}=3\)). Therefore, slowly relaxing Tm\({}^{3+}\) ions are also expected to have considerable contributions towards the large damping in our TmIG films at low temperatures. The increase in \(\Delta H_{0}\) at low-\(T\) for GSGG/TmIG(236nm) (see **Fig. 5**(d)), especially below 200K, also supports the occurrence of low-\(T\) impurity relaxation[61]. Thus, we demonstrate that the drop in \(\langle\xi\rangle\) below 180-200K is a result of the corresponding prominent increases in \(H_{K}^{eff}\) and \(\alpha\), which provides the first experimental confirmation of the theoretical prediction described by **Eqn. 3**. To gain a quantitative understanding of the \(T\)-evolution of the spin-pumping efficiency in the GSGG/TmIG(236nm)/Pt(5nm) bilayer, we estimated \(G_{R}^{\uparrow\downarrow}\) using the expression,[69] \(G_{R}^{\uparrow\downarrow}=\left(\frac{2e^{2}}{h}\right)\left(\frac{2\pi M_{S}d_{TmIG}}{g_{eff}\mu_{B}}\right)\left[\alpha_{TmIG/Pt}-\alpha_{TmIG}\right]\), where \(G_{0}=\left(\frac{2e^{2}}{h}\right)\) is the conductance quantum, and found that \(G_{R}^{\uparrow\downarrow}\approx 3.04\times 10^{15}\)\(\Omega^{-1}\)m\({}^{-2}\) at 295K, which is one order of magnitude higher than the previously reported value of \(G_{R}^{\uparrow\downarrow}\) (\(=5.7\times 10^{14}\)\(\Omega^{-1}\)m\({}^{-2}\)) in TmIG/Pt bilayers[69]. As shown in **Fig. 5**(d), \(G_{R}^{\uparrow\downarrow}\) for GSGG/TmIG(236nm)/Pt(5nm) slowly increases with decreasing temperature. In this context, we recall that at high-\(T\), \(G_{R}^{\uparrow\downarrow}\) follows the phenomenological expression,[70] \(G_{R}^{\uparrow\downarrow}\propto(T_{C}-T)\), where \(T_{C}\) is the Curie temperature, indicating that \(G_{R}^{\uparrow\downarrow}\propto M_{S}\). However, our experimental observation of \(G_{R}^{\uparrow\downarrow}(T)\) deviates from the aforementioned phenomenological expression at low-\(T\), especially below 200K. Interestingly, a recent study[38] indicates that the imaginary component of the spin-mixing conductance \(\left(G_{Img}^{\uparrow\downarrow}\right)\) is proportional to the Fe-sublattice magnetization rather than \(M_{S}\), which explains the low-\(T\) behavior of \(G_{R}^{\uparrow\downarrow}\) for our TmIG/Pt film. Based on the findings (**Fig. 5**), we can quantify the relative contributions of \(\alpha\) and \(H_{K}^{eff}\) to \(\left\langle\xi\right\rangle\) and, hence, \(V_{LSSE}\). We have found that the percentage changes in the absolute values of \(\left\langle\xi\right\rangle\), \(V_{LSSE}\) and \(H_{K}^{eff}\) between 200 and 120K are \(\left|\frac{\xi\left(200\text{ K}\right)-\xi\left(120\text{ K}\right)}{\xi\left(200\text{ K}\right)}\right|\times 100\%\approx 60\%\), \(\left|\frac{V_{LSSE}\left(200\text{ K}\right)-V_{LSSE}\left(120\text{ K}\right)}{V_{LSSE}\left(200\text{ K}\right)}\right|\times 100\%\approx 42\%\), and \(\left|\frac{H_{K}^{eff}\left(200\text{ K}\right)-H_{K}^{eff}\left(120\text{ K}\right)}{H_{K}^{eff}\left(200\text{ K}\right)}\right|\times 100\%\approx 16\%\), respectively, whereas that for \(\alpha_{TmIG}\) between 200 and 160K is \(\left|\frac{\alpha\left(200\text{ K}\right)-\alpha\left(160\text{ K}\right)}{\alpha\left(200\text{ K}\right)}\right|\times 100\%\approx 150\%\).
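For orientation, the spin-mixing-conductance expression quoted above can be evaluated directly. In the sketch below the damping values are the ones reported at 295K, while the saturation magnetization is an assumed round number, so the result should only be read as an order-of-magnitude check against the quoted \(3.04\times 10^{15}\,\Omega^{-1}\)m\({}^{-2}\).

```python
import numpy as np

e, h, mu_B = 1.602e-19, 6.626e-34, 9.274e-24
G0 = 2 * e**2 / h                      # conductance quantum (S)

g_eff   = 1.64                         # quoted Lande g-factor
M_s     = 9.0e4                        # A/m -- assumed round value for TmIG at 295 K
d_TmIG  = 236e-9                       # film thickness (m)
d_alpha = 0.0151 - 0.0103              # damping enhancement from the FMR fits

# G_R^(up-down) = G0 * (2*pi*M_s*d / (g_eff*mu_B)) * (alpha_TmIG/Pt - alpha_TmIG)
G_mix = G0 * (2 * np.pi * M_s * d_TmIG / (g_eff * mu_B)) * d_alpha
print(f"G_R^(up-down) ~ {G_mix:.2e} Ohm^-1 m^-2")   # same order as the quoted 3.04e15
```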
This comparison implies that, compared to \(H_{K}^{eff}\), \(\alpha\) has a dominating contribution towards \(V_{LSSE}\) and hence \(\left\langle\xi\right\rangle\), which is also in reasonable agreement with the expression \(\left\langle\xi\right\rangle=\frac{a_{0}}{2\alpha}\cdot\sqrt{\frac{J_{ex}}{\mu_{0}M_{S}H_{K}^{eff}}}\). In other words, the change in \(\alpha\) would have a much larger impact than the change in \(H_{K}^{eff}\) on \(\left\langle\xi\right\rangle\) and hence on the spincaloritronic efficiency. ## 3 Conclusions In summary, a remarkable drop in the LSSE voltage below 200K has been observed in TmIG/Pt bilayers regardless of the film thickness and substrate choice, which is indicative of an intrinsic origin. Significant increases in \(H_{K}^{eff}\) and \(\alpha\) below \(\sim\)200K are shown to decrease \(\left\langle\xi\right\rangle\) and, hence, \(V_{LSSE}\). Although both damping and magnetic anisotropy play crucial roles in manipulating the magnon propagation and spin angular momentum transfer across a FM/HM interface in spincaloritronic devices, our study suggests that the tuning of the damping is more effective than the magnetic anisotropy in controlling \(\langle\xi\rangle\) and, hence, the spincaloritronic efficiency. ## 4 Experimental Section/Methods _Thin film growth and structural/morphological characterization:_ Single-crystalline TmIG thin films were deposited by pulsed laser deposition (PLD), using two different PLD setups. The thin films were grown epitaxially on different (111)-oriented substrates, including GGG (Gd\({}_{3}\)Ga\({}_{5}\)O\({}_{12}\)), GSGG (Gd\({}_{3}\)Sc\({}_{2}\)Ga\({}_{3}\)O\({}_{12}\)), and sGGG ((Gd\({}_{2.6}\)Ca\({}_{0.4}\))(Ga\({}_{4.1}\)Mg\({}_{0.25}\)Zr\({}_{0.65}\))O\({}_{12}\)). Using the first PLD setup, films with varying thickness between 28 nm and 236 nm were grown on GGG and GSGG substrates. A KrF excimer laser with a wavelength of 248 nm, a fluence of 3-4 J/cm\({}^{2}\), and a repetition rate of 2 Hz was used. Before the first deposition, the TmIG target was preablated inside the PLD chamber with more than 10\({}^{4}\) pulses. All substrates were annealed for 8 h at 1250\({}^{\circ}\)C in an oxygen atmosphere prior to the film deposition to provide a high substrate surface quality. To achieve stoichiometric, single-crystalline thin films with a smooth surface of about 0.2-0.3 nm root-mean-square (RMS) roughness, the deposition conditions were carefully calibrated. For all films, the substrate was heated to 595\({}^{\circ}\)C during the film deposition, monitored by a thermocouple inside the substrate holder. The TmIG thin films were grown at a rate of 0.01 - 0.02 nm/s, in the presence of an oxygen background atmosphere of 0.05 mbar. After the deposition, the samples were cooled to room temperature at approximately 5 K/min, maintaining the oxygen atmosphere. A layer of 5 nm Pt was deposited at room temperature _ex-situ_ on the garnet films by DC magnetron sputtering using a shadow mask. The TmIG films were annealed at 400\({}^{\circ}\)C for 1 h inside the sputter chamber prior to the Pt deposition to avoid surface contamination. To complement these samples, TmIG films with thicknesses of 75 and 40 nm were grown on sGGG substrates using a second PLD setup. The laser wavelength was 248 nm at 10 Hz, the fluence 1.3 J/cm\({}^{2}\), and the substrate temperature was \(\sim\)750 \({}^{\circ}\)C with an oxygen pressure of 0.2 mbar. Samples were cooled at 20 K/min in 0.2 mbar oxygen.
The film surface morphology was investigated by atomic force microscopy (AFM), while the structural properties of the thin films were identified by x-ray diffraction (XRD) using monochromatic Cu K\(\alpha\) radiation. The film thickness was evaluated from the Laue oscillations (for the thinner films) and by spectroscopic ellipsometry. Further, cross-sectional high-resolution scanning transmission electron microscopy (HR-STEM) was conducted, using a JEOL NEOARM F200 operated at an electron energy of 200 keV. Electron energy loss spectra (EELS) were obtained using a GATAN Continuum S EELS spectrometer. The cross-sectional sample was prepared by mechanical dimpling and ion polishing. _Temperature dependent MFM measurements:_ Temperature dependent MFM measurements were performed on a Hitachi 5300E system. All measurements were done under high vacuum (P \(\leq\) 10\({}^{\text{-6}}\) Torr). MFM measurements utilized HQ: NSC18/Co-Cr/Al BS tips, which were magnetized out-of-plane with respect to the tip surface via a permanent magnet. Films were first magnetized to their saturation magnetization by being placed in a 1T static magnetic field, in-plane with the film surface. After that, AC demagnetization of the film was implemented before initiating the MFM scans. After scans were performed, a parabolic background was subtracted, which arises from the film not being completely flat on the sample stage. Then, line artifacts were subtracted before finally applying a small Gaussian averaging/sharpening filter over the whole image. Phase standard deviation was determined by fitting a Gaussian to the image phase distribution and extracting the standard deviation from the fit parameters. _Magnetometry:_ The magnetic properties of the samples were measured using a superconducting quantum interference device - vibrating sample magnetometer (SQUID-VSM) at temperatures between 10 K and 350 K. A linear background stemming from the paramagnetic substrate was thereby subtracted. Due to a trapped remanent field inside the superconducting coils, the measured magnetic field was corrected using a paramagnetic reference sample. Additionally, a polar magneto-optical Kerr effect (MOKE) setup was used to record out-of-plane hysteresis loops at room temperature. The molecular field coefficient (MFC) model was a Python-coded version of Dionne's model[72] using molecular field coefficients in Ref. [37]. _Longitudinal spin Seebeck effect measurements:_ The longitudinal spin Seebeck effect (LSSE) was measured over a broad temperature window of 120 K \(\leq T\leq 295\) K using a custom-built setup assembled on a universal PPMS sample puck. During the LSSE measurements, the films were sandwiched between two copper blocks, as shown in Fig. 3(a). The same sample geometry was used for all films and the distance between the contact leads on the Pt surface was fixed at \(L_{y}=3\) mm for all films. A single layer of thin Kapton tape was thermally affixed to the naked surfaces of the top (cold) and bottom (hot) copper blocks. To ensure a good thermal link between the film surface and the Kapton tape attached to the top and bottom blocks, cryogenic Apiezon N-grease was used. Additionally, the Kapton tape electrically insulated the cold (hot) blocks from the top (bottom) surface of the films. The temperatures of both these blocks were controlled individually by two separate temperature controllers (Scientific Instruments Model no.
9700) to achieve an ultra-stable temperature difference (\(\Delta T\)) with \([\Delta T]_{Error}<\pm\) 2 mK. The top block (cold) was thermally anchored to the base of the PPMS puck using two molybdenum screws whereas a 4-mm-thick Teflon block was sandwiched between the puck base and the hot block (bottom) to maintain a temperature difference of \(\sim\) 10 K between the hot block and the PPMS base. A resistive chip-heater (PT-100 RTD sensor) and a calibrated Si-diode thermometer (DT-621-HR silicon -diode sensor) were attached to each of these blocks to efficiently control and sense the temperature. The heaters and thermometers attached to the copper blocks were connected to the temperature controllers in such a manner that a temperature gradient develops along the \(+z\)-direction that generates a temperature difference, \(\Delta T\), between the top (cold) and bottom (hot) copper blocks. For a given temperature gradient, the in-plane voltage generated along the \(y\)-direction across the Pt layer due to the ISHE (\(V_{ISHE}\)) was recorded by a Keithley 2182a nanovoltmeter while sweeping an external in-plane DC magnetic field from positive to negative values along the \(x\)-direction. The Ohmic contacts for the voltage measurements were made by electrically anchoring a pair of ultra-thin gold wires (25 \(\upmu\)m diameter) to the Pt layer by high quality conducting silver paint (SPI Supplies). The spin Hall anomalous Hall effect measurements were performed using the DC resistivity option of the PPMS. _Transverse susceptibility measurements:_ The temperature evolution of effective magnetic anisotropy in the GSGG/TmIG/Pt film was measured by employing a radio frequency (RF) transverse susceptibility (TS) technique using a home-built self-resonant tunnel diode oscillator (TDO) circuit with a resonance frequency of 12 MHz and sensitivity of \(\pm\)10 Hz. A physical property measurement system (PPMS) was employed as a platform to scan the external DC magnetic field (\(H_{DC}\)) and temperature. Before the TS measurements, the film was mounted inside an inductor coil (L), which is a component of an LC tank circuit. The entire tank circuit was placed outside the PPMS except the coil, L, which was positioned at the base of the PPMS sample chamber using a multi-purpose PPMS probe inserted in such a manner that the axial RF magnetic field (\(H_{RF}\)) of amplitude \(\sim\) 10 Oe produced inside the coil was always parallel to the film surface, but perpendicular to \(H_{DC}\). For the TmIG with IP easy axis, \(H_{DC}\)\(\perp\) film surface, whereas for the films with OOP easy axis, \(H_{DC}\parallel\) film surface. When the sample is subject to both \(H_{RF}\) and \(H_{DC}\), the dynamic susceptibility of the sample changes which in turn changes the inductance of the coil and, hence, the resonance frequency of the LC tank circuit. The relative change in the resonance frequency is proportional to the relative change in the transverse susceptibility of the sample. Therefore, TS as a function of H\({}_{\rm DC}\) was acquired by monitoring the shift in the resonance frequency of the TDO-oscillator circuit by employing an Agilent frequency counter. _Broadband ferromagnetic resonance measurements:_ Broadband ferromagnetic resonance (FMR) measurements (\(f\) = 6-20 GHz) were performed using a broadband FMR spectrometer (NanOscTM Phase-FMR, Quantum Design Inc., USA) integrated to a Dynacool PPMS. The TmIG/Pt film was firmly affixed on the surface of a coplanar waveguide (CPW) using Kapton tape. 
The spectrometer employs lock-in detection and records the field derivative of the power absorbed (\(dP/dH\)) by the film when it is excited by a microwave (MW) electromagnetic field generated by injecting a MW current to the CPW. The superconducting magnet of the PPMS provides the external dc magnetic field along the direction of the MW current flowing through the CPW and, hence, transverse to the MW magnetic field. **Supporting Information** Supporting Information is available from the Wiley Online Library or from the author. **Acknowledgements** Financial support by the US Department of Energy, Office of Basic Energy Sciences, Division of Materials Science and Engineering under Award No. DE-FG02-07ER46438 at USF and by the German Research Foundation (DFG) within project No. 318592081AL618/37-1 at U Augsburg is gratefully acknowledged. CR acknowledges support of NSF awards DMR 1808190 and 1954606. Figure 1: **Structural and Morphological characterization.****(a)**\(\theta-2\theta\) X-ray diffractogram (XRD) of the GSGG/TmIG(\(t\)) films with different film thickness \(t\) (\(t\) = 236, 150, 89, 73, 46 and 28 nm), **(b)**\(\theta-2\theta\) X-ray diffractograms (XRD) of the GGG/TmIG(\(t\)) films with different film thickness \(t\) (\(t\) = 142 and 44 nm), film morphology as visible in atomic force microscopy (AFM) for **(c)** GSGG/TmIG(28nm) and **(d)** GGG/TmIG(44nm). The reciprocal space maps recorded in the vicinity of the (642) reflection for **(e)** GSGG/TmIG(30 nm) and **(f)** GSGG/TmIG(205 nm) films. For the thinner film (30nm), the TmIG film peak matches the IP lattice constant of the GSGG substrate, and hence, it is shifted towards the OOP direction. But for the thicker film (205nm), the TmIG film peak is shifted towards the bulk position as the film is more relaxed. Figure 2: **Cross-sectional scanning transmission electron microscopy (STEM) analysis of the GSGG/TmIG(220 nm)/Pt(5nm) film.****(a)** TEM image of the layer stack recorded by an annular detector with a small collector angle (24-48 mrad), highlighting strain (Bragg) contrast over mass (\(Z\)) contrast, (b) shows an atomic-resolution STEM image of the TmIG-Pt interface with (110) viewing direction, while (c) shows an area within the TmIG film. The colored lines highlight a double twin boundary. (d) Furthermore, an electron energy loss spectroscopy (EELS) scan at the Fe L3 and L2 edges was conducted. The measured energy loss spectra are displayed as data points, exemplary for positions close to the garnet-substrate and garnet-Pt interfaces, with the fitted functions presented as colored lines. (e) The thickness dependent Fe L3 peak position and FWHM is extracted. Figure 3: **Magnetism and longitudinal spin Seebeck effect (LSSE) in GSGG/TmIG(236nm)/Pt(5nm) film.****(a)** Schematic illustration of the experimental configuration for LSSE measurements. A temperature gradient (\(\overline{\mathbf{\nabla}T}\)) is applied along the \(+z\) axis and an in-plane (IP) dc magnetic field (\(\overline{\mathbf{\mu_{0}}H}\)) is applied along the \(+x\) axis. The inverse spin Hall effect (ISHE) induced voltage (\(V_{ISHE}\)) is measured along the \(y\)-axis. (**b)**\(V_{ISHE}(H)\) loops for different values of the temperature difference \(\Delta T\) at a fixed average sample temperature \(T=295\) K. The inset shows a linear \(\Delta T\)-dependence of the background-corrected LSSE voltage.
**(c)**\(V_{ISHE}(H)\) hysteresis loops measured at selected temperatures in the range \(120\) K \(\leq T\leq 295\) K for \(\Delta T=+10\) K. **(d)** The IP \(M(H)\) hysteresis loops at selected temperatures. **(e)** The two-dimensional \(H\)-\(T\) phase diagrams of the \(V_{ISHE}(H)\) isotherms for the sweep \(+\mu_{0}H_{sat}\rightarrow\ -\mu_{0}H_{sat}\). **(f)** The two-dimensional \(H\)-\(T\) phase diagrams of the \(M(H)\) isotherms for the sweep \(+\mu_{0}H_{sat}\rightarrow\ -\mu_{0}H_{sat}\). Figure 4: Thickness (_t_) dependence of longitudinal spin Seebeck effect in GSGG/TmIG(_t_)/Pt(5nm) films.** The \(V_{ISHE}(H)\) hysteresis loops on the left \(y\)-scale and the IP \(M(H)\) loops on the right \(y\)-scale at \(T=295\) K for GSGG/TmIG(_t_)/Pt films for \(t=\)**(a)** 28 nm, **(b)** 46 and **(c)** 150 nm. The temperature dependence of the background-corrected LSSE voltage, \(V_{LSSE}(T)\) on the left \(y\)-scale and temperature dependence of saturation magnetization, \(M_{S}(T)\) on the right \(y\)-scale for the GSGG/TmIG(_t_)/Pt(5 nm) films for \(t=\)**(d)** 28 nm and **(e)** 150 nm, for \(\Delta T=+10\) K. Temperature dependence of the normalized anomalous Hall resistance, \(\left|R_{xy}^{AHE}(T)\right|\) for the GSGG/TmIG (28 nm)/Pt film is also shown on the right \(y\)-scale (green) of **(e)**. **(f)** The thickness dependence of the normalized background corrected LSSE voltage, \(V_{LSSE}(t)/\Delta T.M_{S}\) at three selected temperatures \(T=295\), 200 and 140 K fitted with Eqn. (2). Figure 5: **Temperature evolution of magnon propagation length of GSGG/TmIG/Pt films and its correlation with magnetic anisotropy and Gilbert damping.** Temperature dependences of **(a)**\(M_{S}\) on the left \(y\)-scale and \(H_{C}\) on the right \(y\)-scale for the GSGG/TmIG(236 nm)/Pt(5 nm) film, **(b)**\(V_{LSSE}\) for the GSGG/TmIG(236 nm)/Pt(5 nm) film on the left y-scale and the magnon propagation length, \(\langle\xi\rangle\) obtained from the fit on the right y-scale, **(c)** the effective anisotropy field (\(H_{K}^{eff}\)) for the GSGG/TmIG(236nm)/Pt(5nm) film obtained from the transverse susceptibility (TS) measurements and **(d)** the Gilbert damping parameter ( \(\alpha_{TmIG}\)) for the bare GSGG/TmIG(236 nm) film on the left-\(y\) scale and real component of the spin mixing conductance (\(G_{R}^{\uparrow 1}\)) (green) and the inhomogeneous broadening (\(\Delta H_{0}\)) (blue) for the GSGG/TmIG(236 nm)/Pt film on the right \(y\)-scale. **(e)** Schematic illustration of the TS set up. **(f)** Comparison of the bipolar field scans ## References * (1) A. V Chumak, V. I. Vasyuchka, A. A. Serga, B. Hillebrands, _Nat. Phys._**2015**, _11_, 453. * (2) L. J. Cornelissen, J. Liu, R. A. Duine, J. Ben Youssef, B. J. Van Wees, _Nat. Phys._**2015**, _11_, 1022. * (3) E. R. Rosenberg, K. Litzius, J. M. Shaw, G. A. Riley, G. S. D. Beach, H. T. Nembach, C. A. Ross, _Adv. Electron. Mater._**2021**, \(7\), 2100452. * (4) H. Nakayama, M. Althammer, Y.-T. Chen, K. Uchida, Y. Kajiwara, D. Kikuchi, T. Ohtani, S. Geprags, M. Opel, S. Takahashi, others, _Phys. Rev. Lett._**2013**, _110_, 206601. * (5) Q. Shao, C. Tang, G. Yu, A. Navabi, H. Wu, C. He, J. Li, P. Upadhyaya, P. Zhang, S. A. Razavi, others, _Nat. Commun._**2018**, \(9\), 1. * (6) M. Evelt, L. Soumah, A. B. Rinkevich, S. O. Demokritov, A. Anane, V. Cros, J. Ben Youssef, G. De Loubens, O. Klein, P. Bortolotti, others, _Phys. Rev. Appl._**2018**, _10_, 41002. * (7) B. Heinrich, C. Burrowes, E. Montoya, B. Kardasz, E. Girt, Y.-Y. Song, Y. Sun, M. 
Wu, _Phys. Rev. Lett._**2011**, _107_, 66604. * (8) K. Uchida, H. Adachi, T. Ota, H. Nakayama, S. Maekawa, E. Saitoh, _Appl. Phys. Lett._**2010**, _97_, 172505. * (9) A. Chanda, C. Holzmann, N. Schulz, J. Seyd, M. Albrecht, M.-H. Phan, H. Srikanth, _Adv. Funct. Mater._**2022**, _32_, 2109170. * (10) V. Kalappattil, R. Das, M.-H. Phan, H. Srikanth, _Sci. Rep._**2017**, \(7\), 13316. * (11) K. Uchida, S. Takahashi, K. Harii, J. Ieda, W. Koshibae, K. Ando, S. Maekawa, E. Saitoh, _Nature_**2008**, _455_, 778. * (12) G. E. W. Bauer, E. Saitoh, B. J. Van Wees, _Nat. Mater._**2012**, _11_, 391. * (13) U. Ritzmann, D. Hinzke, A. Kehlberger, E.-J. Guo, M. Klaui, U. Nowak, _Phys. Rev. B_ **2015**, _92_, 174411. * (14) S. Lee, W. Lee, T. Kikkawa, C. T. Le, M. Kang, G. Kim, A. D. Nguyen, Y. S. Kim, N. Park, E. Saitoh, _Adv. Funct. Mater._**2020**, _30_, 2003192. * (15) V. Kalappattil, R. Geng, R. Das, M. Pham, H. Luong, T. Nguyen, A. Popescu, L. M. Woods, M. Klaui, H. Srikanth, _Mater. Horizons_**2020**, \(7\), 1413. * (16) W.-Y. Lee, M.-S. Kang, G.-S. Kim, N.-W. Park, K.-Y. Choi, C. T. Le, M. U. Rashid, E. Saitoh, Y. S. Kim, S.-K. Lee, _ACS Appl. Mater. \(\backslash\)& Interfaces_**2021**, _13_, 15783. * (17) M.-H. Phan, M. T. Trinh, T. Eggers, V. Kalappattil, K. Uchida, L. M. Woods, M. Terrones, _Appl. Phys. Lett._**2021**, _119_, 250501. * (18) W.-Y. Lee, M.-S. Kang, G.-S. Kim, N.-W. Park, J. W. Choi, E. Saitoh, S.-K. Lee, _J. Phys. Chem. C_**2021**, _125_, 13059. * (19) W.-Y. Lee, N.-W. Park, G.-S. Kim, M.-S. Kang, J. W. Choi, K.-Y. Choi, H. W. Jang, E. Saitoh, S.-K. Lee, _Nano Lett._**2020**, _21_, 189. * (20) W.-Y. Lee, N.-W. Park, M.-S. Kang, G.-S. Kim, Y.-G. Yoon, S. Lee, K.-Y. Choi, K. S. Kim, J.-H. Kim, M.-J. Seong, others, _ACS Appl. Mater. \(\backslash\)& Interfaces_**2021**, _13_, 45097. * (21) D. Kikuchi, M. Ishida, K. Uchida, Z. Qiu, T. Murakami, E. Saitoh, _Appl. Phys. Lett._**2015**, _106_, 82401. * (22) H. Yuasa, K. Tamae, N. Onizuka, _AIP Adv._**2017**, \(7\), 55928. * (23) S. J. Yun, D. L. Duong, D. M. Ha, K. Singh, T. L. Phan, W. Choi, Y.-M. Kim, Y. H. Lee, _Adv. Sci._**2020**, \(7\), 1903076. * (24) Y. Li, D. Zheng, B. Fang, C. Liu, C. Zhang, A. Chen, Y. Ma, K. Shen, H. Liu, A. Manchon, others, _Adv. Mater._**2022**, _34_, 2200019. * (25) A. Kehlberger, U. Ritzmann, D. Hinzke, E.-J. Guo, J. Cramer, G. Jakob, M. C. Onbasli, D. H. Kim, C. A. Ross, M. B. Jungfleisch, _Phys. Rev. Lett._**2015**, _115_, 96602. * [26] U. Ritzmann, D. Hinzke, U. Nowak, _Phys. Rev. B_**2014**, _89_, 24409. * [27] A. Quindeau, C. O. Avci, W. Liu, C. Sun, M. Mann, A. S. Tang, M. C. Onbasli, D. Bono, P. M. Voyles, Y. Xu, _Adv. Electron. Mater._**2017**, \(3\), 1600376. * [28] O. Ciubotariu, A. Semisalova, K. Lenz, M. Albrecht, _Sci. Rep._**2019**, \(9\), 17474. * [29] C. Holzmann, A. Ullrich, O.-T. Ciubotariu, M. Albrecht, _ACS Appl. Nano Mater._**2022**, \(5\), 1023. * [30] Y. Jia, Y. Wu, S. Zhao, S. Zuo, K. P. Skokov, O. Gutfleisch, C. Jiang, H. Xu, _Phys. Rev. Mater._**2020**, \(4\), 94402. * [31] S. M. Rezende, R. L. Rodriguez-Suarez, R. O. Cunha, A. R. Rodrigues, F. L. A. Machado, G. A. F. Guerra, J. C. L. Ortiz, A. Azevedo, _Phys. Rev. B_**2014**, _89_, 14416. * [32] J. Xiao, G. E. W. Bauer, K. Uchida, E. Saitoh, S. Maekawa, _Phys. Rev. B_**2010**, _81_, 214418. * [33] M. Arana, M. Gamino, E. F. Silva, V. Barthem, D. Givord, A. Azevedo, S. M. Rezende, _Phys. Rev. B_**2018**, _98_, 144431. * [34] A. Azevedo, L. H. Vilela-Leao, R. L. Rodriguez-Suarez, A. F. L. Santos, S. M. Rezende, _Phys. Rev. 
B_**2011**, _83_, 144402. * [35] S. Geller, J. P. Remeika, R. C. Sherwood, H. J. Williams, G. P. Espinosa, _Phys. Rev._**1965**, _137_, A1034. * [36] S. Ding, Z. Liang, C. Yun, R. Wu, M. Xue, Z. Lin, A. Ross, S. Becker, W. Yang, X. Ma, others, _Phys. Rev. B_**2021**, _104_, 224410. * [37] G. F. Dionne, _Magnetic Oxides_, Springer, **2009**. * [38] B. Khurana, J. J. Bauer, P. Zhang, T. Safi, C.-T. Chou, J. T. Hou, T. Fakhrul, Y. Fan, L. * [39] E. Rosenberg, J. Bauer, E. Cho, A. Kumar, J. Pelliciari, C. A. Occhialini, S. Ning, A. Kaczmarek, R. Rosenberg, J. W. Freeland, others, _Small_**2023**, 2300824. * [40] E. R. Rosenberg, L. Beran, C. O. Avci, C. Zeledon, B. Song, C. Gonzalez-Fuentes, J. Mendil, P. Gambardella, M. Veis, C. Garcia, others, _Phys. Rev. Mater._**2018**, 2, 94405. * [41] E.-J. Guo, J. Cramer, A. Kehlberger, C. A. Ferguson, D. A. MacLaren, G. Jakob, M. Klaui, _Phys. Rev. X_**2016**, \(6\), 31012. * [42] P. Jimenez-Cavero, I. Lucas, D. Bugallo, C. Lopez-Bueno, R. Ramos, P. A. Algarabel, M. R. Ibarra, F. Rivadulla, L. Morellon, _Appl. Phys. Lett._**2021**, _118_, 92404. * [43] G. Venkat, C. D. W. Cox, D. Voneshen, A. J. Caruana, A. Piovano, M. D. Cropper, K. Morrison, _Phys. Rev. Mater._**2020**, \(4\), 75402. * [44] A. Aharoni, E. H. Frei, S. Shtrikman, D. Treves, _Bull. Res. Counc. Isr._**1957**, \(6\), 215. * [45] A. Chanda, J. E. Shoup, N. Schulz, D. A. Arena, H. Srikanth, _Phys. Rev. B_**2021**, _104_, 94404. * [46] R. F. Pearson, _J. Appl. Phys._**1962**, _33_, 1236. * [47] Y.-Q. Zeng, X.-Y., Lu, X.-J. & Wang, _ACTA Phys. Sin._**1989**, _38_, 11. * [48] L. Cave, T. Al, D. Loomer, S. Cogswell, L. Weaver, _Micron_**2006**, _37_, 301. * [49] Z. L. Wang, J. S. Yin, Y. D. Jiang, _Micron_**2000**, _31_, 571. * [50] H. Tan, J. Verbeeck, A. Abakumov, G. Van Tendeloo, _Ultramicroscopy_**2012**, _116_, 24. * [51] P. A. Van Aken, B. Liebscher, V. J. Styrsa, _Phys. Chem. Miner._**1998**, _25_, 323. * [52] M. Wolloch, D. Suess, P. Mohn, _Phys. Rev. B_**2017**, _96_, 104408. * [53] P. Durrenfeld, F. Gerhard, J. Chico, R. K. Dumas, M. Ranjbar, A. Bergman, L. Bergqvist, A. Delin, C. Gould, L. W. Molenkamp, others, _Phys. Rev. B_**2015**, _92_, 214424. * [54] Y. Sun, H. Chang, M. Kabatek, Y.-Y. Song, Z. Wang, M. Jantz, W. Schneider, M. Wu, E. Montoya, B. Kardasz, others, _Phys. Rev. Lett._**2013**, _111_, 106601. * [55] O. Mosendz, J. E. Pearson, F. Y. Fradin, G. E. W. Bauer, S. D. Bader, A. Hoffmann, _Phys. Rev. Lett._**2010**, _104_, 46601. * [56] H. T. Nembach, T. J. Silva, J. M. Shaw, M. L. Schneider, M. J. Carey, S. Maat, J. R. Childress, _Phys. Rev. B_**2011**, _84_, 54424. * [57] C. N. Wu, C. C. Tseng, Y. T. Fanchiang, C. K. Cheng, K. Y. Lin, S. L. Yeh, S. R. Yang, C. T. Wu, T. Liu, M. Wu, others, _Sci. Rep._**2018**, \(8\), 11087. * [58] C. L. Jermain, S. V Aradhya, N. D. Reynolds, R. A. Buhrman, J. T. Brangham, M. R. Page, P. C. Hammel, F. Y. Yang, D. C. Ralph, _Phys. Rev. B_**2017**, _95_, 174411. * [59] P. E. Seiden, _Phys. Rev._**1964**, _133_, A728. * [60] E. G. Spencer, R. C. LeCraw, A. M. Clogston, _Phys. Rev. Lett._**1959**, \(3\), 32. * [61] S. Guo, B. McCullian, P. C. Hammel, F. Yang, _J. Magn. Magn. Mater._**2022**, _562_, 169795. * [62] G. L. S. Vilela, J. E. Abrao, E. Santos, Y. Yao, J. B. S. Mendes, R. L. Rodr\(\backslash\)iguez-Suarez, S. M. Rezende, W. Han, A. Azevedo, J. S. Moodera, _Appl. Phys. Lett._**2020**, _117_, 122412. * [63] T. Nozue, T. Kikkawa, T. Watamura, T. Niizeki, R. Ramos, E. Saitoh, H. Murakami, _Appl. Phys. Lett._**2018**, _113_. * [64] C. 
Reichhardt, C. J. O. Reichhardt, M. V Milosevic, _Rev. Mod. Phys._**2022**, _94_, 35005. * [65] X. Ma, L. Ma, P. He, H. B. Zhao, S. M. Zhou, G. Lupke, _Phys. Rev. B_**2015**, _91_, 14438. * [66] G. Woltersdorf, M. Kiessling, G. Meyer, J.-U. Thiele, C. H. Back, _Phys. Rev. Lett._**2009**, _102_, 257602. * [67] A. Rebei, J. Hohlfeld, _Phys. Rev. Lett._**2006**, _97_, 117601. * [68] S. G. Reidy, L. Cheng, W. E. Bailey, _Appl. Phys. Lett._**2003**, _82_, 1254. * [69] S. Crossley, A. Quindeau, A. G. Swartz, E. R. Rosenberg, L. Beran, C. O. Avci, Y. Hikita, C. A. Ross, H. Y. Hwang, _Appl. Phys. Lett._**2019**, _115_, 172402. * [70] K. Uchida, Z. Qiu, T. Kikkawa, R. Iguchi, E. Saitoh, _Appl. Phys. Lett._**2015**, _106_, 52405. * [71] M. B. Jungfleisch, V. Lauer, R. Neb, A. V Chumak, B. Hillebrands, _Appl. Phys. Lett._**2013**, _103_, 22411. * [72] G. F. Dionne, _J. Appl. Phys._**1970**, _41_, 4874. **Wiley-VCH** **Supplementary Information** **Controlling Magnonic Spin Current through Magnetic Anisotropy and Gilbert Damping** Amit Chanda\({}^{1}\), Christian Holzmann\({}^{2}\), Noah Schulz\({}^{1}\), Aladin Ullrich\({}^{2}\), Manfred Albrecht\({}^{2*}\), Miela J. Gross\({}^{3}\), Caroline A. Ross\({}^{3*}\), Dario. A. Arena\({}^{1}\), Manh-Huong Phan\({}^{1}\), and Hariharan Srikanth\({}^{1*}\) _\({}^{1}\)Department of Physics, University of South Florida, Tampa, Florida 33620, USA_ _\({}^{2}\)Institute of Physics, University of Augsburg, 86159 Augsburg, Germany_ _\({}^{3}\)Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA_ _*Corresponding authors: [email protected]; [email protected]; [email protected]_ **Supplementary Figure 1**. MFM images of GSGG/TmIG(236nm) film measured at **(a)**\(T=300\) K, **(b)**\(T=275\) K, **(c)**\(T=170\) K and **(d)**\(T=150\) K after doing AC demagnetization at room temperature. The MFM image at 300K (see **Fig. (a)**) shows a bright/dark contrast with irregular shaped features indicating domains. The image contrast is determined by the magnetic force-gradient \(\left(\frac{dF}{dz}\right)\) between the sample and the MFM tip (magnetized \(\perp\) to the film-surface). The bright/dark contrast of the MFM images decreases considerably at low temperatures (**Figs. (c)** and (d)) as the magnetization decreases. The RMS value of the phase shift,[1]\(\Delta\phi_{RMS}\approx\frac{Q}{\kappa}\left[\frac{dF}{dz}\right]\) (Q = quality factor and \(\kappa=\) spring constant of the tip; hence, \(\Delta\phi_{RMS}\propto\) average domain contrast and hence, \(\Delta\phi_{RMS}\propto\) magnetization).[1] The estimated values of \(\Delta\phi_{RMS}=1.31,1.20,1.07\) and \(0.97\) at T = 300, 275, 170 and 150 K, respectively. Clearly, \(\Delta\phi_{RMS}\) decreases significantly between 300 and 150 K, which is in agreement with our \(M_{S}(T)\) data. **Supplementary Figure 2. 
(a)-(e)** Left \(y\)-axis: the in-plane (IP) \(M(H)\) hysteresis loops at \(T=295\) K for GSGG/TmIG(\(t\))/Pt films with \(t\) (thickness) = 236, 150, 89, 46 and 28 nm and right \(y\)-axis: normalized polar magneto-optical Kerr effect (p-MOKE) signal as a function of an OOP magnetic field for the same films, (**f)-(j)** the temperature dependence of saturation magnetization, \(M_{S}(T)\) obtained from the IP \(M(H)\) loops for \(t\) = 236-28 nm, (**k)-(l)** IP M(H) loops for GGG/TmIG(\(t\))/Pt films with \(t\) = 142 and 44 nm at 295 K, (**m**) \(M_{S}(T)\) for the same films, (**n**) simulated \(M_{S}(T)\) for TmIG obtained from the Molecular field coefficient (MFC) model. **Supplementary Figure 3.**\(V_{ISHE}(H)\) hysteresis loops for the GSGG/TmIG(\(t\))/Pt films measured at selected temperatures in the range 120 K \(\leq T\leq\) 295 K for \(\Delta T=+10\) K for **(a)**\(t=236\) nm, **(b)**\(t=150\) nm, **(c)**\(t=89\) nm, **(d)**\(t=73\) nm, **(e)**\(t=46\) nm and **(f)**\(t=28\) nm. **(g)** The temperature dependence of the background-corrected LSSE voltage, \(V_{LSSE}(T)\) for the GSGG/TmIG(\(t\))/Pt(5 nm) films for \(t=28\) - 236 nm at \(\Delta T=+10\) K. **(h)** The magnetic field dependence of the anomalous Hall resistance, \(R_{xy}^{AHE}(H)\) at selected temperatures. **Supplementary Figure 4**. \(V_{ISHE}(H)\) hysteresis loops for the GGG/TmIG(\(t\))/Pt films measured at selected temperatures in the range 120 K \(\leq T\)\(\leq\) 295 K for \(\Delta T\) = +10 K for **(a)**\(t\) = 142 nm and **(b)**\(t\) = 44 nm, **(c)** and **(d)** the temperature dependence of the background corrected LSSE voltage, \(V_{LSSE}(T)\) for the same films for \(\Delta T\) = +10 K. **Supplementary Figure 5**. (**a**) \(V_{ISHE}(H)\) loops for the sGGG/TmIG(75nm)/Pt film for different values of \(\Delta T\) at 295 K. (**b**) Linear \(\Delta T\) -dependence of the background-corrected LSSE voltage, \(V_{LSSE}(\Delta T)\) for the same film. \(V_{ISHE}(H)\) hysteresis loops for the sGGG/TmIG(_t_)/Pt films measured at selected temperatures in the range 120 K \(\leq T\leq\) 295 K for \(\Delta T=+10\) K for (**c**) \(t=75\) nm and (**d**) \(t=40\) nm. The temperature dependence of the background corrected LSSE voltage, \(V_{LSSE}(T)\) for the (**e**) \(t=75\) nm and (**f**) \(t=40\) nm films for \(\Delta T=+10\) K. **Supplementary Figure 6**. Comparison of \(V_{ISHE}(H)\) hysteresis loops measured at 295 and 120 K for the films with nearly same thickness but different substrates: **(a)** GSGG/TmIG(46nm)/Pt, **(b)** GGG/TmIG(44nm)/Pt and **(c)** sGGG/TmIG(40nm)/Pt, **(d)-(f)**\(V_{LSSE}(T)\) for the same films on the left y-axis and corresponding normalized \(M_{S}(T)\)/ \(M_{S}(T=295\) K) on the right \(y\)-axis. **Supplementary Figure 7.** (**a**) Comparison of the bipolar field scans (\(+H_{DC}^{max}\)\(\rightarrow\)\(-H_{DC}^{max}\)\(\rightarrow\)\(+H_{DC}^{max}\)) of transverse susceptibility, \(\chi_{T}(H_{DC})\) at \(T=300\) and \(100\) K for the GSGG/TmIG(89nm)/Pt(5nm) film measured with configuration \(H_{DC}\perp\) film surface (IP easy axis). **(b**) Comparison of the bipolar field scans of \(\chi_{T}(H_{DC})\) at \(T=300\) and \(120\) K for the GSGG/TmIG(28nm)/Pt(5nm) film measured with the configuration \(H_{DC}\parallel\) film surface (OOP easy axis).** For both the films, \(\chi_{T}(H_{DC})\) exhibits a maximum at: \(H_{DC}=\pm H_{K}^{eff}\) as obtained from the Lorentzian fit. 
The temperature evolution of \(H_{K}^{eff}(T)\) on the left \(y\)-scale and \(V_{LSSE}(T)\) on the right \(y\)-scale for the **(c)** GSGG/TmIG(89nm)/Pt(5nm) and **(d)** GSGG/TmIG(28nm)/Pt(5nm) films. **Supplementary Figure 8: Ferromagnetic resonance in GSGG/TmIG(236nm) and GSGG/TmIG(236nm)/Pt(5nm) films.** The field derivative of microwave (MW) power absorption spectra (\(\frac{dP}{dH}\) line shapes) recorded at different frequencies between \(f=6\) - 20 GHz fitted with the linear combination of symmetric and anti-symmetric Lorentzian function derivatives for the GSGG/TmIG(236nm) film at **(a)**\(T=295\) K, and **(b)**\(T=200\) K. **(c)** Frequency dependence of \(\Delta H\) at different temperatures for the GSGG/TmIG(236nm) film with linear fit. The \(\frac{dP}{dH}\) spectra for **(d)** GSGG/TmIG(236nm) and **(e)** GSGG/TmIG(236nm)/Pt(5nm) films at a fixed frequency (\(f=12\) GHz) for different temperatures in the range \(160\) K \(\leq T\leq 295\) K. Temperature dependence of **(f)** effective Landé \(g\)-factor \(\left(g_{eff}\right)\), **(g)** Gilbert damping, \(\alpha\) and **(h)** inhomogeneous broadening, \(\Delta H_{0}\) for GSGG/TmIG(236nm) and GSGG/TmIG(236nm)/Pt(5nm) films. We found that \(g_{eff}=1.642\pm 0.002\) and \(1.624\pm 0.015\) at \(T\)=295 K for our GSGG/TmIG(236nm) and GSGG/TmIG(236nm)/Pt(5nm) films, which are considerably lower than the free electron value (\(g_{eff}=2.002\)), but close to those reported for perpendicularly magnetized TmIG/Pt bilayers (\(g_{eff}\approx 1.56\)-\(1.58\))\({}^{[2][3]}\). Such a low value of \(g_{eff}\) in TmIG was attributed to the large intrinsic spin-orbit coupling. **Supplementary Figure 9: Ferromagnetic resonance in TmIG films with similar thickness but grown on different substrates.** The field derivative of microwave (MW) power absorption spectra \(\left(\frac{dP}{dH}\right)\) recorded at \(T=295\) K at different frequencies between \(f=6\) - \(20\) GHz fitted with the linear combination of symmetric and anti-symmetric Lorentzian function derivatives for **(a)** GSGG/TmIG(46nm)/Pt, and **(b)** GGG/TmIG(44nm)/Pt films. **(c)** Frequency dependence of \(\Delta H\) at different temperatures for GGG/TmIG(44nm). The \(\frac{dP}{dH}\) spectra for **(d)** GSGG/TmIG(46nm)/Pt and **(e)** GGG/TmIG(44nm)/Pt films at a fixed frequency (\(f=12\) GHz) for different temperatures in the range \(160\) K \(\leq T\leq 295\) K. Temperature dependence of **(f)** effective Landé \(g\)-factor **(g)** Gilbert damping, \(\alpha\) and **(h)** inhomogeneous broadening, \(\Delta H_{0}\) for GSGG/TmIG(46nm)/Pt and GGG/TmIG(44nm)/Pt films.
2307.06975
Neuro-symbolic Empowered Denoising Diffusion Probabilistic Models for Real-time Anomaly Detection in Industry 4.0
Industry 4.0 involves the integration of digital technologies, such as IoT, Big Data, and AI, into manufacturing and industrial processes to increase efficiency and productivity. As these technologies become more interconnected and interdependent, Industry 4.0 systems become more complex, which brings the difficulty of identifying and stopping anomalies that may cause disturbances in the manufacturing process. This paper aims to propose a diffusion-based model for real-time anomaly prediction in Industry 4.0 processes. Using a neuro-symbolic approach, we integrate industrial ontologies in the model, thereby adding formal knowledge on smart manufacturing. Finally, we propose a simple yet effective way of distilling diffusion models through Random Fourier Features for deployment on an embedded system for direct integration into the manufacturing process. To the best of our knowledge, this approach has never been explored before.
Luigi Capogrosso, Alessio Mascolini, Federico Girella, Geri Skenderi, Sebastiano Gaiardelli, Nicola Dall'Ora, Francesco Ponzio, Enrico Fraccaroli, Santa Di Cataldo, Sara Vinco, Enrico Macii, Franco Fummi, Marco Cristani
2023-07-13T13:52:41Z
http://arxiv.org/abs/2307.06975v2
Neuro-symbolic Empowered Denoising Diffusion Probabilistic Models for Real-time Anomaly Detection in Industry 4.0 ###### Abstract Industry 4.0 involves the integration of digital technologies, such as IoT, Big Data, and AI, into manufacturing and industrial processes to increase efficiency and productivity. As these technologies become more interconnected and interdependent, Industry 4.0 systems become more complex, which brings the difficulty of identifying and stopping anomalies that may cause disturbances in the manufacturing process. This paper aims to propose a diffusion-based model for real-time anomaly prediction in Industry 4.0 processes. Using a neuro-symbolic approach, we integrate industrial ontologies in the model, thereby adding formal knowledge on smart manufacturing. Finally, we propose a simple yet effective way of distilling diffusion models through Random Fourier Features for deployment on an embedded system for direct integration into the manufacturing process. To the best of our knowledge, this approach has never been explored before. Industry 4.0, Anomaly Detection, Diffusion Models, Neuro-symbolic AI, Knowledge Distillation ## I Context and motivation The dawn of Industry 4.0 has ushered in a digital revolution in industrial processes, significantly increasing productivity, efficiency, and quality. This transformation is facilitated by the Industrial Internet of Things (IIoT) [1], which involves the interconnection of industrial devices, equipment, and systems through the Internet. The generation and collection of massive amounts of diverse data enabled by IIoT provide insights into various aspects of production, such as process optimization, quality control, and resource allocation. But, on the other hand, they bring new challenges in data analysis and management. ## II Our Proposal Fig. 1: The key topics of our proposal. _1)_ Starting from the open problem of anomaly detection in an Industry 4.0 scenario, _2)_ we propose to use a recent category of deep learning models (_i.e._, diffusion models) to address this problem, by enhancing them with neuro-symbolic learning, which is one of the trending topics of the moment in the field of AI. _3)_ Finally, we propose a distillation strategy to transfer the knowledge into an embedded system for real-time usage on the product line. We want to enhance the productivity and safety of the production line by implementing a system that can effectively detect irregular behaviors of the system, even when human attention is lacking or unavailable. Traditional methods for achieving this goal rely on purely symbolic approaches, as companies need assurance that the system will provide explainable predictions and will not demand extensive computing resources. However, due to the simplicity of these methods, they are often limited in their ability to recognize anomalies that rely on context or relationships between sensors. Another option is to rely on deep learning models, which can learn complex relationships in the sensed data. Unfortunately, they are often avoided due to their "black-box" nature and the amount of effort required to run even the smallest models in real-time. Furthermore, they might require more expensive hardware and higher power usage, without the guarantee to match or exceed the performance of symbolic approaches.
To address these concerns, this paper presents an innovative approach to the integration of a formally constrained diffusion model for anomaly detection with embedded systems for industrial applications, merging neuro-symbolic methods, formal constraints to enhance reliability and safety in industrial systems, and knowledge distillation. This innovative solution strives to improve monitoring capabilities without sacrificing the reliability and efficiency required in an Industry 4.0 production line setting. Figure 1 shows our proposed anomaly detection flow. What follows is a detailed explanation of the flow which, we hope, might pose the basis for further investigation and research in this area. We approach anomaly detection as a task of Out-Of-Distribution (OOD) classification. Anomaly detection is the task of identifying examples in a dataset that are different or unusual compared to most other examples. Similarly, OOD classification is the task of identifying examples that do not belong to a known data distribution. Both tasks involve identifying examples dissimilar to the majority of the other examples, and both require the ability to identify patterns in the data that are significantly different from the norm. Inspired by the work of [4], we propose a Denoising Diffusion Probabilistic Model (DDPM) [5] as the backbone for our anomaly detection process. We take advantage of DDPMs' ability to understand a given distribution latent structure to detect OOD samples in our data via differences in signal reconstruction. The final goal of the DDPM will be to label the training data in a fully unsupervised way. Our second contribution is exploiting ontologies to add formal and additional knowledge as properties that can be integrated within the deep learning model, to constrain the diffusion model to learn a data distribution that always respects the given logical axioms. We plan to do so by extending the methodology presented in [6], where authors present a formal model able to represent abstract capabilities, structure, and executable skills of a manufacturing system through so-called ontology design patterns based on industrial standards, _e.g._, DIN 8580 [7], VDI 3682 [8], and VDI 2860 [9]. By enriching our network with domain-specific knowledge, we ensure the model starts from a formal understanding of the data and can detect the anomalies that a purely symbolic system should be able to detect. Combined with the Deep Generative Model (DGM), this allows the network to learn more complex rules and relationships, and thus more sophisticated anomalies. Specifically, we plan to use a neural-symbolic AI [10] approach to support the learning phase of neural networks using the satisfaction of a first-order logic knowledge base as an objective, similarly to [11]. In our case, this is provided by the ontologies that formalize the smart manufacturing knowledge in an interoperable way. From the formal conceptualization given by the ontologies, we obtain the first-order logic knowledge base containing a set of axioms. At this point, we have some predicates, or functions appearing in these axioms that we want to learn, and some data available that we can use to learn the parameters of those symbols. The idea is to embed the logical axioms into the loss function of our diffusion model. The goal of our model then becomes finding solutions in the hypothesis space that maximally satisfy all the axioms contained in our knowledge base. 
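To give a flavor of how axiom satisfaction could enter the training objective described above, the sketch below encodes a single, made-up manufacturing rule with product fuzzy-logic connectives and adds its violation to a generic reconstruction loss; the predicate, threshold, and weighting are illustrative assumptions, not part of the ontologies in [6].

```python
import numpy as np

def f_implies(a, b):
    """Product fuzzy-logic implication a -> b, differentiable in both truth values."""
    return 1.0 - a * (1.0 - b)

def axiom_satisfaction(temp_norm, spindle_on):
    """Illustrative (made-up) axiom: 'if the spindle is on, the normalized coolant
    temperature must stay below 0.8'.  Truth values live in [0, 1]."""
    temp_ok = np.clip((0.8 - temp_norm) / 0.1, 0.0, 1.0)   # fuzzified predicate, 0.1-wide band
    return f_implies(spindle_on, temp_ok)

def training_loss(reconstruction_error, temp_norm, spindle_on, lam=0.1):
    """Denoising objective plus a penalty for violating the knowledge base, so the
    model is steered towards distributions that satisfy the axioms."""
    return reconstruction_error + lam * (1.0 - axiom_satisfaction(temp_norm, spindle_on))

print(training_loss(0.02, temp_norm=0.50, spindle_on=1.0))   # axiom satisfied -> no penalty
print(training_loss(0.02, temp_norm=0.95, spindle_on=1.0))   # violation -> extra loss
```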
This hybrid method, which combines both data-driven and knowledge-based techniques, allows the neural network to discern the remaining nuances and intricacies of the system, which should ultimately enhance its overall performance and reliability. The described system is explainable by design, as it allows the user to know which sensors are reporting anomalous data, and which values the system would have considered acceptable. Finally, we address the challenge of running the algorithm in real-time on the product line, which is in direct contrast with the computationally intensive nature of deep learning models, particularly diffusion models. In order to solve this, our third proposal consists of the use of Random Fourier Features (RFF) [12] to train a classifier on the binary labels obtained by our DDPM, effectively distilling its knowledge, without using a teacher-student learning paradigm [13], into a more lightweight and practically useful detector. RFF-based classifiers represent an extension of linear classifiers that project the data into a higher dimensional space, where the classes are more easily separable, in order to model arbitrary non-linear functions. By training an RFF model on the pseudo-labels created by the NeSy-DDPM, it is plausible to think that RFFs can be used to provide an optimal decision boundary in kernel space [14] for our proposed anomaly detection classifier. To sum up, the RFFs need not capture the posterior inferred by the diffusion model per se; rather, they must be able to define features that support the (binary) label distribution generated by the diffusion-based OOD detector [4]. Given a kernel function \(K(x,y)\), we can represent it as an inner product in a higher-dimensional space via the feature map: \[\phi(x)=\begin{bmatrix}\cos(w_{1}^{T}x+b_{1})\\ \sin(w_{1}^{T}x+b_{1})\\ \vdots\\ \cos(w_{D}^{T}x+b_{D})\\ \sin(w_{D}^{T}x+b_{D})\end{bmatrix}, \tag{1}\] where \(D\) is the desired dimensionality of the feature space, and \(w_{i}\) and \(b_{i}\) are randomly generated parameters. To train a linear classifier using RFF, we first compute the feature representation for each input data point using the above formula and then use it to predict the label previously assigned to that sample by our DDPM approach. Once the linear classifier is trained on these features, we can make predictions on new data points by computing their feature representation, multiplying it by the learned weight vector \(w\), and adding a bias term \(b\): \[p(y_{pred})=\frac{1}{1+e^{-(w^{T}\phi(x_{new})+b)}}. \tag{2}\] From (2), the entire inference process requires only two matrix multiplications, making it extremely lightweight and quickly executable on low-end hardware. In conclusion, our idea, shown in Figure 2, consists of a series of proposals in order to leverage the power of diffusion models for anomaly detection from time series data. The framework can capture the complex spatiotemporal dependencies between the sensors, handle large amounts of data in a rigorous way, and enable proactive anomaly detection and mitigation, thereby enhancing the reliability, safety, and efficiency of industrial processes. For all of these reasons, we think that our proposal can be a valuable tool for Industry 4.0. ## III Related Work This section introduces the techniques we rely upon. ### _Diffusion models_ Given observed samples \(x\) from a distribution of interest, the goal of a generative model is to learn to estimate its true data distribution \(p(x)\).
Once learned, we are able to use the learned model to evaluate the likelihood of newly observed data. DDPMs are parameterized Markov chains models, able to learn the latent structure of data by modeling the way data points diffuse through the latent space. They emerged in 2015 [15] as a powerful new family of deep generative models with a record-breaking performance in a broad range of applications, spanning from image synthesis and multi-modal modeling to temporal analysis and natural language problem [16]. Referring to our problem, there are three principal works related to OOD with diffusion models. The authors of [4] utilize DDPMs as denoising autoencoders, where the amount of noise applied externally controls the strength of the conditioning. They suggest employing DDPMs to reconstruct a noised input across various noise levels and leverage the resulting multidimensional reconstruction error to classify OOD inputs. The experiments demonstrate that the proposed DDPM-based technique surpasses both reconstruction-based methods and state-of-the-art generative approaches. As previously stated, DGMs appear to be a natural choice for detecting OOD inputs. However, these models have been observed to assign higher probabilities or densities to OOD images than images from the training distribution. The work in [17] addresses this behavior and attributes it to model misestimation, suggesting it to be a more probable cause, rather than the misalignment between likelihood-based OOD detection and our distributions of interest. They also demonstrate how even slight estimation errors can result in OOD detection failures, which carries implications for future research in deep generative modeling and OOD detection. This work, although primarily focused on image inputs, has shown that DDPMs can be utilized to address this problem. This provides theoretical foundations for our idea, even though it may seem unconventional. ### _Neuro-symbolic AI_ Neuro-symbolic AI refers to the combination of artificial neural networks with symbolic knowledge representation and reasoning techniques used in symbolic AI. This approach aims to overcome the limitations of traditional rule-based symbolic AI by incorporating both logical reasoning and statistical inference into the same model [10]. One of these studies is that of Zheng _et al_. [18], tackling the challenge of developing a conversational embodied agent capable of executing real-life tasks. Traditional symbolic methods suffer from scaling and generalization issues, while end-to-end deep learning models face data scarcity and high task complexity, and are often difficult to interpret. To take advantage of both approaches, the authors propose a neuro-symbolic commonsense reasoning framework for generalizable and interpretable conversational embodied agents. Another recent work is the one by Huang _et al_. [19], in which they propose a neuro-symbolic approach that learns semantic representations by leveraging logic specifications that can capture rich spatial and temporal properties in video data. Furthermore, Siyaev _et al_. [20] proposed a neuro-symbolic reasoning approach for interacting with 3D digital twins through natural language. This method can comprehend user requests and contexts to manipulate 3D components of digital twins and execute installations and removal procedures independently by reading maintenance manuals. Overall, all these works demonstrate the potential of neuro-symbolic AI in various domains. 
The proposed approaches not only improve the accuracy of predictions but also provide valuable insights into the decision-making process, making them more transparent and trustworthy. ## IV Discussion This paper outlines a novel idea that combines neuro-symbolic diffusion models and RFF for real-time anomaly detection. This section discusses the reasons why this idea is considered wild and crazy and the potential implications of successfully implementing such a model. First and foremost, while the idea of integrating symbolic and neural systems has been explored in other contexts, such as in hybrid neuro-symbolic models, the concept of neuro-symbolic diffusion models is entirely new and has never been attempted before. This approach proposes to combine the strengths of symbolic and neural systems in a single model, potentially enabling a more robust representation of complex information. At the same time, it relies on domain-specific logical constraints to guide the DDPM during training, meaning that careful considerations on the formalisms used are needed by domain experts. Secondly, while RFF classifiers have been used in other contexts, utilizing them as a means to distill DDPMs is unexplored and presents numerous challenges. That being said, their low computation complexity offers a promising avenue for deployment on embedded systems, allowing us to utilize the power of deep learning models even in resource-constrained environments. The complexity of Industry 4.0 systems and the datasets they produce can present significant challenges for any anomaly detection approach. Anomaly detection is a critical task that demands high accuracy and reliability; as such, any approach utilized for this purpose must undergo thorough testing and validation to ensure its effectiveness. It is vital to consider whether the proposed approach can handle the complexity of these systems and datasets and whether it can scale to meet the demands of real-world applications. Although it may seem unconventional at first glance, our approach is grounded in sound scientific principles and has the potential to significantly contribute to the field of smart manufacturing, offering a promising solution to the challenges posed by real-time anomaly detection in Industry 4.0.
2304.01592
PAC-Based Formal Verification for Out-of-Distribution Data Detection
Cyber-physical systems (CPS) like autonomous vehicles, that utilize learning components, are often sensitive to noise and out-of-distribution (OOD) instances encountered during runtime. As such, safety critical tasks depend upon OOD detection subsystems in order to restore the CPS to a known state or interrupt execution to prevent safety from being compromised. However, it is difficult to guarantee the performance of OOD detectors as it is difficult to characterize the OOD aspect of an instance, especially in high-dimensional unstructured data. To distinguish between OOD data and data known to the learning component through the training process, an emerging technique is to incorporate variational autoencoders (VAE) within systems and apply classification or anomaly detection techniques on their latent spaces. The rationale for doing so is the reduction of the data domain size through the encoding process, which benefits real-time systems through decreased processing requirements, facilitates feature analysis for unstructured data and allows more explainable techniques to be implemented. This study places probably approximately correct (PAC) based guarantees on OOD detection using the encoding process within VAEs to quantify image features and apply conformal constraints over them. This is used to bound the detection error on unfamiliar instances with user-defined confidence. The approach used in this study is to empirically establish these bounds by sampling the latent probability distribution and evaluating the error with respect to the constraint violations that are encountered. The guarantee is then verified using data generated from CARLA, an open-source driving simulator.
Mohit Prashant, Arvind Easwaran
2023-04-04T07:33:02Z
http://arxiv.org/abs/2304.01592v1
# PAC-Based Formal Verification for Out-of-Distribution Data Detection ###### Abstract Cyber-physical systems (CPS) like autonomous vehicles, that utilize learning components, are often sensitive to noise and out-of-distribution (OOD) instances encountered during runtime. As such, safety critical tasks depend upon OOD detection subsystems in order to restore the CPS to a known state or interrupt execution to prevent safety from being compromised. However, it is difficult to guarantee the performance of OOD detectors as it is difficult to characterize the OOD aspect of an instance, especially in high-dimensional unstructured data. To distinguish between OOD data and data known to the learning component through the training process, an emerging technique is to incorporate variational autoencoders (VAE) within systems and apply classification or anomaly detection techniques on their latent spaces. The rationale for doing so is the reduction of the data domain size through the encoding process, which benefits real-time systems through decreased processing requirements, facilitates feature analysis for unstructured data and allows more explainable techniques to be implemented. This study places probably approximately correct (PAC) based guarantees on OOD detection using the encoding process within VAEs to quantify image features and apply conformal constraints over them. This is used to bound the detection error on unfamiliar instances, \(\epsilon\), with user-defined confidence, \(1-\delta\). The approach used in this study is to empirically establish these bounds by sampling the latent probability distribution and evaluating the error with respect to the constraint violations that are encountered. The guarantee is then verified using data generated from CARLA, an open-source driving simulator. Autoencoder, Conformal Prediction, Formal Verification, Generalized Error Bounds, Safety Guarantees ## I Introduction Developments in artificial intelligence (AI) and machine learning (ML) have led to their implementation in safety-critical fields like transport, healthcare and security. Autonomous vehicles, amongst other cyber-physical systems (CPS), use ML within their detection and decision-making subsystems. A reason for this is that ML models like deep neural networks (DNN) can create lower-dimensional representations of abstract data that can be utilized for various tasks [9]. However, obstacles to widespread use are the lack of explainability regarding inner workings and the lack of guarantees on performance. The primary reason for this is the black-box nature of DNNs, which, due to the number of training parameters, makes it difficult to provide safety assurances within the CPS context [19]. The necessity of these is highlighted by the fact that the performance estimation formed during the training/testing phase of development may be different from the true performance of the system during deployment, oftentimes because of the existence of out-of-distribution (OOD) data that is unlikely to be present in the training phase [2]. OOD data refers to data that exist outside of the scope of data the model is familiar with. That is, instances that are out of the distribution defined by the dataset used during the training phase [11]. As it is impossible to account for all possible instances and states a system may encounter during the training phase, the system's behaviour toward OOD instances cannot be anticipated accurately and can be especially undesirable in safety-critical tasks [8].
For this reason, CPSs within safety-critical domains often contain subsystems dedicated to the detection and handling of OOD data [20]. There have been a number of studies that present implementations and frameworks for solutions to this problem, such as novelty or outlier detection and various OOD classifiers that have been developed using in-distribution benchmark datasets [11]. However, regardless of the algorithm used, an error-free OOD detection system is infeasible. Therefore, it is necessary to be able to guarantee the probability with which detection is conducted, especially in safety-critical tasks; i.e. to evaluate and bound the rate with which the subsystem fails to detect OOD instances. An obstacle toward deriving general error bounds for OOD detection is the difficulty in characterizing the OOD aspect of high-dimensional data instances with respect to in-distribution properties [3]. That is, it is difficult to check if any properties of an instance are outside 'normal' parameters for high-dimensional data as the properties over which data is distributed, especially in image-based CPS, can be abstract [3]. As a result, creating definite, explainable constraints using in-distribution properties to evaluate whether instances are OOD, and thereby, bound the system's performance, is difficult. A solution to this, utilized in systems described by [20, 8] and [7], is to use variational autoencoders (VAE) to parametrize the training data distribution with a fixed number of variables. VAEs are a class of DNNs that map high-dimension input data to lower-dimensional distributions that comprise latent spaces within the model. This results in the encoding of the distribution of training data to lower-dimension multivariate distributions. Studies have made use of this property in designing OOD detection systems by attempting to equate OOD instances with outliers in the latent space [20] and constructing classifiers to define in-distribution safety constraints within this space [8]. As such, the objective of this study is to create a framework for guaranteeing and bounding OOD detection failure. The approach described in this study relies on the construction of constraints within the latent space that are used to define an in-distribution criteria for high-dimension data using the spatial coordinates of the encoding. Similar to [20], the constraints within this study are constructed using conformity-based classification based on a subset of known in-distribution data. Sampling the VAE latent distribution to find violations of these constraints allows for the construction of bounds on the error with which the system conducts OOD detection. The guarantees on error provided in this study are through probably approximately correct (PAC) bounds. Two probabilistic measures are used to characterize the guarantee: the error level, \(\epsilon\in(0,1)\) and the confidence level, \(\delta\in(0,1)\). Through the sufficient sampling of the multivariate latent distribution, the approach provided in this study establishes that with a \(1-\delta\) confidence, the probability of OOD detection failure is less than \(\epsilon\). The correlation between sampling, constraint violation and confidence is used to bound the error probabilistically and provide an estimate of the performance of the system. 
The structure of the remaining report will consist of a clarification of the assumptions made as well as the limitations of this study, the relevant works upon which this study is based, the theoretical approach taken and the results acquired from applying the theory. ## II Related Works This investigation is related to two categories of research: _probably approximately correct guarantees for system safety_ and _variational autoencoder based out-of-distribution detection_. ### _PAC-based Safety Guarantees_ PAC learning was first introduced in [13] and has been utilized within several studies since. The objective of this framework is to be able to guarantee training within learning components to a certain extent with some confidence [5]. This has made the framework adaptable for formal verification purposes as it can be used to place error guarantees on certain properties of the output. This is cited as 'probably approximate safety verification' within [22]. There are a number of papers that address the specific topic using PAC-based guarantees to generalize error bounds within CPSs. Notable investigations in this area include [22, 19, 2] and [12]. Similar to the objective of this study, the PAC-based guarantees in the aforementioned works correlate the size of the training data to the failure rate with a particular level of confidence. The error bounds for learning described in [5] and [13] use a generalized term correlated to the size of the hypothesis space to describe the target concept sample complexity. This is further generalized in [1] as a bound that is dependent on the VC-dimension of the model being used. The approach taken in [19] to place PAC guarantees applies this concept by attempting to estimate the VC-dimension of the classification algorithm. In contrast to this, [22] proposes the formulation of PAC-based error bounds through the formulation of the problem as an optimization problem with the objective of minimizing the constraint violation probability. One of the main contributions of [22] is that stochastic perturbations within the input layer, with an underlying probability distribution, are factored into the derived error bounds. Because the problem investigated in this study can be framed similarly, a similar approach to [22] is utilized when deriving the error bounds. However, because this study attempts to approximate safety constraints using conformal prediction [18] the guarantee placed on the constraints being accurate are incorporated into the PAC-based generalized error bounds for the entire system. To the best of our knowledge, aside from this paper, there are no existing studies on combining multiple types of guarantees when bounding the failure rate of an entire system. ### _VAE-based OOD Detection_ Within recent years, several studies have emerged that use VAE latent encodings to reduce data dimensionality for tasks like classification [9] and anomaly detection [21]. [20] and [8] cite three clear benefits of doing so: firstly, the reduction in data dimensionality reduces the complexity of the required ML model, allowing more explainable techniques to be implemented [20]; secondly, the latent encoding allows for the quantification of high-dimensional features, increasing the robustness of classifiers applied in this space [8]; lastly, the reduction in dimensionality also reduces runtime [8]. There exist various extensions to techniques utilizing VAEs and this section is not exhaustive in detailing them. 
Instead, it will focus on the results of [8, 20] and [7], which make use of VAEs for the explicit purpose of OOD detection. [20] aims to train a VAE to construct a partially disentangled latent representation of a data set to be able to identify OOD data based on the targeted latent dimensions. A conformal predictor could then be used to determine a threshold value for OOD data along the tested dimension. [7] demonstrates a similar achievement using regression. A research gap that should be noted is that none of the aforementioned studies place an emphasis on guaranteeing the OOD detection failure rate within this type of pipeline, which is necessary when designing safety-critical CPSs. Though, it is worth mentioning that while [8] and [20] build conformal predictors that operate with certain confidence within this space, there are no comments on the representation of the calibration/conformal set by the latent probability distribution, which is required to bound the failure rate of the entire OOD detection system, including the VAE's encoding. To the best of our knowledge, aside from this paper, there are no existing studies on this. ## III Preliminaries and Definitions The investigation conducted in this paper is dependent on various existing techniques and the results from studies that have been conducted in the past. This section will provide background knowledge and define related terminology that will be used. ### _Safety Constraints_ The safety verification procedure in this study is conducted by verifying that sampled data instances lie within a'safe' region of the encoded hyperspace. This region is defined using a set of safety constraints, denoted by \(S_{n}\) for \(n\) constraints. In previous studies like [22, 4] and [10], assuming the dimensionality of the instance is \(k\), the safe region, \(\mathbb{S}\subseteq\mathbb{R}^{k}\), is equivalent to the set of values defined by x. \[\mathbb{S}=\left(x\in\mathbb{R}^{k}\mid\underset{j=1,2...n}{\max}S_{j}(x) \leq 0\right) \tag{1}\] Though the safety constraints implemented in this study are based on the Inductive Conformal Prediction framework (ICP) of a classification method discussed in [18], they are adapted using (1) as a basis. ### _Inductive Conformal Prediction_ ICP is a framework for prediction that relies on the degree to which future instances conform with known data and accordingly issues a guarantee on the confidence of the prediction. For ease of notation, the space, \(Z\), created by the Cartesian product of the feature space and label space, \(X\) and \(Y\) respectively, encompasses the training set \(Z_{M}:=(z_{1}...z_{M})\), with training samples \(z_{i}=(x_{i},y_{i})\in Z_{M}\). The training set can be split into two sets, \(Z_{L}\), the training set, and \(Z_{M-L}\), the calibration set, with \(L<M\)[17]. The role of the calibration set is made apparent when considering the conformity measure, \(C\), a function that outputs a value in proportion to the degree with which future data samples conform to the calibration set. [18], where conformal prediction was introduced, establishes \(C\) as a function that compares the output of a predictor \(f\) with the label for a data instance. \(\Delta\) is used to denote the comparator. 
\[C\left(Z_{M-L},z_{i}\right):=\Delta\left(y_{i},f(x_{i})\right) \tag{2}\] Within the context of a classification problem, \(C\) is used to evaluate the conformity score of a data instance, \(x\), with all labels \(y\in Y\) and outputs a set prediction for the potential class label of the instance based on the conformity of the label-instance pair with the existing calibration set. Given a significance level, \(\beta\in(0,1)\), where \(1-\beta\) is the confidence level, a threshold, \(t^{*}\), can be calculated from the calibration set by ordering the conformity scores of the calibration set and taking the \(\beta\) percentile score. Letting \(T\) be the ordered set of sorted conformity scores from the calibration set, the following is defined. \[T:=Sorted \left(t_{i}|t_{i}=\underset{y_{i}\in Y}{\max}\left(\Delta(y_{i},f( x_{i}))\right),(x_{i},y_{i})\in Z_{M-L}\right)\] \[with\ \left(t_{i}\leq t_{i+1}\right)\,,\] \[t^{*}=t_{\lfloor\beta(M-L)\rfloor} \tag{3}\] Note that it is assumed that the degree of conformity is greater for larger values of \(t^{*}\). Using \(t^{*}\), a set prediction can be constructed for a data instance, \(x\). \[\Gamma(Z_{M-L},x)=\{y|\Delta(y,f(x))>t^{*}\} \tag{4}\] The set predictor will have made an error if the correct label is not an element of the prediction. The probability of this occurring for a given prediction is less than \(\beta\) and, therefore, there is a more than \(1-\beta\) confidence that the set predictor is correct [18]. It is also worth noting that if none of the labels conform to a data instance, the predictor yields the null set, \(\varnothing\). This property is useful for predicting OOD instances. In order for ICP to hold, the following assumption has to be made [17]. **Assumption 1**: _The elements of the calibration set are exchangeable._ This implies that the distribution of the calibration set is to be representative of the distribution of the training set. Under these circumstances, the set prediction for a given data instance is invariant to different combinations of the calibration set and the prediction error is held less than \(\beta\). ### _Variational Autoencoders_ VAEs are a class of generative DNNs based on the encode-decode approach of autoencoders. The model assumes the existence of an underlying prior probability distribution of the training data. The model approximates the prior using a multivariate Gaussian distribution of fixed dimensions that comprises the latent dimension [6]. The resulting trained distribution is representative of the distribution of data within the latent space. Sampling from this distribution produces new data that preserves the learnt characteristics of the dataset. If a VAE is trained using in-distribution data, it stands to reason that the probability density function used to represent the latent distribution corresponds with the degree to which instances in the latent space are in-distribution. However, it is difficult to place a guarantee on the robustness of the encoding as well as the degree to which the latent distribution represents the in-distribution characteristics of the training set. As such, this study, similar to [8] and [20], utilizes encodings of known in-distribution data to create safety constraints within this space. ### _Probably Approximately Correct Guarantees_ Prior to formulating the safety verification problem and proposing a solution, it is necessary to describe the type of probabilistic guarantee that will be used. 
PAC learning is a concept that describes the efficient learning of a target hypothesis through approximation. Formally, the efficiency ascribed to PAC is in the form of a probably approximate learning guarantee, i.e. with at least \(1-\delta\) probability, \(\delta\in(0,1)\), the learnt concept will approximate the target concept with greater than \(1-\epsilon\) accuracy, given \(\epsilon\in(0,1)\). The motivation behind using a probabilistic, \(1-\epsilon\) approximation of the target concept is to reduce sample complexity, formally stated in [5] using the following equation, where \(N\) denotes the sample complexity and \(H_{N}\) denotes the size of the hypothesis space. \[N\geq\frac{1}{\epsilon}\left(\ln(H_{N})+\ln\left(\frac{1}{\delta}\right)\right) \tag{5}\] Inferring from inequality (5), any subsequent increase in either confidence or accuracy requires a larger increase in sample size. Therefore, in a system where time is a constraint to learning, an optimal level of accuracy can be guaranteed with a reduction in the samples used for training. The idea of a probably approximate safety guarantee, or PAC barrier certificate in [22], is borrowed from this concept. The application of the learning theory as a safety guarantee is to be able to, similarly, verify that with more than \(1-\delta\) confidence, the safety constraints are violated with less than an \(\epsilon\)-level probability using a fixed sample size, \(N\). To appropriate this framework, Assumption 2, the invariance assumption, and Assumption 3, learnable regularity, are required [14]. **Assumption 2**: _The distribution of data samples that are utilized during the training process is invariant to the distribution of the source of the samples._ This assumption is required to make inferences about the error bound for the system's performance during deployment. **Assumption 3**: _There exist regularities in the data that can be used to efficiently categorize and learn the target concept feasibly._ This assumption is necessary to evaluate OOD detection error. A step in doing so is to encode the set of learnable characteristics within the latent dimensions of the VAE architecture as well as categorize in-distribution data through these using arbitrary safety constraints. Based on the PAC framework, the problem formulation for this study is presented in Problem 1. **Problem 1:**_Given an out-of-distribution detection system consisting of a trained variational autoencoder, safety constraints identifying in-distribution characteristics within the latent encoding and a confidence level \(\delta\in(0,1)\), derive bounds \(\epsilon\in(0,1)\) such that the detection system, with at least \(1-\delta\) confidence, misidentifies OOD instances as in-distribution with less than \(\epsilon\) probability._ The objective of Problem 1 is to compute \(\epsilon\), the OOD detection error represented by the false positive in-distribution rate, with a particular confidence. However, this error can be minimized by categorizing all instances encountered as OOD. Therefore, a trade-off between in-distribution detection accuracy and OOD detection accuracy is inevitable. Intuitively, Problem 1 can be restated as the computation of the maximum tolerable OOD detection error for the system. ## IV Problem Formulation This section formalizes Problem 1 as an optimization problem and introduces the theorems necessary to provide a solution. 
The intuitive formulation from Problem 1 is the calculation of the dissociation of \(\mathbb{S}\), the space defined by the safety constraints, with the OOD regions and distributions of data outside of the specified in-distribution. Thereby, the problem would be the computation of \(\epsilon\), for the following inequality (6), given that \(\theta\) represents the VAE's learnt distribution over the latent space, given a data instance, \(x\), and given \(n\) safety constraints. \[P\left(x\notin\theta\mid\max_{j=1,2\ldots n}S_{j}(x)\leq 0\right)\leq\epsilon \tag{6}\] With the satisfaction of the safety constraints, if an instance is OOD with \(\theta\), then the constraints are in error. The difficulty with the computation of the LHS is that learning the corresponding distribution of OOD data within the latent space and sampling from it is infeasible. An alternative formulation would be (7), which describes the probability of an instance belonging to \(\theta\) given that the safety constraints are satisfied. The corresponding probability is an upper bound for \(1-\epsilon\). \[P\left(x\in\theta\mid\max_{j=1,2\ldots n}S_{j}(x)\leq 0\right)\geq 1-\epsilon \tag{7}\] A potential solution to this is the integral of the latent multivariate Gaussian distribution defined within \(\mathbb{S}\). However, given the probabilistic constraints that have been constructed, a more appropriate formulation would be as the following chance constrained optimization problem (CCP), where \(U\) is a user defined upper bound, in similar vein to [22]. \[\min_{\lambda\in\mathbb{R}}\lambda\ s.t.,\] \[P\left(x\in\theta\mid\max_{j=1,2\ldots n}S_{j}(x)\leq\lambda \right)\geq 1-\epsilon,\] \[0\leq\lambda\leq U \tag{8}\] CCPs are computationally hard problems. However, the results of [15] and [16] show they can be relaxed at the cost of the robustness of the solution using a scenario optimization approach. That is, the solution to a deterministic relaxation of the original problem is a valid solution to the original problem with a guaranteed confidence. For this reason, the objective of the problem is to minimize \(\lambda\) with respect to the constraints, allowing for reasonable deviation from the original problem. The relaxation of (8) given this approach is (9). \[\min_{\lambda\in\mathbb{R}}\lambda\ s.t.,\] \[\text{for each }i\in\ \left\{1,2,3...N\right\}\,,\] \[\max_{j=1,2...n}S_{j}(x_{i})-\lambda\leq 0,\] \[0\leq\lambda\leq U \tag{9}\] In doing so, the chance-based constraints are replaced by \(N\) instantiations of the constraints that can be violated with an \(\epsilon\) probability by feasible solutions to the problem. The confidence with which this is applicable is described by Theorem 1, presented in [15]. 
**Theorem 1**[15]: _Given a value \(\delta\in(0,1)\), if \(\epsilon\), \(N\) and \(r\) are such that the following condition holds,_ \[\binom{r+d-1}{r}\sum_{i=0}^{r+d-1}\binom{N}{i}\epsilon^{i}(1-\epsilon)^{N-i}\leq\delta \tag{10}\] _with \(d\) being the number of optimization variables and \(\mathbb{P}^{N}\) being the N-fold probability of constraint satisfaction with an \(\epsilon\) error level, the following also holds_ \[\mathbb{P}^{N}\left(P\left(x\in\theta\mid\max_{j=1,2...n}S_{j}(x)\leq 0\right)\geq 1-\epsilon\right)\geq 1-\delta \tag{11}\] Theorem 1 establishes a relation between the number of samples drawn, \(N\), the number of constraint violations, \(r\), the tolerable violation probability of the constraints, \(\epsilon\), and the confidence with which this occurs, \(\delta\), for a fixed number of optimization variables, \(d\). Based on this, if the condition in (10) is met, the safety constraints will be satisfied with an \(\epsilon\) error level with a confidence of \(1-\delta\) [15]. Applying Theorem 1 directly to the problem described in (9) reduces the amount of computation required by loosening the constraints and increasing the size of the feasible region within \(\mathbb{R}^{k}\). Permitting violations of the safety constraints for \(N\) sampled instances to an \(\epsilon\) degree guarantees the solution with a \(1-\delta\) confidence assuming Theorem 1 holds, as applied in [22]. However, the formulation of Problem 1 is regarding the derivation of \(\epsilon\) given \(\delta\) rather than the converse. The following section describes how Theorem 1 can be used to do so. ## V Guaranteeing Out-Of-Distribution Detection ### _Constructing Safety Constraints_ The difficulty with placing safety constraints on the latent space exists because the encoding and decoding processes are opaque, and verifying that the output is safe based on the sampled latent variables requires being able to map the latent space to the output space. Similarly, determining the latent variables that correspond to the right generative factor and their correlation is an exhaustive process. Some similarities can be drawn between the notion of conformity within the ICP framework and conventional safety constraints for OOD detection problems; i.e. data instances that fail to meet certain criteria specified by the safety constraints vs. the more abstract comparison of the conformity measure with the calibration set threshold. A potential conformity metric used to create a score for a data instance is the probability density of the calibration set at the location of the instance in the latent space of the VAE [8]. If the density exceeds the threshold, it is probable that the instance conforms. This can be tested for each label when forming the set prediction. A unique property of the set predictor that can be utilized to identify OOD instances is the null set prediction, \(\varnothing\), which implies a lack of conformity with any class [18]. However, it should be noted that the expected rate of exclusion of the correct label from the set prediction is \(\beta\) and it follows that the confidence of the predictor in constructing the correct prediction is \(1-\beta\). Therefore, when the set predictor outputs \(\varnothing\), it does so with a \(1-\beta\) confidence. Using the null set prediction property of ICP as the definition for OOD instances, the safe region can be constructed using the following.
\[\mathbb{S}=\left\{x\in\mathbb{R}^{k}\mid\max_{j=1,2...n}\left(C(Z_{M-L},(x,y_{j }))\right)\geq t^{*}\right\} \tag{12}\] That is, for a calibrated set predictor, \[\mathbb{S}=\left\{x\in\mathbb{R}^{k}\mid\Gamma\left(Z_{M-L},x\right)\neq \varnothing\right\} \tag{13}\] Implementations of the density based conformity metric consist of approximations like kernel density estimation and K-nearest neighbor distance scores. Unlike the construction of safety constraints using conventional classifiers like support vector machines in [8], the conformity based safety constraints are defined using a calibration set as well as a confidence measure, \(\beta\), that dictates the degree of deviation permitted for a new data instance from the characteristics of the calibration set. In turn, this allows for more flexibility as well as minimal representation error when determining safety constraints when compared with the method in [8] that requires the support vector algorithm to do so. For a visual comparison, refer to Figures 1 and 2. Furthermore, in order to construct the constraints using support vectors, a number of known OOD samples must be included in the training set. This restriction is eliminated when using a conformal predictor. The following algorithm can be used to construct the safety constraints over the training set. **Pre-conditions :** * Trained VAE, latent distribution \(\theta\) over \(\mathbb{R}^{k}\), * Let \(Z_{M-L}\subseteq\mathbb{S}\) be the calibration set, \(M,L\in\mathbb{Z}^{+}\) * Significance \(\beta\in(0,1)\); **Procedure :** 1. Construct \(T\), the set of conformity scores for each element within the calibration set using the kernel density estimate (KDE) for each point \(z\in Z_{M-L}\); 2. Sort \(T\) in ascending order, \(t_{i}<t_{i+1},t_{i}\in T\); 3. Establish the threshold \(t^{*}=t_{\lfloor\beta(M-L)\rfloor}\); 4. Establish the constraints by building set predictor \(\Gamma\), 1. i.e. \(x\in\theta\) is an element of the safe region iff. \(\Gamma(Z_{M-L},x)\neq\varnothing\); Though Algorithm 1 utilizes the KDE algorithm within this study, note that \(T\) can be established using any density based metric with a similar ordering property. ### _Deriving Error Bounds for OOD Detection_ Assuming ideal conditions are met and there exists a solution to (9) where \(\lambda\leq 0\), that is, an absence of constraint violations are encountered, the following holds [15]. \[\epsilon\geq 1-\delta^{1/N} \tag{14}\] This can be derived from Theorem 1 by making assumptions regarding the number of violated constraints and, intuitively, defines the relation between \(\epsilon\) and \(\delta\) as \(\delta\) is an N-fold probability and is defined in (11). However, in the instance that \(\lambda\) is greater than zero, (14) does not hold. For this, [22] applies Chernoff bounds to the binomial condition in Theorem 1, (10), and, through inequality (8) in [15], presents adjusted error bounds that account for solutions where constraints are violated. \[\epsilon\geq\min\left\{1,\frac{1}{N}\left(r+\ln\frac{1}{\delta}+\sqrt{\ln^{2} \frac{1}{\delta}+2r\ln\frac{1}{\delta}}\right)\right\} \tag{15}\] For the specific application of (15) within this study, the error bound presented in (15) can be tightened further considering the confidence parameter, \(\beta\), from Section VI-A that describes the probability with which a true constraint violation has taken place assuming that a sampled instance does not conform to the calibration set. 
The adjustment is to the number of detected constraint violations by a factor of \(1-\beta\), the confidence of the conformal prediction. The proof for (15) and adjustment made to it in (16) is detailed in section VIII. \[\epsilon\geq\min\left\{1,\frac{1}{N}\left(r(1-\beta)+\ln\frac{1}{\delta}+ \sqrt{\ln^{2}\frac{1}{\delta}+2r(1-\beta)\ln\frac{1}{\delta}}\right)\right\} \tag{16}\] With this, the algorithm required to conduct the safety verification of the OOD detection system and bound the performance is a counting algorithm that records the number of constraint violations, or forced relaxations, within \(N\) sampled instances and bounds the number of potential future violations as \(N\) approaches infinity with \(1-\delta\) confidence. Algorithm 2 describes this process with greater detail. **Pre-conditions :** * Trained VAE, latent distribution \(\theta\) over \(\mathbb{R}^{k}\), * Set predictor \(\Gamma\), * Calibration set \(Z_{M-L}\subseteq\mathbb{R}^{k}\), * Significance \(\beta\in(0,1)\), * Instantiated values \(N\in\mathbb{Z}^{+}\) and \(\delta\in(0,1)\); **Procedure :** 1. Initialize variable \(r\) to \(0\); 2. Loop \(N\) times, 1. Generate sample \(x\) from \(\theta\), 2. If \(\Gamma(Z_{M-L},x)=\varnothing\), increment \(r\); 3. If \(1<\frac{1}{N}\left(r(1-\beta)+\ln\frac{1}{\delta}+\sqrt{\ln^{2}\frac{1}{\delta }+2r(1-\beta)\ln\frac{1}{\delta}}\right)\), return \(1\), 1. Else, return \(\frac{1}{N}\left(r(1-\beta)+\ln\frac{1}{\delta}+\sqrt{\ln^{2}\frac{1}{\delta }+2r(1-\beta)\ln\frac{1}{\delta}}\right)\); Fig. 1: Safe region created by support vectors highlighted by green. Fig. 2: Safe regions created using a uniform kernel estimation at different significance levels (\(\beta\)). ## VI Evaluation of Bounds This section describes the results of applying the theories in Section V. All computations were performed using a Google Colab environment with 12GB memory, 100GB disk space, 2.3GHz CPU and a Tesla k80 GPU. ### _VAE Properties_ The architecture of the VAE in this experiment is as follows. The model is divided into the encoder and decoder. Within the encoder, there are five layers of convolution, followed by five densely connected layers. Within the decoder, there are four densely connected layers followed by four layers of convolution. The latent encoding is comprised of 16 variables as it was assumed that this value was an appropriate upper bound for the number of generative factors for the DVM-CAR dataset. Lastly, through a grid search, the hyperparameter coefficient of the KL-divergence term in the loss function was set to 2.2. ### _Data Properties_ In order to verify that the method described in this paper can be successfully applied to the OOD detection system described, the dataset used to train the VAE consisted of images generated from running the CARLA driving simulator within fixed environmental parameters, e.g. rain, sunlight and location. The motivations behind using this dataset are because it is highly controllable with easily quantifiable OOD instances and the driving simulator is representative of the data complexity encountered during deployment. All images in the training dataset contain similar properties and are in-distribution. The partition between in-distribution data and OOD data is through the following features that were set upon running the simulator: * Any amount is OOD; * Any value under 0.5 is OOD; * Any segment apart from the one shown in Fig. 3 is OOD. Samples from the simulation that are known to be OOD are indicated in Fig. 4. 
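To make the conformal construction of Section V-A concrete, a minimal sketch of Algorithm 1 and of the null-set test in Eq. (13) is given below; the use of scikit-learn's KernelDensity with a uniform (tophat) kernel, the bandwidth value, and the per-label dictionary are our assumptions about one reasonable realization, not the exact implementation used in the experiments. Note that `score_samples` returns log-densities, which preserves the ordering needed to pick the threshold \(t^{*}\).

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def build_conformal_constraints(z_calib, y_calib, beta=0.0275, bandwidth=0.5):
    """Algorithm 1 (sketch): per-label KDE conformity scores over the encoded
    calibration set, with the beta-percentile score taken as the threshold t*."""
    kdes = {c: KernelDensity(kernel="tophat", bandwidth=bandwidth).fit(z_calib[y_calib == c])
            for c in np.unique(y_calib)}
    scores = np.max([kde.score_samples(z_calib) for kde in kdes.values()], axis=0)
    idx = max(int(np.floor(beta * len(scores))) - 1, 0)  # 1-based index in Eq. (3)
    t_star = np.sort(scores)[idx]
    return kdes, t_star

def is_ood(z, kdes, t_star):
    """Null-set prediction of Eq. (13): z is flagged OOD when no label's
    conformity score reaches the threshold t*."""
    z = np.atleast_2d(z)
    best = np.max([kde.score_samples(z)[0] for kde in kdes.values()])
    return best < t_star
```

In the experimental setting described here there is effectively a single in-distribution class, so `y_calib` can simply be a vector of zeros.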
1600 in-distribution images are used in the VAE training process, from which 200 are selected for the calibration set. ### _Computing Error Bounds_ In this experiment, the safety constraints of the form of Equation (13) are computed using Algorithm 1 with a subset of training samples that comprise the calibration set. The objective of Algorithm 1 is to establish a metric by which future samples can be scored to verify conformity with the training set. If the degree of conformity fails to exceed a threshold, \(t^{*}\), the instance is OOD. Spatially, this is equivalent to partitioning the encoded \(\mathbb{R}^{k}\) space into a safe and unsafe region determined by the density of the encoded calibration set within that region. During the computation of the safety constraints, sets of 200 in-distribution CARLA samples were used to construct the calibration set. These samples were encoded and the KDE algorithm was applied with a uniform kernel to establish the density of the calibration set at each point. These values were then normalized and ordered. For all subsequent experiments, the significance level, \(\beta\), taken was 0.0275. Based on this and Equation (3), the 5th element of the ordered set of conformity scores was used as the threshold for OOD prediction. The following table describes the trials that took place using the conformity predictor at various levels of confidence and for various sample sizes. \begin{table} \begin{tabular}{|l||l||l||l||l|} \hline \multicolumn{4}{|l|}{Table 1. Sample Experiment Results} \\ \hline \(N\) & \(\delta\) & \(r\) & \(\frac{r}{N}\) & \(\epsilon\) \\ \hline \(10^{2}\) & \(10^{-6}\) & 5 & 0.0500 & 0.4141 \\ \(10^{3}\) & \(10^{-6}\) & 45 & 0.0450 & 0.1009 \\ \(10^{4}\) & \(10^{-6}\) & 436 & 0.0436 & 0.0559 \\ \(10^{5}\) & \(10^{-6}\) & 4301 & 0.0430 & 0.0457 \\ \(10^{6}\) & \(10^{-6}\) & 43235 & 0.0432 & 0.0433 \\ \hline \(10^{5}\) & \(10^{-4}\) & 4311 & 0.0431 & 0.0449 \\ \(10^{5}\) & \(10^{-6}\) & 4359 & 0.0436 & 0.0460 \\ \(10^{5}\) & \(10^{-8}\) & 4283 & 0.0428 & 0.0458 \\ \(10^{5}\) & \(10^{-10}\) & 4277 & 0.0428 & 0.0463 \\ \hline \end{tabular} \end{table} Table 1: Sample Experiment Results Figure 4: Out-of-Distribution CARLA Simulation Data Figure 3: In-Distribution CARLA Simulation Data Table 1 describes the results from the experiment and the performance of Algorithm 2 in establishing error bounds. Given the sample size, \(N\), and confidence, \(\delta\), Algorithm 2 computes the number of samples in violation of the established safety constraints, \(r\), yielding the proportion of samples in violation of the constraints, \(\frac{r}{N}\). From these values, Equation (16) could be used to bound the error rate, \(\epsilon\). The aim of this study is to be able to provide an upper bound for the value \(\frac{r}{N}\), \(\epsilon\), which must be able to bound \(\frac{r}{N}\) with \(1-\delta\) confidence. As such, there are two assessments to be made regarding the bounds that have been derived in this study, namely, the validity and effectiveness of the bounds. The validity of the bounds is the determination of whether or not \(\epsilon\) is greater than \(\frac{r}{N}\). The effectiveness is the measure of the tightness of the bound, i.e. how distant \(\epsilon\) is from \(\frac{r}{N}\). Based on the figures observed during the experiment, the number of data instances in violation of the constraints are approximately 4% of the sample size. 
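For concreteness, the final step of Algorithm 2, evaluating the bound of Equation (16) from the recorded \(N\), \(\delta\), \(r\) and \(\beta\), can be sketched as follows; `sample_latent` and `is_ood` are placeholders for drawing an encoding from the VAE's latent Gaussian and for the null-set test of the previous sketch, the example values come from the \(N=10^{5}\), \(\delta=10^{-6}\) row of Table 1, and small numerical differences from the published figures may remain due to rounding.

```python
import math

def epsilon_bound(N, delta, r, beta):
    """Error bound from Eq. (16): the r observed violations are discounted by the
    conformal confidence (1 - beta) before applying the Chernoff-style bound."""
    r_adj = r * (1.0 - beta)
    ln_inv_delta = math.log(1.0 / delta)
    eps = (r_adj + ln_inv_delta + math.sqrt(ln_inv_delta**2 + 2.0 * r_adj * ln_inv_delta)) / N
    return min(1.0, eps)

def algorithm_2(sample_latent, is_ood, N, delta, beta):
    """Algorithm 2 (sketch): sample the latent distribution N times, count the
    null-set (constraint-violating) predictions, and return the bound."""
    r = sum(is_ood(sample_latent()) for _ in range(N))
    return epsilon_bound(N, delta, r, beta)

print(epsilon_bound(N=10**5, delta=1e-6, r=4301, beta=0.0275))  # ~0.045
```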
The validity can be established by observing that the error bound is greater than the proportion of constraint violations relative to the number of instances sampled with greater than \(1-\delta\) confidence. Though the bound is greater than the observed error rate for all recorded trials in Table 1, the effects of adjustments in the confidence parameter to the error bound are more explicit in Fig. 5-8. Additional observations from the table are: * that \(\epsilon\) increases as \(\delta\) decreases, widening the bounds to increase the probability that the true error rate has, in fact, been bounded; * that \(\epsilon\) decreases as \(N\) increases, tightening the bounds as the latent distribution is sampled further and the sampled error rate approaches the expected error rate. Based on these observations, the tightness of the bounds is also made clear as, for a fixed confidence, as the number of instances sampled increases, \(\epsilon\) tends infinitely close to \(\frac{r}{N}\) while still providing an upper bound, as observed in row 5 of Table 1. Fig. 5-8 indicate the relation between error and the number of instances sampled for fixed values of \(\delta\). The points on the graph indicate the observed error rate within a trial and the expected error bound, \(\epsilon\), that places an upper bound guarantee on the error rate with \(1-\delta\) probability. A violation of the error bound during a trial can be expected in \(\frac{1}{\delta}\) trials. As such, the experiments depicted in Fig. 5-8 are of trials where \(\delta\) is in the range \([0.25,0.1,0.05,0.01]\) and the number of data points recorded for each confidence value are 500. The percentage of trials where the observed error rate exceeds the expected error bound is denoted in Table 2 alongside the corresponding graph and the confidence. ## VII Conclusions This study successfully derives guarantees for OOD detection with fixed levels of confidence that are within sampled error bounds for uncertain safety constraints. The framework for doing so utilizes VAEs to quantify the features comprising the distribution of training data and placing ICP-based safety constraints based on samples that conform to the in-distribution label. The algorithm for the error bound calculation depends on sampling from the VAE's learnt distribution over the latent dimension and counting the samples in violation of the constraints. Lastly, testing with a dataset of images from the CARLA driving simulator proved that the derived bounds are valid for all error-confidence pairs. The results of this study present a framework to predict system performance prior to deployment and independent of the type of safety constraints. And while an implementation of the technique on the CARLA driving simulator demonstrates its practicality, it raises questions from a theoretical standpoint as to developments that could be made in future studies. Extensions to this study should consider the effects that varying the sample size of the calibration set has on the error bound derivation as it is used to construct safety constraints that approximate the in-distribution characteristics being assessed. This may become a consideration for real-time OOD detection implementations that require smaller calibration sets to ensure runtime feasibility. We hope that the results described in this paper demonstrate the reliability of PAC-based formal verification and inform future studies that aim to guarantee CPS safety prior to deployment. 
## VIII Appendix : Proof of Inequations (15) and (16) The derivation of (15) is absent in [22], from where this paper cites it. As such, this paper attempts to re-derive this bound within this section, beginning with inequation (8) from [15], restated in (17), and the associated preliminaries. \[r\leq\epsilon N-d+1-\sqrt{2\epsilon N\ln\frac{(\epsilon N)^{d-1}}{\delta}} \tag{17}\] Implying the following, \[\epsilon\geq\frac{1}{N}\left(r+d-1+\sqrt{2\epsilon N\ln\frac{(\epsilon N)^{d- 1}}{\delta}}\right) \tag{18}\] An assumption being made in [15] is that \(\epsilon N\geq r+d-1\). Substituting into (17), \[\epsilon\geq\frac{1}{N}\left(r+d-1+\sqrt{2(r+d-1)\ln\frac{(\epsilon N)^{d-1}} {\delta}}\right) \tag{19}\] Following from the previous assumption is (19). \[\epsilon N-r\geq d-1 \tag{20}\] However, the definition of the expected value of \(r\), given a sample size of \(N\), is as follows, \[E_{N}[r]=\epsilon N \tag{21}\] Furthermore, given the Law of Large Numbers, \[\lim_{N\rightarrow\infty}r-\epsilon N=0 \tag{22}\] Thus, substituting (22) into (20), \(d\leq 1\). The substitution of the resultant into (19) ensures that (23) holds. \[\epsilon\geq\frac{1}{N}\left(r+d-1+\sqrt{2(r+d-1)\ln\frac{1}{\delta}}\right) \tag{23}\] Thus, \[\epsilon\geq\frac{1}{N}\left(r+d-1+\sqrt{2r\ln\frac{1}{\delta}+2(d-1)\ln\frac {1}{\delta}}\right) \tag{24}\] As \(\ln\frac{1}{\delta}>2(d-1)\) for the following range, \(0<\delta<<1\), the following substitution can be made to upper bound the RHS expression in (23). \[\epsilon\geq\frac{1}{N}\left(r+\ln\frac{1}{\delta}+\sqrt{2r\ln\frac{1}{\delta }+\ln\frac{1}{\delta}\ln\frac{1}{\delta}}\right) \tag{25}\] (25) is equivalent to (15). (16) is produced by asserting Assumption 1 as well as the proposition in [17] regarding the validity of predictions made using conformal predictors, that conformal set predictions are made with \(1-\beta\) confidence. Therefore, the bound on \(\epsilon\) in (15) can be further tightened using the expected value of the erroneous null set detections. Of the \(r\) recorded constraint violations, it is expected that there are \(\beta r\) errors and, therefore, the value of \(r\) used within (15) can be adjusted by a factor of \(1-\beta\) to derive (16), restated in (26) with the adjusted error bound \(\epsilon^{*}\). \[\epsilon^{*}\geq\min\left\{1,\frac{1}{N}\left(r(1-\beta)+\ln\frac{1}{\delta}+ \sqrt{\ln^{2}\frac{1}{\delta}+2r(1-\beta)\ln\frac{1}{\delta}}\right)\right\} \tag{26}\] Note that \(\epsilon^{*}\leq\epsilon\) because the RHS of (26) is less than the RHS of (25) for \(\beta\in[0,1]\). Therefore, while (26) is being used to substitute (25) within this study, using (25) is a valid approach to determining the OOD detection failure rate as it presents an upper bound for (26). ## IX Acknowledgements This work was supported in part by the AISG research grant AISG2-RP-2020-017.
2310.15478
How to Train Your Neural Control Barrier Function: Learning Safety Filters for Complex Input-Constrained Systems
Control barrier functions (CBF) have become popular as a safety filter to guarantee the safety of nonlinear dynamical systems for arbitrary inputs. However, it is difficult to construct functions that satisfy the CBF constraints for high relative degree systems with input constraints. To address these challenges, recent work has explored learning CBFs using neural networks via neural CBF (NCBF). However, such methods face difficulties when scaling to higher dimensional systems under input constraints. In this work, we first identify challenges that NCBFs face during training. Next, to address these challenges, we propose policy neural CBF (PNCBF), a method of constructing CBFs by learning the value function of a nominal policy, and show that the value function of the maximum-over-time cost is a CBF. We demonstrate the effectiveness of our method in simulation on a variety of systems ranging from toy linear systems to an F-16 jet with a 16-dimensional state space. Finally, we validate our approach on a two-agent quadcopter system on hardware under tight input constraints.
Oswin So, Zachary Serlin, Makai Mann, Jake Gonzales, Kwesi Rutledge, Nicholas Roy, Chuchu Fan
2023-10-24T03:15:15Z
http://arxiv.org/abs/2310.15478v3
# How to Train Your Neural Control Barrier Function: Learning Safety Filters for Complex Input-Constrained Systems ###### Abstract Control barrier functions (CBFs) have become popular as a safety filter to guarantee the safety of nonlinear dynamical systems for arbitrary inputs. However, it is difficult to construct functions that satisfy the CBF constraints for high relative degree systems with input constraints. To address these challenges, recent work has explored learning CBFs using neural networks via neural CBFs (NCBFs). However, such methods face difficulties when scaling to higher dimensional systems under input constraints. In this work, we first identify challenges that NCBFs face during training. Next, to address these challenges, we propose policy neural CBFs (PNCBFs), a method of constructing CBFs by learning the value function of a nominal policy, and show that the value function of the maximum-over-time cost is a CBF. We demonstrate the effectiveness of our method in simulation on a variety of systems ranging from toy linear systems to an F-16 jet with a 16-dimensional state space. Finally, we validate our approach on a two-agent quadcopter system on hardware under tight input constraints. The project page can be found at [https://mit-realm.github.io/pncbf](https://mit-realm.github.io/pncbf). ## I Introduction and Related Works Techniques employing control barrier functions (CBFs) are powerful tools for safety-critical control of dynamical systems. In particular, CBFs can be used as a safety filter to maintain and certify the safety of any system under arbitrary inputs. This safety guarantee is crucial in order to give users the needed confidence for greater adoption of robotics in safety-critical domains such as autonomous driving [1], surgical robotics [2], and urban air mobility [3]. Despite their theoretical advantages, constructing CBFs in practice remains difficult. While it is easy to construct a _candidate_ CBF, it is much harder to verify the conditions necessary to enjoy the safety guarantees of a _valid_ CBF for systems with input constraints. Consequently, input constraints are often ignored when using CBFs in practice [4, 5, 6, 7]. **CBFs for High Relative Degree Systems under Input Constraints.** To address the above challenges with CBFs, recent works try to simplify the construction of valid CBFs for high relative degree systems and input constraints. In particular, backup CBFs constrain the system to states where a fallback controller can maintain safety [8, 9, 10, 11]. These approaches, however, either require knowledge of an invariant set for the fallback controller, which is difficult to compute in itself, or require an appropriate predictive horizon that trades between myopic unsafe behavior and performance. **Neural CBFs.** Recently, learning-based approaches have been used to learn neural CBFs (NCBFs) that approximate CBFs using neural networks [12], part of a more general trend of learning neural certificates [13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. Owing to the flexibility of neural networks, NCBFs have been extended to handle parametric uncertainties [23], obstacles with unknown dynamics [24] and multi-agent control [25, 26]. However, many existing NCBF approaches do not consider input constraints. Recent work has examined incorporating input constraints into NCBFs [27], but this approach requires solving a minimax problem that can be brittle to solve in practice. **Reachability Analysis.** Reachability analysis provides a powerful tool for analyzing the safety of dynamical systems.
Hamilton-Jacobi (HJ) reachability analysis computes the largest control-invariant set [28] and is often computed using grid-based methods [29]. Recent works connect the HJ value function with CBFs [30], providing an alternative to Sum-of-Squares programming for automated synthesis. However, the curse of dimensionality limits the practical applicability of grid-based solvers to systems with state dimension smaller than \(5\) [29].

**Contributions.** We summarize our contributions as follows:
1. We identify challenges that existing training methods for Neural CBFs face under input constraints.
2. We show that the policy value function is a valid CBF. Using this insight, we propose learning Neural CBFs via the policy value function, thereby bypassing the challenges that previous neural CBF approaches face.
3. We demonstrate our approach with extensive simulation experiments and show that our method can yield much larger control-invariant sets and can scale to higher-dimensional systems than current state-of-the-art methods.
4. We validate our approach on a two-agent quadcopter system on hardware.

Fig. 1: We train a Policy Neural CBF (PNCBF) by learning the value function \(V^{h,\pi}\) for a given nominal policy offline. The zero sublevel set of \(V^{h,\pi}\) contains all states from which the nominal policy remains safe. The PNCBF can then be used online as a safety filter to ensure the safety of any potentially unsafe nominal policy. Our method avoids the pitfalls of previous Neural CBF approaches and can scale to high-dimensional systems such as the F-16.

## II Preliminaries

### _Problem Definition_

We consider continuous-time, control-affine dynamics \[\dot{x}=f(x)+g(x)u, \tag{1}\] where \(x\in\mathcal{X}\subseteq\mathbb{R}^{n_{x}}\), \(u\in\mathcal{U}\subseteq\mathbb{R}^{n_{u}}\), and \(f,g\) are locally Lipschitz continuous functions. Let \(\mathcal{A}\subset\mathcal{X}\) denote the set of unsafe states to avoid. We state the safe controller synthesis problem below.

**Problem 1** (Safe Controller Synthesis).: _Given the system (1) and an avoid set \(\mathcal{A}\subset\mathcal{X}\), find a control policy \(\pi:\mathcal{X}\to\mathcal{U}\) that prevents the system from entering the avoid set \(\mathcal{A}\), i.e.,_ \[x_{0}\not\in\mathcal{A}\implies x_{t}\not\in\mathcal{A},\quad\forall t\geq 0. \tag{2}\]

Often, we have a nominal policy \(\pi_{\mathrm{nom}}:\mathcal{X}\to\mathcal{U}\) that is performant but may not be safe. In this case, we want our policy \(\pi\) to _minimally modify_ \(\pi_{\mathrm{nom}}\) to maintain safety.

**Problem 2** (Safety Filter Synthesis).: _Solve Problem 1 with the additional desire that \(\pi\) is close to \(\pi_{\mathrm{nom}}\). Specifically, we wish to solve the optimization problem_ \[\min_{\pi}\ \|\pi-\pi_{\mathrm{nom}}\| \tag{3a}\] \[\mathrm{s.t.}\quad x_{t}\not\in\mathcal{A},\quad\forall t\geq 0, \tag{3b}\] _where \(\|\cdot\|\) is some distance metric._

In this work, we focus on solving Problem 2 with Control Barrier Functions (CBFs), as we describe below.

### _Safety Filter Synthesis with Control Barrier Functions_

We focus on (zeroing) CBFs [31, 32, 33] as a solution to Problem 2.
Specifically, let \(B:\mathcal{X}\to\mathbb{R}\) be a continuously differentiable function, \(\alpha:\mathbb{R}\to\mathbb{R}\) be an extended class-\(\mathcal{K}\) function\({}^{1}\), and\({}^{2}\) \[B(x)>0,\quad\forall x\in\mathcal{A}, \tag{4a}\] \[B(x)\leq 0\implies\inf_{u\in\mathcal{U}}L_{f}B(x)+L_{g}B(x)u\leq-\alpha\big{(}B(x)\big{)}, \tag{4b}\] where \(L_{f}B\coloneqq\nabla B^{\mathsf{T}}f\), \(L_{g}B\coloneqq\nabla B^{\mathsf{T}}g\).

Footnote 1: Extended class-\(\mathcal{K}\) is the set of continuous, strictly increasing functions \(\alpha\) such that \(\alpha(0)=0\).

Footnote 2: Some works [33] define the unsafe set to be the zero superlevel set of \(B\), while some use the sublevel set. We use the former definition in this work.

Then, \(B\) is a CBF, and any control \(u\) that satisfies the _descent condition_ (4b) renders the sublevel set of \(B\), \(\{x\in\mathcal{X}\mid B(x)\leq 0\}\), forward-invariant, i.e., any trajectory starting from within this set remains in this set under such a choice of \(u\). In particular, since (4b) is a linear constraint on \(u\), we can solve Problem 2 using the following Quadratic Program (QP): \[\min_{u\in\mathcal{U}}\ \|u-\pi_{\mathrm{nom}}(x)\|^{2}\quad\mathrm{s.t.}\quad L_{f}B(x)+L_{g}B(x)u\leq-\alpha(B(x)) \tag{5}\]

**Challenges with CBF synthesis.** Define a _candidate_ CBF to be any function that satisfies (4a). If \(\mathcal{U}\) is unbounded, since (4b) is _linear_ in \(u\), any candidate CBF \(B\) also satisfies (4b) if \(L_{g}B\not\equiv 0\). When a system is of high relative degree (i.e., \(L_{g}B(x)\equiv 0\)), Higher Order CBFs (HOCBFs) [34, 35] can be used. The main challenge to proving that a candidate CBF also satisfies (4b) occurs for bounded control sets \(\mathcal{U}\) due to actuator limits [11]. Finding a function \(B\) such that (4b) verifiably holds for _arbitrary_ nonlinear dynamics and avoid sets \(\mathcal{A}\) is a hard problem that can be solved using Hamilton-Jacobi (HJ) reachability [36]. However, HJ reachability is computationally expensive and impractical for systems with more than \(5\) dimensions [29]. Consequently, many works that propose CBFs do not consider actuator limits [4, 5, 6, 7]. One can try to use HOCBFs for automated CBF synthesis of high relative degree systems. As we show next, this can be problematic in the presence of input constraints.

**Challenges of HOCBFs on the Double Integrator.** Consider the double integrator \(\dot{p}=v,\ \dot{v}=a\), the simplest high relative degree system, with the safety constraint \(p\geq 0\). The HOCBF _candidate_ \(B(x)=-v-\alpha p\) is a _valid_ CBF if and only if \(\alpha=0\) (i.e., disallowing all negative velocities). All other choices of \(\alpha\) intersect the true unsafe region and violate (4b) (see Fig. 2).

Fig. 2: **HOCBF on the double integrator.** On a double integrator with box control constraints \(|u|\leq 1\) and the constraint \(p\geq 0\), applying different values of \(\alpha\) to the HOCBF candidate \(B(x)=-v-\alpha p\) results in different boundaries of the resulting safety filter. However, the only _valid_ choice that satisfies the CBF descent condition (4b) is \(\alpha=0\) (green), which disallows any negative velocities and is overly conservative. All other choices of \(\alpha\) intersect the true unsafe region (gray dotted) at some point and violate (4b).

### _Neural CBFs_

To address the challenges of designing a valid CBF by hand, recent works have proposed learning a CBF using neural networks [23, 24, 27]. A naive approach to approximating the CBF \(B\) with a neural network approximation \(B_{\theta}\) is to encourage satisfying
the CBF conditions (4) by minimizing a loss \(L\) that penalizes constraint violations over samples of the state space, i.e., \[L_{\mathrm{unsf}}(\theta,x)=\left[\epsilon_{\mathrm{unsf}}-B_{\theta}(x)\right]_{+}, \tag{6a}\] \[L_{\mathrm{desc}}(\theta,x)=\left[L_{f}B_{\theta}(x)+L_{g}B_{\theta}(x)\pi(x)+cB_{\theta}(x)\right]_{+}, \tag{6b}\] \[L_{1}(\theta)=\sum_{x\in\mathcal{X}_{\mathrm{unsf}}}L_{\mathrm{unsf}}(\theta,x)+\sum_{x\in\mathcal{X}}L_{\mathrm{desc}}(\theta,x), \tag{6c}\] where \(\epsilon_{\mathrm{unsf}}>0\) for the strict inequality in (4a), \(\alpha(\cdot)\) is chosen to be linear \(x\mapsto cx\) for some \(c>0\), and \(\mathcal{X}_{\mathrm{unsf}}\) denotes some superset of \(\mathcal{A}\). Successful minimization (i.e., zero loss) of (6a) implies (4a), and similarly for (6b) and (4b). However, one problem is that the minimizer of (6c) may have a small or even empty forward-invariant set. For example, let \(\hat{B}\) be an exponential control-Lyapunov function, i.e., \[\inf_{u\in\mathcal{U}}L_{f}\hat{B}(x)+L_{g}\hat{B}(x)u+\hat{c}\hat{B}(x)\leq 0,\quad\hat{c}>c. \tag{7}\] Then, \(\hat{B}+d\) for all \(d>0\) small enough will also have zero loss on (6c). However, the forward-invariant set of \(\hat{B}+d\) is the empty set, and hence is not a useful CBF. To address this challenge, many previous works additionally consider a loss term that enforces that \(B_{\theta}\leq 0\) on some safe set \(\mathcal{X}_{\mathrm{safe}}\) [12, 24, 25, 37], i.e., \[L_{\mathrm{safe}}(\theta,x)=\left[B_{\theta}(x)\right]_{+}, \tag{8}\] \[L_{2}(\theta)=\sum_{x\in\mathcal{X}_{\mathrm{safe}}}L_{\mathrm{safe}}(\theta,x)+L_{1}(\theta). \tag{9}\] However, the difficulty here is in finding the set \(\mathcal{X}_{\mathrm{safe}}\). In [18, 25], this is taken to be the set of initial conditions. In [12, 23], this is assumed to be available, but no details are given for how this set is found in practice. In [24], this set is evaluated by rolling out the nominal policy for a fixed number of timesteps. For all these cases, it is not clear whether a valid CBF \(B_{\theta}\) exists such that \(B_{\theta}<0\) on \(\mathcal{X}_{\mathrm{safe}}\). The largest-possible \(\mathcal{X}_{\mathrm{safe}}\) from which a valid CBF can still be found can be obtained using reachability analysis [29]. However, the solution of the HJ reachability problem yields a CBF directly, rendering the NCBF unnecessary. Choosing an \(\mathcal{X}_{\mathrm{safe}}\) that is too large compromises the safety of the resulting CBF, while choosing an \(\mathcal{X}_{\mathrm{safe}}\) that is too small often results in a forward-invariant set that is too small (see Fig. 3).

Fig. 3: **Over and Underestimation of the Safe Set.** When the safe set (in green) is _overestimated (Left)_, there are no valid CBFs that can satisfy all the loss terms in (9) simultaneously. Consequently, the Neural CBF \(B_{\theta}\) has a zero level set (boundary in purple) that is larger than the true control-invariant set at the expense of violating the descent condition (4b). In contrast, when the safe set is _underestimated (Right)_, we can obtain a valid but overly conservative CBF. For comparison, the true unsafe set is shaded in gray with dots.

An attempt to combat this issue is presented in [27], where a regularization term is added to the loss function to enlarge the sublevel set of the learned CBF \(B_{\theta}\). One issue with this approach is that this regularization term takes a nonzero value for any CBF (including the CBF with the largest zero sublevel set). Hence, the coefficient on this regularization term induces a trade-off between the size of the sublevel set and satisfaction of the CBF constraints (4a) and (4b). We provide comparisons against this method in Section IV.
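To make the loss construction in (6) and (8) concrete, the following is a minimal PyTorch-style sketch of these penalty terms. It is not the authors' implementation; the network `B_theta`, the dynamics functions `f` and `g`, the policy `pi`, and the batching conventions are illustrative assumptions.

```python
import torch

def ncbf_losses(B_theta, f, g, pi, x_all, x_unsafe, x_safe,
                eps_unsf=0.1, c=1.0):
    """Sketch of the NCBF penalty terms (6a)-(6c) and (8)-(9).

    Assumed shapes: states are (batch, n_x); f(x) -> (batch, n_x);
    g(x) -> (batch, n_x, n_u); pi(x) -> (batch, n_u); B_theta(x) -> (batch, 1).
    """
    relu = torch.nn.functional.relu

    # (6a): B_theta should be at least eps_unsf on unsafe samples.
    loss_unsf = relu(eps_unsf - B_theta(x_unsafe)).mean()

    # (6b): descent condition  L_f B + L_g B * pi(x) + c * B <= 0  on all samples,
    # with the Lie derivatives obtained from dB/dx via automatic differentiation.
    x = x_all.clone().requires_grad_(True)
    B = B_theta(x).squeeze(-1)
    gradB = torch.autograd.grad(B.sum(), x, create_graph=True)[0]   # dB/dx, (batch, n_x)
    xdot = f(x) + torch.einsum("bij,bj->bi", g(x), pi(x))           # f(x) + g(x) pi(x)
    Bdot = (gradB * xdot).sum(dim=-1)
    loss_desc = relu(Bdot + c * B).mean()

    # (8): B_theta should be non-positive on the assumed safe set X_safe.
    loss_safe = relu(B_theta(x_safe)).mean()

    # (9): total loss L_2 = L_safe + L_1.
    return loss_unsf + loss_desc + loss_safe
```

As the discussion above makes clear, the usefulness of the resulting CBF hinges on how the assumed safe set (here `x_safe`) is chosen.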
## III Policy Neural CBFs

To bypass the above challenges of training a Neural CBF, we now propose policy neural CBFs (PNCBFs), a different approach that does not require knowledge of the safe set but can still recover a large forward-invariant set.

### _Constructing CBFs via Policy Evaluation_

We assume that the avoid set \(\mathcal{A}\) can be described as the superlevel set of some continuous function \(h:\mathcal{X}\rightarrow\mathbb{R}\), i.e., \[\mathcal{A}=\left\{\,x\in\mathcal{X}\mid h(x)>0\,\right\}. \tag{10}\] Let \(\pi:\mathcal{X}\rightarrow\mathcal{U}\) be an arbitrary policy, and let \(x_{t}^{\pi}\) denote the resulting state at time \(t\) following \(\pi\). Consider the following _maximum-over-time_ cost function \[V^{h,\pi}(x_{0})\coloneqq\sup_{t\geq 0}h(x_{t}^{\pi}). \tag{11}\] It can be shown that \(V^{h,\pi}\) satisfies the following Hamilton-Jacobi PDE in the viscosity sense [38]: \[\max\left\{h(x)-V^{h,\pi}(x),\ \nabla V^{h,\pi}(x)^{\mathsf{T}}\left(f(x)+g(x)\pi(x)\right)\right\}=0. \tag{12}\] This immediately gives us the following two inequalities \[V^{h,\pi}(x)\geq h(x), \tag{13a}\] \[\nabla V^{h,\pi}(x)^{\mathsf{T}}\left(f(x)+g(x)\pi(x)\right)\leq 0, \tag{13b}\] from which we have the following theorem.

**Theorem 1** (Policy value function is a CBF).: _The policy value function \(V^{h,\pi}\) is a CBF for (1) for any \(\pi\) and \(\alpha>0\)._

Proof.: (13a) and (10) imply (4a). Next, (13b) implies (4b) for _any_ choice of \(\alpha\), since \(V^{h,\pi}(x)\leq 0\) implies that \[\nabla V^{h,\pi}(x)^{\mathsf{T}}\left(f(x)+g(x)\pi(x)\right)\leq 0\leq-\alpha(V^{h,\pi}(x)).\qed\]

Intuitively, the policy value function \(V^{h,\pi}\) gives us an upper bound on the worst constraint violation \(h\) in the future under the optimal policy, since using \(\pi\) guarantees that \(h\) will be at most \(V^{h,\pi}\), and the optimal policy will do no worse. Moreover, by following the negative gradient of \(V^{h,\pi}\), we can move to states where following \(\pi\) leads to a lower maximum value of \(h\), i.e., safer states (see Fig. 4). Consequently, this provides us with a method to construct CBFs via policy evaluation of _any_ policy \(\pi\).

Fig. 4: **Understanding the policy value function.** (_Left_) Trajectories from a nominal policy \(\pi\) started from three values of \(x_{0}\). (_Right_) The corresponding policy value functions \(V^{h,\pi}\) along each trajectory. \(V^{h,\pi}\) is non-increasing along (any) trajectory of \(\pi\) and is a CBF. Hence, the gradients of \(V^{h,\pi}\) inform the CBF-QP on how to improve safety using the “knowledge” of \(\pi\).

To make this more concrete, consider the dynamic-programming form of (11): \[V^{h,\pi}(x_{0})=\max\left\{\sup_{0\leq s\leq t}h(x_{s}),\ V^{h,\pi}(x_{t})\right\}. \tag{14}\] Given a nominal policy \(\pi\), we can collect rollouts of the system and store tuples \((x_{0},\max_{0\leq t\leq T}h(x_{t}),x_{T})\). We then minimize the policy evaluation loss on a neural network approximation \(V^{h,\pi}_{\theta}\) of the policy value function: \[L(\theta)=\left\|V^{h,\pi}_{\theta}(x_{0})-\max\left\{\max_{0\leq t\leq T}h(x_{t}),\ V^{h,\pi}_{\theta}(x_{T})\right\}\right\|^{2}. \tag{15}\] We summarize the above for training PNCBFs in Algorithm 1. After training, we can use \(V^{h,\pi}_{\theta}\) via the CBF-QP (5) to minimally modify the (unsafe) nominal policy to maintain safety.
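As a concrete illustration, the following is a minimal sketch of the rollout collection and regression step behind (15) (summarized in Algorithm 1 below), together with one solve of the CBF-QP (5) used after training. This is not the authors' implementation: the integrator `step`, the value network `V_theta`, the batching conventions, and the use of `cvxpy` for the QP are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np
import torch

def collect_tuple(step, pi, h, x0, T, dt):
    """Roll out the nominal policy pi from x0 and return (x0, max_t h(x_t), x_T),
    where `step(x, u, dt)` integrates the dynamics (1) for one time step."""
    x, hmax = x0, h(x0)
    for _ in range(int(T / dt)):
        x = step(x, pi(x), dt)
        hmax = max(hmax, h(x))
    return x0, hmax, x

def pncbf_loss(V_theta, x0, hmax, xT):
    """Regression loss (15): V(x0) should match max{ max_t h(x_t), V(x_T) }.
    x0, xT: (batch, n_x) tensors; hmax: (batch,) tensor."""
    with torch.no_grad():                       # bootstrapped (stop-gradient) target
        target = torch.maximum(hmax, V_theta(xT).squeeze(-1))
    return ((V_theta(x0).squeeze(-1) - target) ** 2).mean()

def cbf_qp_filter(V_val, V_grad, f_x, g_x, u_nom, u_max, alpha=1.0):
    """One solve of the CBF-QP (5) at a state x with box input constraints,
    using the learned value function as the CBF. Inputs are numpy arrays."""
    u = cp.Variable(u_nom.shape[0])
    LfV = float(V_grad @ f_x)                   # dV/dx . f(x)
    LgV = V_grad @ g_x                          # dV/dx . g(x), shape (n_u,)
    prob = cp.Problem(
        cp.Minimize(cp.sum_squares(u - u_nom)),
        [LfV + LgV @ u <= -alpha * V_val, cp.abs(u) <= u_max],
    )
    prob.solve()
    return u.value
```

The stop-gradient target above corresponds to a fitted-value-style realization of line 4 of Algorithm 1; in practice a target network and the discounted target (19) introduced in Section III-C might also be used.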
```
1: input: Nominal Policy \(\pi\)
2: Collect dataset of tuples \((x_{0},\max_{0\leq t\leq T}h(x_{t}),\,x_{T})\)
3: while not converged do
4:     Minimize loss (15) over samples from the dataset
5: end while
```
**Algorithm 1** Policy Neural CBF

**Viewing policy CBFs as policy distillation.** One can interpret policy value functions as policy distillation. More specifically, when \(V^{h,\pi}\) is used as a safety filter in the CBF-QP (5) with any _new_ nominal policy \(\tilde{\pi}\), the forward-invariant set of the resulting CBF-QP controller will be no smaller than that of the original nominal policy \(\pi\), as we show next.

**Theorem 2**.: _Let \(V^{h,\pi}\) be a policy value function and let \(\tilde{\pi}\) be some other policy. Then, the forward-invariant set under the CBF-QP with \(V^{h,\pi}\) and \(\tilde{\pi}\) is a superset of the forward-invariant set under \(\pi\)._

Proof.: The forward-invariant set under \(\pi\) is exactly the zero sublevel set of \(V^{h,\pi}\), \(\{x\mid V^{h,\pi}(x)\leq 0\}\). Since \(V^{h,\pi}\) is a CBF, the CBF-QP controller will render this set forward-invariant under any _new_ nominal policy \(\tilde{\pi}\).

**Relationship with Hamilton-Jacobi Reachability.** The policy CBF is also closely related to HJ reachability. As noted in [30, 39, 40], the (optimal) HJ value function is a CBF. This is equivalent to the policy CBF (11) with the optimal policy \(\pi^{*}\). The policy CBF can thus be seen as a _relaxation_ of optimality that remains a CBF. For neural networks, policy evaluation can be more attractive than optimization, which requires techniques such as deep reinforcement learning (e.g., [41, 42]) that can be more unstable and computationally expensive. However, as a middle ground, we next show how policy iteration can be applied to PNCBFs to achieve fast convergence without resorting to a full deep reinforcement learning setup.

### _Policy Iteration with PNCBFs_

The choice of the nominal policy \(\pi\) is crucial. In light of Theorem 2, the forward-invariant set of the resulting PNCBF controller (via the CBF-QP (5)) is only guaranteed to be no smaller than that of \(\pi\). Hence, a poor choice of \(\pi\) can result in a small forward-invariant set, resulting in a poor CBF. To resolve this problem, we use the insight that the policy value function (11) is also a (shifted) Lyapunov function. Hence, when using \(V^{h,\pi}\) with the CBF-QP (5), we can hope that the resulting forward-invariant set will be larger than that of the original policy \(\pi\). Nevertheless, this new controller will be no worse than the original policy \(\pi\). Thus, we propose to take the PNCBF controller as the _new_ nominal policy \(\pi^{+}\) to train a new PNCBF, and iterate this procedure (see Fig. 5). By treating the application of a CBF-QP as an analytical _policy improvement_ and the computation of \(V^{h,\pi}\) as policy evaluation, we can interpret this procedure as _policy iteration_, which has been studied extensively for the normal sum-over-time cost structure [43] where it enjoys guaranteed convergence at a superlinear rate under certain assumptions [44]. While it is not clear if this convergence result holds for the maximum-over-time cost structure, we empirically observe fast convergence in only a few iterations, as we show in Section IV-C. Also, we observe that using a policy value function with a non-zero discount factor can help with convergence when \(\pi\) is far from optimal.
We leave an analysis of the interaction between the discount factor and convergence rates to future work.

### _Discounting and Contraction_

One problem with using (15) directly as a loss function is that there are undesirable solutions that satisfy this recursive equation. For example, \(V^{h,\pi}(x)=a\) for \(a\) large enough minimizes (15), but is clearly not a solution to (11). This is similar to the case in Markov Decision Processes where the _undiscounted_ value iteration is not contractive [45]. Hence, instead of (11), we consider the following _discounted_ cost, for \(\lambda\geq 0\): \[V^{h,\pi}_{\lambda}(x_{0})\coloneqq\sup_{t\geq 0}\left\{\tilde{h}(x_{t},\lambda)+e^{-\lambda t}h(x_{t})\right\}, \tag{16}\] \[\tilde{h}(x_{t},\lambda)\coloneqq\int_{0}^{t}\lambda e^{-\lambda s}h(x_{s})\,\mathrm{d}s. \tag{17}\] Taking \(\lambda=0\) recovers the undiscounted problem (11), while \(\lambda\to\infty\) yields the solution \(V^{h,\pi}_{\infty}=h\). Hence, different choices of \(\lambda\) can be seen as _implicitly_ choosing the horizon considered for safety. Similar to (12), it can also be shown that \(V^{h,\pi}_{\lambda}\) satisfies the following Hamilton-Jacobi PDE in the viscosity sense (suppressing arguments for conciseness) [38]: \[\max\left\{h-V^{h,\pi}_{\lambda},\ \nabla V^{h,\pi\,\mathsf{T}}_{\lambda}(f+g\pi)-\lambda(V^{h,\pi}_{\lambda}-h)\right\}=0, \tag{18}\] as well as the following dynamic programming equation \[V_{\lambda}^{h,\pi}(x_{0})=\max\left\{\sup_{0\leq s\leq t}\tilde{h}(x_{s},\lambda),\ \tilde{h}(x_{t},\lambda)+e^{-\lambda t}V_{\lambda}^{h,\pi}(x_{t})\right\}. \tag{19}\] While solutions to the PDE (18) no longer satisfy the CBF constraint (4b) for \(\lambda>0\), they do prevent the constant solution from being a minimizer of the corresponding discounted loss. Hence, in practice, we use (19) instead of (15). We start with a small value of \(\lambda\) to avoid premature convergence to the constant solution, and gradually decrease it to \(0\) as training progresses.

**Verification of PNCBF.** We stress that, without verifying that the learned PNCBF satisfies the descent condition (4b), we cannot claim that the PNCBF satisfies the CBF conditions nor claim any safety guarantees. Verification of NCBFs can be performed using neural-network verification tools [46], sampling [47], or a generalization error bound [25]. Nevertheless, as we show next, empirical results show that our proposed method vastly improves the volume of both the forward-invariant set and the set where the safety filter is permissive to nominal controls compared to baseline methods, including an (unverified) HOCBF candidate.

## IV Simulation Experiments

To study the performance of PNCBFs, we perform a series of simulation experiments on high relative degree systems under box control constraints. We first investigate the qualitative behavior of PNCBFs on simple low-dimensional systems to gain insight into the behavior of the method.
Next, we demonstrate the scalability of PNCBFs to high-dimensional systems by applying it to a ground collision avoidance problem on the F-16 fighter jet [48, 49]. Finally, we study the behavior of policy iteration on a two-agent quadcopter system to demonstrate the ability of PNCBFs to improve the safety of an initially unsafe nominal policy.

**Baselines.** We compare against the following safety filters.
* **Neural CBF (NCBF) [12, 23]:** Learning a Neural CBF using (9). We choose the safe set to be the set containing the equilibrium point under the nominal policy \(\pi\).
* **Non-Saturating Neural CBF (NSCBF) [27]:** A recent approach that explicitly tackles the problem of input constraints for CBFs by learning a Neural CBF. However, instead of enforcing the derivative condition (4b) over the entire state-space as in [23], this is only enforced on the boundary as in barrier certificates [50].
* **Handcrafted Candidate CBF (CBF) [34, 35]:** We construct a _candidate_ CBF via a Higher-Order CBF on \(h\) without considering input constraints.
* **Approximate MPC-based Predictive Safety Filter (MPC) [51]:** A trajectory optimization problem is solved, imposing the safety constraints while penalizing deviations from the nominal policy. We do not assume access to a known forward-invariant set and hence do not impose this terminal constraint.
* **Sum-of-Squares Synthesis (SOS) [52]:** When the dynamics are polynomial, a sequence of convex optimization problems can be solved to construct a CBF.

All neural networks are trained until convergence and use the same architecture (3 layers of \(256\) neurons with \(\tanh\) activations). For PNCBFs, we perform at most \(3\) iterations of policy iteration.

### _Qualitative Behavior on a Double Integrator, Segway_

We first perform a general comparison between the different methods on a double integrator and a Segway, two simple systems that can be easily visualized. On the double integrator, safety is defined via position bounds (\(|p|\leq 1\)), while the Segway requires the handlebars to stay upright (\(|\theta|\leq 0.3\pi\)) while remaining within position bounds (\(|p|\leq 2.0\)). During testing, we use a different nominal policy (i.e., zero control) than the one used during training for PNCBFs. We visualize the results in Fig. 6, plotting the region of the state space from where the safety filter preserves safety (Safe Region) and where the nominal policy can influence the output of the safety filter (Filter Boundary). For CBF-based methods, the filter boundary corresponds to the zero level set of the CBF. On the double integrator, all methods induce forward-invariance on some region of the state space but only our method is both maximally safe and permissive. This trend is even more pronounced on the Segway, where our method is able to find a significantly larger safe set and filter boundary.

Fig. 5: **Policy iteration on a double integrator.** Starting with a suboptimal nominal policy \(\pi\), we learn the value function \(V^{h,\pi}\). By treating the CBF-QP of the learned \(V^{h,\pi}\) as a new nominal policy, we can repeat this process to perform _policy iteration_. Here, only two iterations are needed to obtain a CBF \(V^{h,\pi}\) that almost covers the true control-invariant set. The final CBF-QP controller maintains safety (blue line) under any potentially unsafe nominal policy (red line).

### _Scalability to high-dimensional systems with F-16_

Next, we explore the scalability of PNCBFs to high-dimensional systems.
We consider a ground collision avoidance example involving the F-16 fighter jet [48, 49]. Since this system is not control-affine in the throttle, we leave the throttle as the output of a P controller, resulting in a \(16\)-dimensional state space and a \(3\)-dimensional control space. We define safety as a box constraint on the aircraft's altitude. During testing, we apply an adversarial nominal policy that commands the aircraft to dive nose-first into the ground. We visualize the results in Fig. 6, showing a \(2\)D slice of the state space. Even on a \(16\)-dimensional state space, we observe that PNCBFs are able to recover a significantly larger region of the safe set compared to other baseline methods.

### _Performance of Policy Iteration_

Finally, to investigate the ability of PNCBFs to learn a safe and permissive safety filter from an initially unsafe nominal policy, we consider a two-agent quadcopter system with a \(12\)-dimensional state and \(4\)-dimensional control space that must stay within communication radius while avoiding collisions with a dynamic obstacle. We model each quadcopter as a double integrator with a velocity tracking controller. The obstacle is assumed to move with constant velocity and direction. The nominal policy moves each quadcopter anticlockwise around a circle, ignoring all constraints. The obstacle can achieve higher velocities than each quadcopter. Additionally, the velocity tracking controller has a slow response time. Hence, the quadcopters must react well in advance to avoid collisions with the obstacle while staying within communication radius, resulting in a problem with complex safety constraints despite the simple dynamics. We visualize the results in Fig. 7. Although the nominal policy has a high unsafe fraction, policy iteration is able to significantly reduce the unsafe fraction to near \(0\) in only two iterations, representing a \(90\%\) reduction in unsafe states compared to the next best method.

## V Hardware Experiments

We further validate our approach in a two-agent quadrotor hardware experiment mirroring the setup from Section IV-C. We use two custom drones and use Boston Dynamics's Spot as a dynamic obstacle. Velocity setpoints are sent to the drones through the PX4 flight stack. We visualize the results in Fig. 8, where the PNCBF filters the drone's unsafe nominal policy to avoid collisions with Spot while remaining within communication radius. For more details, see the supplemental video.

## VI Conclusion

By learning the policy value function, we are able to learn Neural CBFs for high relative degree systems under input constraints. Extensive simulation experiments show that our method can yield much larger forward-invariant sets and can be deployed on hardware. One limitation of our method is that it requires an accurate dynamics model. Model errors may cause the learned safety filter to be unsafe on the real system. While this was not a major issue in our hardware experiments, we plan to investigate methods to improve robustness to model errors in future work, such as by learning _robust_ CBFs as in [23].

Fig. 6: **Safe set and filter boundary on the double integrator, Segway, and F-16.** We plot the initial states from where the safety filter can preserve safety (Safe Region), and states where the nominal policy can influence the output of the safety filter (Filter Boundary). The true unsafe region is shaded in gray dots.
On the double integrator, ours is the only method that is both maximally safe and permissive. For more complex systems, the performance gap between our method and baseline methods becomes more pronounced, showcasing the benefit of PNCBFs on high-dimensional nonlinear systems.

Fig. 7: **Policy Iteration on a Two-Agent Quadcopter System.** (_Left_) In only three iterations, we achieve the smallest volume of unsafe states compared to baseline methods, greatly improving the safety of the original unsafe nominal policy. Since we may sample states in the true unsafe region, the optimal safety filter will not be able to achieve safety for all sampled states. (_Right_) An example of an initial state from which only our method is able to filter out unsafe controls from the nominal policy and prevent a collision (highlighted in red) with the moving obstacle.
2310.02104
An empirical study of ChatGPT-3.5 on question answering and code maintenance
Ever since the launch of ChatGPT in 2022, a rising concern is whether ChatGPT will replace programmers and kill jobs. Motivated by this widespread concern, we conducted an empirical study to systematically compare ChatGPT against programmers in question-answering and software-maintaining. We reused a dataset introduced by prior work, which includes 130 StackOverflow (SO) discussion threads referred to by the Java developers of 357 GitHub projects. We mainly investigated three research questions (RQs). First, how does ChatGPT compare with programmers when answering technical questions? Second, how do developers perceive the differences between ChatGPT's answers and SO answers? Third, how does ChatGPT compare with humans when revising code for maintenance requests? For RQ1, we provided the 130 SO questions to ChatGPT, and manually compared ChatGPT answers with the accepted/most popular SO answers in terms of relevance, readability, informativeness, comprehensiveness, and reusability. For RQ2, we conducted a user study with 30 developers, asking each developer to assess and compare 10 pairs of answers, without knowing the information source (i.e., ChatGPT or SO). For RQ3, we distilled 48 software maintenance tasks from 48 GitHub projects citing the studied SO threads. We queried ChatGPT to revise a given Java file, and to incorporate the code implementation for any prescribed maintenance requirement. Our study reveals interesting phenomena: For the majority of SO questions (97/130), ChatGPT provided better answers; in 203 of 300 ratings, developers preferred ChatGPT answers to SO answers; ChatGPT revised code correctly for 22 of the 48 tasks. Our research will expand people's knowledge of ChatGPT capabilities, and shed light on future adoption of ChatGPT by the software industry.
Md Mahir Asef Kabir, Sk Adnan Hassan, Xiaoyin Wang, Ying Wang, Hai Yu, Na Meng
2023-10-03T14:48:32Z
http://arxiv.org/abs/2310.02104v1
# An empirical study of ChatGPT-3.5 on question answering and code maintenance ###### Abstract Ever since the launch of ChatGPT in 2022, a rising concern is whether ChatGPT will replace programmers and kill jobs. Motivated by this widespread concern, we conducted an empirical study to systematically compare ChatGPT against programmers in question-answering and software-maintaining. We reused a dataset introduced by prior work, which includes 130 StackOverflow (SO) discussion threads referred to by the Java developers of 357 GitHub projects. We mainly investigated three research questions (RQs). First, how does ChatGPT compare with programmers when answering technical questions? Second, how do developers perceive the differences between ChatGPT's answers and SO answers? Third, how does ChatGPT compare with humans when revising code for maintenance requests? For RQ1, we provided the 130 SO questions to ChatGPT, and manually compared ChatGPT answers with the accepted/most popular SO answers in terms of relevance, readability, informativeness, comprehensiveness, and reusability. For RQ2, we conducted a user study with 30 developers, asking each developer to assess and compare 10 pairs of answers, without knowing the information source (i.e., ChatGPT or SO). For RQ3, we distilled 48 software maintenance tasks from 48 GitHub projects citing the studied SO threads. We queried ChatGPT to revise a given Java file, and to incorporate the code implementation for any prescribed maintenance requirement. Our study reveals interesting phenomena: For the majority of SO questions (97/130), ChatGPT provided better answers; in 203 of 300 ratings, developers preferred ChatGPT answers to SO answers; ChatGPT revised code correctly for 22 of the 48 tasks. Our research will expand people's knowledge of ChatGPT capabilities, and shed light on future adoption of ChatGPT by the software industry. empirical study, ChatGPT, StackOverflow, Q&A, software maintenance Md Mahir Asef Kabir, Sk Adnan Hassan, Xiaoyin Wang, Ying Wang, Hai Yu, and Na Meng. 2018. An empirical study of ChatGPT-3.5 on question answering and code maintenance. 1, 1 (October 2018), 21 pages. [https://doi.org/XXXXXXXX.XXXXXXXX](https://doi.org/XXXXXXXX.XXXXXXXX) ## 1. Introduction ChatGPT is a large language model-based chatbot developed by OpenAI; it can answer questions and assist users with different tasks, such as composing emails, essays, and code (Zhu et al., 2018). Ever since ChatGPT's launch in November 2022, heated debates have spawned concerning its impacts on industries, society, economy, and regulations (Beng et al., 2016; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019). For instance, many people hold pessimistic attitudes (Beng et al., 2016; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019). They believe that ChatGPT could replace white-collar workers in sectors like education, finance, software, journalism, and graphic design. Meanwhile, some people are optimistic about ChatGPT's role in the software industry, treating it as a coding assistant to help improve programmer productivity (Chen et al., 2017; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019). 
The reasons they mentioned include (1) ChatGPT uses existing code available online to answer questions, but has no creativity to produce new code (Chen et al., 2017); (2) it partially automates coding tasks (Chen et al., 2019); (3) it is only capable of basic coding (Chen et al., 2017). Albeit the heated discussion, little systematic study was done to qualitatively and quantitatively assess ChatGPT's programming capability. To demystify the capability of ChatGPT and predict its potential role in software development, we conducted an empirical study to thoroughly compare ChatGPT against human programmers in two major scenarios: question answering and software maintenance. We chose these scenarios for two reasons. First, question-and-answer (Q&A) is the main interaction mode between software developers and ChatGPT; the provided answers are essential in shaping the art and practices of software in the future. Thus, comparing the answers provided by both ChatGPT and humans will help developers decide how to integrate ChatGPT into their daily programming practices, and how much trust to give to ChatGPT's answers. Second, software maintenance cost can make up to 90% of the whole software development cost (Chen et al., 2017), which means that developers are likely to spend the majority of their time and effort maintaining software. By using ChatGPT, developers can get answers instantly, without waiting for someone to notice their query on a forum. Therefore, our comparison between ChatGPT's responses and developers' responses to the same maintenance requests will demonstrate how probably ChatGPT can replace humans. For our study, we reused a dataset introduced by prior work (Zhu et al., 2018), which includes 130 StackOverflow (SO) discussion threads referenced by 357 GitHub projects. Each thread contains a question and at least an answer, with identifiable accepted or most popular answer(s). Each of the threads has URLs explicitly referenced by at least one GitHub project. Based on the dataset, we investigated the following three research questions (RQs): **RQ1**: _How does ChatGPT compare with programmers when answering technical questions?_ We provided 130 SO questions to ChatGPT, and manually compared ChatGPT answers with the accepted/most popular SO answers. The 130 samples belong to 5 major categories (e.g., optimization and debugging), and cover 9 technical topics (e.g., data processing and testing). **RQ2**: _How do developers consider the differences between ChatGPT answers and SO answers?_ We did a user study with 30 developers. We gave each participant 10 SO questions, 10 accepted or most popular SO answers, and 10 ChatGPT answers. For each question, a participant assessed and compared the two given answers, without knowing the answer providers. **RQ3**: _How does ChatGPT compare with humans when revising code for maintenance requests?_ We mined the 357 GitHub repositories, for any commit that introduces both an SO link and code revision to an existing Java file. We formulated a prompt based on the SO thread and original Java file, asking ChatGPT to produce a new version of that file to integrate the requested feature implementation. We formulated in total 48 prompts to query ChatGPT. Our work provides empirical evidence for many interesting phenomena that are rarely mentioned by prior work. The major findings are summarized as below: * For 75% (97/130) of SO questions, ChatGPT provided better answers than the accepted or most popular SO answers. 
The 97 answers respond to questions of all the styles and topics we examined. It means that ChatGPT is strong at answering various technical questions.
* Among the 300 ratings provided by developers, 68% of ratings imply that ChatGPT answers are better; only 32% of ratings imply SO answers to be better. Compared with other question styles, ChatGPT answers to comprehension questions were rated higher more often.
* Given 48 maintenance tasks, ChatGPT modified Java files for all tasks. However, only 22 of these revised files can be smoothly integrated into GitHub repositories. ChatGPT-3.5 could not correctly maintain software in 54% of cases. Developers need to do extra work to integrate or further improve the code revision recommended by ChatGPT.

## 2. Background: the GitHub\(\rightarrow\)SO dataset used in our study

In our research, one challenge is: _What prompts should we provide to ChatGPT, in order to compare it against developers in a realistic and fair manner?_ To overcome this challenge, we chose to reuse the dataset constructed by Chen et al. (2018) for their recent investigation on how GitHub developers reuse StackOverflow answers (Kumar et al., 2018). The dataset consists of 130 StackOverflow (SO) discussion threads, referenced by Java files in 357 GitHub repositories. Every thread has a question post and at least one answer post; every post is assigned a unique URL. Each of the included Java files references an SO thread via either the question URL, or the URL of any answer belonging to that thread. As shown in Fig. 1, Chen et al. classified the 130 threads into 5 categories based on the question styles. Specifically, _Coding task_ means that askers describe software requirements and seek code solutions. _Optimization_ means that askers provide initial programs satisfying certain requirements, looking for better programs that have either easier implementation, lower runtime overheads, or less platform-specific dependency. _Optimization_ is different from the _Coding task_ category mentioned above, as askers provide initial code implementation. _Comprehension_ is about clarification or comparison of concepts, terms, or APIs (e.g., StringBuilder vs. StringBuffer). _Debugging_ means that askers present their erroneous programs, and solicit debugging feedback. _Others_ captures the miscellaneous questions not covered by any category mentioned above. Additionally, Chen et al. also classified the threads into nine topics based on the technical content.

Figure 1. The taxonomy of SO threads based on both question styles and technical content (Kumar et al., 2018)

According to Fig. 1, SO threads do not distribute evenly among different categories or topics. For example, _Coding task_ is the dominant category, covering 102 of the 130 threads. We intentionally chose this dataset instead of creating a balanced one, because this dataset reflects (i) the major concerns of developers when they discuss technical issues on SO, and (ii) developers' practices of software maintenance under the guidance of an online knowledge base. Both (i) and (ii) can facilitate our evaluation of ChatGPT, as they help us identify the most important questions to ask ChatGPT, and provide good reference answers against which we can compare ChatGPT's outputs.
## 3. Methodology

There are three research questions (RQs) in our study:

**RQ1**: _How does ChatGPT compare with programmers when answering technical questions?_ This RQ examines whether, given questions of different styles (e.g., coding or debugging tasks) or covering different topics (e.g., data structures or algorithms), ChatGPT answers outperform or underperform SO answers.

**RQ2**: _How do developers consider the answer differences between ChatGPT and SO?_ This RQ assesses whether, given ChatGPT answers and SO answers, developers present obvious preferences towards one answer type.

**RQ3**: _How does ChatGPT compare with humans when revising code for maintenance?_ This RQ explores ChatGPT's capability in maintaining software, so it complements both RQ1 and RQ2, which examine ChatGPT's capability in answering SO questions.

### The Experiment Design for RQ1

We located the most popular answer (i.e., the one with the highest vote) and the accepted answer in each discussion thread, and considered such answers as the _best answers provided by expert developers_. Notice that not every thread has an answer labeled as "accepted", while in many threads, the most popular answer is simultaneously the accepted answer. Therefore, we located 1-2 best answers in each thread. For simplicity, this paper refers to these answers as "SO answers". Additionally, we formulated a prompt for each SO question by extracting the title, technical content, and keywords. For instance, given the SO question shown in Fig. 2, we crafted the prompt shown in Fig. 3, to ensure that ChatGPT is exposed to the same amount of question information as humans. After sending 130 prompts to ChatGPT, we collected all generated answers and manually compared those answers with SO answers. To avoid human bias by individual authors, the first two authors independently compared ChatGPT answers with SO answers. For each group of answers under comparison, they rated which answer was better based on the following criteria:
1. Relevance. Does the answer directly respond to the question?
2. Readability. Does the answer clearly explain the solution?
3. Comprehensiveness. Is the solution comprehensive enough to cover all edge cases?
4. Informativeness. Does the answer contain code snippets to concretize the explanation?
5. Reusability. If a certain code is provided by the answer, is it easy to (re)use for developers?

Figure 2. An example SO question post

Figure 3. The prompt we crafted for the SO question

After rating all answer groups, we calculated Cohen's kappa coefficient (Cohen, 1979) to measure the inter-rater reliability. The measured value is 0.746, demonstrating a substantial agreement between the two authors. For the 14 answer groups on which they disagreed with each other, another author was invited to compare all answers and to lead a discussion until all three authors reached a consensus.

### The Experiment Design for RQ2

We obtained IRB approval to conduct a user study, and recruited 30 Java developers by sending email invitations in our institution or spreading information via personal networks. Each participant accessed a Google form to compare ChatGPT answers with SO answers for 10 SO questions, filled in his/her assessment, and submitted the form online. For all answers, we anonymized the sources: no participant knew which answer was generated by ChatGPT or SO. Everyone spent 30-40 minutes to complete the survey, and got a 10-dollar gift card for compensation.
One possible design of the user study could be asking all participants to compare the same 10 answer pairs. However, such a design can only check for people's opinions on 10 answer pairs at most, and would considerably limit the generalizability of our research. Thus, we decided not to experiment in this way. Instead, from the dataset created for RQ1, we randomly selected 32 SO questions, the corresponding best SO answers (i.e., either the accepted or most popular answer), and ChatGPT answers. We ensured that the selected questions cover all five question styles mentioned in Section 2, and that they are not so long as to demotivate participants. As shown in Table 1, our selection includes 18 coding tasks, 7 optimization questions, 4 comprehension questions, 2 debugging questions, and 1 question belonging to the _Other_ category. We formulated 6 sets \(S_{1}\)-\(S_{6}\) with these questions, so that each set has 10 SO questions to cover all 5 categories. We created six different Google forms based on the six question sets, and sent five copies of each Google form to five participants. To sum up, each question set is assessed by 5 participants; the 6 sets are assessed by 30 people and cover in total 32 questions, with some questions shared between sets because those questions are from smaller categories (e.g., _Debugging_ and _Other_). For each pair of answers under comparison, developers responded to the following five questions:
1. Is answer #1 correct?
2. Is answer #2 correct?
3. Does answer #1 have better readability than answer #2 (i.e., more readable)?
4. Does answer #1 have better informativeness than answer #2 (i.e., more informative)?
5. Considering all factors mentioned above, which answer do you prefer?

Q1-Q2 are about the correctness checking for ChatGPT answers and SO answers. Both questions have three options for developers to choose from: correct, unsure, and incorrect. Q3 and Q4 separately focus on the readability and informativeness of answers. Participants were expected to respond to both questions on a five-level Likert scale. To avoid human bias due to sequential ordering, we randomized the order of questions and answers for each set. In scenarios where participants lacked the technical background to assess answers, we permitted them to search for relevant information online (e.g., articles or books). However, we asked them to intentionally avoid the information from SO, as the retrieved SO answers may reveal identities of anonymized answers. We also asked them to avoid querying ChatGPT for answers, as those generated answers may introduce bias towards ChatGPT answers. Finally, we collected and analyzed the submitted Google forms.

### The Experiment Design for RQ3

In the 357 GitHub repositories mentioned by Chen et al. (see Section 2), we found 480 Java files that cite any of the 130 discussion threads. We believe that when a Java file was modified to include an SO discussion thread and code revision, it is very likely that developers modified the program in response to a certain maintenance need and the cited thread captures that need. Based on this insight, we located \(\langle\)SO-thread, code-revision\(\rangle\) pairs in Java files, and simulated maintenance tasks accordingly. By asking ChatGPT to fulfill those tasks and by examining the tool's outputs, we assessed ChatGPT's capability in code maintenance and its potential of replacing developers. Fig. 4 shows our four-step experiment procedure.
Starting from the 357 repositories, we crawled the latest versions to identify 480 Java files that contain 501 SO references, with some files citing multiple SO references in one version or citing distinct SO references in different versions. We then refined the data by applying three filters. First, we checked whether any of the Java files had the SO link added to its initial version. If so, we discarded the files because we could not easily locate the code region specifically relevant to the SO link. Second, for any commit that adds an SO link to a Java file, we also checked whether any code was revised in that file by the same commit; if not, we discarded the file. Third, for any file modified to include both an SO reference and code revision, we checked whether there was any semantic relevance between the two; if not, the file was discarded. In our study, the 3 filters separately removed 357, 29, and 10 SO references from the original 501 references. We had 105 Java files remaining, which were further divided into 2 groups: 13 files from compilable projects and 92 files from uncompilable ones. Due to the time limit, we experimented with all 13 files from compilable projects and 35 sampled unique files from uncompilable projects. To simplify presentation, we use \(f_{i}\) (\(i\in[1,48]\)) (i.e., 48 = 13 + 35) to refer to each file. All these files were modified in version history to introduce (1) references to SO discussions and (2) related code revisions. Thus, we use \(f_{i}^{o}\) and \(f_{i}^{n}\) to refer to the old and new versions of \(f_{i}\). In Step 2, we formulated a prompt for \(f_{i}\). The prompt provides the content of \(f_{i}^{o}\); it describes a maintenance request (e.g., feature addition or bug fixing) based on the SO question cited by \(f_{i}^{n}\) as well as differences between \(f_{i}^{o}\) and \(f_{i}^{n}\). ChatGPT is supposed to revise the given code \(f_{i}^{o}\) in response to that request. Step 3 sends all prompts to ChatGPT. Step 4 gathers outputs from ChatGPT and validates them via automatic build and/or manual inspection. For the first group of files (i.e., 13 files from compilable projects), this step replaces \(f_{i}^{n}\) in each project with ChatGPT's output \(f_{i}^{c}\), checking whether the generated file is compilable or whether it is compatible with developers' code in other files. For the second group of files (i.e., 35 files from uncompilable projects), automatic build is inapplicable to validate ChatGPT's outputs.

Figure 4. The experiment procedure for RQ3

For all 48 files revised by ChatGPT, we manually compared \(f_{i}^{c}\) with \(f_{i}^{n}\) to examine the semantic equivalence. If \(f_{i}^{c}\) matches \(f_{i}^{n}\) in terms of the program logic or implementation algorithm, we consider ChatGPT to succeed in the maintenance task. Fig. 5 shows a concrete example to illustrate the procedure. Our dataset has an SO question describing a coding task: to concatenate two arrays in Java (see Fig. 5 (a)); there is also a GitHub repository with a Java file \(f\) edited to cite the SO question and to add relevant code. For simplicity, Fig. 5 (b) only presents the relevant diff data between program versions before and after edits. We inspected both the SO question and developers' related edits, to craft a prompt. As shown in Fig. 5 (c), the prompt includes the original version of the Java file (\(f^{o}\)), and a request asking ChatGPT to modify code and add a method for concatenating two arrays in Java.
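For illustration, such a maintenance prompt might be assembled and sent along the following lines. This is a hypothetical sketch, not the authors' exact template: the file path, the SO question title, the method header, and the prompt wording are illustrative, and the legacy openai-python ChatCompletion interface is assumed.

```python
import openai  # assumes the legacy (pre-1.0) openai-python ChatCompletion interface

def build_maintenance_prompt(original_java: str, so_title: str, method_header: str) -> str:
    """Assemble a maintenance prompt from the original file, the cited SO question,
    and the desired method header (wording is illustrative, not the study's template)."""
    return (
        f"Below is a Java file. Please revise it to address the following request, "
        f"which is based on the StackOverflow question \"{so_title}\": "
        f"add a method with the header `{method_header}` that implements the requested feature. "
        f"Return the full content of the revised file.\n\n{original_java}"
    )

with open("src/main/java/util/ArrayUtils.java") as fp:   # hypothetical path
    f_old = fp.read()

prompt = build_maintenance_prompt(
    f_old,
    "How can I concatenate two arrays in Java?",                 # illustrative title
    "public static <T> T[] concat(T[] first, T[] second)",       # hypothetical header
)
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
f_chatgpt = response["choices"][0]["message"]["content"]          # candidate f^c
```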
To ensure that ChatGPT's output can be easily integrated into the GitHub project, we crafted the prompt to also (1) specify the method header of the code-to-generate, and (2) require generating the full code of the revised file. Due to the space limit, Fig. 5 (c) only illustrates a brief version of the actual Java code we included in the prompt. Afterwards, we sent ChatGPT the prompt and got the modified file \(f^{c}\) (see Fig. 5 (d)). We then compared \(f^{c}\) with developers' revision \(f^{n}\), to assess ChatGPT's maintenance capability.

## 4. Experiment Results

This section introduces and explains our experiment results.

Figure 5. An example to illustrate our procedure of asking ChatGPT to maintain a codebase

Figure 6. Answers to the SO question "How is it possible to read/write to the Windows registry using Java?" The ChatGPT answer shown in the figure reads:

"To read/write to the Windows registry using Java, you can use the Java Native Access (JNA) library. JNA provides a way to call native Windows API functions from Java code, including functions for accessing the registry. Here's an example of how to use JNA to read a value from the registry:
import com.sun.jna.platform.win32.Advapi32Util;
import com.sun.jna.platform.win32.WinReg;
// Read the value of "HKEY_CURRENT_USER\Software\MyApp\MyKey"
String value = Advapi32Util.registryGetStringValue(WinReg.HKEY_CURRENT_USER, "Software\\MyApp", "MyKey");
And here's an example of how to use JNA to write a value to the registry:
// Write the value "myvalue" to "HKEY_CURRENT_USER\Software\MyApp\MyKey"
Advapi32Util.registrySetStringValue(WinReg.HKEY_CURRENT_USER, "Software\\MyApp", "MyKey", "myvalue");
Note that you'll need to include the JNA library in your project's dependencies in order to use it. You can download the JNA library from [https://github.com/java-native-access/jna](https://github.com/java-native-access/jna)."

### Results for RQ1

Based on our manual analysis and comparison of all answers, we noticed that none of the SO/ChatGPT answers is fundamentally wrong. Our answer preference was decided mainly based on the readability, comprehensiveness, and informativeness of descriptions. For example, Fig. 6 shows two answers to the SO question "How is it possible to read/write to the Windows registry using Java?", including one ChatGPT answer and the most popular SO answer. All authors agreed that the ChatGPT answer is better, because it is more concise and clear: it explains how to read and write to the Windows registry using two simple code snippets. Meanwhile, the SO answer offers a 386-line code implementation copied from an open-source project, without explaining the essential internal program logic. Such answers may tempt developers to blindly copy-and-paste code or get into copyright issues, but do not necessarily help developers improve coding skills in the long run. Among the 130 SO questions, the authors preferred ChatGPT answers for 97 questions, and preferred SO answers for 33 questions. It means that ChatGPT answers are often better. Table 2 shows the result breakdown across five categories. According to this table, ChatGPT provides better answers for 78% of coding tasks, while SO answers are better for 22% of coding tasks. For the four debugging questions, ChatGPT answers are always better. Among optimization and other questions, ChatGPT answers are better for 50% of cases, while SO answers are better for the remaining 50%. Among the four comprehension questions, ChatGPT answers are better for three cases.
Due to the large number of data samples in categories _Coding task_ and _Optimization_, we further zoomed into these categories to study how well the two types of answers compare with each other on different technical topics. As shown in Table 3, ChatGPT answers generally outperform SO answers in all subcategories of _Coding task_. For some minor topics like _New feature for automation_ (i.e., discussion on rare feature implementation), _Data structure_ (i.e., discussion on defining customized data structures), and _Testing_ (i.e., discussion on defining test cases), ChatGPT answers are always better. One reason to explain our observations is that ChatGPT answers often have better clarity, readability, and/or larger coverage of edge cases and alternative solutions.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline & **Coding Task** & **Optimization** & **Comprehension** & **Debugging** & **Other** \\ \hline **ChatGPT answer is better** & 78\% (80/102) & 50\% (9/18) & 75\% (3/4) & 100\% (4/4) & 50\% (1/2) \\ **SO answer is better** & 22\% (22/102) & 50\% (9/18) & 25\% (1/4) & 0\% (0/4) & 50\% (1/2) \\ \hline \hline \end{tabular} \end{table} Table 2. Distribution of comparison results across question categories

Interestingly, for optimization questions, ChatGPT answers only outperform SO answers for 50% of the cases. SO answers are better for the remaining 50% of cases mainly because (1) the SO answers have more rigorous performance comparison between alternative code snippets, or (2) the SO answers suggest usage of advanced libraries instead of coding from scratch. Compared with SO answers, ChatGPT answers are often better because they (1) include as much relevant information as possible, and (2) provide clearer and more concise explanations. However, ChatGPT answers may not outperform when askers look for optimized solutions.

**Finding 1:**_For 75% (97/130) of SO questions we studied, ChatGPT answers are better than SO answers (i.e., accepted or most popular answers). For almost all the question styles and technical topics we examined, ChatGPT answers are typically preferable or the dominant better answers._

### Results for RQ2

Table 4 presents the years of Java programming experience of the 30 participants in our user study. As shown in the table, 20 developers have 1-2 years of experience, 8 developers have 3-5 years of experience, and 2 developers have 6-10 years of experience. \(U_{1}\)-\(U_{6}\) denote the six user groups we created, based on the six question sets \(S_{1}\)-\(S_{6}\) mentioned in Section 3.2. Namely, each group \(U_{i}\) (\(i\in[1,6]\)) has five participants, assessing answers for \(S_{i}\). As all survey questions anonymize the sources of answers under discussion, our analysis mapped answers to their sources (i.e., ChatGPT or SO) after the survey was done and before we derived the results discussed below.

#### 4.2.1. Correctness Comparison

As shown in Table 5, across all groups, ChatGPT answers received more "Correct" labels than SO answers (231 vs. 206), more "Incorrect" labels (29 vs. 25), but far fewer "Unsure" labels (40 vs. 69). For individual groups, ChatGPT answers received more "Correct" labels than SO answers in five groups, but received fewer "Correct" labels in \(U_{1}\) only. ChatGPT answers received more "Incorrect" labels in four groups, but received fewer "Incorrect" labels in only two groups: \(U_{3}\) and \(U_{5}\).
ChatGPT answers received fewer "Unsure" labels in all groups. \begin{table} \begin{tabular}{l|c c c c c|c} \hline \hline & \(U_{1}\) & \(U_{2}\) & \(U_{3}\) & \(U_{4}\) & \(U_{5}\) & \(U_{6}\) & **Total** \\ \hline 1-2 years & 1 & 4 & 4 & 3 & 4 & 4 & 20 \\ 3-5 years & 2 & 1 & 1 & 2 & 1 & 1 & 8 \\ 6-10 years & 2 & 0 & 0 & 0 & 0 & 0 & 2 \\ \textgreater{}10 years & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \hline \end{tabular} \end{table} Table 4. Years of Java programming experience of the 30 participants in our user study \begin{table} \begin{tabular}{l|l|c|c} \hline \hline **Question Category** & **Technical Topic** & **ChatGPT answer is better** & **SO answer is better** \\ \hline \multirow{6}{*}{Coding task} & Data processing & 86\% (30/35) & 14\% (5/35) \\ \cline{2-4} & Feature implementation in a certain context & 72\% (21/29) & 28\% (8/29) \\ \cline{2-4} & Inspection/Manipulation of program execution at runtime & 64\% (9/14) & 36\% (5/14) \\ \cline{2-4} & File processing & 89\% (8/9) & 11\% (1/9) \\ \cline{2-4} & Emulation of the syntax feature from another language & 60\% (3/5) & 40\% (2/5) \\ \cline{2-4} & Algorithm & 75\% (3/4) & 25\% (1/4) \\ \cline{2-4} & New feature for automation & 100\% (3/3) & 0\% (0/3) \\ \cline{2-4} & Data structure & 100\% (2/2) & 0\% (0/2) \\ \cline{2-4} & Testing & 100\% (1/1) & 0\% (0/1) \\ \hline \multirow{6}{*}{Optimization} & Data processing & 62\% (5/8) & 38\% (3/8) \\ \cline{2-4} & File processing & 25\% (1/4) & 75\% (3/4) \\ \cline{1-1} \cline{2-4} & Algorithm & 33\% (1/3) & 67\% (2/3) \\ \cline{1-1} \cline{2-4} & Data structure & 50\% (1/2) & 50\% (1/2) \\ \hline \hline \end{tabular} \end{table} Table 3. Distribution of comparison results across different subcategories of _Coding Task_ and _Optimization_ We also clustered developers' responses for each answer, and identified the assessments voted for by the majority. Namely, if N (N\(\geq\)5) developers assessed the correctness of one answer, we identified the label commonly chosen by at least N/2 developers and treated it as the assessment voted for by the majority. As shown in Table 6, among the 32 sampled ChatGPT answers, the majority voted for 30 correct answers, 1 unsure answer, and 1 incorrect answer. Both the incorrect and unsure answers are about coding tasks, probably because these tasks are hard to solve or the ChatGPT answers are hard to evaluate. Among the 32 sampled SO answers, the majority voted for 25 correct answers and 3 unsure answers; 4 SO answers received no majority voting as developers' opinions diverge a lot. The three "Unsure" answers are all about coding tasks; the "No majority" answers separately correspond to one coding task, two debugging questions, and one comprehension question. Such phenomena imply that SO answers are generally harder to assess than ChatGPT answers. Our observations imply that compared with SO answers, ChatGPT answers are more likely to be considered correct, and developers are more certain about their correctness assessment for ChatGPT answers. This may be because ChatGPT answers are more readable or more informative. _Others_--imply ChatGPT answers to be more readable. Among all five categories, _Comprehension_ received the highest percentage of ratings for the higher readability of ChatGPT answers--67%, while _Debugging_ received the lowest--47%. **Finding 3:**_Developers tend to rate ChatGPT answers to have better readability than SO answers. 
Compared with other answers, ChatGPT answers to comprehension questions are most likely to be considered having better readability._ #### 4.2.3. Informativeness Comparison Tables 9-10 show developers' assessments of answer informativeness. These tables present similar phenomena to those reported for readability assessments (see Section 4.2.2). For instance, as shown in Table 9, among all assessments, the majority of ratings (i.e., 180) shows that ChatGPT answers are more informative, fewer ratings (i.e., 80) show SO answers to be more informative, and even fewer ratings (i.e., 40) indicate the equivalent informativeness between two types of answers. In Table 10, the majority of ratings in all five categories imply that ChatGPT answers are more informative. Compared with other categories, ChatGPT answers to comprehension questions are most likely to have better informativeness. We calculated the Pearson correlation coefficient (Kal #### 4.2.4. Developers' Overall Preferences Tables 11 and 12 show developers' overall answer preferences. In each user group, more developers preferred ChatGPT answers to SO answers (see Table 11). In 54%-76% of cases, developers preferred ChatGPT answers. According to Table 12, in each question category, ChatGPT answers were chosen more often as the preferable ones. In particular, for comprehension questions, the highest percentage of developers (83%) prefer ChatGPT answers over SO answers. It means that ChatGPT is especially good at answering comprehension questions. In total, among the 300 ratings provided by developers, 203 ratings (68%) were about their preferences for ChatGPT answers, while only 97 ratings (32%) show that developers preferred SO answers. Additionally, we clustered developers' responses for each answer pair under comparison, and identified the preferences voted for by the majority. Namely, if N (N\(\geq\)5) developers simultaneously compared an \(\langle SO,ChatGPT\rangle\) answer pair, we identified the label commonly chosen by at least N/2 developers and treated it as the preference voted for by the majority. As shown in Table 14, in 32 sampled answer pairs, the majority preferred 5 SO answers and 25 ChatGPT answers. The five SO answers respond to four coding tasks and one optimization question. In another two sampled answer pairs, responses are divided equally between SO and ChatGPT as N is even. These answer pairs separately respond to a debugging question and an optimization question. Our results imply that SO answers sometimes outperform ChatGPT answers when responding to coding tasks, debugging questions, and optimization requests. To better understand developers' answer preferences, we did statistical analysis to see whether developers' preferences are correlated with the correctness, readability, or informativeness of answers. Firstly, we mapped developers' 300 ratings for ChatGPT answer correctness to numeric values: 3 (correct), 2 (unsure), and 1 (incorrect); we also mapped developers' 300 preference responses to numeric values: 1 (SO answer), and 2 (ChatGPT answer). We then applied the Pearson correlation analysis to the two groups of numeric data. As shown in Table 13, the coefficient is 0.26, implying that developers' preferences are weakly related to the correctness of ChatGPT answers. Secondly, we repeated the above-mentioned process to study the correlation between developers' preferences and the correctness of SO answers. As shown in Table 13, the two variables are also weakly related. 
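As an illustration of the analysis described above, the sketch below shows one way to implement the rating-to-number mapping and the Pearson computation. The use of Apache Commons Math is our assumption (the paper does not name a statistics tool), and the tiny input arrays are placeholders for the study's 300 ratings.

```java
// Sketch of the rating mapping (3/2/1 for correctness, 2/1 for preference) and the
// Pearson correlation described above. Commons Math is an assumed choice of library.
import org.apache.commons.math3.stat.correlation.PearsonsCorrelation;

public class PreferenceCorrelation {
    // Correctness: 3 = correct, 2 = unsure, 1 = incorrect.
    static double mapCorrectness(String label) {
        switch (label) {
            case "correct": return 3;
            case "unsure":  return 2;
            default:        return 1;  // "incorrect"
        }
    }

    // Preference: 2 = ChatGPT answer preferred, 1 = SO answer preferred.
    static double mapPreference(String label) {
        return "ChatGPT".equals(label) ? 2 : 1;
    }

    public static void main(String[] args) {
        // Illustrative data only; the study uses 300 ratings per variable.
        String[] correctness = {"correct", "correct", "unsure", "incorrect", "correct"};
        String[] preference  = {"ChatGPT", "ChatGPT", "SO", "SO", "ChatGPT"};

        double[] x = new double[correctness.length];
        double[] y = new double[preference.length];
        for (int i = 0; i < x.length; i++) {
            x[i] = mapCorrectness(correctness[i]);
            y[i] = mapPreference(preference[i]);
        }
        double r = new PearsonsCorrelation().correlation(x, y);
        System.out.println("Pearson coefficient = " + r);
    }
}
```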
Furthermore, we adopted the two-way ANOVA test [(52)]--a statistical method to examine the influence of two different categorical independent variables on one continuous dependent variable. By applying this method, we examined whether developers' preferences depend on the correctness contrast within the \(\langle SO,ChatGPT\rangle\) answer pairs for given questions. Our analysis shows all evaluated p-values to be greater than 0.05, which means that the correctness contrasts between SO answers and ChatGPT answers do not determine developers' preferences. All these three statistical tests imply that developers did not base their preference decisions on answer correctness. One possible reason is that the answers-under-comparison are often correct, or have little difference in terms of the correctness property. Next, we applied Pearson correlation analysis to (1) developers' ratings of readability and preferences, and (2) developers' ratings of informativeness and preferences. As shown in Table 13, both tests produced high coefficient values: 0.7 and 0.73, and low p-values (<0.00001). The phenomena indicate that developers expressed their preferences mainly based on the readability and informativeness of answers. \begin{table} \begin{tabular}{l|r|r} \hline \hline **Variables** & **Coefficient** & **P-value** \\ \hline ChatGPT Answer correctness vs. Preferences & 0.26 & \textless{}0.00001 \\ \hline SO Answer Correctness vs. Preferences & -0.27 & \textless{}0.00001 \\ \hline Readability vs. Preferences & 0.7 & \textless{}0.00001 \\ \hline Informativeness vs. Preferences & 0.73 & \textless{}0.00001 \\ \hline \hline \end{tabular} \end{table} Table 13. The Pearson correlation analysis between answer characteristics and developers’ preferences \begin{table} \begin{tabular}{l|r} \hline \hline & **The majority voting** \\ \hline (1) SO chosen & 5 \\ \hline (2) ChatGPT chosen & 25 \\ \hline Both (1) and (2) & 2 \\ \hline \hline \end{tabular} \end{table} Table 14. The answer preference by the majority of developers Finally, we compared developers' manual analysis results against ours described in Section 4.1. Our answer preferences match developers' preferences in 84% (27/32) of cases. There are only five cases where our preferences do not match developers'. One of the cases is about optimization, and we are very confident that SO provides a more efficient code solution than ChatGPT. The other four cases cover three coding tasks and one comprehension question; the preference divergence is mainly due to personal styles or coding habits. ### Results for RQ3 This section reports our experiments with the tasks separately defined for 13 compilable projects and 35 uncompilable ones. #### 4.3.1. Experiment with the 13 tasks defined for compilable projects As shown in Table 15, our prompts ask ChatGPT to (1) add one or more methods to an existing Java file, (2) modify a return statement in an existing method, or (3) revise the implementation of existing Java methods. ChatGPT was able to generate revised Java files for all prompts. By trying to compile the files output by ChatGPT, we found 11 out of the 13 files to compile successfully. One file (see M1) does not compile, because ChatGPT omitted almost all details of unchanged code in the given Java file and it majorly presented the added code implementation. Another file (see M13) does not compile because ChatGPT introduced the usage of a variable radius, without defining or declaring radius first. 
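For illustration only, the following hypothetical sketch (method and parameter names are ours, not taken from the skyroad-magnets project) shows what a compilable version of the M13 check could look like, with the missing radius declared explicitly as a parameter instead of being used without a declaration.

```java
// Hypothetical sketch (not the project's actual code) of a compilable M13-style check:
// does a circle intersect an axis-aligned rectangle in 2D euclidean space?
public class GeometryUtil {
    public static boolean circleIntersectsRect(double cx, double cy, double radius,
                                               double rectX, double rectY,
                                               double rectWidth, double rectHeight) {
        // Clamp the circle center onto the rectangle to find the closest point.
        double closestX = Math.max(rectX, Math.min(cx, rectX + rectWidth));
        double closestY = Math.max(rectY, Math.min(cy, rectY + rectHeight));
        // The shapes intersect iff that closest point lies within the circle.
        double dx = cx - closestX;
        double dy = cy - closestY;
        return dx * dx + dy * dy <= radius * radius;
    }
}
```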
\begin{table} \begin{tabular}{c|l|l|l|l} \hline \hline **Id** & **Program** & **Maintenance task** & **Does ChatGPT’s** & **Does ChatGPT’s code semantic-** \\ & & & **code compile?** & **cally match developers’ code?** \\ \hline M1 & LOFiles (Loflie, 2017) & Add a Java method to clone an entire document using the Java DOM & No & No. ChatGPT omits details of unchanged code. \\ \hline M2 & CoreNLP (Loflie, 2017) & Add a method to concatenate two arrays in Java and return the result. & Yes & Yes. ChatGPT’s code also adds extra sanity checks for inputs. \\ \hline M3 & DeltaLauncher (Loflie, 2017) & Add a method to concatenate two arrays in Java and return the result. & Yes & Yes \\ \hline M4 & jlib (Loflie, 2017) & Add a method to check whether the char-typed input parameter is printable. & Yes & No. ChatGPT’s code considers fewer corner cases. \\ \hline M5 & Achilles (Ashmer, 2017) & Add a method to programmatically determine the availability of a port in a given machine. & Yes & No. Divergent values are assigned to the same field. ChatGPT’s code does not use the parameter hostname. \\ \hline M6 & lanterna (Loflie, 2017) & Modify a return-statement, to make sure the returned value is true when the given character is printable. & Yes & No. ChatGPT’s code considers fewer corner cases. \\ \hline M7 & gmkrap (Loflie, 2017) & Add two methods to separately decode and encode base64 data. & Yes & Yes \\ \hline M8 & the-holy-braille (Loflie, 2017) & Add a method to count the lines of a file & Yes & Yes \\ \hline M9 & Aiolos (Ashmer, 2017) & Modify the code to configure a logger of the type java.util.logging.Logger, to have LevelALL. & Yes & No. Developer’s code contains more project-specific logic. \\ \hline M10 & CodingProblems (Ashmer, 2017) & Add a method to implement an optimized algorithm of checking if an integer’s square root is an integer. & Yes & No. ChatGPT’s code is not an optimized solution. \\ \hline M11 & markov-test (Loflie, 2017) & Revise an existing method, so that it reads the content of a specified file, creates a Java string from that content, and returns the value. & Yes & Yes \\ \hline M12 & openmars (Loflie, 2017) & Add a method to return the exponential value of a given input, in a optimized and fast way. & Yes & No. ChatGPT omits details of unchanged code. \\ \hline M13 & skyroad-magnets (Loflie, 2017) & Modify an existing method, to check whether the circle and rectangle intersect in 2D euclidean space. & No & No. The variable radius is used but not defined; the output value is calculated differently. \\ \hline \hline \end{tabular} \end{table} Table 15. The 13 maintenance tasks we created in compilable projects for ChatGPT to fulfill For only 5 out of the 13 cases, ChatGPT successfully revised given files to satisfy maintenance needs. The prompts of all these five cases perfectly match the program logic implemented in developers' code, and the logic is totally irrelevant to the surrounding program context or other methods/classes defined in the same project. Specifically for M2, ChatGPT not only output a correct Java file, but also added extra checks for the input parameters to avoid null-pointer dereferences. For 8 out of the 13 cases, ChatGPT did not revise given files as expected. In addition to the compilation issue mentioned above, two major reasons explain our observations. First, ChatGPT's code handles fewer corner cases than developers' code (see M4 and M6). For instance, M6 requires for a method addition to check whether a given char-typed variable is printable. 
ChatGPT's code considers limited types of non-printable characters (i.e., line separator, paragraph separator, and unassigned characters). However, developers' code covers more types of non-printable characters (e.g., keyboard characters like "Tab"). Second, ChatGPT could not generate project-specific logic (see M5, M9, M13). For instance, Fig. 7 presents the implementations by both developers and ChatGPT for M5. Both snippets satisfy the requirement of determining the availability of a port in a machine. However, developers' code calls setReuseAddress(...) with the false parameter value, while ChatGPT's code calls that method with true; developers' code uses the input hostname but ChatGPT's does not. Although ChatGPT's code is reasonable, it does not fit the program context.

Figure 7. ChatGPT's code does not match developers' code for M5, as it allows for reuse of the port.

#### 4.3.2. Experiment with the 35 tasks defined for uncompilable projects

As shown in Table 16, given the 35 tasks, ChatGPT always output code to respond to our prompts. Among the responses, 17 match the program logic in developers' code and 18 responses do not. For 1 of the 17 matching cases--M26, ChatGPT's code performs an extra check on the input parameter to eliminate potential program errors. As shown in Fig. 8, both implementations satisfy the maintenance need of "adding a method to convert java.util.Date to java.sql.Date". However, ChatGPT's version is safer, as it performs a null-pointer check before dereferencing the object date.

Figure 8. ChatGPT's code matches developers' code for M26, and conducts an extra sanity check for the input.

We identified 4 major reasons to explain why ChatGPT's code failed to match developers' code in 18 cases. First, for 10 cases, ChatGPT's code lacks the project-specific logic even though it satisfies the described maintenance need. Namely, ChatGPT's code either (1) contains throw-statements to throw exceptions that are not thrown by developers' code, (2) misses some if-condition checks or other statements, or (3) fails to use all input parameters. Second, for five cases, ChatGPT's code omits details of the unchanged code, and only outputs the revised code with limited surrounding context (i.e., unchanged code). Third, for three cases, ChatGPT's code considers fewer corner cases. Fourth, for one case, ChatGPT's code does not implement any optimized algorithm to decide whether an integer is a perfect square, although the prompts ask for such an optimization. 
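To make the M26 pattern from Fig. 8 concrete, here is a minimal sketch of the null-checked conversion described above; the class and method names are ours, and the snippet is not the repository's code verbatim.

```java
// Minimal sketch of the defensive java.util.Date -> java.sql.Date conversion
// described for M26 (Fig. 8); names are illustrative, not the project's.
import java.util.Date;

public class DateUtil {
    public static java.sql.Date toSqlDate(Date date) {
        // Null check before dereferencing, as in the ChatGPT version discussed above.
        if (date == null) {
            return null;
        }
        return new java.sql.Date(date.getTime());
    }
}
```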
People may observe slightly different phenomena when applying higher versions of ChatGPT to programs written in other languages. In the future, to make our research findings more representative, we will expand our dataset and explore to use ChatGPT-4.0. _Threats to Internal Validity:_ We experimented with the default setting of ChatGPT-3.5, without controlling or tuning any parameter it defines. By default, when ChatGPT is queried with the same prompt multiple times, it generates results with randomness, i.e., it can produce different results given the same prompt. Such randomness can impact the validity or certainty of our observations. However, based on our experience so far, ChatGPT often produces very similar results given multiple trials of the same prompt. We believe that the internal randomness of ChatGPT does not significantly impact our experiment results. \begin{table} \begin{tabular}{l|l|l|l} \hline \hline **Id** & **Does ChatGPT’s code semantically match developers’** & **Id** & **Does ChatGPT’s code semantically match developers’** \\ & **code?** & & **code?** \\ \hline M14 & Yes & M32 & No. ChatGPT’s code is incomplete. \\ \hline M15 & No. ChatGPT’s code is not an optimized solution, as required. & M33 & Yes. \\ \hline M16 & Yes & M34 & Yes. \\ \hline M17 & No. ChatGPT’s code does not contain project-specific logic. & M35 & No. ChatGPT’s code is incomplete. \\ \hline M18 & No. ChatGPT’s code is incomplete, and it also misses project-specific logic. & M36 & No. ChatGPT’s code calls an API that can throw an exception, but the method header does not declare that exception-to-throw. Meanwhile, developers’ code does not call any API that can throw exception(s). \\ \hline M19 & Yes & M37 & No. ChatGPT’s code can throw an exception, but the developers’ code does not throw any exception. \\ \hline M20 & No. ChatGPT’s code can throw an exception, but the developers’ code does not throw any exception. & M38 & Yes \\ & & & \\ \hline M21 & No. ChatGPT’s code misses project-specific logic. & M39 & No. ChatGPT’s code can throw an exception, developers’ code does not throw any exception. \\ & & & code does not throw any exception. \\ \hline M22 & Yes & M40 & Yes \\ \hline M23 & Yes & M41 & Yes \\ \hline M24 & No. ChatGPT’s code misses project-specific logic. & M42 & No. ChatGPT’s code covers fewer corner cases. \\ \hline M25 & No. ChatGPT’s code covers fewer corner cases. & M43 & No. ChatGPT’s code fails to use one of the input parameters. \\ \hline M26 & Yes. ChatGPT’s code further does sanity checks for inputs. & M44 & Yes \\ \hline M27 & No. ChatGPT’s code does not match project-specific logic. & M45 & No. ChatGPT’s code is incomplete. \\ \hline M28 & No. ChatGPT’s code covers fewer corner cases. & M46 & No. ChatGPT’s code is incomplete. \\ \hline M29 & Yes & M47 & Yes \\ \hline M30 & Yes & M48 & Yes \\ \hline M31 & Yes & & \\ \hline \hline \end{tabular} \end{table} Table 16. The 35 maintenance tasks we created in uncompilable projects for ChatGPT to fulfill #### Threats to Construct Validity The manual inspection of SO answers and Java code is subject to human bias. To mitigate the potential inaccuracy due to human bias, for RQ1, we had two authors separately compare 130 pairs of \(\langle SO,ChatGPT\rangle\) answers; to resolve any opinion conflict between them, another author lead a discussion until the three authors reached a consensus. 
For RQ2, we recruited as many developers as possible (i.e., 30 developers), asked each to compare 10 answer pairs, and ensured each answer pair to be inspected by at least 5 developers. For RQ3, we leveraged automatic compilation to validate ChatGPT's outputs whenever possible, and had three authors inspect ChatGPT's outputs and the optional compilation reports. ChatGPT was trained on large collections of text data (e.g., books, articles, and web pages) publicly available by September 2021. All data used in our study was available before that date. Therefore, model-overfitting issues may occur and our evaluation may overestimate ChatGPT's capabilities. However, considering the large volume of training data used by ChatGPT (i.e., 570 GB) and potential interference/conflicts among that data, we feel the overfitting issues to be insignificant. In the future, we plan to evaluate ChatGPT with more recent data to mitigate this threat. ## 6. Lessons Learned Below are the lessons or actionable items we learned from this study. For Developers or Potential ChatGPT Users: ChatGPT is a good information resource to refer to when developers have technical questions We observed that ChatGPT answered SO questions with high accuracy, regardless of the question style or technical topic. According to ChatGPT itself, "I was pre-trained using a combination of unsupervised and supervised learning techniques, such as language modeling, auto-encoding, and sequence prediction." [(15)]. The pre-trained knowledge enables ChatGPT to produce answers to SO questions with high accuracy. Another important aspect that has worked in ChatGPT's advantage is how quickly it responds. Developers can get answers instantly, without waiting for someone to notice their query on a forum. This real-time interaction allows developers to maintain their momentum and tackle coding challenges without unnecessary delays. Among the five question categories we studied, ChatGPT answers are generally better than SO answers for coding tasks and comprehension questions, but competitive for optimization tasks. #### Developers need to be very cautious when merging ChatGPT's code into their own projects Compared with developers' code, ChatGPT's code often does not fit into the context of given Java files, even if it can satisfy the maintenance needs. Such a limitation is due to the unchanged code omitted by ChatGPT, the missing program-specific logic, or fewer corner cases covered by ChatGPT's code. The lack of project-specific logic may be caused by ambiguous or narrow descriptions of prompts, or by ChatGPT's limited capability of code generation. To reasonably simulate real-developers' efforts, we did not further refine prompts to describe all project-specific requirements in addition to the basic maintenance need. It is possible that ChatGPT can work better when more project-specific details are specified in prompts; however, further investigation is still needed to validate this argument. #### For SE Community and Q&A Forums: _When people create documentation (e.g., coding answers or user manuals) to provide guidance on software development or usage, they can consider using ChatGPT to improve the readability and informativeness of documents_. Compared with the accepted or most popular SO answers provided by human experts (e.g., experienced developers), ChatGPT's answers are often more desirable. Developers typically prefer ChatGPT's answers due to the better readability and higher informativeness, but not necessarily due to the correctness. 
Our observations also raise a potential challenge for developer websites such as Stack Overflow and the whole SE community. Users may use these Q&A websites less frequently because they can get better answers from ChatGPT, and even users of these sites may start using ChatGPT-like tools to formulate and post their answers. In the long run, these behaviors may reduce the human-crafted materials available on the Internet for training. _It is unknown and worth further investigation whether such a reduction trend will negatively impact ChatGPT-like tools._ _For SE Researchers:_ _New program analysis techniques can be invented to identify the scenarios where it is safer to merge ChatGPT's code into projects. Novel testing techniques can be created to specifically test the interaction between ChatGPT's code and developers' code in the same project._ In our study, ChatGPT performed well on certain types of maintenance tasks, especially when it generated elementary functions to implement independent features and those features have little or no data dependency on other parts of the same project. Characterizing the scenarios when ChatGPT's code is easy to (re)use can help researchers better assess the automation boundary of ChatGPT, and help developers better leverage ChatGPT to improve programmer productivity and software quality. So far, we have not observed any obvious or dummy error in the outputs by ChatGPT to signal significant limitations of the tool, or to facilitate our immediate rejection of specific answers. Consequently, we always spent lots of time reading every single line of the outputs, thought carefully, and discussed thoroughly to identify issues lying in those outputs. If developers are not careful enough or are under time pressure, they may get misled by ChatGPT and blindly accept all of the tool's outputs. _To avoid the potential hallucination problem or misinformation produced by ChatGPT, researchers need to apply comprehensive program analysis tools or even invent new tools to examine the tool's outputs rigorously._ _For LLM Researchers:_ _Research needs to be done to investigate the best user interfaces that facilitate users to provide all necessary information for accurate code generation or appropriate program revision._ One big challenge developers or LLM users would face is that it is unknown (1) how much project-specific information is sufficient, and (2) what project-specific information will help. Due to the size of code base and intellectual property restrictions, it is unlikely that developers can use their whole code base as the prompt. Meanwhile, for developers, the difficulty of describing all project-specific details can be sometimes equivalent to or even higher than that of writing code manually. _It is also worth investigation how willing developers are to specify all project-specific details for coding tasks. The cost-efficiency of using ChatGPT forms another important research direction._ ## 7. Related Work The related work of our research includes empirical studies on SO posts, and studies on ChatGPT. ### Empirical Studies on StackOverflow Researchers did various studies to characterize the crowdsourced knowledge available on StackOverflow (Zhu et al., 2018; Zhu et al., 2018; Zhu et al., 2019; Zhu et al. IDEs, Architecture and Design Patterns, Unit Testing, and Database. Zahedi et al. (2018) applied topic modeling to SO posts to understand the non-functional requirements (NFRs) that developers focus on. Firouzi et al. 
(2018) studied 2,283 C# snippets mined in SO data dump, to investigate why developers used unsafe codes (i.e., code blocks encapsulated via the C# unsafe keyword). Zhang et al. (2018) studied the code examples of API usage, to reveal answers that may misuse APIs. Bangash et al. (2018) analyzed the machine learning-related posts, to investigate developers' understanding of machine learning. Openja et al. (2018) analyzed release engineering questions, to understand the modern release engineering topics of interest and their difficulty. Fischer et al. (2018) and Meng et al. (2018, 2019) examined SO posts related to Java security, to identify developers' concerns on security implementation, technical challenges, or vulnerabilities in answer code. Our study complements all studies mentioned above, but has a unique focus on the comparison between SO answers and ChatGPT answers. No existing work compares ChatGPT answers with SO answers. We mainly focused on the best SO answers developers could provide, including accepted and most popular answers. By comparing these answers with ChatGPT answers, we intended to reveal how ChatGPT compares with human experts in responding to technical questions. ### Studies on ChatGPT Several studies were conducted on ChatGPT (Sobania et al., 2018; Sobania et al., 2018; Sobania et al., 2018; Sobania et al., 2018; Sobania et al., 2018). Specifically, Nascimento et al. (2018) used four LeetCode questions to create prompts, and observed ChatGPT to outperform novice developers in solving easy or medium problems. Jalil et al. (2018) checked how well ChatGPT performs when tasked with answering common questions in a popular curriculum, and found ChatGPT to respond to 77.5% of questions. Sobania et al. (2018) used a bug fixing benchmark set--QuixBugs; they found ChatGPT to fix 31 out of 40 bugs and outperform the state-of-the-art. Tian et al. (2018) assessed ChatGPT's capability in code generation, program repair, and code summarization. They observed ChatGPT to outperform two large language models in code generation; it is competitive with a state-of-the-art repair tool; it produces consistent summaries for code with the same intention. Nikolaidis et al. (2018) evaluated ChatGPT and Copilot using LeetCode problems. They found both models to well solve easy problems. Chen et al. (2018) created GPTutor, a ChatGPT-powered programming tool, to provide code explanation for developers in IDE. Our study complements all prior work, as we studied ChatGPT from unique angles. We characterized its capability of (1) answering SO questions and (2) maintaining or revising software in response to new software requirements. We further examined developers' opinions on the comparison between ChatGPT answers and best SO answers. ## 8. Conclusion Motivated by the widespread concern on ChatGPT's capability of replacing developers and killing jobs, we explored to use ChatGPT in two typical working scenarios in developers' daily lives: question answering and software maintenance. We hypothesized that ChatGPT could not provide good answers to technical questions, or satisfy the maintenance needs in given software projects. Surprisingly, we observed ChatGPT to work very well in answering technical questions, and provide promising outputs to facilitate software maintenance. Specifically, both our manual inspection and user study show that given technical questions, ChatGPT answers are often correct and reasonable; they often have higher quality than the most popular or accepted answers from SO. 
This implies that developers can always refer to ChatGPT as a reliable information resource when they have technical questions; answer-providers or technical supporters can also leverage ChatGPT to polish or enhance their original answers, to better help other developers, and to better shape the art as well as practice of software today and in future. Meanwhile, ChatGPT's responses to maintenance tasks are less satisfactory; the code included in these responses do not fit into the given program contexts in most cases either due to (1) ChatGPT's limited understanding of program context, (2) its limited capability of code generation, or (3) the unclearness or ambiguity in task-describing prompts. We do not consider ChatGPT to be able to replace humans or work as independent software maintainers, although we do observe ChatGPT's great capability in generating independent functional units (e.g., Java classes or methods) that have no or little dependency on the surrounding program context. To sum up, we are cautiously optimistic about ChatGPT's role in the software industry. By quantitatively and qualitatively measuring its capabilities in question-answering and software-maintaining, our study characterizes the potential technical support and automation opportunities ChatGPT brings; our study also reveals the potential pitfalls or challenges provoked by the tool. In the future, we will create better tools to automatically assess the quality of ChatGPT's outputs, or integrate ChatGPT into the existing tool chains for test generation or bug detection.
2307.08780
Discounted-Sum Automata with Multiple Discount Factors
Discounting the influence of future events is a key paradigm in economics and it is widely used in computer-science models, such as games, Markov decision processes (MDPs), reinforcement learning, and automata. While a single game or MDP may allow for several different discount factors, discounted-sum automata (NDAs) were only studied with respect to a single discount factor. For every integer $\lambda\in\mathbb{N}\setminus\{0,1\}$, as opposed to every $\lambda\in \mathbb{Q}\setminus\mathbb{N}$, the class of NDAs with discount factor $\lambda$ ($\lambda$-NDAs) has good computational properties: it is closed under determinization and under the algebraic operations min, max, addition, and subtraction, and there are algorithms for its basic decision problems, such as automata equivalence and containment. We define and analyze discounted-sum automata in which each transition can have a different integral discount factor (integral NMDAs). We show that integral NMDAs with an arbitrary choice of discount factors are not closed under determinization and under algebraic operations and that their containment problem is undecidable. We then define and analyze a restricted class of integral NMDAs, which we call tidy NMDAs, in which the choice of discount factors depends on the prefix of the word read so far. Some of their special cases are NMDAs that correlate discount factors to actions (alphabet letters) or to the elapsed time. We show that for every function $\theta$ that defines the choice of discount factors, the class of $\theta$-NMDAs enjoys all of the above good properties of integral NDAs, as well as the same complexity of the required decision problems. Tidy NMDAs are also as expressive as deterministic integral NMDAs with an arbitrary choice of discount factors. All of our results hold for both automata on finite words and automata on infinite words.
Udi Boker, Guy Hefetz
2023-07-17T18:50:22Z
http://arxiv.org/abs/2307.08780v1
# Discounted-sum Automata with Multiple Discount Factors ###### Abstract. Discounting the influence of future events is a key paradigm in economics and it is widely used in computer-science models, such as games, Markov decision processes (MDPs), reinforcement learning, and automata. While a single game or MDP may allow for several different discount factors, discounted-sum automata (NDAs) were only studied with respect to a single discount factor. For every integer \(\lambda\in\mathbb{N}\setminus\{0,1\}\), as opposed to every \(\lambda\in\mathbb{Q}\setminus\mathbb{N}\), the class of NDAs with discount factor \(\lambda\) (\(\lambda\)-NDAs) has good computational properties: it is closed under determinization and under the algebraic operations \(\min\), \(\max\), addition, and subtraction, and there are algorithms for its basic decision problems, such as automata equivalence and containment. We define and analyze discounted-sum automata in which each transition can have a different integral discount factor (integral _NMDAs_). We show that integral NMDAs with an arbitrary choice of discount factors are not closed under determinization and under algebraic operations and that their containment problem is undecidable. We then define and analyze a restricted class of integral NMDAs, which we call _tidy NMDAs_, in which the choice of discount factors depends on the prefix of the word read so far. Some of their special cases are NMDAs that correlate discount factors to actions (alphabet letters) or to the elapsed time. We show that for every function \(\theta\) that defines the choice of discount factors, the class of \(\theta\)-NMDAs enjoys all of the above good properties of integral NDAs, as well as the same complexity of the required decision problems. Tidy NMDAs are also as expressive as deterministic integral NMDAs with an arbitrary choice of discount factors. All of our results hold for both automata on finite words and automata on infinite words. Key words and phrases:Automata, Discounted-sum, Quantitative verification, NMDA, NDA The article extends [7] and parts of [8]. or infinite) run is the discounted summation of the weights on the transitions, such that the weight in the \(i\)th position of the run is divided by \(\lambda^{i}\). The value of a (finite or infinite) word is the minimal value of the automaton runs on it. An NDA \(\mathcal{A}\) realizes a function from words to real numbers, and we write \(\mathcal{A}(w)\) for the value of \(\mathcal{A}\) on a word \(w\). In the Boolean setting, where automata realize languages, closure under the basic Boolean operations of union, intersection, and complementation is desirable, as it allows to use automata in formal verification, logic, and more. In the quantitative setting, where automata realize functions from words to numbers, the above Boolean operations are naturally generalized to algebraic ones: union to min, intersection to max, and complementation to multiplication by \(-1\) (depending on the function's co-domain). Likewise, closure under these algebraic operations, as well as under addition and subtraction, is desirable for quantitative automata, serving for quantitative verification. Determinization is also very useful in automata theory, as it gives rise to many algorithmic solutions, and is essential for various tasks, such as synthesis and probabilistic model checking1. Footnote 1: In some cases, automata that are “almost deterministic”, such as limit-deterministic [43] or good-for-games automata [28, 12] suffice. 
NDAs cannot always be determinized [18], they are not closed under basic algebraic operations [10], and basic decision problems on them, such as universality, equivalence, and containment, are not known to be decidable and relate to various longstanding open problems [11]. However, restricting NDAs to an integral discount factor \(\lambda\in\mathbb{N}\) provides a robust class of automata that is closed under determinization and under the algebraic operations, and for which the decision problems of universality equivalence, and containment are decidable [10]. Various variants of NDAs are studied in the literature, among which are _functional_, _k-valued_, _probabilistic_, and more [25, 24, 16]. Yet, to the best of our knowledge, all of these models are restricted to have a single discount factor in an automaton. This is a significant restriction of the general discounted-summation paradigm, in which multiple discount factors are considered. For example, Markov decision processes and discounted-sum games allow for multiple discount factors within the same entity [27, 4]. A natural extension to NDAs is to allow for different discount factors over the transitions, providing the ability to model systems in which each action (alphabet letter in the automaton) causes a different discounting, systems in which the discounting changes over time, and more. As integral NDAs provide robust automata classes, whereas non-integral NDAs do not, we look into extending integral NDAs into integral _NMDAs_ (Definition 2.1), allowing multiple integral discount factors in a single automaton. As automata are aimed at modeling systems, NMDAs significantly extend the system behaviors that can be modeled with discounted-sum automata. For an intuitive example, consider how the value of used vehicles changes over time: It decreases a lot in the first year, slightly less rapidly in the next couple of years, and significantly less rapidly in further years. An NDA cannot model such a behavior, as the discount factor cannot change over time, whereas an NMDA provides the necessary flexibility of the discount factor. On a more formal level, NMDAs may allow to enhance formal verification of reinforcement learning applications. In the reinforcement learning process, the expected return value is the discounted-summation of the accumulated future rewards. In classic reinforcement learning, the discounted summation uses a single discount factor, whereas novel approaches in reinforcement learning study how to enhance the process to allow multiple discount factors [34, 26, 44, 39, 31]. This enhancement of reinforcement learning parallels our extension of discounted-sum automata to support multiple discount factors. We start with analyzing NMDAs in which the integral discount factors can be chosen arbitrarily. Unfortunately, we show that this class of automata does not allow for determinization, it is not closed under the basic algebraic operations, and its containment problem is undecidable. For more restricted generalizations of integral NDAs, in which the discount factor depends on the transition's letter (_letter-oriented_ NMDAs) or on the elapsed time (_time-oriented_ NMDAs), we show that the corresponding automata classes do enjoy all of the good properties of integral NDAs, while strictly extending their expressiveness. We further analyze a rich class of integral NMDAs that extends both letter-oriented and time-oriented NMDAs, in which the choice of discount factor depends on the word-prefix read so far (_tidy_ NMDAs). 
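As a small numeric illustration with hypothetical discount factors (anticipating the formal definition in Section 2), per-step factors accumulate multiplicatively: a weight \(w_i\) incurred at position \(i\) is divided by the product of the factors chosen at positions \(0,\ldots,i-1\), rather than by a power of a single \(\lambda\): \[\sum_{i\geq 0} w_i\cdot\prod_{j=0}^{i-1}\frac{1}{\rho_j}, \qquad\text{e.g., } \rho_0=4,\ \rho_1=\rho_2=3\ \Rightarrow\ w_3\text{ is divided by }4\cdot 3\cdot 3=36,\] whereas an NDA with the single discount factor \(\lambda=3\) would divide \(w_3\) by \(3^{3}=27\). This is exactly the flexibility needed for examples such as the used-vehicle depreciation above, where early periods discount the future more heavily than later ones.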
We show that their expressiveness is as of deterministic integral NMDAs with an arbitrary choice of discount factors and that for every choice function \(\theta:\Sigma^{+}\rightarrow\mathbb{N}\setminus\{0,1\}\), the class of \(\theta\)-NMDAs enjoys all of the good properties of integral NDAs. (See Figure 1.) Considering closure under algebraic operations, we further provide tight bounds on the size blow-up involved in the different operations (Table 1). To this end, we provide new lower bounds also for the setting of NDAs, by developing a general scheme to convert every NFA to a corresponding NDA of linearly the same size, and to convert some specific NDAs back to corresponding NFAs. As for the decision problems of tidy NMDAs, we provide a PTIME algorithm for emptiness and PSPACE algorithms for the other problems of exact-value, universality, equivalence, and containment. The complexities are with respect to the automaton (or automata) size, which is considered as the maximum between the number of transitions and the maximal binary representation of any discount factor or weight in it. These new algorithms also improve the complexities of the previously known algorithms for solving the decision problems of NDAs, which were PSPACE with respect to unary representation of the weights. For rational weights, we assume all of them to have the same denominator. (Omitting this assumption changes in the worst case the PSPACE algorithms into EXPSPACE ones.) As general choice functions need not be finitely represented, it might upfront limit the usage of tidy NMDAs. Yet, we show that finite transducers (Mealy machines) suffice, in the sense that they allow to represent every choice function \(\theta\) that can serve for a \(\theta\)-NMDA. We provide a PTIME algorithm to check whether a given NMDA is tidy, as well as if it is a \(\mathcal{T}\)-NMDA for a given transducer \(\mathcal{T}\). We show all of our results for both automata on finite words and automata on infinite words. Whenever possible, we provide a single proof for both settings. ## 2. Discounted-Sum Automata with Multiple Integral Discount Factors We define a discounted-sum automaton with arbitrary discount factors, abbreviated NMDA, by adding to an NDA a discount factor in each of its transitions. An NMDA is defined on either finite or infinite words. The formal definition is given in Definition 2.1, and an example in Figure 2. An _alphabet_\(\Sigma\) is an arbitrary finite set, and a _word_ over \(\Sigma\) is a finite or infinite sequence of letters in \(\Sigma\), with \(\varepsilon\) for the empty word. We denote the concatenation of a finite word and a finite or infinite word \(w\) by \(u\cdot w\), or simply by \(uw\). We define \(\Sigma^{+}\) to be the set of all finite words except the empty word, i.e., \(\Sigma^{+}=\Sigma^{*}\setminus\{\varepsilon\}\). For a word \(w=w(0)w(1)w(2)\ldots\), we denote the sequence of its letters starting at index \(i\) and ending at index \(j\) as \(w[i..j]=w(i)w(i+1)\ldots w(j)\). 
Figure 1. Classes of integral NMDAs, defined according to the flexibility of choosing the discount factors. The class of NMDAs with arbitrary integral factors is not closed under algebraic operations and under determinization, and some of its decision problems are undecidable. The other classes (for a specific choice function) are closed under both algebraic operations and determinization, and their basic decision problems are decidable. Tidy NMDAs are as expressive as deterministic NMDAs with arbitrary integral discount factors.

**Definition 2.1**.: A nondeterministic discounted-sum automaton with multiple discount factors (NMDA), on finite or infinite words, is a tuple \(\mathcal{A}=\langle\Sigma,Q,\iota,\delta,\gamma,\rho\rangle\) over an alphabet \(\Sigma\), with a finite set of states \(Q\), an initial set of states \(\iota\subseteq Q\), a transition function \(\delta\subseteq Q\times\Sigma\times Q\), a weight function \(\gamma:\delta\to\mathbb{Q}\), and a discount-factor function \(\rho:\delta\to\mathbb{Q}\cap(1,\infty)\), assigning to each transition its discount factor, which is a rational greater than one. 2 Footnote 2: Discount factors are sometimes defined in the literature as numbers between \(0\) and \(1\), under which setting weights are multiplied by these factors rather than divided by them.

* A _walk_ in \(\mathcal{A}\) from a state \(p_{0}\) is a sequence of states and letters, \(p_{0},\sigma_{0},p_{1},\sigma_{1},p_{2},\cdots\), such that for every \(i\), \((p_{i},\sigma_{i},p_{i+1})\in\delta\). For example, \(\psi=q_{1},a,q_{1},b,q_{2}\) is a walk of the NMDA \(\mathcal{A}\) of Figure 2 on the word \(ab\) from the state \(q_{1}\).
* A run of \(\mathcal{A}\) is a walk from an initial state.
* The length of a walk \(\psi\), denoted by \(|\psi|\), is \(n\) for a finite walk \(\psi=p_{0},\sigma_{0},p_{1},\cdots,\sigma_{n-1},p_{n}\), and \(\infty\) for an infinite walk.
* The \(i\)-th transition of a walk \(\psi=p_{0},\sigma_{0},p_{1},\sigma_{1},\cdots\) is denoted by \(\psi(i)=(p_{i},\sigma_{i},p_{i+1})\).
* The _value_ of a finite or an infinite walk \(\psi\) is \(\mathcal{A}(\psi)=\sum_{i=0}^{|\psi|-1}\left(\gamma\big{(}\psi(i)\big{)}\cdot\prod_{j=0}^{i-1}\frac{1}{\rho\big{(}\psi(j)\big{)}}\right)\). For example, the value of the walk \(r_{1}=q_{0},a,q_{0},a,q_{1},b,q_{2}\) (which is also a run) of \(\mathcal{A}\) from Figure 2 is \(\mathcal{A}(r_{1})=1+\frac{1}{2}\cdot\frac{1}{3}+2\cdot\frac{1}{2\cdot 3}=\frac{3}{2}\).
* The _value_ of \(\mathcal{A}\) on a finite or infinite word \(w\) is \(\mathcal{A}(w)=\inf\{\mathcal{A}(r)\mid r\text{ is a run of }\mathcal{A}\text{ on }w\}\).
* In the case where \(|\iota|=1\) and for every \(q\in Q\) and \(\sigma\in\Sigma\), we have \(|\{q^{\prime}\mid(q,\sigma,q^{\prime})\in\delta\}|\leq 1\), we say that \(\mathcal{A}\) is _deterministic_, denoted by DMDA, and view \(\delta\) as a function to states.
* When all the discount factors are integers, we say that \(\mathcal{A}\) is an _integral_ NMDA.

In the case where for every \(q\in Q\) and \(\sigma\in\Sigma\), we have \(|\{q^{\prime}\mid(q,\sigma,q^{\prime})\in\delta\}|\geq 1\), intuitively meaning that \(\mathcal{A}\) cannot get stuck, we say that \(\mathcal{A}\) is _complete_. It is natural to assume that discounted-sum automata are complete, and we adopt this assumption, as dead-end states, which are equivalent to states with infinite-weight transitions, break the property of the decaying importance of future events. 
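To make the walk-value formula of Definition 2.1 concrete, here is a minimal code sketch (the class and method names are ours); it recomputes the value of the example run \(r_{1}\), whose transition weights are \(1,\frac{1}{2},2\) and whose first two discount factors are \(3\) and \(2\).

```java
// Minimal sketch (names are ours) of the walk-value formula of Definition 2.1:
// value = sum_i  weight_i * prod_{j<i} (1 / discount_j).
public class WalkValue {
    public static double value(double[] weights, double[] discounts) {
        double value = 0.0;
        double accumulatedDiscount = 1.0;  // product of discount factors seen so far
        for (int i = 0; i < weights.length; i++) {
            value += weights[i] / accumulatedDiscount;
            accumulatedDiscount *= discounts[i];
        }
        return value;
    }

    public static void main(String[] args) {
        // The run r1 = q0,a,q0,a,q1,b,q2 from the example above:
        // weights 1, 1/2, 2; discount factors 3 and 2 on the first two transitions
        // (the last transition's factor does not affect the value of this walk).
        double[] weights = {1.0, 0.5, 2.0};
        double[] discounts = {3.0, 2.0, 2.0};
        System.out.println(value(weights, discounts));  // ~1.5, i.e., 3/2 as above
    }
}
```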
Figure 2. An NMDA \(\mathcal{A}\). The labeling on the transitions indicates the alphabet letter, the weight of the transition, and its discount factor.

Automata \(\mathcal{A}\) and \(\mathcal{A}^{\prime}\) are _equivalent_, denoted by \(\mathcal{A}\equiv\mathcal{A}^{\prime}\), if for every word \(w\), \(\mathcal{A}(w)=\mathcal{A}^{\prime}(w)\). For every finite (infinite) walk \(\psi=p_{0},\sigma_{0},p_{1},\sigma_{1},p_{2},\cdots,\sigma_{n-1},p_{n}\) (\(\psi=p_{0},\sigma_{0},p_{1},\cdots\)), and all integers \(0\leq i\leq j\leq|\psi|-1\) (\(0\leq i\leq j\)), we define the finite sub-walk from \(i\) to \(j\) as \(\psi[i..j]=p_{i},\sigma_{i},p_{i+1},\cdots,\sigma_{j},p_{j+1}\). For an infinite walk, we also define \(\psi[i..\infty]=p_{i},\sigma_{i},p_{i+1},\cdots\), namely the infinite suffix from position \(i\). For a finite walk, we also define the target state as \(\delta(\psi)=p_{n}\) and the accumulated discount factor as \(\rho(\psi)=\prod_{i=0}^{n-1}\rho\big{(}\psi(i)\big{)}\). We extend the transition function \(\delta\) to finite words in the regular manner: For a word \(u\in\Sigma^{*}\) and a letter \(\sigma\in\Sigma\), \(\delta(\varepsilon)=\iota\); \(\delta(u\cdot\sigma)=\bigcup_{q\in\delta(u)}\delta(q,\sigma)\). For a state \(q\) of \(\mathcal{A}\), we denote by \(\mathcal{A}^{q}\) the automaton that is identical to \(\mathcal{A}\), except for having \(q\) as its single initial state. An NMDA may have rational weights, yet it is often convenient to consider an analogous NMDA with integral weights, achieved by multiplying all weights by their common denominator.

**Proposition 2.2**.: _For all constant \(0<m\in\mathbb{Q}\), NMDA \(\mathcal{A}=\langle\Sigma,Q,\iota,\delta,\gamma,\rho\rangle\), NMDA \(\mathcal{A}^{\prime}=\langle\Sigma,Q,\iota,\delta,m\cdot\gamma,\rho\rangle\) obtained from \(\mathcal{A}\) by multiplying all its weights by \(m\), and a finite or infinite word \(w\), we have \(\mathcal{A}^{\prime}(w)=m\cdot\mathcal{A}(w)\)._

Proof.: Let \(0<m\in\mathbb{Q}\), \(\mathcal{A}=\langle\Sigma,Q,\iota,\delta,\gamma,\rho\rangle\) and \(\mathcal{A}^{\prime}=\langle\Sigma,Q,\iota,\delta,m\cdot\gamma,\rho\rangle\) NMDAs, and \(w\) a finite or infinite word. For every run \(r\) of \(\mathcal{A}\) on \(w\), we have that the same run in \(\mathcal{A}^{\prime}\) has the value of \[\mathcal{A}^{\prime}(r)=\sum_{i=0}^{|w|-1}\left(m\cdot\gamma(r(i))\cdot\prod_{j=0}^{i-1}\frac{1}{\rho(r(j))}\right)=m\cdot\sum_{i=0}^{|w|-1}\left(\gamma(r(i))\cdot\prod_{j=0}^{i-1}\frac{1}{\rho(r(j))}\right)=m\cdot\mathcal{A}(r)\] Hence for every run of \(\mathcal{A}\) with value \(v_{0}\) we have a run of \(\mathcal{A}^{\prime}\) for the same word with value of \(m\cdot v_{0}\). Symmetrically, for every run of \(\mathcal{A}^{\prime}\) with value \(v_{1}\) we have a run of \(\mathcal{A}\) for the same word with value of \(\frac{1}{m}\cdot v_{1}\). So, \[\mathcal{A}^{\prime}(w)=\inf\{\mathcal{A}^{\prime}(r)\bigm{|}r\text{ is a run of }\mathcal{A}^{\prime}\text{ on }w\}\geq\inf\{m\cdot\mathcal{A}(r)\bigm{|}r\text{ is a run of }\mathcal{A}\text{ on }w\}\] \[=m\cdot\inf\{\mathcal{A}(r)\bigm{|}r\text{ is a run of }\mathcal{A}\text{ on }w\}=m\cdot\mathcal{A}(w)\] and \[\mathcal{A}(w)=\inf\{\mathcal{A}(r)\ \big{|}\,\,r\text{ is a run of }\mathcal{A}\text{ on }w\}\] \[\geq\inf\Big{\{}\frac{1}{m}\cdot\mathcal{A}^{\prime}(r)\ \big{|}\,\,r\text{ is a run of }\mathcal{A}^{\prime}\text{ on }w\Big{\}}=\frac{1}{m}\cdot\mathcal{A}^{\prime}(w)\] which leads to \(\mathcal{A}^{\prime}(w)=m\cdot\mathcal{A}(w)\). 
### Size We define the size of \(\mathcal{A}\), denoted by \(|\mathcal{A}|\), as the maximum between the number of transitions and the maximal binary representation of any discount factor or weight in it. For rational weights, we assume all of them to have the same denominator. The motivation for a common denominator stems from the determinization algorithm (Theorem 4.6). Omitting this assumption will still result in a deterministic automaton whose size is only single exponential in the size of the original automaton, yet storing its states will require a much bigger space, changing our PSPACE algorithms (section 4) into EXPSPACE ones. ### Algebraic operations Given automata \(\mathcal{A}\) and \(\mathcal{B}\) over the same alphabet, and a non-negative scalar \(c\in\mathbb{Q}\), we define * \(\mathcal{C}\equiv\min(\mathcal{A},\mathcal{B})\) if \(\forall w\). \(\mathcal{C}(w)=\min\big{(}\mathcal{A}(w),\mathcal{B}(w)\big{)}\) * \(\mathcal{C}\equiv\max(\mathcal{A},\mathcal{B})\) if \(\forall w\). \(\mathcal{C}(w)=\max\big{(}\mathcal{A}(w),\mathcal{B}(w)\big{)}\) * \(\mathcal{C}\equiv\mathcal{A}+\mathcal{B}\) if \(\forall w\). \(\mathcal{C}(w)=\mathcal{A}(w)+\mathcal{B}(w)\) * \(\mathcal{C}\equiv\mathcal{A}-\mathcal{B}\) if \(\forall w\). \(\mathcal{C}(w)=\mathcal{A}(w)-\mathcal{B}(w)\) * \(\mathcal{C}\equiv c\cdot\mathcal{A}\) if \(\forall w\). \(\mathcal{C}(w)=c\cdot\mathcal{A}(w)\) * \(\mathcal{C}\equiv-\mathcal{A}\) if \(\forall w\). \(\mathcal{C}(w)=-\mathcal{A}(w)\) ### Decision problems Given automata \(\mathcal{A}\) and \(\mathcal{B}\) and a threshold \(\nu\in\mathbb{Q}\), we consider the following properties, with strict (or non-strict) inequalities: _Nonemptiness:_ There exists a word \(w\), s.t. \(\mathcal{A}(w)<\nu\) (or \(\mathcal{A}(w)\leq\nu\)); _Exact-value:_ There exists a word \(w\), s.t. \(\mathcal{A}(w)=\nu\); _Universality:_ For all words \(w\), \(\mathcal{A}(w)<\nu\) (or \(\mathcal{A}(w)\leq\nu\)); _Equivalence:_ For all words \(w\), \(\mathcal{A}(w)=\mathcal{B}(w)\); _Containment:_ For all words \(w\), \(\mathcal{A}(w)>\mathcal{B}(w)\) (or \(\mathcal{A}(w)\geq\mathcal{B}(w)\)). 3 Footnote 3: Considering quantitative containment as a generalization of language containment, and defining the “acceptance” of a word \(w\) as having a small enough value on it, we define that \(\mathcal{A}\) is contained in \(\mathcal{B}\) if for every word \(w\), \(\mathcal{A}\)’s value on \(w\) is at least as big as \(\mathcal{B}\)’s value. (Observe the \(>\) and \(\geq\) signs in the definition.) ### Finite and infinite words Results regarding NMDAs on finite words that refer to the existence of an equivalent automaton ("positive results") can be extended to NMDAs on infinite words due to Lemma 2.3 below. Likewise, results that refer to non-existence of an equivalent automaton ("negative results") can be extended from NMDAs on infinite words to NMDAs on finite words. Accordingly, if not stated otherwise, we prove the positive results for automata on finite words and the negative results for automata on infinite words, getting the results for both settings. **Lemma 2.3**.: _For all NMDAs \(\mathcal{A}\) and \(\mathcal{B}\), if for all finite word \(u\in\Sigma^{+}\), we have \(\mathcal{A}(u)=\mathcal{B}(u)\), then also for all infinite word \(w\in\Sigma^{\omega}\), we have \(\mathcal{A}(w)=\mathcal{B}(w)\)._ The proof is a simple extension of the proof of a similar lemma in [10] with respect to NDAs. Notice that the converse does not hold, namely there are automata equivalent w.r.t. 
infinite words, but not w.r.t. finite words. (See an example in Figure 5.)

## 3. Arbitrary Integral NMDAs

Unfortunately, the family of integral NMDAs in which discount factors can be chosen arbitrarily is not closed under determinization (subsection 3.1) and under basic algebraic operations (subsection 3.2), and its containment problem is undecidable (subsection 3.3).

### Non-closure under determinization

**Theorem 3.1**.: _There exists an integral NMDA that no integral DMDA is equivalent to._

Proof.: Let \(\mathcal{B}\) be the integral NMDA depicted in Figure 3 over the alphabet \(\Sigma=\{a,b,c\}\). We show that for every \(n\in\mathbb{N}\), \(\mathcal{B}(a^{n}b^{\omega})=1-\frac{1}{2^{n+1}}\) and \(\mathcal{B}(a^{n}c^{\omega})=1+\frac{1}{3^{n+1}}\). An integral DMDA \(\mathcal{D}\) that is equivalent to \(\mathcal{B}\) will intuitively need to preserve an accumulated discount factor \(\Pi_{n}\) and an accumulated weight \(W_{n}\) on every \(a^{n}\) prefix, such that both suffixes of \(b^{\omega}\) and \(c^{\omega}\) will match the value of \(\mathcal{B}\). Since the difference between the required value of each pair \(\langle a^{n}b^{\omega},a^{n}c^{\omega}\rangle\) is "relatively large", \(\Pi_{n}\) must have "many" small discount factors of 2 to compensate for this difference. But too many discount factors of 2 will not allow achieving the "delicate" values of \(1+\frac{1}{3^{n+1}}\). We will formally analyze the mathematical properties of \(\Pi_{n}\), showing that its prime-factor decomposition must indeed contain mostly 2's, "as well as" mostly 3's, leading to a contradiction.

Note that the only nondeterminism in \(\mathcal{B}\) is in the initial state. Intuitively, for an infinite word for which the first non-\(a\) letter is \(b\), the best choice for \(\mathcal{B}\) would be to start in \(q_{0}\), while if the first non-\(a\) letter is \(c\), the best choice would be to start in \(q_{1}\). Formally, for each \(n\in\mathbb{N}\setminus\{0\}\), observe that for the finite word \(a^{n}\), the run \(r_{1}\) starting at \(q_{0}\) will have the accumulated value of \(\mathcal{B}(r_{1})=\sum_{k=0}^{n-1}\frac{1}{2}\cdot\frac{1}{2^{k}}=\frac{1}{2}\cdot\frac{1-\frac{1}{2^{n}}}{1-\frac{1}{2}}=1-\frac{1}{2^{n}}\), and an accumulated discount factor of \(2^{n}\); the run \(r_{2}\) starting at \(q_{1}\) will have the value \(\mathcal{B}(r_{2})=\sum_{k=0}^{n-1}\frac{2}{3}\cdot\frac{1}{3^{k}}=\frac{2}{3}\cdot\frac{1-\frac{1}{3^{n}}}{1-\frac{1}{3}}=1-\frac{1}{3^{n}}\), and an accumulated discount factor of \(3^{n}\); and thus the value of \(\mathcal{B}\), which is the minimum value of the two runs, is \(\mathcal{B}(a^{n})=\min\left\{1-\frac{1}{2^{n}},1-\frac{1}{3^{n}}\right\}=1-\frac{1}{2^{n}}\).

Figure 3. An integral NMDA \(\mathcal{B}\) on infinite words that cannot be determinized.

Accordingly, we have that for every \(n\in\mathbb{N}\), 
Intuitively, \(\mathcal{D}\) needs to preserve accumulated discount factor and weight on every \(a^{n}\) prefix, such that both suffixes of \(b^{\omega}\) and \(c^{\omega}\) will match the value of \(\mathcal{B}\). Since the difference between the required value of each pair \(\langle a^{n}b^{\omega},a^{n}c^{\omega}\rangle\) is relatively large, \(\mathcal{D}\) must have "many" small discount factors of \(2\) to compensate this difference. But "many" discount factors of \(2\) will not allow to achieve all the delicate values of \(1+\frac{1}{3^{n+1}}\). Formally, since \(Q_{\mathcal{D}}\) is finite, there exist \(i\in\mathbb{N}\) and \(j\in\mathbb{N}\setminus\{0\}\) such that \(\delta_{\mathcal{D}}(a^{i})=\delta_{\mathcal{D}}(a^{i+j})\). Let \(r\) be the run of \(\mathcal{D}\) on \(a^{i+j}\), and denote the weight and discount factor of the prefix of \(r\) on \(a^{i}\) as \(W_{1}=\mathcal{D}(a^{i})=\mathcal{D}(r[0..i-1])\) and \(\Pi_{1}=\rho(r[0..i-1])\), and the weight and discount factor of the suffix of \(r\) on the \(a^{j}\) cycle as \(W_{2}=\mathcal{D}(r[i..i+j-1])\) and \(\Pi_{2}=\rho(r[i..i+j-1])\). Let \(W_{b}=\left[\mathcal{D}(a^{i}b^{\omega})-\mathcal{D}(a^{i})\right]\cdot\Pi_{1}\), be the weight of a \(b^{\omega}\) word starting from \(\delta_{\mathcal{D}}(a^{i})\), and similarly \(W_{c}=\left[\mathcal{D}(a^{i}c^{\omega})-\mathcal{D}(a^{i})\right]\cdot\Pi_{1}\). The partial structure of \(\mathcal{D}\) with respect to those symbols is depicted in Figure 4. For every \(k\in\mathbb{N}\) we have \[\mathcal{D}(a^{i+j\cdot k}b^{\omega}) =W_{1}+\Big{(}\sum_{t=0}^{k-1}\frac{W_{2}}{\Pi_{1}\cdot\Pi_{2}^{ t}}\Big{)}+\frac{W_{b}}{\Pi_{1}\cdot\Pi_{2}^{k}} \tag{3.3}\] \[\mathcal{D}(a^{i+j\cdot k}c^{\omega}) =W_{1}+\Big{(}\sum_{t=0}^{k-1}\frac{W_{2}}{\Pi_{1}\cdot\Pi_{2}^{ t}}\Big{)}+\frac{W_{c}}{\Pi_{1}\cdot\Pi_{2}^{k}} \tag{3.4}\] By the assumption that \(\mathcal{B}\equiv\mathcal{D}\), subtracting Equation 3.1 from Equation 3.2 and Equation 3.3 from Equation 3.4, we get \[\frac{1}{3^{i+j\cdot k+1}}+\frac{1}{2^{i+j\cdot k+1}}=\frac{W_{c}-W_{b}}{\Pi_{ 1}\cdot\Pi_{2}^{k}}\] Let \(M\) be the maximal weight in absolute value in \(\mathcal{D}\). Since \(2\) is the minimal integral discount factor, we have that the value of \(\mathcal{D}\) on any infinite word is no more than \(2M\) in absolute value. Hence \(|W_{b}|\leq 2M\) and \(|W_{c}|\leq 2M\), which lead to \[\frac{1}{2^{i+j\cdot k+1}}<\frac{1}{3^{i+j\cdot k+1}}+\frac{1}{2^{i+j\cdot k+1 }}\leq\frac{2\cdot 2M}{\Pi_{1}}\cdot\frac{1}{\Pi_{2}^{k}}\] Figure 4: Partial structure of the DMDA \(\mathcal{D}\) in the proof of Theorem 3.1. and therefore, \(\frac{1}{2^{j\cdot k}}\,<\,\frac{2\cdot 2M\cdot 2^{i+1}}{\Pi_{1}}\cdot\frac{1}{\Pi_{2} ^{k}}\) and \(\left(\frac{\Pi_{2}}{2^{j}}\right)^{k}\,<\,\frac{2\cdot 2M\cdot 2^{i+1}}{\Pi_{1}}\). The above holds for every \(k\in\mathbb{N}\). Observe that \(\frac{2\cdot 2M\cdot 2^{i+1}}{\Pi_{1}}\) is a constant and \(\lim_{k\to\infty}\left(\frac{\Pi_{2}}{2^{j}}\right)^{k}=\infty\) if and only if \(\frac{\Pi_{2}}{2^{j}}>1\), to conclude that \(\Pi_{2}\leq 2^{j}\). But \(\Pi_{2}\) is a product of \(j\) integers bigger than \(1\), hence \(\Pi_{2}=2^{j}\). Let \(m\) be the least common denominator of \(W_{c}\) and \(W_{2}\), and construct a DMDA \(\mathcal{D}^{\prime}=\langle\Sigma,Q_{\mathcal{D}},p_{0},\delta_{\mathcal{D}}, m\cdot\gamma_{\mathcal{D}},\rho_{\mathcal{D}}\rangle\) created from \(\mathcal{D}\) by multiplying all its weights by \(m\). 
According to Proposition 2.2 and Lemma 2.3, for every \(w\in\Sigma^{\omega}\) we have \[\mathcal{D}^{\prime}(w)=m\cdot\mathcal{D}(w)=m\cdot\mathcal{B}(w) \tag{3.5}\] Let \(W_{1}^{\prime},W_{2}^{\prime}\) and \(W_{c}^{\prime}\) be the values of \(\mathcal{D}^{\prime}\) on the \(a^{i}\) prefix, the following \(a^{j}\) cycle and the final \(c^{\omega}\) respectively. Observe that \(W_{1}^{\prime}=m\cdot W_{1}\), \(W_{2}^{\prime}=m\cdot W_{2}\) and \(W_{c}^{\prime}=m\cdot W_{c}\), and that \(W_{2}^{\prime}\) and \(W_{c}^{\prime}\) are integers. For every \(k\in\mathbb{N}\), similarly to Equation 3.4, we have \[\mathcal{D}^{\prime}(a^{i}c^{\omega})-\mathcal{D}^{\prime}(a^{i+ k\cdot j}c^{\omega}) =\frac{W_{c}^{\prime}}{\Pi_{1}}-\Big{(}\sum_{t=0}^{k-1}\frac{W_{2 }^{\prime}}{\Pi_{1}\cdot 2^{t\cdot j}}\Big{)}-\frac{W_{c}^{\prime}}{\Pi_{1} \cdot 2^{k\cdot j}}\] \[=\frac{W_{c}^{\prime}(2^{k\cdot j}-1)-\sum_{t=1}^{k}2^{t\cdot j}W _{2}^{\prime}}{\Pi_{1}\cdot 2^{k\cdot j}} \tag{3.6}\] Define \(X(k)=W_{c}^{\prime}(2^{k\cdot j}-1)-\sum_{t=1}^{k}2^{t\cdot j}W_{2}^{\prime}\) and observe that \(X(k)\) is integer. Combine Equation 3.6, Equation 3.5 and Equation 3.2 to \(m+\frac{m}{3^{i+1}}-\Big{(}m+\frac{m}{3^{i+k\cdot j+1}}\Big{)}=\mathcal{D}^{ \prime}(a^{i}c^{\omega})-\mathcal{D}^{\prime}(a^{i+k\cdot j}c^{\omega})=\frac{ X(k)}{\Pi_{1}\cdot 2^{k\cdot j}}\), simplified to \(\frac{m\cdot(3^{k\cdot j}-1)}{3^{i+k\cdot j+1}}=\frac{X(k)}{\Pi_{1}\cdot 2^{k \cdot j}}\). But both \(m\) and \(\Pi_{1}\) are constants and each of them has a finite number of prime factors of \(3\). Since \((3^{k\cdot j}-1)\) is not divisible by \(3\), and \(X(k)\) is integer, when \(k\) gets bigger, eventually the denominator of the left side will have more prime factors of \(3\) than the denominator of the right side, which leads to a contradiction. Hence, no DMDA is equivalent to \(\mathcal{B}\) with respect to infinite words. According to Lemma 2.3, we also conclude that no DMDA is equivalent to \(\mathcal{B}\) with respect to finite words. ### Non-closure under algebraic operations In the following proof that integral NMDAs are not closed under algebraic operations, we cannot assume toward contradiction a candidate deterministic automaton, and thus, as opposed to the proof of Theorem 3.1, we cannot assume a specific accumulative discount factor for each word prefix. Yet, we analyze the behavior of a candidate nondeterministic automaton on an infinite series of words, and build on the observation that there must be a state that appears in "the same position of the run" in infinitely many optimal runs of the automaton on these words. 
**Theorem 3.2**.: _There exist integral NMDAs (even deterministic integral NDAs) \(\mathcal{A}\) and \(\mathcal{B}\) over the same alphabet, such that no integral NMDA is equivalent to \(\max(\mathcal{A},\mathcal{B})\), and no integral NMDA is equivalent to \(\mathcal{A}+\mathcal{B}\)._ Proof.: Consider the NMDAs \(\mathcal{A}\) and \(\mathcal{B}\) depicted in Figure 5, and assume toward contradiction that there exists an integral NMDA \(\mathcal{C}^{\prime}\) such that for every \(n\in\mathbb{N}\), \[\mathcal{C}^{\prime}(a^{n}b^{\omega})=\max(\mathcal{A},\mathcal{B})(a^{n}b^{\omega})=\big(\mathcal{A}+\mathcal{B}\big)(a^{n}b^{\omega})=\begin{cases}\frac{1}{2^{n}}&n\text{ is odd}\\ \frac{1}{3^{n}}&n\text{ is even}\end{cases}\] Figure 5. Deterministic integral NDAs that no integral NMDA is equivalent to their max or addition. Let \(d\in\mathbb{N}\) be the least common denominator of the weights in \(\mathcal{C}^{\prime}\), and consider the NMDA \(\mathcal{C}=\langle\Sigma,Q,\iota,\delta,\gamma,\rho\rangle\) created from \(\mathcal{C}^{\prime}\) by multiplying all its weights by \(d\). Observe that all the weights in \(\mathcal{C}\) are integers. According to Proposition 2.2, for every \(n\in\mathbb{N}\), we have \[\mathcal{C}(a^{n}b^{\omega})=d\cdot\mathcal{C}^{\prime}(a^{n}b^{\omega})=\begin{cases}\frac{d}{2^{n}}&n\text{ is odd}\\ \frac{d}{3^{n}}&n\text{ is even}\end{cases}\] For every even \(n\in\mathbb{N}\), let \(w_{n}=a^{n}b^{\omega}\), and let \(r_{n}\) be a run of \(\mathcal{C}\) on \(w_{n}\) that entails the minimal value of \(\frac{d}{3^{n}}\). Since \(\mathcal{C}\) is finite, there exists a state \(q\in Q\) such that for infinitely many even \(n\in\mathbb{N}\), the target state of \(r_{n}\) after \(n\) steps is \(q\), i.e., \(\delta(r_{n}[0..n-1])=q\). We now show that the difference between \(U_{b}=\mathcal{C}^{q}(b^{\omega})\) and \(U_{a}=\mathcal{C}^{q}(a\cdot b^{\omega})\), the weights of the \(b^{\omega}\) and \(a\cdot b^{\omega}\) suffixes starting at \(q\), discounted by \(\Pi_{n}=\rho(r_{n}[0..n-1])\), which is the accumulated discount factor of the prefix of \(r_{n}\) up to \(q\), is approximately \(\frac{1}{2^{n}}\). (See Figure 6 for the notations.) Figure 6. The state \(q\) and the notations from the proof of Theorem 3.2, for two different even \(n\in\mathbb{N}\) such that \(\delta(r_{n}[1..n])=q\). The labels on the walks indicate the input word and the accumulated weight and discount factors. Since the weights of the prefixes are constant, for large enough \(n\) we will conclude that \(m_{1}\cdot 2^{n}\geq\Pi_{n}\) for some positive constant \(m_{1}\). For every such \(n\in\mathbb{N}\), let \(W_{n}=\mathcal{C}(r_{n}[0..n-1])\), and since \(\mathcal{C}(r_{n})=\frac{d}{3^{n}}\), we have \[W_{n}+\frac{U_{b}}{\Pi_{n}}=\frac{d}{3^{n}} \tag{3.7}\] Since the value of every run of \(\mathcal{C}\) on \(a^{n+1}b^{\omega}\) is at least \(\frac{d}{2^{n+1}}\), we have \(W_{n}+\frac{U_{a}}{\Pi_{n}}\geq\frac{d}{2^{n+1}}\). Hence, \(\frac{d}{3^{n}}-\frac{U_{b}}{\Pi_{n}}+\frac{U_{a}}{\Pi_{n}}\geq\frac{d}{2^{n+1}}\), resulting in \(\frac{U_{a}-U_{b}}{\Pi_{n}}\geq d\cdot\left(\frac{1}{2^{n+1}}-\frac{1}{3^{n}}\right)\). But for large enough \(n\), we have
Now, \(U_{b}\) is a rational constant, otherwise Equation 3.7 cannot hold, as the other elements are rationals. Hence, there exist \(x\in\mathbb{Z}\) and \(y\in\mathbb{N}\) such that \(U_{b}=\frac{x}{y}\), and \(\frac{1}{3^{n}}=\frac{W_{n}\cdot\Pi_{n}+U_{b}}{d\cdot\Pi_{n}}=\frac{W_{n}\cdot \Pi_{n}+\frac{x}{y}}{d\cdot\Pi_{n}}=\frac{W_{n}\cdot\Pi_{n}\cdot y+x}{d\cdot y \cdot\Pi_{n}}\). Since the denominator and the numerator of the right-hand side are integers, we conclude that there exists a positive constant \(m_{2}=d\cdot y\), such that \(m_{2}\cdot\Pi_{n}\geq 3^{n}\). Eventually, we get \(m_{1}\cdot m_{2}\cdot 2^{n}\geq 3^{n}\), for some positive constants \(m_{1}\) and \(m_{2}\), and for infinitely many \(n\in\mathbb{N}\). But this stands in contradiction with \(\lim_{n\to\infty}\left(\frac{2}{3}\right)^{n}=0\). ### Undecidability of the containment problem We show that it is undecidable to resolve for given integral NMDA \(\mathcal{N}\) and integral DMDA \(\mathcal{D}\), on both finite and infinite words, whether \(\mathcal{N}\equiv\mathcal{D}\) and whether \(\mathcal{N}\leq\mathcal{D}\), and on finite words also whether \(\mathcal{N}<\mathcal{D}\). We prove the undecidability result by reduction from the halting problem of two-counter machines. The general scheme follows similar reductions, such as in [22, 2], yet the crux is in simulating a counter by integral NMDAs. Upfront, discounted summation is not suitable for simulating counters, since a current increment has, in the discounted setting, a much higher influence than of a far-away decrement. However, we show that multiple discount factors allow in a sense to eliminate the influence of time, having automata in which no matter where a letter appears in the word, it will have the same influence on the automaton value. (See Lemma 3.3 and Figure 8). Another main part of the proof is in showing how to nondeterministically adjust the automaton weights and discount factors in order to "detect" whether a counter is at a current value \(0\). (See Figures 10, 11, 13 and 14.) We start with introducing the halting problem of two-counter machines (subsubsection 3.3.1), continue with a lemma on the accumulated value of certain series of discount factors and weights (subsubsection 3.3.2), present the reduction (subsubsection 3.3.3) and show the undecidability proof (subsubsection 3.3.4). #### 3.3.1. Two-counter machines A two-counter machine [38]\(\mathcal{M}\) is a sequence \((l_{1},\ldots,l_{n})\) of commands, for some \(n\in\mathbb{N}\), involving two counters \(x\) and \(y\). We refer to \(\{\,1,\ldots,n\,\}\) as the _locations_ of the machine. For every \(i\in\{\,1,\ldots,n\,\}\) we refer to \(l_{i}\) as the _command in location_\(i\). There are five possible forms of commands: \[\textsc{inc}(c),\ \textsc{dec}(c),\ \textsc{goto}\ l_{k},\ \textsc{if}\ c =0\ \textsc{goto}\ l_{k}\ \textsc{else}\ \textsc{goto}\ l_{k^{\prime}},\ \textsc{halt},\] where \(c\in\{\,x,y\,\}\) is a counter and \(1\leq k,k^{\prime}\leq n\) are locations. For not decreasing a zero-valued counter \(c\in\{\,x,y\,\}\), every \(\textsc{dec}(c)\) command is preceded by the command if \(c\)=0 goto \(<\)current_line\(>\) else goto \(<\)next_line\(>\), and there are no other direct goto-commands to it. The counters are initially set to \(0\). An example of a two-counter machine is given in Figure 7. 
Let \(L\) be the set of possible commands in \(\mathcal{M}\). Then a _run_ of \(\mathcal{M}\) is a sequence \(\psi=\psi_{1},\ldots,\psi_{m}\in(L\times\mathbb{N}\times\mathbb{N})^{*}\) such that the following hold: 1. \(\psi_{1}=\langle l_{1},0,0\rangle\). 2. For all \(1<i\leq m\), let \(\psi_{i-1}=(l_{j},\alpha_{x},\alpha_{y})\) and \(\psi_{i}=(l^{\prime},\alpha^{\prime}_{x},\alpha^{\prime}_{y})\). Then, the following hold. * If \(l_{j}\) is an \(\textsc{inc}(x)\) command (resp. \(\textsc{inc}(y)\)), then \(\alpha^{\prime}_{x}=\alpha_{x}+1\), \(\alpha^{\prime}_{y}=\alpha_{y}\) (resp. \(\alpha^{\prime}_{y}=\alpha_{y}+1\), \(\alpha^{\prime}_{x}=\alpha_{x}\)), and \(l^{\prime}=l_{j+1}\). * If \(l_{j}\) is \(\textsc{dec}(x)\) (resp. \(\textsc{dec}(y)\)) then \(\alpha^{\prime}_{x}=\alpha_{x}-1\), \(\alpha^{\prime}_{y}=\alpha_{y}\) (resp. \(\alpha^{\prime}_{y}=\alpha_{y}-1\), \(\alpha^{\prime}_{x}=\alpha_{x}\)), and \(l^{\prime}=l_{j+1}\). * If \(l_{j}\) is \(\textsc{goto}\ l_{k}\) then \(\alpha^{\prime}_{x}=\alpha_{x}\), \(\alpha^{\prime}_{y}=\alpha_{y}\), and \(l^{\prime}=l_{k}\). * If \(l_{j}\) is if \(x\)=0 \(\textsc{goto}\ l_{k}\) \(\textsc{else goto}\ l_{k^{\prime}}\) then \(\alpha^{\prime}_{x}=\alpha_{x}\), \(\alpha^{\prime}_{y}=\alpha_{y}\), and \(l^{\prime}=l_{k}\) if \(\alpha_{x}=0\), and \(l^{\prime}=l_{k^{\prime}}\) otherwise. * If \(l_{j}\) is if \(y\)=0 \(\textsc{goto}\ l_{k}\) \(\textsc{else goto}\ l_{k^{\prime}}\) then \(\alpha^{\prime}_{x}=\alpha_{x}\), \(\alpha^{\prime}_{y}=\alpha_{y}\), and \(l^{\prime}=l_{k}\) if \(\alpha_{y}=0\), and \(l^{\prime}=l_{k^{\prime}}\) otherwise. * If \(l^{\prime}\) is halt then \(i=m\), namely a run does not continue after halt. If, in addition, we have that \(\psi_{m}=\langle l_{j},\alpha_{x},\alpha_{y}\rangle\) such that \(l_{j}\) is a halt command, we say that \(\psi\) is a _halting run_. We say that a machine \(\mathcal{M}\) 0-halts if its run is halting and ends in \(\langle l,0,0\rangle\). We say that a sequence of commands \(\tau\in L^{*}\) _fits_ a run \(\psi\) if \(\tau\) is the projection of \(\psi\) on its first component. The _command trace_ \(\pi=\sigma_{1},\ldots,\sigma_{m}\) of a halting run \(\psi=\psi_{1},\ldots,\psi_{m}\) describes the flow of the run, including a description of whether a counter \(c\) was equal to 0 or larger than 0 in each occurrence of an if \(c\)=0 \(\textsc{goto}\ l_{k}\) \(\textsc{else goto}\ l_{k^{\prime}}\) command. It is formally defined as follows. \(\sigma_{m}=\textsc{halt}\) and for every \(1<i\leq m\), we define \(\sigma_{i-1}\) according to \(\psi_{i-1}=(l_{j},\alpha_{x},\alpha_{y})\) in the following manner: * \(\sigma_{i-1}=l_{j}\) if \(l_{j}\) is not of the form if \(c\)=0 \(\textsc{goto}\ l_{k}\) \(\textsc{else goto}\ l_{k^{\prime}}\). * \(\sigma_{i-1}=(\textsc{goto}\ l_{k},c=0)\) for \(c\in\{x,y\}\), if \(\alpha_{c}=0\) and the command \(l_{j}\) is of the form if \(c\)=0 \(\textsc{goto}\ l_{k}\) \(\textsc{else goto}\ l_{k^{\prime}}\). * \(\sigma_{i-1}=(\textsc{goto}\ l_{k^{\prime}},c>0)\) for \(c\in\{x,y\}\), if \(\alpha_{c}>0\) and the command \(l_{j}\) is of the form if \(c\)=0 \(\textsc{goto}\ l_{k}\) \(\textsc{else goto}\ l_{k^{\prime}}\). For example, the command trace of the halting run of the machine in Figure 7 is \(\textsc{inc}(x)\), \(\textsc{inc}(x)\), \(\textsc{inc}(x)\), \((\textsc{goto}\ l_{4},x>0)\), \(\textsc{dec}(x)\), \((\textsc{goto}\ l_{3},x>0)\), \((\textsc{goto}\ l_{4},x>0)\), \(\textsc{dec}(x)\), \((\textsc{goto}\ l_{6},x=0)\), halt.
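To make the notions of run and command trace concrete, here is a minimal simulator sketch; the tuple encoding of commands (e.g. `('inc', 'x')`, `('if0', 'x', k, k2)`) and the example program are our own illustration, not the paper's notation and not the machine of Figure 7:

```python
def run_trace(program):
    """Simulate a two-counter machine given as a 1-indexed list of command tuples:
    ('inc', c), ('dec', c), ('goto', k), ('if0', c, k, k2), ('halt',).
    Returns the command trace and the final counter values."""
    counters = {'x': 0, 'y': 0}
    loc, trace = 1, []
    while True:
        cmd = program[loc - 1]
        if cmd[0] == 'halt':
            trace.append('halt')
            return trace, counters
        if cmd[0] == 'inc':
            counters[cmd[1]] += 1
            trace.append(f'inc({cmd[1]})')
            loc += 1
        elif cmd[0] == 'dec':
            counters[cmd[1]] -= 1
            trace.append(f'dec({cmd[1]})')
            loc += 1
        elif cmd[0] == 'goto':
            trace.append(f'goto l{cmd[1]}')
            loc = cmd[1]
        else:  # ('if0', c, k, k2): jump to k if counter c is 0, else to k2
            c, k, k2 = cmd[1], cmd[2], cmd[3]
            if counters[c] == 0:
                trace.append(f'(goto l{k}, {c}=0)')
                loc = k
            else:
                trace.append(f'(goto l{k2}, {c}>0)')
                loc = k2

# A small example machine (ours, not Figure 7): push x up to 2, then count it back down.
example = [('inc', 'x'), ('inc', 'x'), ('if0', 'x', 6, 4),
           ('dec', 'x'), ('goto', 3), ('halt',)]
trace, final = run_trace(example)
print(trace)   # ends with '(goto l6, x=0)', 'halt'; the machine 0-halts (x = y = 0)
```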
Deciding whether a given counter machine \(\mathcal{M}\) halts is known to be undecidable [38]. Deciding whether \(\mathcal{M}\) halts with both counters having value 0, termed the \(0\)_-halting problem_, is also undecidable. Indeed, the halting problem can be reduced to the latter by adding some commands that clear the counters, before every halt command. #### 3.3.2. Auxiliary lemma for simulating counters We present a lemma on the accumulated value of certain series of discount factors and weights. Observe that by the lemma, no matter where the pair of discount-factor \(\lambda\in\mathbb{N}\setminus\{0,1\}\) and weight \(w=\frac{\lambda-1}{\lambda}\) appear along the run, they will have the same effect on the accumulated value. This property will play a key role in simulating counting by NMDAs. **Lemma 3.3**.: _For every sequence \(\lambda_{1},\cdots,\lambda_{m}\) of integers larger than \(1\) and weights \(w_{1},\cdots,w_{m}\) such that \(w_{i}=\frac{\lambda_{i}-1}{\lambda_{i}}\), we have \(\sum_{i=1}^{m}\left(w_{i}\cdot\prod_{j=1}^{i-1}\frac{1}{\lambda_{j}}\right)=1- \frac{1}{\prod_{j=1}^{j-1}\lambda_{j}}\)._ Figure 7. An example of a two-counter machine. Proof.: We show the claim by induction on \(m\). The base case, i.e. \(m=1\), is trivial. For the induction step we have \[\sum_{i=1}^{m+1}\big{(}w_{i}\cdot\prod_{j=1}^{i-1}\frac{1}{\lambda_ {j}}\big{)} =\sum_{i=1}^{m}\big{(}w_{i}\cdot\prod_{j=1}^{i-1}\frac{1}{\lambda_{ j}}\big{)}+w_{m+1}\cdot\prod_{j=1}^{m}\frac{1}{\lambda_{j}}\] \[=1-\frac{1}{\prod_{j=1}^{m}\lambda_{j}}+\frac{\lambda_{m+1}-1}{ \lambda_{m+1}}\cdot\prod_{j=1}^{m}\frac{1}{\lambda_{j}}\] \[=1-\frac{\lambda_{m+1}}{\prod_{j=1}^{m+1}\lambda_{j}}+\frac{ \lambda_{m+1}-1}{\prod_{j=1}^{m+1}\lambda_{j}}=1-\frac{1}{\prod_{j=1}^{m+1} \lambda_{j}}\] #### 3.3.3. The Reduction We turn to our reduction from the halting problem of two-counter machines to the problem of NMDA containment. We provide the construction and the correctness lemma with respect to automata on finite words, and then show in subsubsection 3.3.4 how to use the same construction also for automata on infinite words. Given a two-counter machine \(\mathcal{M}\) with the commands \((l_{1},\ldots,l_{n})\), we construct an integral DMDA \(\mathcal{A}\) and an integral NMDA \(\mathcal{B}\) on finite words, such that \(\mathcal{M}\) 0-halts iff there exists a word \(w\in\Sigma^{+}\) such that \(\mathcal{B}(w)\geq\mathcal{A}(w)\) iff there exists a word \(w\in\Sigma^{+}\) such that \(\mathcal{B}(w)>\mathcal{A}(w)\). The automata \(\mathcal{A}\) and \(\mathcal{B}\) operate over the following alphabet \(\Sigma\), which consists of \(5n+5\) letters, standing for the possible elements in a command trace of \(\mathcal{M}\): \[\Sigma^{\text{INCDEC}}= \ \{\operatorname{\textsc{inc}}(x),\operatorname{\textsc{dec}}(x), \operatorname{\textsc{inc}}(y),\operatorname{\textsc{dec}}(y)\,\}\] \[\Sigma^{\text{GOTO}}= \ \big{\{}\textsc{GOTO}\ \ l_{k}:k\in\{1,\ldots,n\}\big{\}}\cup\] \[\big{\{}(\textsc{GOTO}\ \ l_{k},c=0):k\in\{1,\ldots,n\},c\in\{x,y \}\big{\}}\cup\] \[\big{\{}(\textsc{GOTO}\ \ l_{k^{\prime}},c>0):k^{\prime}\in\{1, \ldots,n\},c\in\{x,y\}\big{\}}\] \[\Sigma^{\text{NOHALT}}= \ \Sigma^{\text{INCDEC}}\cup\Sigma^{\text{GOTO}}\] \[\Sigma= \ \Sigma^{\text{NOHALT}}\cup\big{\{}\textsc{HALT}\big{\}}\] When \(\mathcal{A}\) and \(\mathcal{B}\) read a word \(w\in\Sigma^{+}\), they intuitively simulate a sequence of commands \(\tau_{u}\) that induces the command trace \(u=\textsc{pref}_{\text{HALT}}(w)\). 
If \(\tau_{u}\) fits the actual run of \(\mathcal{M}\), and this run 0-halts, then the minimal run of \(\mathcal{B}\) on \(w\) has a value strictly larger than \(\mathcal{A}(w)\). If, however, \(\tau_{u}\) does not fit the actual run of \(\mathcal{M}\), or it does fit the actual run but it does not 0-halt, then the violation is detected by \(\mathcal{B}\), which has a run on \(w\) with value strictly smaller than \(\mathcal{A}(w)\). In the construction, we use the following partial discount-factor functions \(\rho_{p},\rho_{d}:\Sigma^{\text{NOHALT}}\to\mathbb{N}\) and partial weight functions \(\gamma_{p},\gamma_{d}:\Sigma^{\text{NOHALT}}\to\mathbb{Q}\). \[\rho_{p}(\sigma)=\begin{cases}5&\sigma=\operatorname{\textsc{inc}}(x)\\ 4&\sigma=\operatorname{\textsc{dec}}(x)\\ 7&\sigma=\operatorname{\textsc{inc}}(y)\\ 6&\sigma=\operatorname{\textsc{dec}}(y)\\ 15&\text{otherwise}\end{cases}\quad\rho_{d}(\sigma)=\begin{cases}4&\sigma= \operatorname{\textsc{inc}}(x)\\ 5&\sigma=\operatorname{\textsc{dec}}(x)\\ 6&\sigma=\operatorname{\textsc{inc}}(y)\\ 7&\sigma=\operatorname{\textsc{dec}}(y)\\ 15&\text{otherwise}\end{cases}\] \(\gamma_{p}(\sigma)=\frac{\rho_{p}(\sigma)-1}{\rho_{p}(\sigma)}\), and \(\gamma_{d}(\sigma)=\frac{\rho_{d}(\sigma)-1}{\rho_{d}(\sigma)}\). We say that \(\rho_{p}\) and \(\gamma_{p}\) are the _primal_ discount-factor and weight functions, while \(\rho_{d}\) and \(\gamma_{d}\) are the _dual_ functions. Observe that for every \(c\in\{x,y\}\) we have that \[\rho_{p}(\textsc{inc}(c))=\rho_{d}(\textsc{dec}(c))>\rho_{p}(\textsc{dec}(c))= \rho_{d}(\textsc{inc}(c)) \tag{3.8}\] Intuitively, we will use the primal functions for \(\mathcal{A}\)'s discount factors and weights, and the dual functions for identifying violations. Notice that if changing the primal functions to the dual ones in more occurrences of \(\textsc{inc}(c)\) letters than of \(\textsc{dec}(c)\) letters along some run, then by Lemma 3.3 the run will get a value lower than the original one. We continue with their formal definitions. \(\mathcal{A}=\langle\Sigma,\{q_{\mathcal{A}},q_{\mathcal{A}}^{h}\},\{q_{ \mathcal{A}}\},\delta_{\mathcal{A}},\gamma_{\mathcal{A}},\rho_{\mathcal{A}}\rangle\) is an integral DMDA consisting of two states, as depicted in Figure 8. Observe that the initial state \(q_{\mathcal{A}}\) has self loops for every alphabet letter in \(\Sigma^{\textsc{NOHALT}}\) with weights and discount factors according to the primal functions, and a transition \((q_{\mathcal{A}},\textsc{halt},q_{\mathcal{A}}^{h})\) with weight of \(\frac{14}{15}\) and a discount factor of \(15\). The integral NMDA \(\mathcal{B}=\langle\Sigma,Q_{\mathcal{B}},\iota_{\mathcal{B}},\delta_{ \mathcal{B}},\gamma_{\mathcal{B}},\rho_{\mathcal{B}}\rangle\) is the union of the following eight gadgets (checkers), each responsible for checking a certain type of violation in the description of a \(0\)-halting run of \(\mathcal{M}\). It also has the states \(q_{\mathsf{freeze}},q_{\mathsf{halt}}\in Q_{\mathcal{B}}\) such that for all \(\sigma\in\Sigma\), there are \(0\)-weighted transitions \((q_{\mathsf{freeze}},\sigma,q_{\mathsf{freeze}})\in\delta_{\mathcal{B}}\) and \((q_{\mathsf{halt}},\sigma,q_{\mathsf{halt}})\in\delta_{\mathcal{B}}\) with an arbitrary discount factor. 
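Before turning to the individual gadgets, the following sketch illustrates how Lemma 3.3 plays out for \(\mathcal{A}\): since every weight equals \(\frac{\rho-1}{\rho}\) for its discount factor \(\rho\), the value of \(\mathcal{A}\) on a word ending with halt is \(1-\frac{1}{\Pi}\), where \(\Pi\) is the accumulated discount factor, regardless of where the letters appear. The dictionary below follows the primal function \(\rho_{p}\); the demonstration word is our own example.

```python
from fractions import Fraction as F

# Primal discount factors rho_p; the matching weight is always (rho - 1) / rho.
RHO_P = {'inc(x)': 5, 'dec(x)': 4, 'inc(y)': 7, 'dec(y)': 6, 'halt': 15}
GOTO_FACTOR = 15  # every goto-style letter

def value_of_A(word):
    """Discounted value of A on a finite word ending with 'halt'."""
    total, acc = F(0), F(1)
    for letter in word:
        rho = RHO_P.get(letter, GOTO_FACTOR)
        total += F(rho - 1, rho) / acc   # weight discounted by the factors read so far
        acc *= rho
    return total, acc

word = ['inc(x)', 'inc(x)', 'goto l3', 'dec(x)', 'dec(x)', 'halt']
value, factor = value_of_A(word)
assert value == 1 - 1 / factor           # Lemma 3.3: only the accumulated factor matters
print(value, factor)
```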
Observe that in all of \(\mathcal{B}\)'s gadgets, the transition over the letter halt to \(q_{\mathsf{halt}}\) has a weight higher than the weight of the corresponding transition in \(\mathcal{A}\), so that when no violation is detected, the value of \(\mathcal{B}\) on a word is higher than the value of \(\mathcal{A}\) on it. **1. Halt Checker.** This gadget, depicted in Figure 9, checks for violations of non-halting runs. Observe that its initial state \(q_{\mathsf{HC}}\) has self loops identical to those of \(\mathcal{A}\)'s initial state, a transition to \(q_{\mathsf{halt}}\) over halt with a weight higher than the corresponding weight in \(\mathcal{A}\), and a transition to the state \(q_{\mathsf{last}}\) over every letter that is not halt, "guessing" that the run ends without a halt command. Figure 9. The Halt Checker in the NMDA \(\mathcal{B}\). **2. Negative-Counters Checker.** The second gadget, depicted in Figure 10, checks that the input prefix \(u\) has no more \(\textsc{dec}(c)\) than \(\textsc{inc}(c)\) commands for each counter \(c\in\{x,y\}\). It is similar to \(\mathcal{A}\), but has self loops in its initial states that favor \(\textsc{dec}(c)\) commands when compared to \(\mathcal{A}\). Figure 10. The negative-counters checker, on the left for \(x\) and on the right for \(y\), in the NMDA \(\mathcal{B}\). **3. Positive-Counters Checker.** The third gadget, depicted in Figure 11, checks that for every \(c\in\{x,y\}\), the input prefix \(u\) has no more inc\((c)\) than dec\((c)\) commands. It is similar to \(\mathcal{A}\), while having self loops in its initial state according to the dual functions rather than the primal ones. **4. Command Checker.** The next gadget checks for local violations of successive commands. That is, it makes sure that the letter \(w_{i}\) represents a command that can follow the command represented by \(w_{i-1}\) in \(\mathcal{M}\), ignoring the counter values. For example, if the command in location \(l_{2}\) is inc\((x)\), then from state \(q_{2}\), which is associated with \(l_{2}\), we move with the letter inc\((x)\) to \(q_{3}\), which is associated with \(l_{3}\). The test is local, as this gadget does not check for violations involving illegal jumps due to the values of the counters. An example of the command checker for the counter machine in Figure 7 is given in Figure 12. Figure 12. The command checker that corresponds to the counter machine in Figure 7. The command checker, which is a DMDA, consists of states \(q_{1},\ldots,q_{n}\) that correspond to the commands \(l_{1},\ldots,l_{n}\), and the states \(q_{\mathtt{halt}}\) and \(q_{\mathtt{freeze}}\). For two locations \(j\) and \(k\), there is a transition from \(q_{j}\) to \(q_{k}\) on the letter \(\sigma\) iff \(l_{k}\) can _locally follow_ \(l_{j}\) in a run of \(\mathcal{M}\) that has \(\sigma\) in the corresponding location of the command trace. That is, either \(l_{j}\) is a goto \(l_{k}\) command (meaning \(l_{j}=\sigma=\textsc{goto}\ l_{k}\)), \(k\) is the next location after \(j\) and \(l_{j}\) is an inc or a dec command (meaning \(k=j+1\) and \(l_{j}=\sigma\in\Sigma^{\mathrm{INCDEC}}\)), \(l_{j}\) is an if \(c\)=0 goto \(l_{k}\) else goto \(l_{k^{\prime}}\) command with \(\sigma=(\textsc{goto}\ l_{k},c=0)\), or \(l_{j}\) is an if \(c\)=0 goto \(l_{s}\) else goto \(l_{k}\) command with \(\sigma=(\textsc{goto}\ l_{k},c>0)\).
The weights and discount factors of the \(\Sigma^{\textsc{NOHALT}}\) transitions mentioned above are according to the primal functions \(\gamma_{p}\) and \(\rho_{p}\) respectively. For every location \(j\) such that \(l_{j}=\textsc{halt}\), there is a transition from \(q_{j}\) to \(q_{\textsc{halt}}\) labeled by the letter halt with a weight of \(\frac{15}{16}\) and a discount factor of \(16\). Every other transition that was not specified above leads to \(q_{\mathsf{freeze}}\) with weight 0 and some discount factor. **5,6. Zero-Jump Checkers.** The next gadgets, depicted in Figure 13, check for violations in conditional jumps. In this case, we use a different checker instance for each counter \(c\in\{x,y\}\), ensuring that for every if \(c\)=0 goto \(l_{k}\) else goto \(l_{k^{\prime}}\) command, if the jump goto \(l_{k}\) is taken, then the value of \(c\) is indeed 0. Intuitively, \(q_{\textsf{ZC}}^{c}\) profits from words that have more \(\textsc{inc}(c)\) than \(\textsc{dec}(c)\) letters, while \(q_{c}\) continues like \(\mathcal{A}\). If the move to \(q_{c}\) occurred after a balanced number of \(\textsc{inc}(c)\) and \(\textsc{dec}(c)\), as it should be in a real command trace, neither the prefix word before the move to \(q_{c}\) nor the suffix word after it results in a profit. Otherwise, provided that the counter is 0 at the end of the run (as guaranteed by the negative- and positive-counters checkers), both prefix and suffix words get profits, resulting in a smaller value for the run. Figure 13. The Zero-Jump Checker (for a counter \(c\in\{\,x,y\,\}\)) in the NMDA \(\mathcal{B}\). **7,8. Positive-Jump Checkers.** These gadgets, depicted in Figure 14, are dual to the zero-jump checkers, checking for the dual violations in conditional jumps. Similarly to the zero-jump checkers, we have a different instance for each counter \(c\in\{x,y\}\), ensuring that for every if \(c\)=0 goto \(l_{k}\) else goto \(l_{k^{\prime}}\) command, if the jump goto \(l_{k^{\prime}}\) is taken, then the value of \(c\) is indeed greater than 0. Intuitively, if the counter is \(0\) on a (goto \(l_{k^{\prime}},c>0\)) command when there was no inc\((c)\) command yet, the gadget benefits by moving from \(q^{c}_{\mathsf{PC0}}\) to \(q_{\mathsf{freeze}}\). If there was an inc\((c)\) command, it benefits by having the dual functions on the move from \(q^{c}_{\mathsf{PC0}}\) to \(q^{c}_{\mathsf{PC1}}\) over inc\((c)\) and the primal functions on one additional self loop of \(q^{c}_{\mathsf{PC1}}\) over dec\((c)\). Figure 14. The Positive-Jump Checker (for a counter \(c\)) in the NMDA \(\mathcal{B}\). **Lemma 3.4**.: _Given a two-counter machine \(\mathcal{M}\), we can compute an integral DMDA \(\mathcal{A}\) and an integral NMDA \(\mathcal{B}\) on finite words, such that \(\mathcal{M}\) \(0\)-halts iff there exists a word \(w\in\Sigma^{+}\) such that \(\mathcal{B}(w)\geq\mathcal{A}(w)\) iff there exists a word \(w\in\Sigma^{+}\) such that \(\mathcal{B}(w)>\mathcal{A}(w)\)._ Proof.: Given a two-counter machine \(\mathcal{M}\), consider the DMDA \(\mathcal{A}\) and the NMDA \(\mathcal{B}\) constructed in subsubsection 3.3.3, and an input word \(w\). Let \(u=\textsc{pref}_{\textsc{halt}}(w)\).
We prove the claim by showing that I) if \(u\) correctly describes a \(0\)-halting run of \(\mathcal{M}\) then \(\mathcal{B}(w)>\mathcal{A}(w)\), and II) if \(u\) does not fit the actual run of \(\mathcal{M}\), or if it does fit it, but the run does not \(0\)-halt, then the violation is detected by \(\mathcal{B}\), in the sense that \(\mathcal{B}(w)<\mathcal{A}(w)\). **I.** We start with the case that \(u\) correctly describes a \(0\)-halting run of \(\mathcal{M}\), and show that \(\mathcal{B}(w)>\mathcal{A}(w)\). Observe that in all of \(\mathcal{B}\)'s checkers, the transition over the halt command to the \(q_{\textsc{halt}}\) state has a weight higher than the weight of the corresponding transition in \(\mathcal{A}\). Thus, if a checker behaves like \(\mathcal{A}\) over \(u\), namely uses the primal functions, it generates a value higher than that of \(\mathcal{A}\). We show below that each of the checkers generates a value higher than the value of \(\mathcal{A}\) on \(u\) (which is also the value of \(\mathcal{A}\) on \(w\)), also if it nondeterministically "guesses a violation", behaving differently than \(\mathcal{A}\). _1. Halt Checker._ Since \(u\) does have the halt command, the run of the halt checker on \(u\), if guessing a violation, will end in the pair of transitions from \(q_{\mathsf{HC}}\) to \(q_{\mathsf{last}}\) to \(q_{\mathsf{freeze}}\) with discount factor \(2\) and weights \(0\) and \(2\), respectively. Let \(D\) be the accumulated discount factor in the gadget up to these pair of transitions. According to Lemma 3.3, the accumulated weight at this point is \(1-\frac{1}{D}\), hence the value of the run will be \(1-\frac{1}{D}+\frac{1}{D}\cdot 0+\frac{1}{2D}\cdot 2=1\), which is, according to Lemma 3.3, larger than the value of \(\mathcal{A}\) on any word. _2,3. Negative- and Positive-Counters Checkers._ Since \(u\) has the same number of inc\((c)\) and dec\((c)\) letters, by Equation 3.8 and \(Lemma\) 3.3, these gadgets and \(\mathcal{A}\) will have the same value on the prefix of \(u\) until the last transition, on which the gadgets will have a higher weight. _4. Command Checker._ As this gadget is deterministic, it cannot "guess a violation", and its value on \(u\) is larger than \(\mathcal{A}(u)\) due to the weight on the halt command. _5,6. Zero-Jump Checkers._ Consider a counter \(c\in\{\,x,y\,\}\) and a run \(r\) of the gadget on \(u\). If \(r\) did not move to \(q_{c}\), we have \(\mathcal{B}(r)>\mathcal{A}(w)\), similarly to the analysis in the negative- and positive-counters checkers. Otherwise, denote the transition that \(r\) used to move to \(q_{c}\) as \(t\). Observe that since \(u\) correlates to the actual run of \(\mathcal{M}\), we have that \(t\) was indeed taken when \(c=0\). In this case the value of the run will not be affected, since before \(t\) we have the same number of inc\((c)\) and dec\((c)\) letters, and after \(t\) we also have the same number of inc\((c)\) and dec\((c)\) letters. Hence, due to the last transition over the halt command, we have \(\mathcal{B}(r)>\mathcal{A}(u)\). _7,8. Positive-Jump Checkers._ Consider a counter \(c\in\{\,x,y\,\}\) and a run \(r\) of the gadget on \(u\). If \(r\) never reaches \(q^{c}_{\mathsf{PC1}}\), it has the same sequence of weights and discount factors as \(\mathcal{A}\) except for the higher-valued halt transition. 
If \(r\) reaches \(q_{\mathsf{PC1}}^{c}\) but never reaches \(q_{\mathsf{PC2}}^{c}\), since \(u\) ends with a halt letter, we have that \(r\) ends with a transition to \(q_{\mathsf{freeze}}\) that has a weight of \(1\), hence \(\mathcal{B}(r)=1>\mathcal{A}(w)\). If \(r\) reaches \(q_{\mathsf{PC2}}^{c}\), let \(u=y\cdot\textsc{inc}(c)\cdot z\cdot v\) where \(y\) has no \(\textsc{inc}(c)\) letters, \(t=r[|y|+1+|z|]\) is the first transition in \(r\) targeted at \(q_{\mathsf{PC2}}^{c}\), and \(\alpha_{c}\geq 1\) is the value of the counter \(c\) when \(t\) is taken. We have that \(1+\#(\textsc{inc}(c),z)=\#(\textsc{dec}(c),z)+\alpha_{c}\). Since \(u\) is balanced, we also have that \(\#(\textsc{dec}(c),v)=\#(\textsc{inc}(c),v)+\alpha_{c}\). For the first \(\textsc{inc}(c)\) letter, \(r\) gets a discount factor of \(\rho_{d}(\textsc{inc}(c))=\rho_{p}(\textsc{dec}(c))\). All the following \(\textsc{inc}(c)\) and \(\textsc{dec}(c)\) letters contribute discount factors according to \(\rho_{p}\) in \(z\) and according to \(\rho_{d}\) in \(v\). Hence, \(r\) gets the discount factor \(\rho_{p}(\textsc{dec}(c))\) a total of \[1+\#(\textsc{dec}(c),z)+\#(\textsc{inc}(c),v) =1+1+\#(\textsc{inc}(c),z)-\alpha_{c}+\#(\textsc{inc}(c),v)\] \[=\#(\textsc{inc}(c),u)+1-\alpha_{c}\] \[\leq\#(\textsc{inc}(c),u)=\#(\textsc{dec}(c),u)\] times, and the discount factor \(\rho_{p}(\textsc{inc}(c))\) a total of \[\#(\textsc{inc}(c),z)+\#(\textsc{dec}(c),v) =\#(\textsc{inc}(c),z)+\#(\textsc{inc}(c),v)+\alpha_{c}\] \[=\#(\textsc{inc}(c),u)-1+\alpha_{c}\geq\#(\textsc{inc}(c),u)\] times. Therefore, the value of \(r\) is at least as big as the value of \(\mathcal{A}\) on the prefix of \(u\) until the halt transition, and due to the higher weight of \(r\) on the latter, we have \(\mathcal{B}(r)>\mathcal{A}(u)\). **II.** We continue with the case that \(u\) does not correctly describe a \(0\)-halting run of \(\mathcal{M}\), and show that \(\mathcal{B}(w)<\mathcal{A}(w)\). Observe that the incorrectness must fall into one of the following cases, each of which results in a lower value of one of \(\mathcal{B}\)'s gadgets on \(u\), compared to the value of \(\mathcal{A}\) on \(u\): * _The word \(u\) has no halt command._ In this case the minimal-valued run of the halt checker on \(u\) will be the same as of \(\mathcal{A}\) until the last transition, on which the halt checker will have a \(0\) weight, compared to a strictly positive weight in \(\mathcal{A}\). * _The word \(u\) does not describe a run that ends up with value \(0\) in both counters._ Then there are the following sub-cases: * _The word \(u\) has more \(\textsc{dec}(c)\) than \(\textsc{inc}(c)\) letters for some counter \(c\in\{x,y\}\)._ For \(c=x\), in the negative-counters checker, more discount factors were changed from \(4\) to \(2\) than those changed from \(5\) to \(10\), compared to their values in \(\mathcal{A}\), implying that the total value of the gadget until the last letter will be lower than of \(\mathcal{A}\) on it. For \(c=y\), we have a similar analysis with respect to the discount factors \(6;3\), and \(7;14\). * _The word \(u\) has more \(\textsc{inc}(c)\) than \(\textsc{dec}(c)\) letters for some counter \(c\in\{x,y\}\)._ By Equation 3.8 and \(Lemma\)\(3.3\), the value of the positive-counters checker until the last transition will be lower than of \(\mathcal{A}\) until the last transition. Observe, though, that the weight of the gadgets on the halt transition (16) is still higher than that of \(\mathcal{A}\) on it (15). 
Nevertheless, since a "violation detection" results in replacing at least one discount factor from \(4\) to \(2\), from \(6\) to \(3\), from \(5\) to \(4\), or from \(7\) to \(6\) (and replacing the corresponding weights, for preserving the \(\frac{\rho-1}{\rho}\) ratio), and the ratio difference between \(16\) and \(15\) is less significant than between the other pairs of weights, we have that the gadget's value and therefore \(\mathcal{B}\)'s value on \(u\) is smaller than \(\mathcal{A}(u)\). Indeed, by Lemma 3.3\(\mathcal{A}(u)=1-\frac{1}{\mathcal{D}_{\mathcal{A}}}\), where \(D_{\mathcal{A}}\) is the multiplication of the discount factors along \(\mathcal{A}\)'s run, and \(\mathcal{B}(u)\leq 1-(\frac{1}{\mathcal{D}_{\mathcal{A}}}\cdot\frac{7}{6}\cdot\frac{15} {16})<1-\frac{1}{D_{\mathcal{A}}}=\mathcal{A}(u)\). * _The word_ \(u\) _does not correctly describe the run of_ \(\mathcal{M}\)_. Then there are the following sub-cases:_ * _The incorrect description does not relate to conditional jumps_. Then the command-checker has the same weights and discount factors as \(\mathcal{A}\) on the prefix of \(u\) until the incorrect description, after which it has \(0\) weights, compared to strictly positive weights in \(\mathcal{A}\). * _The incorrect description relates to conditional jumps_. Then there are the following sub-sub-cases: * _A counter_ \(c>0\) _at a position_ \(i\) _of_ \(\mathcal{M}\)_'s run, while_ \(u[i]=\textsc{goto}\ l_{k},c=0\)_. Let_ \(v=u[0..i{-}1]\) _and_ \(u=v\cdot v^{\prime}\)_, and consider the run_ \(r\) _of the zero-jump checker on_ \(u\) _that moves to_ \(q_{c}\) _after_ \(v\)_. Then_ \(\#(\textsc{inc}(c),v)>\#(\textsc{dec}(c),v)\) _and_ \(\#(\textsc{inc}(c),v^{\prime})<\#(\textsc{dec}(c),v^{\prime})\)_. (We may assume that the total number of_ \(\textsc{inc}(c)\) _and_ \(\textsc{dec}(c)\) _letters is the same, as otherwise one of the previous checkers detects it.) All the_ \(\textsc{inc}(c)\) _and_ \(\textsc{dec}(c)\) _transitions in_ \(r[0..i{-}1]\) _have weights and discount factors according to the dual functions, and those transitions in_ \(r[i..|w|{-}1]\) _have weights and discount factors according to the primal functions. Therefore, compared to_ \(\mathcal{A}\)_, more weights changed from_ \(\gamma_{p}(\textsc{inc}(c))\) _to_ \(\gamma_{d}(\textsc{inc}(c))=\gamma_{p}(\textsc{dec}(c))\) _than weights changed from_ \(\gamma_{p}(\textsc{dec}(c))\) _to_ \(\gamma_{d}(\textsc{dec}(c))=\gamma_{p}(\textsc{inc}(c))\)_, resulting in a lower total value of_ \(r\) _than of_ \(\mathcal{A}\) _on_ \(u\)_. (As shown for the negative- and positive-counters checkers, the higher weight of the_ \(\textsc{halt}\) _transition is less significant than the lower values above.)_ * _A counter_ \(c=0\) _at a position_ \(i\) _of_ \(\mathcal{M}\)_'s run, while_ \(u[i]=\textsc{goto}\ l_{k},c>0\)_. Let_ \(r\) _be a minimal-valued run of the positive-jump checker on_ \(u\)_. If there are no_ \(\textsc{inc}(c)\) _letters in_ \(u\) _before position_ \(i\)_,_ \(r\) _will have the same weights and discount factors as_ \(\mathcal{A}\) _until the_ \(i\)_'s letter, on which it will move from_ \(q_{\textsc{PC1}}^{c}\) _to_ \(q_{\textsc{freeze}}\)_, continuing with_ \(0\)_-weight transitions, compared to strictly positive ones in_ \(\mathcal{A}\)_. Otherwise, we have that the first_ \(\textsc{inc}(c)\) _letter of_ \(u\) _takes_ \(r\) _from_ \(q_{\textsc{PC0}}^{c}\) _to_ \(q_{\textsc{PC1}}^{c}\) _with a discount factor of_ \(\rho_{d}(\textsc{inc}(c))\)_. 
Then in_ \(q_{\textsc{PC1}}^{c}\) _we have more_ \(\textsc{dec}(c)\) _transitions than_ \(\textsc{inc}(c)\) _transitions, and in_ \(q_{\textsc{PC2}}^{c}\) _we have the same number of_ \(\textsc{dec}(c)\) _and_ \(\textsc{inc}(c)\) _transitions. (We may assume that_ \(u\) _passed the previous checkers, and thus has the same total number of_ \(\textsc{inc}(c)\) _and_ \(\textsc{dec}(c)\) _letters.) Hence, we get two more discount factors of_ \(\rho_{d}(\textsc{inc}(c))\) _than_ \(\rho_{p}(\textsc{inc}(c))\)_, resulting in a value smaller than_ \(\mathcal{A}(u)\)_. (As in the previous cases, the higher value of the_ \(\textsc{halt}\) _transition is less significant.)_ #### 3.3.4. Undecidability of arbitrary integral NMDAs containment For finite words, the undecidability result directly follows from Lemma 3.4 and the undecidability of the \(0\)-halting problem of counter machines [38]. **Theorem 3.5**.: _Strict and non-strict containment of (integral) NMDAs on finite words are undecidable. More precisely, the problems of deciding for given integral NMDA \(\mathcal{N}\) and integral DMDA \(\mathcal{D}\) whether \(\mathcal{N}(w)\leq\mathcal{D}(w)\) for all finite words \(w\) and whether \(\mathcal{N}(w)<\mathcal{D}(w)\) for all finite words \(w\)._ For infinite words, undecidability of non-strict containment also follows from the reduction given in subsubsection 3.3.3, as the reduction considers prefixes of the word until the first \(\textsc{halt}\) command. We leave open the question of whether strict containment is also undecidable for infinite words. The problem with the latter is that a \(\textsc{halt}\) command might never appear in an infinite word \(w\) that incorrectly describes a halting run of the two-counter machine, in which case both automata \(\mathcal{A}\) and \(\mathcal{B}\) of the reduction will have the same value on \(w\). On words \(w\) that have a halt command but do not correctly describe a halting run of the two-counter machine we have \(\mathcal{B}(w)<\mathcal{A}(w)\), and on a word \(w\) that does correctly describe a halting run we have \(\mathcal{B}(w)>\mathcal{A}(w)\). Hence, the reduction only relates to whether \(\mathcal{B}(w)\leq\mathcal{A}(w)\) for all words \(w\), but not to whether \(\mathcal{B}(w)<\mathcal{A}(w)\) for all words \(w\). **Theorem 3.6**.: _Non-strict containment of (integral) NMDAs on infinite words is undecidable. More precisely, the problem of deciding for given integral NMDA \(\mathcal{N}\) and integral DMDA \(\mathcal{D}\) whether \(\mathcal{N}(w)\leq\mathcal{D}(w)\) for all infinite words \(w\)._ Proof.: The automata \(\mathcal{A}\) and \(\mathcal{B}\) in the reduction given in subsubsection 3.3.3 can operate as is on infinite words, ignoring the Halt-Checker gadget of \(\mathcal{B}\) which is only relevant to finite words. Since the values of both \(\mathcal{A}\) and \(\mathcal{B}\) on an input word \(w\) only relate to the prefix \(u=\textsc{pref}_{\textsc{halt}(w)}\) of \(w\) until the first halt command, we still have that \(\mathcal{B}(w)>\mathcal{A}(w)\) if \(u\) correctly describes a halting run of the two-counter machine \(\mathcal{M}\) and that \(\mathcal{B}(w)<\mathcal{A}(w)\) if \(u\) is finite and does not correctly describe a halting run of \(\mathcal{M}\). Yet, for infinite words there is also the possibility that the word \(w\) does not contain the halt command. In this case, the value of both \(\mathcal{A}\) and the command checker of \(\mathcal{B}\) will converge to \(1\), getting \(\mathcal{A}(w)=\mathcal{B}(w)\). 
Hence, if \(\mathcal{M}\)\(0\)-halts, there is a word \(w\), such that \(\mathcal{B}(w)>\mathcal{A}(w)\) and otherwise, for all words \(w\), we have \(\mathcal{B}(w)\leq\mathcal{A}(w)\). Observe that for NMDAs, equivalence and non-strict containment are interreducible. **Theorem 3.7**.: _Equivalence of (integral) NMDAs on finite as well as infinite words is undecidable. That is, the problem of deciding for given integral NMDAs \(\mathcal{A}\) and \(\mathcal{B}\) on finite or infinite words whether \(\mathcal{A}(w)=\mathcal{B}(w)\) for all words \(w\)._ Proof.: Assume toward contradiction the existence of a procedure for equivalence check of \(\mathcal{A}\) and \(\mathcal{B}\). We can use the nondeterminism to obtain an automaton \(\mathcal{C}=\mathcal{A}\cup\mathcal{B}\), having \(C(w)\leq A(w)\) for all words \(w\). We can then check whether \(\mathcal{C}\) is equivalent to \(\mathcal{A}\), which holds if and only if \(\mathcal{A}(w)\leq\mathcal{B}(w)\) for all words \(w\). Indeed, if \(\mathcal{A}(w)\leq\mathcal{B}(w)\) then \(\mathcal{A}(w)\leq\min(\mathcal{A}(w),\mathcal{B}(w))=\mathcal{C}(w)\), while if there exists a word \(w\), such that \(\mathcal{B}(w)<\mathcal{A}(w)\), we have \(\mathcal{C}(w)=\min(\mathcal{A}(w),\mathcal{B}(w))<\mathcal{A}(w)\), implying that \(\mathcal{C}\) and \(\mathcal{A}\) are not equivalent. Thus, such a procedure contradicts the undecidability of non-strict containment, shown in Theorem 3.5 and Theorem 3.6. ## 4. Tidy NMDAs We present the family of "tidy NMDAs" and show that it is as expressive as deterministic NMDAs with arbitrary integral discount factors. Intuitively, an integral NMDA is tidy if the choice of discount factors depends on the word prefix read so far. We further show that for every choice function \(\theta\), the class of all \(\theta\)-NMDAs is closed under determinization and algebraic operations, and enjoys decidable algorithms for its decision problems. The family of tidy NMDAs contains various other natural subfamilies, such as integral NMDAs in which the discount factors are chosen per letter (action) or per the elapsed time, on which we elaborate at the end of this section. Each of these subfamilies strictly extends the expressive power of integral NDAs. We conclude with analyzing the structure of the family of tidy NMDAs. **Definition 4.1**.: An integral NMDA \(\mathcal{A}\) over an alphabet \(\Sigma\) and with discount-factor function \(\rho\) is _tidy_ if there exists a function \(\theta:\Sigma^{+}\to\mathbb{N}\setminus\{0,1\}\), such that for every finite word \(u=\sigma_{1}\dots\sigma_{n}\in\Sigma^{+}\), and every run \(q_{0},\sigma_{1},\cdots,q_{n}\) of \(\mathcal{A}\) on \(u\), we have \(\rho(q_{n-1},\sigma_{n},q_{n})=\theta(u)\). In this case we say that \(\mathcal{A}\) is a \(\theta\)-NMDA. **Definition 4.2**.: For an alphabet \(\Sigma\), a function \(\theta:\Sigma^{+}\to\mathbb{N}\setminus\{0,1\}\) is a _choice function_ if there exists an integral NMDA that is a \(\theta\)-NMDA. For choice functions \(\theta_{1}\) and \(\theta_{2}\), the classes of \(\theta_{1}\)-NMDAs and of \(\theta_{2}\)-NMDAs are _equivalent_ if they express the same functions, namely if for every \(\theta_{1}\)-NMDA \(\mathcal{A}\), there exists a \(\theta_{2}\)-NMDA \(\mathcal{B}\) equivalent to \(\mathcal{A}\) and vice versa. For every tidy NMDA \(\mathcal{A}\) and finite word \(u\), all the runs of \(\mathcal{A}\) on \(u\) entail the same accumulated discount factor. 
We thus use the notation \(\rho(u)\) to denote \(\rho(r)\), where \(r\) is any run of \(\mathcal{A}\) on \(u\). Observe that a general function \(\theta:\Sigma^{+}\to\mathbb{N}\setminus\{0,1\}\) might require an infinite representation. Yet, we will show in Theorem 4.7 that every choice function has a finite representation. ### Determinizability We determinize a tidy NMDA by generalizing the determinization algorithm presented in [10] for NDAs. The basic idea in that algorithm is to extend the subset construction, by not only storing in each state of the deterministic automaton whether or not each state \(q\) of the original automaton \(\mathcal{A}\) is reachable, but also the "gap" that \(q\) has from the currently optimal state \(q^{\prime}\) of \(\mathcal{A}\). This gap stands for the difference between the accumulated weights for reaching \(q\) and for reaching \(q^{\prime}\), multiplied by the accumulated discount factor. Since we consider tidy NMDAs, we can generalize this view of gaps to the setting of multiple discount factors, as it is guaranteed that the runs to \(q\) and to \(q^{\prime}\) accumulated the same discount factor. _The construction._ Consider a tidy NMDA \(\mathcal{A}=\langle\Sigma,Q,\iota,\delta,\gamma,\rho\rangle\). For every finite word \(u\in\Sigma^{*}\) and state \(q\in Q\), we define \(S(q,u)\) to be the set of runs of \(\mathcal{A}\) on \(u\) with \(q\) as the target state, and \(r_{(q,u)}\) to be a _preferred run_ that entails the minimal value among all the runs in \(S(q,u)\). Observe that every prefix of a preferred run is also a preferred run. Hence, given the values of all the preferred runs on a certain finite word \(u\), i.e., \(\mathcal{A}(r_{(q,u)})\) for every \(q\in Q\), we can calculate the values of the preferred runs on every \(u\cdot\sigma\) word by \(\mathcal{A}(r_{(q^{\prime},u\cdot\sigma)})=\min\big\{\mathcal{A}(r_{(q,u)})+\frac{\gamma(t)}{\rho(u)}\ \big|\ t=(q,\sigma,q^{\prime})\in\delta\big\}\). Intuitively, every state of \(\mathcal{D}\) that was reached after reading \(u\) will store for each \(q\in Q\) its "gap", which is the difference between \(\mathcal{A}(u)\) and \(\mathcal{A}(r_{(q,u)})\), "normalized" by multiplying it with the accumulated discount factor \(\rho(u)\), and "truncated" once it reaches a threshold value from which it can no longer be recovered. Formally, for a state \(q\in Q\) and a finite word \(u\), we define * The _cost_ of reaching \(q\) over \(u\) as \(\texttt{cost}(q,u)=\min\big\{\mathcal{A}(r)\ \big|\ r\text{ is a run of }\mathcal{A}\text{ on }u\text{ s.t. }\delta(r)=q\big\}=\min\big\{\mathcal{A}(r)\ \big|\ r\in S(q,u)\big\}\), where \(\min\emptyset=\infty\). * The _gap_ of \(q\) over \(u\) as \(\texttt{gap}(q,u)=\rho(u)\big(\texttt{cost}(q,u)-\mathcal{A}(u)\big)\). Intuitively, the gap stands for the value that a walk starting in \(q\) should have, compared to a walk starting in \(u\)'s optimal ending state, in order to make a run through \(q\) optimal. Let \(T\) be the maximum difference between the weights in \(\mathcal{A}\); that is, \(T=\max\big\{|x-y|\ \big|\ x,y\in\mathsf{range}(\gamma)\big\}\). Since for every infinite run \(r\) of \(\mathcal{A}\) we have \(\sum_{i=0}^{\infty}\frac{1}{\prod_{j=0}^{i-1}\rho(r(j))}\leq\sum_{i=0}^{\infty}\frac{1}{2^{i}}=2\), we define the set of possible _recoverable-gaps_ \(G=\big\{v\ \big|\ v\in\mathbb{Q}\text{ and }0\leq v<2T\big\}\cup\{\infty\}\).
The \(\infty\) element denotes a non-recoverable gap, and behaves as the standard infinity element in the algebraic operations that we will be using. Note that our NMDAs do not have infinite weights and the infinite element is only used as an internal component of the construction. We will inductively construct \(\mathcal{D}=\langle\Sigma,Q^{\prime},q_{in}^{\prime},\delta^{\prime},\gamma^{ \prime},\rho^{\prime}\rangle\) as follows. A state of \(\mathcal{D}\) extends the standard subset construction by assigning a gap to each state of \(\mathcal{A}\). That is, for \(Q=\{q_{1},\cdots,q_{n}\}\), a state \(p\in Q^{\prime}\) is a tuple \(\langle g_{1},\cdots,g_{n}\rangle\), where \(g_{h}\in G\) for every \(1\leq h\leq n\). Once a gap is obviously not recoverable, by being larger than or equal to \(2T\), it gets truncated by setting it to be \(\infty\). In the integral \(\rho\) function case, the construction only requires finitely many elements of \(G\), as shown in Lemma 4.3, and thus it is guaranteed to terminate. For simplicity, we assume that \(\iota=\{q_{1},q_{2},\cdots,q_{|\iota|}\}\) and extend \(\gamma\) with \(\gamma(q_{i},\sigma,q_{j})=\infty\) for every \((q_{i},\sigma,q_{j})\not\in\delta\). The initial state of \(\mathcal{D}\) is \(q_{in}^{\prime}=\langle 0,\cdots,0,\infty,\cdots,\infty\rangle\), in which the left \(|\iota|\) elements are \(0\), meaning that the initial states of \(\mathcal{A}\) have a \(0\) gap and the others are currently not relevant. We inductively build the desired automaton \(\mathcal{D}\) using the intermediate automata \(\mathcal{D}_{i}=\langle\Sigma,Q_{i}^{\prime},q_{in}^{\prime},\delta_{i}^{ \prime},\gamma_{i}^{\prime},\rho_{i}^{\prime}\rangle\). We start with \(\mathcal{D}_{1}\), in which \(Q_{1}^{\prime}=\{q_{in}^{\prime}\}\), \(\delta_{1}^{\prime}=\emptyset\), \(\gamma_{1}^{\prime}=\emptyset\) and \(\rho_{1}^{\prime}=\emptyset\), and proceed from \(\mathcal{D}_{i}\) to \(\mathcal{D}_{i+1}\), such that \(Q_{i}^{\prime}\subseteq Q_{i+1}^{\prime}\), \(\delta_{i}^{\prime}\subseteq\delta_{i+1}^{\prime}\), \(\gamma_{i}^{\prime}\subseteq\gamma_{i+1}^{\prime}\) and \(\rho_{i}^{\prime}\subseteq\rho_{i+1}^{\prime}\). The construction is completed once \(\mathcal{D}_{i}=\mathcal{D}_{i+1}\), finalizing the desired deterministic automaton \(\mathcal{D}=\mathcal{D}_{i}\). In the induction step, \(\mathcal{D}_{i+1}\) extends \(\mathcal{D}_{i}\) by (possibly) adding, for every state \(q^{\prime}=\langle g_{1},\cdots,g_{n}\rangle\in Q_{i}^{\prime}\) and letter \(\sigma\in\Sigma\), a state \(q^{\prime\prime}:=\langle x_{1},\cdots,x_{n}\rangle\), and a transition \(t:=(q^{\prime},\sigma,q^{\prime\prime})\) as follows: * Weight: For every \(1\leq h\leq n\) define, \(c_{h}:=\min\left\{g_{j}+\gamma(q_{j},\sigma,q_{h})\,\big{|}\,1\leq j\leq n\right\}\), and add a new weight, \(\gamma_{i+1}^{\prime}(t)=\min\limits_{1\leq h\leq n}(c_{h})\). * Discount factor: By the induction construction, if \(\mathcal{D}_{i}\) running on a finite word \(u\) ends in \(q^{\prime}\), there is a run of \(\mathcal{A}\) on \(u\) ending in \(q_{h}\), for every \(1\leq h\leq n\) for which the gap \(g_{h}\) in \(q^{\prime}\) is not \(\infty\). Since \(\mathcal{A}\) is tidy, all the transitions from every such state \(q_{h}\) over \(\sigma\) have the same discount factor, which we set to the new transition \(\rho_{i+1}^{\prime}(t)\). * Gap: For every \(1\leq h\leq n\), set \(x_{h}:=\rho_{i+1}^{\prime}(t)\cdot\left(c_{h}-\gamma_{i+1}^{\prime}(t)\right)\). If \(x_{h}\geq 2T\) then set \(x_{h}:=\infty\). 
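The three bullets above are easy to phrase operationally. The following sketch computes one determinization step, i.e., the successor of a state \(\langle g_{1},\cdots,g_{n}\rangle\) over a letter \(\sigma\); the representation of the \(\sigma\)-transitions as an \(n\times n\) weight matrix (with \(\infty\) for missing transitions), the function name, and the concrete numbers are our own illustration, while `rho_sigma` is the common discount factor of these transitions, which tidiness guarantees to be well defined:

```python
import math
from fractions import Fraction as F

def extend_state(gaps, gamma_sigma, rho_sigma, two_T):
    """One step of the construction: successor of the state <g_1, ..., g_n> over sigma.
    gaps[j] is g_j (a Fraction, or math.inf); gamma_sigma[j][h] is the weight of the
    sigma-transition q_j -> q_h (math.inf if absent); two_T is the truncation bound 2T."""
    n = len(gaps)
    # c_h = min_j (g_j + gamma(q_j, sigma, q_h))
    c = [min(gaps[j] + gamma_sigma[j][h] for j in range(n)) for h in range(n)]
    weight = min(c)   # weight of the new deterministic transition
    new_gaps = []
    for h in range(n):
        x = rho_sigma * (c[h] - weight) if c[h] != math.inf else math.inf
        new_gaps.append(math.inf if x >= two_T else x)   # truncate unrecoverable gaps
    return weight, rho_sigma, tuple(new_gaps)

# Tiny made-up instance: two states, the first with gap 0, the second currently unreachable.
gamma_sigma = [[F(1, 2), F(3, 4)], [math.inf, F(0)]]
print(extend_state([F(0), math.inf], gamma_sigma, rho_sigma=2, two_T=4))
# -> (Fraction(1, 2), 2, (Fraction(0, 1), Fraction(1, 2)))
```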
See Figure 15 for an example of the determinization process. We prove below that the procedure always terminates for a tidy NMDA, and that every state of the generated DMDA can be represented in PSPACE. The proof is similar to the corresponding proof in [10] with respect to NDAs, adding the necessary extensions for tidy NMDAs. **Lemma 4.3**.: _The above determinization procedure always terminates for a tidy NMDA \(\mathcal{A}\). Every state of the resulting deterministic automaton \(\mathcal{D}\) can be represented in space polynomial in \(|\mathcal{A}|\), and \(|\mathcal{D}|\in 2^{O(|\mathcal{A}|)}\)._ Proof.: The induction step of the construction, extending \(\mathcal{D}_{i}\) to \(\mathcal{D}_{i+1}\), only depends on \(\mathcal{A}\), \(\Sigma\) and \(Q_{i}^{\prime}\). Furthermore, for every \(i\geq 0\), we have that \(Q_{i}^{\prime}\subseteq Q_{i+1}^{\prime}\). Thus, for showing the termination of the construction, it is enough to show that there is a general bound on the size of the sets \(Q_{i}^{\prime}\). We do it by showing that the inner values, \(g_{1},\ldots,g_{n}\), of every state \(q^{\prime}\) of every set \(Q_{i}^{\prime}\) are from the finite set \(\widetilde{G}\), defined below. Let \(d\in\mathbb{N}\) be the least common denominator of the weights in \(\mathcal{A}\), and let \(T\in\mathbb{N}\) be the maximal difference between the weights. We define the set \(\bar{G}\) as \[\bar{G}=\Big{\{}\frac{k}{d}\ \big{|}\ k\in\mathbb{N}\ \text{and}\ \frac{k}{d}<2T \Big{\}}\cup\{\infty\}\] We start with the first set of states \(Q^{\prime}_{1}\), which satisfies the property that the inner values, \(g_{1},\ldots,g_{n}\), of every state \(q^{\prime}\in Q^{\prime}_{1}\) are from \(\bar{G}\), as \(Q^{\prime}_{1}=\{\langle 0,\cdots,0,\infty,\cdots,\infty\rangle\}\). We proceed by induction on the construction steps, assuming that \(Q^{\prime}_{i}\) satisfies the property. By the construction, an inner value of a state \(q^{\prime\prime}\) of \(Q^{\prime}_{i+1}\) is derived by four operations on elements of \(\bar{G}\): addition, subtraction (\(x-y\), where \(x\geq y\)), multiplication by \(\lambda\in\mathsf{range}(\rho)\subset\mathbb{N}\), and taking the minimum. One may verify that applying these four operations on \(\infty\) and numbers of the form \(\frac{k}{d}\), where \(k\in\mathbb{N}\), results in \(\infty\) or in a number \(\frac{k^{\prime}}{d}\), where \(k^{\prime}\in\mathbb{N}\). Recall that once an inner value exceeds \(2T\), it is replaced by the procedure with \(\infty\), meaning that \(\frac{k^{\prime}}{d}<2T\), or the calculated inner value is \(\infty\). Concluding that all the inner values are in \(\bar{G}\). Observe that \(|\bar{G}|\leq 2\cdot T\cdot d+1\). Meaning that every state in the resulting DMDA has up to \(2\cdot T\cdot d+1\) possible values for each of the \(|Q|\) inner elements. Hence we have no more than \((2\cdot T\cdot d+1)^{|Q|}\) possibilities for the states of \(\mathcal{D}\), proving the termination claim. Recall that in our definition for \(|\mathcal{A}|\), we mention that we assume that all of the weights are given with the same denominator, which is \(d\) in our notations. Hence the space required for \(|Q|\) elements with up to \(2\cdot T\cdot d+1\) possible values each, which is the space required for every state in \(\mathcal{D}\), is polynomial with respect to \(|\mathcal{A}|\). Also the total size of \(\mathcal{D}\) is in \(2^{O(|\mathcal{A}|)}\). We will now show the correctness of the determinization procedure. 
According to Lemma 2.3, it is enough to show the equivalence \(\mathcal{D}\equiv\mathcal{A}\) with respect to finite words. **Lemma 4.4**.: _Consider a tidy NMDA \(\mathcal{A}\) over \(\Sigma^{+}\) and a DMDA \(\mathcal{D}\), constructed from \(\mathcal{A}\) by the above determinization procedure. Then, for every \(u\in\Sigma^{+}\), we have \(\mathcal{A}(u)=\mathcal{D}(u)\)._ Figure 15. An example of the determinization procedure, as per Theorem 4.6. The gray rectangles detail some of the intermediate calculations. Proof.: Let \(\mathcal{A}=\langle\Sigma,Q,\iota,\delta,\gamma,\rho\rangle\) be the input NMDA, \(\mathcal{D}=\langle\Sigma,Q^{\prime},\iota^{\prime},\delta^{\prime},\gamma^{ \prime},\rho^{\prime}\rangle\) the DMDA constructed from \(\mathcal{A}\), and \(T\) be the maximal difference between the weights in \(\mathcal{A}\). For a finite word \(u\), let \(\delta^{\prime}(u)=\langle g_{1},\cdots,g_{n}\rangle\in Q^{\prime}\) be the target state of \(\mathcal{D}\)'s run on \(u\). We show by induction on the length of the input word \(u\) that: 1. \(\mathcal{A}(u)=\mathcal{D}(u)\). 2. For every \(1\leq h\leq n\), \(g_{h}=\texttt{gap}(q_{h},u)\) if \(\texttt{gap}(q_{h},u)<2T\) and \(\infty\) otherwise. The assumptions obviously hold for the initial step, where \(u\) is the empty word. As for the induction step, we assume they hold for \(u\) and show that for every \(\sigma\in\Sigma\), they hold for \(u\cdot\sigma\). Let \(\delta^{\prime}(u\cdot\sigma)=\langle x_{1},\cdots,x_{n}\rangle\in Q^{\prime}\) be the target state of \(\mathcal{D}\)'s run on \(u\cdot\sigma\). We start by proving the claim with respect to an _infinite-state_ automaton \(\mathcal{D}^{\prime}\) that is constructed as in the determinization procedure, except for not changing any gap to \(\infty\). Afterwards, we shall argue that changing all gaps that exceed \(2T\) to \(\infty\) does not harm the correctness. 1. By the definitions of cost and gap, we have for every \(1\leq h\leq n\), \[\texttt{cost}(q_{h},u\cdot\sigma) =\min_{1\leq j\leq n}\left(\texttt{cost}(q_{j},u)+\frac{\gamma(q_ {j},\sigma,q_{h})}{\rho(u)}\right)\] \[=\min_{1\leq j\leq n}\left(\frac{\texttt{gap}(q_{j},u)}{\rho(u)} +\mathcal{A}(u)+\frac{\gamma(q_{j},\sigma,q_{h})}{\rho(u)}\right)\] \[=\mathcal{A}(u)+\frac{\min_{1\leq j\leq n}\left(\texttt{gap}(q_{ j},u)+\gamma(q_{j},\sigma,q_{h})\right)}{\rho(u)}=\text{By the induction assumption}\] \[=\mathcal{D}^{\prime}(u)+\frac{\min_{1\leq j\leq n}\left(g_{j}+ \gamma(q_{j},\sigma,q_{h})\right)}{\rho(u)}\] (4.1) By the construction of \(\mathcal{D}^{\prime}\), the transition weight \(\gamma^{\prime}_{i}(t)\) assigned on the \(i=|u|+1\) step is \[\gamma^{\prime}_{|u|+1}(t)=\min_{1\leq h\leq n}\Big{(}\min_{1\leq j\leq n}(g_ {j}+\gamma(q_{j},\sigma,q_{h}))\Big{)}\]. Therefore, \[\mathcal{D}^{\prime}(u\cdot\sigma) =\mathcal{D}^{\prime}(u)+\frac{\gamma^{\prime}_{|u|+1}(t)}{\rho(u)}\] \[=\mathcal{D}^{\prime}(u)+\frac{\min_{1\leq h\leq n}\min_{1\leq j \leq n}\Big{(}g_{j}+\gamma(q_{j},\sigma,q_{h})\Big{)}}{\rho(u)}\] \[=\min_{1\leq h\leq n}\left(\mathcal{D}^{\prime}(u)+\frac{\min_{1 \leq j\leq n}\Big{(}g_{j}+\gamma(q_{j},\sigma,q_{h})\Big{)}}{\rho(u)}\right)\] \[=\min_{1\leq h\leq n}\texttt{cost}(q_{h},u\cdot\sigma)=\mathcal{ A}(u\cdot\sigma)\] 2. 
By Equation 4.1, we get that for every \(1\leq h\leq n\): \[\min_{1\leq j\leq n}(g_{j}+\gamma(q_{j},\sigma,q_{h}))=\rho(u)\Big{(}\texttt{cost}(q_ {h},u\cdot\sigma)-\mathcal{D}^{\prime}(u)\Big{)}\] Let \(t\) be the transition that was added in the \(i=|u|+1\) step of the algorithm from the state \(\delta^{\prime}(u)\) over the \(\sigma\) letter. For every \(1\leq h\leq n\), we have \[x_{h} =\rho_{i}^{\prime}(t)\cdot(c_{h}-\gamma_{i}^{\prime}(t))\] \[=\rho_{i}^{\prime}(t)\Big{(}\min_{1\leq j\leq n}(g_{j}+\gamma(q_{j},\sigma,q_{h}))-\gamma_{i}^{\prime}(t)\Big{)}\] \[=\rho_{i}^{\prime}(t)\Bigg{(}\min_{1\leq j\leq n}(g_{j}+\gamma(q_{ j},\sigma,q_{h}))-\rho(u)\Big{(}\mathcal{D}^{\prime}(u\cdot\sigma)-\mathcal{D}^{ \prime}(u)\Big{)}\Bigg{)}\] \[=\rho_{i}^{\prime}(t)\Bigg{(}\rho(u)\Big{(}\mathtt{cost}(q_{h},u \cdot\sigma)-\mathcal{D}^{\prime}(u)\Big{)}-\rho(u)\Big{(}\mathcal{D}^{\prime }(u\cdot\sigma)-\mathcal{D}^{\prime}(u)\Big{)}\Bigg{)}\] \[=\rho_{i}^{\prime}(t)\cdot\rho(u)\Big{(}\mathtt{cost}(q_{h},u \cdot\sigma)-\mathcal{D}^{\prime}(u\cdot\sigma)\Big{)}\] \[=\rho(u\cdot\sigma)\cdot\Big{(}\mathtt{cost}(q_{h},u\cdot\sigma) -\mathcal{D}^{\prime}(u\cdot\sigma)\Big{)}\] And by the induction assumption we have \[x_{h}=\rho(u\cdot\sigma)\cdot\Big{(}\mathtt{cost}(q_{h},u\cdot\sigma)- \mathcal{A}(u\cdot\sigma)\Big{)}=\mathtt{gap}(q_{h},u\cdot\sigma)\] It is left to show that the induction is also correct for the _finite-state_ automaton \(\mathcal{D}\). The only difference between the construction of \(\mathcal{D}\) and of \(\mathcal{D}^{\prime}\) is that the former changes all gaps \((g_{j})\) above \(2T\) to \(\infty\). We should thus show that if the gap \(g_{j}\), for some \(1\leq j\leq n\), exceeds \(2T\) at a step \(i\) of the construction, and this \(g_{j}\) influences the next gap of some state \(h\) (we denoted this gap in the construction as \(x_{h}\)) then \(x_{h}\geq 2T\). This implies that \(\mathcal{D}(u)=\mathcal{D}^{\prime}(u)\), since at every step of the construction there is at least one \(1\leq h\leq n\), such that \(x_{h}=0\), corresponding to an optimal run of \(\mathcal{A}\) on \(u\) ending in state \(q_{h}\). Formally, we should show that if \(g_{j}\geq 2T\) and \(x_{h}=\rho_{i+1}^{\prime}(t)\cdot\Big{(}g_{j}+\gamma(q_{j},\sigma,q_{h})- \gamma_{i+1}^{\prime}(t)\Big{)}\), where \(t\) is the transition added in the construction on step \(i\) as defined in part (ii.) above, then \(x_{h}\geq 2T\). Indeed, according to the construction exists an index \(1\leq k\leq n\) such that \(g_{k}=0\) and since \(\mathcal{A}\) is complete, there is a transition from \(q_{k}\) to some state \(q_{m}\), implying that \(\gamma_{i+1}^{\prime}(t)\leq g_{k}+\gamma(q_{k},\sigma,q_{m})=\gamma(q_{k}, \sigma,q_{m})\). Hence \[x_{h} \geq\rho_{i+1}^{\prime}(t)\cdot\Big{(}2T+\gamma(q_{j},\sigma,q_{h })-\gamma_{i+1}^{\prime}(t)\Big{)}\geq 2\cdot\Big{(}2T+\gamma(q_{j},\sigma,q_{h })-\gamma_{i+1}^{\prime}(t)\Big{)}\] \[\geq 2\cdot\Big{(}2T+\gamma(q_{j},\sigma,q_{h})-\gamma(q_{k}, \sigma,q_{m})\Big{)}\geq 2\cdot(2T+(-T))=2T\] We show next that the DMDA created by the determinization procedure is indeed a \(\theta\)-DMDA. **Lemma 4.5**.: _Consider a \(\theta\)-NMDA \(\mathcal{A}\) over \(\Sigma^{+}\) and a DMDA \(\mathcal{D}\), constructed from \(\mathcal{A}\) by the determinization procedure above. 
Then \(\mathcal{D}\) is a \(\theta\)-DMDA._ Proof.: Consider a tidy NMDA \(\mathcal{A}=\langle\Sigma,Q,\iota,\delta,\gamma,\rho\rangle\), and the DMDA \(\mathcal{D}=\langle\Sigma,Q^{\prime},\iota^{\prime},\delta^{\prime},\gamma^{ \prime},\rho^{\prime}\rangle\) constructed from \(\mathcal{A}\). We show by induction on the length of an input word that for every finite word \(u\in\Sigma^{*}\), we have \(\rho^{\prime}(u)=\rho(u)\). The base case regarding the empty word obviously holds. As for the induction step, we assume the claim holds for \(u\) and show that it also holds for \(u\cdot\sigma\), for every \(\sigma\in\Sigma\). Let \(t\) be the final transition of \(\mathcal{D}\)'s run on \(u\cdot\sigma\). Due to the construction of \(\mathcal{D}\), there exist \(q,q^{\prime}\in Q\) such that \(\texttt{gap}(q,u)\neq\infty\), \(\texttt{gap}(q^{\prime},u\cdot\sigma)\neq\infty\), and \(\rho^{\prime}(t)=\rho(q,\sigma,q^{\prime})\). Hence, \(\rho^{\prime}(u\cdot\sigma)=\rho^{\prime}(u)\cdot\rho^{\prime}(t)=\rho(u) \cdot\rho^{\prime}(t)=\rho(u)\cdot\rho(q,\sigma,q^{\prime})\) and since \(\texttt{gap}(q,u)\neq\infty\), we get that \(q\in\delta(u)\), and \(\rho^{\prime}(u\cdot\sigma)=\rho(u)\cdot\rho(q,\sigma,q^{\prime})=\rho(u\cdot\sigma)\). And finally, as a direct consequence of the above construction, Lemma 4.4, Lemma 4.3, and Lemma 4.5: **Theorem 4.6**.: _For every choice function \(\theta\) and a \(\theta\)-NMDA \(\mathcal{A}\), there exists a \(\theta\)-DMDA \(\mathcal{D}\equiv\mathcal{A}\) of size in \(2^{O(|\mathcal{A}|)}\). Every state of \(\mathcal{D}\) can be represented in space polynomial in \(|\mathcal{A}|\)._ ### Representing Choice Functions We show that, as opposed to the case of a general function \(f:\Sigma^{+}\to\mathbb{N}\setminus\{0,1\}\), every choice function \(\theta\) can be finitely represented by a transducer. A transducer \(\mathcal{T}\) (Mealy machine) is a \(6\)-tuple \(\langle P,\Sigma,\Gamma,p_{0},\delta,\rho\rangle\), where \(P\) is a finite set of states, \(\Sigma\) and \(\Gamma\) are finite sets called the input and output alphabets, \(p_{0}\in P\) is the initial state, \(\delta:P\times\Sigma\to P\) is the total transition function and \(\rho:P\times\Sigma\to\Gamma\) is the total output function. A transducer \(\mathcal{T}\) represents a function, to which for simplicity we give the same name \(\mathcal{T}:\Sigma^{+}\to\Gamma\), such that for every word \(w\), the value \(\mathcal{T}(w)\) is the output label of the last transition taken when running \(\mathcal{T}\) on \(w\). The size of \(\mathcal{T}\), denoted by \(|\mathcal{T}|\), is the maximum between the number of transitions and the maximal binary representation of any output in the range of \(\rho\). Since in this work we only consider transducers in which the output alphabet \(\Gamma\) is the natural numbers \(\mathbb{N}\), we omit \(\Gamma\) from their description, namely write \(\langle P,\Sigma,p_{0},\delta,\rho\rangle\) instead of \(\langle P,\Sigma,\mathbb{N},p_{0},\delta,\rho\rangle\). An example of a transducer \(\mathcal{T}\) and a \(\mathcal{T}\)-NMDA is given in Figure 16. **Theorem 4.7**.: _For every function \(\theta:\Sigma^{+}\to\mathbb{N}\setminus\{0,1\}\), \(\theta\) is a choice function, namely there exists a \(\theta\)-NMDA, if and only if there exists a transducer \(\mathcal{T}\) such that \(\theta\equiv\mathcal{T}\)._ Proof.: Consider a function \(\theta:\Sigma^{+}\to\mathbb{N}\setminus\{0,1\}\). 
For the first direction, observe that given a transducer \(\mathcal{T}=\langle P,\Sigma,p_{0},\delta,\rho\rangle\) representing \(\theta\), it holds that the NMDA \(\mathcal{T}^{\prime}=\langle\Sigma,P,\{p_{0}\},\delta,\gamma,\rho\rangle\), for every weight function \(\gamma\), is a \(\theta\)-NMDA. For the other direction, consider a \(\theta\)-NMDA \(\mathcal{A}^{\prime}\). According to Theorem 4.6, there exists a \(\theta\)-DMDA \(\mathcal{A}=\langle\Sigma,Q,q_{0},\delta,\gamma,\rho\rangle\) equivalent to \(\mathcal{A}^{\prime}\). Since the image of \(\rho\) is a subset of \(\mathbb{N}\), we have that \(\theta\) can be represented by the transducer \(\mathcal{T}=\langle Q,\Sigma,q_{0},\delta,\rho\rangle\). Figure 16. A transducer \(\mathcal{T}\) and a \(\mathcal{T}\)-NMDA. For a given choice function \(\theta\), we refer to the class of all \(\theta\)-NMDAs. Observe that when considering such class, only the choice function is relevant, regardless of the transducer defining it. ### Closure under Algebraic Operations **Theorem 4.8**.: _For every choice function \(\theta\), the set of \(\theta\)-NMDAs is closed under the operations of min, max, addition, subtraction, and multiplication by a rational constant._ Proof.: Consider a choice function \(\theta\) and \(\theta\)-NMDAs \(\mathcal{A}\) and \(\mathcal{B}\). \(\bullet\)_Multiplication by constant \(c\geq 0\)_: A \(\theta\)-NMDA for \(c\cdot\mathcal{A}\) is straightforward from Proposition 2.2. \(\bullet\)_Multiplication by \(-1\)_: A \(\theta\)-NMDA for \(-\mathcal{A}\) can be achieved by first determinizing \(\mathcal{A}\), as per Theorem 4.6, into a \(\theta\)-DMDA \(\mathcal{D}\) and then multiplying all the weights in \(\mathcal{D}\) by \(-1\). \(\bullet\)_Addition_: Considering \(\mathcal{A}=\langle\Sigma,Q_{1},\iota_{1},\delta_{1},\gamma_{1},\rho_{1}\rangle\) and \(\mathcal{B}=\langle\Sigma,Q_{2},\iota_{2},\delta_{2},\gamma_{2},\rho_{2}\rangle\), a \(\theta\)-NMDA for \(\mathcal{A}+\mathcal{B}\) can be achieved by constructing the product automaton \(\mathcal{C}=\langle\Sigma,Q_{1}\times Q_{2},\iota_{1}\times\iota_{2},\delta,\gamma,\rho\rangle\) such that \(\delta=\big{\{}\big{(}(q_{1},q_{2}),\sigma,(p_{1},p_{2})\big{)}\,\big{|}(q_{1 },\sigma,p_{1})\in\delta_{1}\) and \((q_{2},\sigma,p_{2})\in\delta_{2}\big{\}}\), \(\gamma\big{(}(q_{1},q_{2}),\sigma,(p_{1},p_{2})\big{)}=\gamma_{1}(q_{1},\sigma,p_{1})+\gamma_{2}(q_{2},\sigma,p_{2})\), \(\rho\big{(}(q_{1},q_{2}),\sigma,(p_{1},p_{2})\big{)}=\rho_{1}(q_{1},\sigma,p_{ 1})=\rho_{2}(q_{2},\sigma,p_{2})\). The latter must hold since both \(\rho_{1}\) and \(\rho_{2}\) are compliant with \(\theta\). \(\bullet\)_Subtraction_: A \(\theta\)-NMDA for \(\mathcal{A}-\mathcal{B}\) can be achieved by i) Determinizing \(\mathcal{B}\) to \(\mathcal{B}^{\prime}\); ii) Multiplying \(\mathcal{B}^{\prime}\) by \(-1\), getting \(\mathcal{B}^{\prime\prime}\); and iii) Constructing a \(\theta\)-NMDA for \(\mathcal{A}+\mathcal{B}^{\prime\prime}\). \(\bullet\)_min_: A \(\theta\)-NMDA for \(\min(\mathcal{A},\mathcal{B})\) is straightforward by the nondeterminism on their union. 
\(\bullet\)_max_: A \(\theta\)-NMDA for \(\max(\mathcal{A},\mathcal{B})\) can be achieved by i) Determinizing \(\mathcal{A}\) and \(\mathcal{B}\) to \(\mathcal{A}^{\prime}\) and \(\mathcal{B}^{\prime}\), respectively; ii) Multiplying \(\mathcal{A}^{\prime}\) and \(\mathcal{B}^{\prime}\) by \(-1\), getting \(\mathcal{A}^{\prime\prime}\) and \(\mathcal{B}^{\prime\prime}\), respectively; iii) Constructing a \(\theta\)-NMDA \(\mathcal{C}^{\prime\prime}\) for \(\min(\mathcal{A}^{\prime\prime},\mathcal{B}^{\prime\prime})\); iv) Determinizing \(\mathcal{C}^{\prime\prime}\) into a \(\theta\)-DMDA \(\mathcal{D}\); and v) Multiplying \(\mathcal{D}\) by \(-1\), getting a \(\theta\)-NMDA \(\mathcal{C}\), which provides \(\max(\mathcal{A},\mathcal{B})\). We analyze next the size blow-up involved in algebraic operations. In addition to the general classes of \(\theta\)-NMDAs, we also consider the case where both input and output automata are deterministic. A summary of the results can be seen in Table 1. Most results in Table 1 are straightforward from the constructions presented in the proof of Theorem 4.8: multiplying all the weights by a constant is linear, creating the product automaton is quadratic, and whenever determinization is required, we get an exponential blow-up. However, the size blow-up of the max operation on tidy NMDAs is a little more involved. At first glance, determinizing back and forth might look like requiring a doubly-exponential blow-up; however, in this case an optimized procedure for the second determinization achieves an overall singly-exponential blow-up: determinizing a tidy NMDA that is the union of two DMDAs, in which the transition weights are polynomial in the number of states, is shown to only involve a polynomial size blow-up. **Theorem 4.9**.: _The size blow-up involved in the \(\max\) operation on tidy NMDAs is at most single-exponential._ Proof.: Consider a choice function \(\theta\), \(\theta\)-NMDAs \(\mathcal{A}\) and \(\mathcal{B}\), and the automata \(\mathcal{A}^{\prime\prime},\mathcal{B}^{\prime\prime},\mathcal{C}^{\prime\prime},\mathcal{D}\) and \(\mathcal{C}\), as constructed in the 'max' part of the proof of Theorem 4.8. Observe that \(\mathcal{C}^{\prime\prime}\) is the union of two \(\theta\)-DMDAs. As such, for every word \(u\), there are only two possible runs of \(\mathcal{C}^{\prime\prime}\) on \(u\). In order to determinize \(\mathcal{C}^{\prime\prime}\) into \(\mathcal{D}\) we present a slightly modified procedure compared to the one presented in subsection 4.1. Instead of the basic subset construction, we use the product automaton of \(\mathcal{A}^{\prime\prime}\) and \(\mathcal{B}^{\prime\prime}\), and instead of saving in every state of \(\mathcal{D}\) the gap from the preferred state for every state of \(\mathcal{C}^{\prime\prime}\), we only save the gap between the two runs of \(\mathcal{C}^{\prime\prime}\). 
Combined with the observation we showed in the proof of Lemma 4.4 that the weights of \(\mathcal{A}^{\prime\prime}\) and \(\mathcal{B}^{\prime\prime}\) are bounded by the weights of \(\mathcal{A}\) and \(\mathcal{B}\), we are able to reduce the overall blow-up to be only single-exponential. The procedure presented in subsection 4.1 requires the following modifications: * Every state of \(\mathcal{D}\) is a tuple \(\langle q_{1},q_{2},g_{1},g_{2}\rangle\) where \(q_{1}\) is a state of \(\mathcal{A}^{\prime\prime}\), \(q_{2}\) is a state of \(\mathcal{B}^{\prime\prime}\), and \(g_{1},g_{2}\in G\) are the gaps from the preferred run. * The initial state of \(\mathcal{D}\) is \(\langle q_{\mathcal{A}},q_{\mathcal{B}},0,0\rangle\) where \(q_{\mathcal{A}}\) and \(q_{\mathcal{B}}\) are the initial states of \(\mathcal{A}^{\prime\prime}\) and \(\mathcal{B}^{\prime\prime}\), respectively. * In the induction step, \(\mathcal{D}_{i+1}\) extends \(\mathcal{D}_{i}\) by (possibly) adding for every state \(p=\langle q_{1},q_{2},g_{1},g_{2}\rangle\) and letter \(\sigma\in\Sigma\), a state \(p^{\prime}:=\langle q^{\prime}_{1},q^{\prime}_{2},g^{\prime}_{1},g^{\prime}_{ 2}\rangle\) and a transition \(t:=\langle p,\sigma,p^{\prime}\rangle\) such that for every \(1\leq h\leq 2\): * \(x_{h}:=\rho^{\prime}_{i+1}(t)\cdot\big{(}c_{h}-\gamma^{\prime}_{i+1}(t)\big{)}\). If \(x_{h}\geq 2T\) then set \(x_{h}:=\infty\) With the above modifications, similarly to Lemma 4.3, we get that the number of possible gaps is \(2\cdot T\cdot d_{\mathcal{A}}\cdot d_{\mathcal{B}}+1\) where \(d_{\mathcal{A}}\) and \(d_{\mathcal{B}}\) are the denominators of weights in \(\mathcal{A}^{\prime\prime}\) and \(\mathcal{B}^{\prime\prime}\), respectively. Hence, there are no more than \((2\cdot T\cdot d_{\mathcal{A}}\cdot d_{\mathcal{B}}+1)^{2}\cdot N_{\mathcal{A }}\cdot N_{\mathcal{B}}\) possibilities for the states of \(\mathcal{D}\), where \(N_{\mathcal{A}}\) and \(N_{\mathcal{B}}\) are the number of states in \(\mathcal{A}^{\prime\prime}\) and \(\mathcal{B}^{\prime\prime}\), respectively. According to the determinization procedure showed in subsection 4.1 and as explained in the proofs of Lemma 4.3 and Lemma 4.4, the following observations hold: * \(d_{\mathcal{A}}\) and \(d_{\mathcal{B}}\) are also the denominators of weights in \(\mathcal{A}\) and \(\mathcal{B}\), respectively, and since we use binary representation of weights, \(d_{\mathcal{A}}\cdot d_{\mathcal{B}}\) is up to single-exponential in \(|\mathcal{A}|+|\mathcal{B}|\). * All the weights in \(\mathcal{A}^{\prime\prime}\) and \(\mathcal{B}^{\prime\prime}\) are bounded by the weights of \(\mathcal{A}\) and \(\mathcal{B}\), hence \(T\) is also up to single-exponential in \(|\mathcal{A}|+|\mathcal{B}|\). * \(N_{\mathcal{A}}\) and \(N_{\mathcal{B}}\) are up to single-exponential in \(|\mathcal{A}|+|\mathcal{B}|\). Concluding that the number of states in \(\mathcal{D}\) is up to single-exponential in \(|\mathcal{A}|+|\mathcal{B}|\), and since the number of states in \(\mathcal{C}\) is equal to the number of states in \(\mathcal{D}\), we get a single-exponential blow-up. Observe that if weights are represented in unary, we can achieve a quartic blow-up for the min and max operations on tidy-DMDAs, by using the above determinization procedure, and since \(T\) is linear in unary representation. 
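To illustrate the modified procedure described above, here is a minimal Python sketch of a single step of the gap-tracking product determinization. The representation of the two DMDAs as dictionaries `delta_a` and `delta_b` mapping (state, letter) to (target, weight, discount factor) is an illustrative assumption, and `None` stands for the \(\infty\) gap.

```python
def max_determinization_step(state, sigma, delta_a, delta_b, two_t):
    """One step of the optimized determinization from the proof of Theorem 4.9 (a sketch).
    `state` is a tuple (q1, q2, g1, g2) of the two DMDA states and their gaps from the
    preferred run; both automata follow the same choice function, so their discount
    factors over the same input history agree."""
    q1, q2, g1, g2 = state
    t1, w1, d1 = delta_a[(q1, sigma)]
    t2, w2, d2 = delta_b[(q2, sigma)]
    assert d1 == d2, "both DMDAs follow the same choice function"
    c1 = None if g1 is None else g1 + w1   # candidate cost of the run of A''
    c2 = None if g2 is None else g2 + w2   # candidate cost of the run of B''
    weight = min(c for c in (c1, c2) if c is not None)

    def new_gap(c):
        if c is None:
            return None
        gap = d1 * (c - weight)
        return None if gap >= two_t else gap  # truncate non-recoverable gaps

    return weight, d1, (t1, t2, new_gap(c1), new_gap(c2))
```

Tracking only the pair of states and the pair of gaps, rather than a gap for every state of \(\mathcal{C}^{\prime\prime}\), is what avoids a second exponential blow-up in the analysis above.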
\begin{table} \begin{tabular}{|c||c|c||c|c|c|} \hline \(c\cdot\mathcal{A}\) (for \(c\geq 0\)) & \(\min(\mathcal{A},\mathcal{B})\) & \(\mathcal{A}+\mathcal{B}\) & \(-\mathcal{A}\) & \(\max(\mathcal{A},\mathcal{B})\) & \(\mathcal{A}-\mathcal{B}\) \\ \hline \hline Linear & \multicolumn{2}{c||}{Quadratic} & \multicolumn{3}{c|}{Single Exponential} \\ \hline \end{tabular} \end{table} Table 1. The size blow-up involved in algebraic operations on tidy NMDAs. We are not aware of prior lower bounds on the size blow-up involved in algebraic operations on NDAs. For achieving such lower bounds, we develop a general scheme to convert every NFA to a \(\lambda\)-NDA of linearly the same size that defines the same language with respect to a threshold value \(0\), and to convert some specific \(\lambda\)-NDAs back to corresponding NFAs. The conversion of an NFA to a corresponding \(\lambda\)-NDA is quite simple. It roughly uses the same structure of the original NFA, and assigns one of four different transition weights to each transition, depending on whether each of its source and target states is accepting or rejecting. **Lemma 4.10**.: _For every \(\lambda\in\mathbb{N}\setminus\{0,1\}\) and NFA \(\mathcal{A}\) with \(n\) states, there exists a \(\lambda\)-NDA \(\tilde{\mathcal{A}}\) with \(n+2\) states, such that for every word \(u\in\Sigma^{+}\), we have \(u\in L(\mathcal{A})\) iff \(\tilde{\mathcal{A}}(u)<0\). That is, the language defined by \(\mathcal{A}\) is equivalent to the language defined by \(\tilde{\mathcal{A}}\) and the threshold \(0\)._ Proof.: Given an NFA \(\mathcal{A}=\langle\Sigma,Q,\iota,\delta,F\rangle\) and a discount factor \(\lambda\in\mathbb{N}\setminus\{0,1\}\), we construct a \(\lambda\)-NDA \(\tilde{\mathcal{A}}=\langle\Sigma,Q^{\prime},\{p_{0}\},\delta^{\prime},\gamma^{\prime}\rangle\) for which there exists a bijection \(f\) between the runs of \(\mathcal{A}\) and the runs of \(\tilde{\mathcal{A}}\) such that for every run \(r\) of \(\mathcal{A}\) on a word \(u\), * \(r\) is an accepting run of \(\mathcal{A}\) iff \(f(r)\) is a run of \(\tilde{\mathcal{A}}\) on \(u\) with the value \(\tilde{\mathcal{A}}\big{(}f(r)\big{)}=-\frac{1}{\lambda^{|r|}}\). * \(r\) is a non-accepting run of \(\mathcal{A}\) iff \(f(r)\) is a run of \(\tilde{\mathcal{A}}\) on \(u\) with the value \(\tilde{\mathcal{A}}\big{(}f(r)\big{)}=\frac{1}{\lambda^{|r|}}\). We first transform \(\mathcal{A}\) to an equivalent NFA \(\mathcal{A}^{\prime}=\langle\Sigma,Q^{\prime},\{p_{0}\},\delta^{\prime},F\rangle\) that is complete and in which there are no transitions entering its initial state, and later assign weights to its transitions to create \(\tilde{\mathcal{A}}\). To construct \(\mathcal{A}^{\prime}\) we add two states to \(Q\), having \(Q^{\prime}=Q\cup\{p_{0},q_{hole}\}\), duplicate all the transitions from \(\iota\) to start from \(p_{0}\), and add a transition from every state to \(q_{hole}\), namely \(\delta^{\prime}=\delta\cup\big{\{}(p_{0},\sigma,q)\bigm{|}\exists p\in\iota,(p,\sigma,q)\in\delta\big{\}}\cup\big{\{}(q,\sigma,q_{hole})\bigm{|}q\in Q^{\prime},\sigma\in\Sigma\big{\}}\). Observe that \(|Q^{\prime}|=|Q|+2\), and \(L(\mathcal{A})=L(\mathcal{A}^{\prime})\). Next, we assign the following transition weights: * For every \(t=(p_{0},\sigma,q)\in\delta^{\prime}\), \(\gamma^{\prime}(t)=-\frac{1}{\lambda}\) if \(q\in F\) and \(\gamma^{\prime}(t)=\frac{1}{\lambda}\) if \(q\notin F\). 
* For every \(t=(p,\sigma,q)\in\delta^{\prime}\) such that \(p\neq p_{0}\), \(\gamma^{\prime}(t)=\frac{\lambda-1}{\lambda}\) if \(p,q\in F\); \(\gamma^{\prime}(t)=\frac{\lambda+1}{\lambda}\) if \(p\in F\) and \(q\notin F\); \(\gamma^{\prime}(t)=-\frac{\lambda+1}{\lambda}\) if \(p\notin F\) and \(q\in F\); and \(\gamma^{\prime}(t)=-\frac{\lambda-1}{\lambda}\) if \(p,q\notin F\). By induction on the length of the runs on an input word \(u\), one can show that for every \(u\in\Sigma^{+}\), \(\tilde{\mathcal{A}}(u)=-\frac{1}{\lambda^{|u|}}\) if \(u\in L(\mathcal{A})\) and \(\tilde{\mathcal{A}}(u)=\frac{1}{\lambda^{|u|}}\) if \(u\notin L(\mathcal{A})\). Converting an NDA to a corresponding NFA is much more challenging, since a general NDA might have arbitrary weights. We develop a conversion scheme, whose correctness proof is quite involved, from every NDA \(\tilde{\mathcal{B}}\) that is equivalent to \(-\tilde{\mathcal{A}}\), where \(\tilde{\mathcal{A}}\) is generated from an arbitrary NFA as per Lemma 4.10, to a corresponding NFA \(\mathcal{B}\). Notice that the assumption that \(\tilde{\mathcal{B}}\equiv-\tilde{\mathcal{A}}\) gives us some information on \(\tilde{\mathcal{B}}\), yet \(\tilde{\mathcal{B}}\) might a priori still have arbitrary transition weights. Using this scheme, we provide an exponential lower bound on the size blow-up involved in multiplying an NDA by \((-1)\). The theorem holds with respect to both finite and infinite words. **Theorem 4.11**.: _For every \(n\in\mathbb{N}\) and \(\lambda\in\mathbb{N}\setminus\{0,1\}\), there exists a \(\lambda\)-NDA \(\mathcal{A}\) with \(n\) states over a fixed alphabet, such that every \(\lambda\)-NDA that is equivalent to \(-\mathcal{A}\), w.r.t. finite or infinite words, has \(\Omega(2^{n})\) states._ Proof.: Consider \(n\in\mathbb{N}\) and \(\lambda\in\mathbb{N}\setminus\{0,1\}\). By [40, 30] there exists an NFA \(\mathcal{A}\) with \(n\) states over a fixed alphabet of two letters, such that any NFA for the complement language \(\overline{L(\mathcal{A})}\) has at least \(2^{n}\) states. _Finite words._ Let \(\tilde{\mathcal{A}}\) be a \(\lambda\)-NDA that is correlated to \(\mathcal{A}\) as per Lemma 4.10, and assume towards contradiction that there exists a \(\lambda\)-NDA \(\dot{\mathcal{B}}=\langle\Sigma,Q_{\dot{\mathcal{B}}},\iota_{\dot{\mathcal{B}}},\delta_{\dot{\mathcal{B}}},\gamma_{\dot{\mathcal{B}}}\rangle\) with less than \(\frac{2^{n}}{4}\) states such that \(\dot{\mathcal{B}}\equiv-\tilde{\mathcal{A}}\). We provide below a conversion opposite to Lemma 4.10, leading to an NFA for \(\overline{L(\mathcal{A})}\) with less than \(2^{n}\) states, and therefore to a contradiction. The conversion of \(\dot{\mathcal{B}}\) back to an NFA builds on the specific values that \(\dot{\mathcal{B}}\) is known to assign to words, as opposed to the construction of Lemma 4.10, which works uniformly for every NFA, and is much more challenging, since \(\dot{\mathcal{B}}\) might have arbitrary transition weights. This conversion scheme can only work for \(\lambda\)-NDAs whose values on the input words converge to some threshold as the word lengths grow to infinity. For simplicity, we do not consider the empty word, since one can easily check if the input NFA accepts it, and set the complemented NFA to reject it accordingly. By Lemma 4.10 we have that for every word \(u\in\Sigma^{+}\), \(\tilde{\mathcal{A}}(u)=-\frac{1}{\lambda^{|u|}}\) if \(u\in L(\mathcal{A})\) and \(\tilde{\mathcal{A}}(u)=\frac{1}{\lambda^{|u|}}\) if \(u\notin L(\mathcal{A})\). 
Hence, \(\dot{\mathcal{B}}(u)=-\frac{1}{\lambda^{|u|}}\) if \(u\notin L(\mathcal{A})\) and \(\dot{\mathcal{B}}(u)=\frac{1}{\lambda^{|u|}}\) if \(u\in L(\mathcal{A})\). We will show that there exists an NFA \(\mathcal{B}\), with less than \(2^{n}\) states, such that \(u\in L(\mathcal{B})\) iff \(\dot{\mathcal{B}}(u)=-\frac{1}{\lambda^{|u|}}\), implying that \(L(\mathcal{B})=\overline{L(\mathcal{A})}\). We first construct a \(\lambda\)-NDA \(\mathcal{B}^{\prime}=\langle\Sigma,Q_{\mathcal{B}^{\prime}},\iota,\delta,\gamma\rangle\) that is equivalent to \(\dot{\mathcal{B}}\), but has no transitions entering its initial states. This construction eliminates the possibility that one run is a suffix of another, allowing us to simplify some of our arguments. Formally, \(Q_{\mathcal{B}^{\prime}}=Q_{\dot{\mathcal{B}}}\cup\iota\), \(\iota=\iota_{\dot{\mathcal{B}}}\times\{1\}\), \(\delta=\delta_{\dot{\mathcal{B}}}\cup\big{\{}\big{(}(p,1),\sigma,q\big{)}\;\big{|}\;(p,\sigma,q)\in\delta_{\dot{\mathcal{B}}}\big{\}}\), and weights \(\gamma(t)=\gamma_{\dot{\mathcal{B}}}(t)\) if \(t\in\delta_{\dot{\mathcal{B}}}\) and \(\gamma\big{(}(p,1),\sigma,q\big{)}=\gamma_{\dot{\mathcal{B}}}(p,\sigma,q)\) otherwise. Let \(R^{-}\) be the set of all the runs of \(\mathcal{B}^{\prime}\) that entail a minimal value which is less than \(0\), i.e., \(R^{-}=\{r\;\bigm{|}\;r\) is a minimal run of \(\mathcal{B}^{\prime}\) on some word and \(\mathcal{B}^{\prime}(r)<0\}\). Let \(\hat{\delta}\subseteq\delta\) be the set of all the transitions that take part in some run in \(R^{-}\), meaning \(\hat{\delta}=\{r(i)\;\bigm{|}\;r\in R^{-}\) and \(0\leq i<|r|\}\), and \(\hat{\hat{\delta}}\subseteq\delta\) the set of all transitions that are the last transition of those runs, meaning \(\hat{\hat{\delta}}=\big{\{}r\big{(}|r|-1\big{)}\;\bigm{|}\;r\in R^{-}\big{\}}\). We construct next the NFA \(\mathcal{B}=\langle\Sigma,Q_{\mathcal{B}},\iota,\delta_{\mathcal{B}},F_{\mathcal{B}}\rangle\). Intuitively, \(\mathcal{B}\) has the states of \(\mathcal{B}^{\prime}\), but only the transitions from \(\hat{\delta}\). Its accepting states are clones of the target states of the transitions in \(\hat{\hat{\delta}}\), but without outgoing transitions. We will later show that the only runs of \(\mathcal{B}\) that reach these clones are those that have an equivalent run in \(R^{-}\). Formally, \(Q_{\mathcal{B}}=Q_{\mathcal{B}^{\prime}}\cup F_{\mathcal{B}}\), \(F_{\mathcal{B}}=\big{\{}(q,1)\;\bigm{|}\;q\in Q_{\mathcal{B}^{\prime}}\) and \((p,\sigma,q)\in\hat{\hat{\delta}}\) for some \(p\in Q_{\mathcal{B}^{\prime}}\) and \(\sigma\in\Sigma\big{\}}\), and \(\delta_{\mathcal{B}}=\hat{\delta}\cup\big{\{}\big{(}p,\sigma,(q,1)\big{)}\;\bigm{|}\;(p,\sigma,q)\in\hat{\hat{\delta}}\big{\}}\). Observe that the number of states in \(\mathcal{B}\) is at most \(3\) times the number of states in \(\dot{\mathcal{B}}\), and thus less than \(2^{n}\). We will now prove that for every word \(u\), \(\mathcal{B}\) accepts \(u\) iff \(\mathcal{B}^{\prime}(u)=-\frac{1}{\lambda^{|u|}}\). The first direction is easy: if \(\mathcal{B}^{\prime}(u)=-\frac{1}{\lambda^{|u|}}\), we get that all the transitions of a minimal run of \(\mathcal{B}^{\prime}\) on \(u\) are in \(\hat{\delta}\), and its final transition is in \(\hat{\hat{\delta}}\), hence there exists a run of \(\mathcal{B}\) on \(u\) ending at an accepting state. For the other direction, assume towards contradiction that there exists a word \(u\), such that \(\mathcal{B}^{\prime}(u)=\frac{1}{\lambda^{|u|}}\), while there is an accepting run \(r_{u}\) of \(\mathcal{B}\) on \(u\). 
Intuitively, we define the "normalized value" of a run \(r^{\prime}\) of \(\mathcal{B}^{\prime}\) as the value of \(\mathcal{B}^{\prime}\) multiplied by the accumulated discount factor, i.e., \(\mathcal{B}^{\prime}(r^{\prime})\cdot\lambda^{|r^{\prime}|}\). Whenever the normalized value reaches \(-1\), we have an "accepting" run. We will show that \(r_{u}\) and the structure of \(\mathcal{B}\) imply the existence of two "accepting" runs \(r^{\prime}_{1},r^{\prime}_{2}\in R^{-}\) that intersect in some state \(q\), such that taking the prefix of \(r^{\prime}_{1}\) up to \(q\) results in a normalized value \(\lambda^{k}W_{1}\) that is strictly smaller than the normalized value \(\lambda^{j}W_{2}\) of the prefix of \(r^{\prime}_{2}\) up to \(q\). Since \(r^{\prime}_{2}\) is an "accepting" run, the suffix of \(r_{2}^{\prime}\) reduces \(\lambda^{j}W_{2}\) to \(-1\) and therefore it will reduce \(\lambda^{k}W_{1}\) to a value strictly smaller than \(-1\), and the total value of the run to a value strictly smaller than \(-\frac{1}{\lambda^{n}}\), which is not a possible value of \(\mathcal{B}^{\prime}\). Formally, let \(r_{u}(|u|-1)=\big{(}p^{\prime},u(|u|-1),(q^{\prime},1)\big{)}\) be the final transition of \(r_{u}\). We replace it with the transition \(t^{\prime}=\big{(}p^{\prime},u(|u|-1),q^{\prime}\big{)}\). The resulting run \(r_{u}^{\prime}=r_{u}[0..|u|-2]\cdot t\) is a run of \(\mathcal{B}^{\prime}\) on \(u\), and therefore \(\mathcal{B}^{\prime}(r_{u}^{\prime})\geq\frac{1}{\lambda^{|u|}}\). Since \((q^{\prime},1)\) is an accepting state, we get by the construction of \(\mathcal{B}\) that \(t^{\prime}\) is in \(\hat{\delta}\). Consider a run \(r_{1}^{\prime}\in R^{-}\) that shares the maximal suffix with \(r_{u}^{\prime}\), meaning that if there exist \(r^{\prime}\in R^{-}\) and \(x>0\) such that \(r^{\prime}[|r^{\prime}|-x..|r^{\prime}|-1]=r_{u}^{\prime}[|u|-x..|u|-1]\) then also \(r_{1}^{\prime}[|r_{1}^{\prime}|-x..|r_{1}^{\prime}|-1]=r_{u}^{\prime}[|u|-x.. |u|-1]\). Recall that all the initial states of \(\mathcal{B}^{\prime}\) have no transitions entering them and \(\mathcal{B}^{\prime}(r_{1}^{\prime})\neq\mathcal{B}^{\prime}(r_{u}^{\prime})\), hence \(r_{1}^{\prime}\) is not a suffix of \(r_{u}^{\prime}\) and \(r_{u}^{\prime}\) is not a suffix of \(r_{1}^{\prime}\). Let \(i\) be the maximal index of \(r_{u}^{\prime}\) such that \(r_{u}^{\prime}[i..|u|-1]\) is a suffix of \(r_{1}^{\prime}\), but \(r_{u}^{\prime}[i-1..|u|-1]\) is not a suffix of \(r_{1}^{\prime}\). Let \(k\) be the index in \(r_{1}^{\prime}\) such that \(r_{1}^{\prime}[k..|r_{1}^{\prime}|-1]=r_{u}[i..|u|-1]\), and let \(x=|r_{1}^{\prime}|-k\) (see Figure 17). Since \(r_{u}^{\prime}(i-1)\in\hat{\delta}\), there exists \(r_{2}^{\prime}\in R^{-}\) and index \(j\) such that \(r_{2}^{\prime}(j-1)=r_{u}^{\prime}(i-1)\). Let \(y=|r_{2}^{\prime}|-j\) (see Figure 17). Consider the run \(r_{3}^{\prime}=r_{2}^{\prime}[0..j-1]\cdot r_{u}^{\prime}[i..|u|-1]\), starting with the prefix of \(r_{2}^{\prime}\) up to the shared transition with \(r_{u}^{\prime}\), and then continuing with the suffix of \(r_{u}^{\prime}\). Observe that \(\mathcal{B}^{\prime}(r_{3}^{\prime})>-\frac{1}{\lambda^{|r_{3}^{\prime}|}}\) as otherwise \(r_{3}^{\prime}\in R^{-}\) and has a larger suffix with \(r_{u}^{\prime}\) than \(r_{1}^{\prime}\) has. 
Let \(W_{1}=\mathcal{B}^{\prime}\big{(}r_{1}^{\prime}[0..k-1]\big{)}\), \(W_{2}=\mathcal{B}^{\prime}\big{(}r_{2}^{\prime}[0..j-1]\big{)}\), \(X=\mathcal{B}^{\prime}\big{(}r_{1}^{\prime}[k..k+x-1]\big{)}\) (which is also \(\mathcal{B}^{\prime}\big{(}r_{u}^{\prime}[i..|u|-1]\big{)}\)), and \(Y=\mathcal{B}^{\prime}\big{(}r_{2}^{\prime}[j..j+y-1]\big{)}\) (see Figure 17). The following must hold: 1. \(W_{1}+\frac{X}{\lambda^{k}}=\mathcal{B}^{\prime}(r_{1}^{\prime})=-\frac{1}{ \lambda^{k+x}}\). Hence, \(\lambda^{k}W_{1}=-\frac{1}{\lambda^{x}}-X\). 2. \(W_{2}+\frac{X}{\lambda^{j}}=\mathcal{B}^{\prime}(r_{3}^{\prime})>-\frac{1}{ \lambda^{j+x}}\). Hence, \(\lambda^{j}W_{2}>-\frac{1}{\lambda^{x}}-X\), and after combining with the previous equation, \(\lambda^{j}W_{2}>\lambda^{k}W_{1}\). 3. \(W_{2}+\frac{Y}{\lambda^{j}}=\mathcal{B}^{\prime}(r_{2}^{\prime})=-\frac{1}{ \lambda^{j+y}}\). Hence, \(\lambda^{j}W_{2}+Y=-\frac{1}{\lambda^{y}}\) Consider now the run \(r_{4}^{\prime}=r_{1}^{\prime}[0..k-1]\cdot r_{2}^{\prime}[j..j+y-1]\), and combine item 2 and item 3 above to get that \(\lambda^{k}W_{1}+Y<-\frac{1}{\lambda^{y}}\). But this leads to \(\mathcal{B}^{\prime}(r_{4}^{\prime})=W_{1}+\frac{Y}{\lambda^{k}}<-\frac{1}{ \lambda^{k+y}}=-\frac{1}{\lambda^{|r_{4}^{\prime}|}}\), and this means that there exists a word \(w\) of length \(k+y\) such that \(\mathcal{B}^{\prime}(w)<-\frac{1}{\lambda^{k+y}}\), contradicting the assumption that \(\mathcal{B}^{\prime}\equiv\dot{\mathcal{B}}\equiv-\tilde{\mathcal{A}}\). _Infinite words._ Figure 17. The runs and notations used in the proof of Theorem 4.11. For showing the lower bound for the state blow-up involved in multiplying an NDA by \((-1)\) w.r.t. infinite words, we add a new letter \(\#\) to the alphabet, and correlate every finite word \(u\) to an infinite word \(u\cdot\#^{\omega}\). The proof is similar, applying the following modifications: * The scheme presented in the proof of Lemma 4.10 now constructs a \(\lambda\)-NDA \(\tilde{\mathcal{A}}\) over the alphabet \(\Sigma\cup\{\#\}\), adding a \(0\)-weighted transition from every state of \(\tilde{\mathcal{A}}\) to \(q_{hole}\). The function \(f\) that correlates between the runs of \(\mathcal{A}\) and \(\tilde{\mathcal{A}}\) is still a bijection, but with a different co-domain, correlating every run \(r\) of \(\mathcal{A}\) on a finite word \(u\in\Sigma^{+}\) to the run \(f(r)\) of \(\tilde{\mathcal{A}}\) on the word \(u\cdot\#^{\omega}\). * With this scheme, we get that \(\dot{\mathcal{B}}(u\cdot\#^{\omega})=-\frac{1}{\lambda^{|u|}}\) if \(u\notin L(A)\) and \(\dot{\mathcal{B}}(u\cdot\#^{\omega})=\frac{1}{\lambda^{|u|}}\) if \(u\in L(A)\), hence replacing all referencing to \(\mathcal{B}^{\prime}(u)\) with referencing to \(\mathcal{B}^{\prime}(u\cdot\#^{\omega})\). * \(R^{-}\) is defined with respect to words of the form \(u\cdot\#^{\omega}\), namely \(R^{-}=\{r\bigm{|}u\in\Sigma^{+},r\) is a minimal run of \(\mathcal{B}^{\prime}\) on \(u\cdot\#^{\omega}\) and \(\mathcal{B}^{\prime}(r)<0\}\). * \(R^{-}_{p}\) is a new set of all the maximal (finite) prefixes of the runs of \(R^{-}\) without any transitions for the \(\#\) letter, meaning \(R^{-}_{p}=\{r[0..i-1]\bigm{|}r\in R^{-},r(i-1)=(p,\sigma,q)\) for some \(\sigma\in\Sigma,\text{ and }r(i)=(q,\#,s)\}\). \(\hat{\delta}\) and \(\hat{\delta}\) are defined with respect to \(R^{-}_{p}\) instead of \(R^{-}\). 
* Defining \(r^{\prime}_{u}\), we consider a run \(r^{\prime}_{t}\in R^{-}\) that is a witness for \(t^{\prime}\in\hat{\delta}\), meaning there exists \(i\in\mathbb{N}\) for which \(r^{\prime}_{t}(i)=t^{\prime}\), and \(r^{\prime}_{t}(i+1)\) is a transition for the \(\#\) letter. Then \(r^{\prime}_{u}=r_{u}[0..|u|-2]\cdot t\cdot r^{\prime}[i+1..\infty]=r_{u}[0..|u| -2]\cdot r^{\prime}[i..\infty]\), is a run of \(\mathcal{B}^{\prime}\) on \(u\cdot\#^{\omega}\). * For choosing \(r^{\prime}_{1}\) that "shares the maximal suffix" with \(r^{\prime}_{u}\), we take \(r^{\prime}_{1}\in R^{-}\) such that for every \(r^{\prime}\in R^{-}\) and \(x>0\), if \(r^{\prime}_{u}[i..\infty]\) is a suffix of \(r^{\prime}\) then it is also a suffix of \(r^{\prime}_{1}\). * For the different runs and their parts, we set \(X=\mathcal{B}^{\prime}\big{(}r^{\prime}_{1}[k..\infty]\big{)}\), \(Y=\mathcal{B}^{\prime}\big{(}r^{\prime}_{2}[j..\infty]\big{)}\), \(r^{\prime}_{3}=r^{\prime}_{2}[0..j-1]\cdot r^{\prime}_{u}[i..\infty]\) and \(r^{\prime}_{4}=r^{\prime}_{1}[0..k-1]\cdot r^{\prime}_{2}[j..\infty]\). ### Basic Subfamilies Tidy NMDAs constitute a rich family that also contains some basic subfamilies that are still more expressive than integral NDAs. Two such subfamilies are integral NMDAs in which the discount factors depend on the transition letter or on the elapsed time. Notice that closure of tidy NMDAs under determinization and under algebraic operations is related to a specific choice function \(\theta\), namely every class of \(\theta\)-NMDAs enjoys these closure properties (Theorem 4.6 and Theorem 4.8). Since the aforementioned subfamilies of tidy NMDAs also consist of \(\theta\)-NMDA classes, their closure under determinization and under algebraic operations follows. For example, the class of NMDAs that assigns a discount factor of \(2\) to the letter 'a' and of \(3\) to the letter 'b' enjoys these closure properties. #### 4.4.1. Letter-Oriented Discount Factors Allowing each action (letter) to carry its own discount factor is a basic extension of discounted summation, used in various models, such as Markov decision processes [35, 45]. A \(\theta\)-NMDA over an alphabet \(\Sigma\) is _letter oriented_ if all transitions over the same alphabet letter share the same discount factor; that is, if \(\theta:\Sigma^{+}\to\mathbb{N}\setminus\{0,1\}\) coincides with a function \(\Lambda:\Sigma\to\mathbb{N}\setminus\{0,1\}\), in the sense that for every finite word \(u\) and letter \(\sigma\), we have \(\theta(u\sigma)=\Lambda(\sigma)\). (See an example in Figure 18.) Notice that every choice function \(\theta\) for a letter-oriented \(\theta\)-NMDA can be defined via a simple transducer of a single state, having a self loop over every letter with its assigned discount factor. We show that letter-oriented NMDAs, and in particular the NMDA \(\mathcal{A}\) depicted in Figure 19, indeed add expressiveness over NDAs. **Theorem 4.12**.: _There exists a letter-oriented NMDA that no integral NDA is equivalent to._ Consider the NMDA \(\mathcal{A}\) depicted in Figure 19. Assume toward contradiction that there exists an integral NDA \(\mathcal{B}^{\prime}\) such that \(\mathcal{B}^{\prime}\equiv\mathcal{A}\). According to [10], there exists an integral deterministic NDA (integral DDA) \(\mathcal{B}\) with transition function \(\delta_{\mathcal{B}}\) and discount factor \(\lambda\), such that \(\mathcal{B}\equiv\mathcal{B}^{\prime}\equiv\mathcal{A}\). 
Observe that for every \(n\in\mathbb{N}\backslash\{0\}\), we have \(\mathcal{B}(a^{n}b^{\omega})=\mathcal{A}(a^{n}b^{\omega})=\frac{1}{2^{n}}\). As \(\mathcal{B}\) has finitely many states, there exists a state \(q\) in \(\mathcal{B}\) and \(i,j\in\mathbb{N}\setminus\{0\}\) such that \(\delta_{\mathcal{B}}(a^{i})=\delta_{\mathcal{B}}(a^{i+j})=q\). Let \(W_{1}=\mathcal{B}^{q}(a^{j})\) and \(W_{2}=\mathcal{B}^{q}(b^{\omega})\). Observe that \[\frac{1}{2^{i}}=\mathcal{B}(a^{i}b^{\omega})=\mathcal{B}(a^{i})+\frac{W_{2}}{\lambda^{i}} \tag{4.2}\] \[\frac{1}{2^{i+j}}=\mathcal{B}(a^{i+j}b^{\omega})=\mathcal{B}(a^{i})+\frac{W_{1}}{\lambda^{i}}+\frac{W_{2}}{\lambda^{i+j}} \tag{4.3}\] \[\frac{1}{2^{i+2j}}=\mathcal{B}(a^{i+2j}b^{\omega})=\mathcal{B}(a^{i})+\frac{W_{1}}{\lambda^{i}}+\frac{W_{1}}{\lambda^{i+j}}+\frac{W_{2}}{\lambda^{i+2j}} \tag{4.4}\] Figure 19: A letter-oriented discounted-sum automaton, for the discount factor function \(\Lambda(a)=2\); \(\Lambda(b)=3\), that no integral NDA is equivalent to. Figure 18: A letter-oriented discounted-sum automaton, for the discount factor function \(\Lambda(a)=3\); \(\Lambda(b)=2\). Subtract Equation 4.2 from Equation 4.3, and Equation 4.3 from Equation 4.4 to get \[\frac{1}{2^{i+j}}-\frac{1}{2^{i}}=\frac{W_{1}-W_{2}}{\lambda^{i}}+\frac{W_{2}}{\lambda^{i+j}} \tag{4.5}\] \[\frac{1}{2^{i+2j}}-\frac{1}{2^{i+j}}=\frac{W_{1}-W_{2}}{\lambda^{i+j}}+\frac{W_{2}}{\lambda^{i+2j}}=\frac{1}{\lambda^{j}}\Big{(}\frac{W_{1}-W_{2}}{\lambda^{i}}+\frac{W_{2}}{\lambda^{i+j}}\Big{)} \tag{4.6}\] and combine Equation 4.5 and Equation 4.6 to get \(\frac{1}{2^{j}}\Big{(}\frac{1}{2^{i+j}}-\frac{1}{2^{i}}\Big{)}=\frac{1}{2^{i+2j}}-\frac{1}{2^{i+j}}=\frac{1}{\lambda^{j}}\Big{(}\frac{1}{2^{i+j}}-\frac{1}{2^{i}}\Big{)}\), which implies \(\lambda=2\). Observe that for every \(n\in\mathbb{N}\backslash\{0\}\), we have \(\mathcal{B}(b^{n}a^{\omega})=\mathcal{A}(b^{n}a^{\omega})=\frac{1}{3^{n}}\). Arguing symmetrically, with respect to '\(b\)' instead of '\(a\)' and '\(3\)' instead of '\(2\)', yields \(\lambda=3\), leading to a contradiction. #### 4.4.2. Time-Oriented Discount Factors A \(\theta\)-NMDA over an alphabet \(\Sigma\) is _time oriented_ if the discount factor on a transition is determined by the distance of the transition from an initial state; that is, if \(\theta:\Sigma^{+}\to\mathbb{N}\setminus\{0,1\}\) coincides with a function \(\Lambda:\mathbb{N}\setminus\{0\}\to\mathbb{N}\setminus\{0,1\}\), in the sense that for every finite word \(u\), we have \(\theta(u)=\Lambda\big{(}|u|\big{)}\). For example, the NMDA \(\mathcal{A}\) of Figure 20 is time-oriented, as all transitions taken at odd steps, in any run, have discount factor \(2\), and those taken at even steps have discount factor \(3\). The transducer \(\mathcal{T}\) of Figure 21 represents its choice function. Time-oriented NMDAs extend the expressiveness of NDAs, as proved for the time-oriented NMDA depicted in Figure 22. Figure 21. A transducer that represents the discount-factor choice function for the NMDA \(\mathcal{A}\) of Figure 20. Figure 20. A time-oriented discounted-sum automaton \(\mathcal{A}\). **Theorem 4.13**.: _There exists a time-oriented NMDA that no integral NDA is equivalent to._ Proof.: Let \(\mathcal{A}\) be the time-oriented NMDA depicted in Figure 22. Observe that \(\mathcal{A}(a^{n}b^{\omega})=\frac{1}{6^{\lceil n/2\rceil}}\). 
Analogously to the proof of Theorem 4.12, but with respect to "\(\sqrt{6}\)" instead of "\(2\)", we have that the discount factor of an equivalent DDA, if such exists, is \(\lambda=\sqrt{6}\), hence no integral NDA can be equivalent to \(\mathcal{A}\). ## 5. Tidy NMDAs - Decision Problems We show that all of the decision problems of tidy NMDAs are in the same complexity classes as the corresponding problems for discounted-sum automata with a single integral discount factor. That is, the nonemptiness problem is in PTIME, and the exact-value, universality, equivalence, and containment problems are in PSPACE (see Table 2). In the equivalence and containment problems, we consider \(\theta\)-NMDAs with the same choice function \(\theta\). In addition, the problem of checking whether a given NMDA is tidy, as well as whether it is a \(\theta\)-NMDA, for a given choice function \(\theta\), is decidable in PTIME. The complexities are w.r.t. the automata size (as defined in section 2), and when considering a threshold \(\nu\), w.r.t. its binary representation. ### Tidiness Given an NMDA \(\mathcal{A}\), one can check in PTIME whether \(\mathcal{A}\) is tidy. The algorithm follows by solving a reachability problem in a Cartesian product of \(\mathcal{A}\) with itself, to verify that for every word, the last discount factors are identical in all runs. **Theorem 5.1**.: _Checking if a given NMDA \(\mathcal{A}\) is tidy is decidable in time \(O\big{(}|\mathcal{A}|^{2}\big{)}\)._ Proof.: Consider an input NMDA \(\mathcal{A}=\langle\Sigma,Q,\iota,\delta,\gamma,\rho\rangle\). Observe that \(\mathcal{A}\) is tidy iff there does not exist a finite word \(u\in\Sigma^{+}\) of length \(n=|u|\) and runs \(r_{1}\) and \(r_{2}\) of \(\mathcal{A}\) on \(u\), such that \(\rho(r_{1}(n-1))\neq\rho(r_{2}(n-1))\). Intuitively, we construct the Cartesian product of \(\mathcal{A}\) with itself, associating the weight of every transition in the product to the difference of the two discount factors of the transitions causing it. The problem then reduces to reachabilty in this product automaton of a transition with weight different from \(0\). Formally, construct a weighted automaton \(P=\langle\Sigma,Q\times Q,\iota\times\iota,\delta^{\prime},\gamma^{\prime}\rangle\) such that * \(\delta^{\prime}=\Big{\{}\big{(}(s_{0},s_{1}),\sigma,(t_{0},t_{1})\big{)}\ \big{|}\,\sigma\in\Sigma\text{ and }(s_{0},\sigma,t_{0}),(s_{1},\sigma,t_{1})\in\delta \Big{\}}\). * \(\gamma^{\prime}\big{(}(s_{0},s_{1}),\sigma,(t_{0},t_{1})\big{)}=\rho(s_{0}, \sigma,t_{0})-\rho(s_{1},\sigma,t_{1})\). Figure 22. A time-oriented NMDA that no integral NDA is equivalent to, and a transducer that defines its choice function. Every run in \(P\) for a finite word \(u\) corresponds to two runs in \(\mathcal{A}\) for the same word \(u\). A non-zero weighted transition in \(P\) corresponds to two transitions in \(\mathcal{A}\) for the same letter, but with different discount factors. Hence, \(\mathcal{A}\) is tidy if and only if no run in \(P\) takes a non-zero weighted transition. The graph underlying \(P\) can be constructed in time quadratic in the size of \(\mathcal{A}\), and the reachability check on it can be performed in time linear in the size of this graph. Given also a transducer \(\mathcal{T}\), one can check in polynomial time whether \(\mathcal{A}\) is a \(\mathcal{T}\)-NMDA. 
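For concreteness, the following Python sketch implements the self-product reachability check from the proof of Theorem 5.1. The representation of the automaton by its set of initial states and a set of (source, letter, target, discount factor) tuples is an illustrative assumption; transition weights are irrelevant for tidiness and are omitted.

```python
from collections import deque

def is_tidy(init, trans):
    """Explore the product of the automaton with itself and report non-tidiness as soon
    as two runs over the same word take same-letter transitions with different discount
    factors, i.e., as soon as a non-zero weighted transition of the product is reachable."""
    start = {(p, q) for p in init for q in init}
    seen, queue = set(start), deque(start)
    while queue:
        p, q = queue.popleft()
        for (s1, a1, t1, d1) in trans:
            if s1 != p:
                continue
            for (s2, a2, t2, d2) in trans:
                if s2 != q or a2 != a1:
                    continue
                if d1 != d2:
                    return False          # a reachable transition of P with non-zero weight
                if (t1, t2) not in seen:
                    seen.add((t1, t2))
                    queue.append((t1, t2))
    return True
```

Indexing the transitions by their source state would make the construction of the product automaton \(P\) explicit, as in the proof; the sketch above favors brevity over that optimization.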
**Theorem 5.2**.: _Checking if a given NMDA \(\mathcal{A}\) is a \(\mathcal{T}\)-NMDA, for a given transducer \(\mathcal{T}\), is decidable in time \(O\big{(}|\mathcal{A}|\cdot|\mathcal{T}|\big{)}\)._ Proof.: We present the procedure. Let \(\mathcal{A}=\langle\Sigma,Q_{\mathcal{A}},\iota,\delta_{\mathcal{A}},\gamma,\rho_{\mathcal{A}}\rangle\) be the input NMDA and \(\mathcal{T}=\langle Q_{\mathcal{T}},\Sigma,q_{0},\delta_{\mathcal{T}},\rho_{\mathcal{T}}\rangle\) the input transducer. We construct a nondeterministic weighted automaton \(\mathcal{A}^{\prime}\) that resembles \(\mathcal{A}\) and a deterministic weighted automaton \(\mathcal{T}^{\prime}\) that resembles \(\mathcal{T}\), as follows. \(\mathcal{A}^{\prime}=\langle\Sigma,Q_{\mathcal{A}},\iota,\delta_{\mathcal{A}},\rho_{\mathcal{A}}\rangle\) is derived from \(\mathcal{A}\) by taking the same basic structure of states, initial states and transition function, and having the discount factors of \(\mathcal{A}\) as its weight function. \(\mathcal{T}^{\prime}=\langle\Sigma,Q_{\mathcal{T}},q_{0},\delta_{\mathcal{T}},\rho_{\mathcal{T}}\rangle\) is derived from \(\mathcal{T}\), by having the same structure as \(\mathcal{T}\) and having the output function of \(\mathcal{T}\) as the weight function of \(\mathcal{T}^{\prime}\). Then, we construct the product automaton \(\mathcal{B}=\mathcal{A}^{\prime}\times\mathcal{T}^{\prime}\), in which the weight on each transition is the weight of the corresponding transition in \(\mathcal{A}^{\prime}\) minus the weight of the corresponding transition in \(\mathcal{T}^{\prime}\). It is only left to check whether or not all the weights on the reachable transitions of \(\mathcal{B}\) are zero. Indeed, \(\mathcal{A}\) is a \(\mathcal{T}\)-NMDA iff all its reachable discount factors, which are the weights in \(\mathcal{A}^{\prime}\), correspond to the outputs of \(\mathcal{T}\), which are the weights in \(\mathcal{T}^{\prime}\). ### Nonemptiness, Exact-Value, Universality, Equivalence, and Containment We start with the non-emptiness problems. For both strict and non-strict inequalities with respect to infinite words, there is a simple reduction to one-player discounted-payoff games that also applies to arbitrary NMDAs (which are not necessarily tidy, or even integral), showing that those problems are in PTIME. This result can also be generalized to the strict non-emptiness problem of arbitrary NMDAs w.r.t. finite words. The non-strict problem w.r.t. finite words is solved differently, and applies to integral NMDAs (which are not necessarily tidy). \begin{table} \begin{tabular}{|c||c|c|} \hline & Finite words & Infinite words \\ \hline \hline Non-emptiness (\(<\)) & PTIME (Theorem 5.4) & PTIME (Theorem 5.3) \\ \hline Non-emptiness (\(\leq\)) & PTIME (Theorem 5.5) & PTIME (Theorem 5.3) \\ \hline Containment (\(>\)) & PSPACE-complete (Theorem 5.9) & PSPACE (Theorem 5.11) \\ \hline Containment (\(\geq\)) & PSPACE-complete (Theorem 5.9) & PSPACE-complete (Theorem 5.10) \\ \hline Equivalence & \multicolumn{2}{c|}{PSPACE-complete (Corollary 5.12)} \\ \hline Universality (\(<\)) & PSPACE-complete (Theorem 5.13) & PSPACE (Theorem 5.13) \\ \hline Universality (\(\leq\)) & PSPACE-complete (Theorem 5.13) & PSPACE-complete (Theorem 5.13) \\ \hline Exact-value & PSPACE-complete (Theorem 5.14) & PSPACE (Theorem 5.14) \\ \hline \end{tabular} \end{table} Table 2. The complexities of the decision problems of tidy NMDAs. **Theorem 5.3**.: _The nonemptiness problem of NMDAs w.r.t. 
infinite words is in PTIME._ Proof.: Let \(\mathcal{A}=\langle\Sigma,Q,\iota,\delta,\gamma,\rho\rangle\) be an NMDA and \(\nu\in\mathbb{Q}\) a threshold. Discounted-payoff games with multiple discount factors (DPGs) were defined in [4]. We will construct a one-player DPG \(G=\langle V_{MAX},V_{MIN},E,\gamma_{G},\rho_{G}\rangle\) such that every infinite walk \(\psi\) of \(\mathcal{A}\) will have a corresponding infinite play \(\pi\) of \(G\), such that \(\mathcal{A}(\psi)=\mu(\pi)\), where \(\mu(\pi)\) is the value of \(G\) on the play \(\pi\) as defined in [4]. Observe that our definition of the value of a walk is identical to the definition of \(\mu\) in [4]. Hence we would like \(G\) to have the same states, transitions, weights and discount factors as \(\mathcal{A}\), while omitting the letters on the transitions. Formally, the sets of vertices belonging to the players are \(V_{MIN}=Q\) and \(V_{MAX}=\emptyset\). For every transition \(t=(q,\sigma,p)\in\delta\) we add a corresponding edge \((q,p)\) to \(E\) with weight and discount factor of \(\gamma_{G}(q,p)=\gamma(t)\) and \(\rho_{G}(q,p)=\rho(t)\). Observe that \(\mathcal{A}\) might have two transitions with the same source and destination but with different weight and/or discount factor for different letters; however, according to [4], DPGs are allowed to have multiple edges between the same ordered pair of vertices. Let \(f\) be the function that matches a transition in \(\mathcal{A}\) to the corresponding edge in \(G\). We can extend \(f\) to be a bijection between the set of walks of \(\mathcal{A}\) and the set of plays of \(G\). Observe that by the construction, for every walk \(\psi\), we have \(\mathcal{A}(\psi)=\mu\Big{(}f(\psi)\Big{)}\), and for every play \(\pi\), we have \(\mu(\pi)=\mathcal{A}\Big{(}f^{-1}(\pi)\Big{)}\). Recall that the value of a word is the minimal value of a run of \(\mathcal{A}\) on it, and conclude that the minimum value of an infinite word equals the minimum value of a play in \(G\) starting from a vertex that corresponds to an initial state. The problem of solving \(G\), i.e., for each vertex \(v\in V_{MIN}\) finding the minimum value of any play starting from \(v\), can be represented as a linear program, as suggested by [4]. With the feasible solutions for this problem, all that is left to do is to iterate over all the vertices that correspond to an initial state in \(\iota\), and check if the minimum value of a play from any of them is lower (or lower or equal, for the non-strict case) than \(\nu\). If such a play \(\pi\) exists, then \(f^{-1}(\pi)\) is an infinite walk starting from an initial state whose value is lower than (or equal to) \(\nu\), hence \(\mathcal{A}\) is not empty w.r.t. infinite words. Otherwise, there is no infinite run with value lower than (or equal to) \(\nu\), meaning that \(\mathcal{A}\) is empty w.r.t. infinite words. For nonemptiness with respect to finite words, we cannot directly use the aforementioned game solution, as it relies on the convergence of the values in the limit. 
However, for nonemptiness with respect to strict inequality, we can reduce the finite-words case to the infinite-words case: If there exists an infinite word \(w\) such that \(\mathcal{A}(w)\) is strictly smaller than the threshold, the distance between them cannot be compensated for in the limit, implying the existence of a finite prefix that also has a value smaller than the threshold. As for the other direction, we add to every state a \(0\)-weight self loop, causing a small-valued finite word to also imply a small-valued infinite word. **Theorem 5.4**.: _The nonemptiness problem of NMDAs w.r.t. finite words and strict inequality is in PTIME._ Proof.: Let \(\mathcal{A}=\langle\Sigma,Q,\iota,\delta,\gamma,\rho\rangle\) be an NMDA and \(\nu\in\mathbb{Q}\) a threshold. We will construct in polynomial time an NMDA \(\mathcal{A}^{\prime}=\langle\Sigma,Q\cup\iota\times\{1\}\cup\{q_{\infty}\},\iota\times\{1\},\delta\cup\delta^{\prime}\cup\delta^{\prime\prime},\gamma\cup\gamma^{\prime}\cup\gamma^{\prime\prime},\rho\cup\rho^{\prime}\cup\rho^{\prime\prime}\rangle\), such that \(\mathcal{A}^{\prime}\) is empty(\(<\)) with respect to infinite words if and only if \(\mathcal{A}\) is empty(\(<\)) with respect to finite words; the required result then follows from Theorem 5.3. The construction duplicates all the initial states of \(\mathcal{A}\) and adds a new state \(q_{\infty}\). The new transitions are: * \(\delta^{\prime}=\big{\{}\big{(}(q,1),\sigma,q^{\prime}\big{)}\;\big{|}\;q\in\iota,\sigma\in\Sigma,(q,\sigma,q^{\prime})\in\delta\big{\}}\); \(\gamma^{\prime}:\delta^{\prime}\to\mathbb{Q}\) such that \(\gamma^{\prime}\big{(}(q,1),\sigma,q^{\prime}\big{)}=\gamma(q,\sigma,q^{\prime})\); \(\rho^{\prime}:\delta^{\prime}\to\mathbb{N}\setminus\{0,1\}\) such that \(\rho^{\prime}\big{(}(q,1),\sigma,q^{\prime}\big{)}=\rho(q,\sigma,q^{\prime})\). * \(\delta^{\prime\prime}=\big{\{}(q,\tau,q_{\infty})\;\big{|}\;q\in Q\big{\}}\cup\big{\{}(q_{\infty},\sigma,q_{\infty})\;\big{|}\;\sigma\in\Sigma\big{\}}\) for some letter \(\tau\in\Sigma\); \(\gamma^{\prime\prime}:\delta^{\prime\prime}\to\mathbb{Q}\) such that \(\gamma^{\prime\prime}\equiv 0\); \(\rho^{\prime\prime}:\delta^{\prime\prime}\to\mathbb{N}\setminus\{0,1\}\) assigning arbitrary discount factors. Observe that for every finite word \(u\in\Sigma^{+}\) we have that \(\mathcal{A}^{\prime}(u\cdot\tau^{\omega})\leq\mathcal{A}(u)\), since for every run of \(\mathcal{A}\) on \(u\) there is an equivalent run of \(\mathcal{A}^{\prime}\) on \(u\) that has the same value. If \(\mathcal{A}\) is not empty(\(<\)) w.r.t. finite words, there exists \(u\in\Sigma^{+}\) such that \(\mathcal{A}(u)<\nu\). Hence \(\mathcal{A}^{\prime}(u\cdot\tau^{\omega})\leq\mathcal{A}(u)<\nu\). We conclude that \(\mathcal{A}^{\prime}\) is not empty(\(<\)) w.r.t. infinite words. For the other direction, if \(\mathcal{A}^{\prime}\) is not empty(\(<\)) w.r.t. infinite words, there exists \(w\in\Sigma^{\omega}\) such that \(\mathcal{A}^{\prime}(w)<\nu\). Let \(r\) be the run of \(\mathcal{A}^{\prime}\) on \(w\) that entails the minimum value. Assume \(r\) contains some transitions from \(\delta^{\prime\prime}\). Let \(r^{\prime}\) be the maximal prefix run of \(r\) that contains only transitions from \(\delta\) and \(\delta^{\prime}\). Since all the transitions in \(\delta^{\prime\prime}\) lead to \(q_{\infty}\) and have a weight of \(0\), we get that \(\mathcal{A}^{\prime}(r^{\prime})=\mathcal{A}^{\prime}(r)<\nu\). 
By changing the first transition of \(r^{\prime}\) from \(\big{(}(q,1),\sigma,q^{\prime}\big{)}\) to \((q,\sigma,q^{\prime})\) we get a run of \(\mathcal{A}\) on a finite prefix of \(w\) with the same value as \(\mathcal{A}^{\prime}(r)\), which is a value strictly less than \(\nu\). Meaning that there exists \(v\in\Sigma^{+}\) such that \(\mathcal{A}(v)<\nu\), which is our claim. Otherwise, \(r\) contains only transitions from \(\delta\) and \(\delta^{\prime}\). Changing its first transition \(\big{(}(q,1),\sigma,q^{\prime}\big{)}\) to \((q,\sigma,q^{\prime})\) results in a run of \(\mathcal{A}\) on \(w\) with the same value, which is strictly less than \(\nu\).

We will now show that if the value of \(\mathcal{A}\) on some infinite word \(w\) is less than \(\nu\) then there exists a prefix of \(w\) for which the value of \(\mathcal{A}\) is also less than \(\nu\). Denote \(\epsilon=\nu-\mathcal{A}(w)\). Let \(W\) be the maximal absolute value of \(\mathcal{A}\) on any infinite word, \(\lambda\) the minimal discount factor in \(\mathcal{A}\), and \(r\) a run of \(\mathcal{A}\) on \(w\) that attains this value, i.e., \(\mathcal{A}(r)=\mathcal{A}(w)\). Observe that there exists \(n_{\epsilon}\in\mathbb{N}\) such that \(\frac{W}{\lambda^{n_{\epsilon}}}<\epsilon\), and consider the run \(r_{n_{\epsilon}}=r[0..n_{\epsilon}-1]\) of \(\mathcal{A}\) on the finite word \(u=w[0..n_{\epsilon}-1]\). We will show that after reaching \(\delta(r_{n_{\epsilon}})\), if \(\mathcal{A}(r_{n_{\epsilon}})\) is not smaller than \(\nu\), then the weight of the suffix \(\mathcal{A}(r[n_{\epsilon}..\infty])\), reduced by the accumulated discount factor \(\rho(r_{n_{\epsilon}})\), is too small to compensate, resulting in \(\mathcal{A}(r)\geq\nu\). Observe that \(|\mathcal{A}^{\delta(u)}(w[n_{\epsilon}..\infty])|\leq W<\epsilon\cdot\lambda^{n_{\epsilon}}\) and \(\rho(r_{n_{\epsilon}})\geq\lambda^{n_{\epsilon}}\), resulting in \(\frac{1}{\rho(r_{n_{\epsilon}})}\leq\frac{1}{\lambda^{n_{\epsilon}}}\) and \(\frac{|\mathcal{A}^{\delta(u)}(w[n_{\epsilon}..\infty])|}{\rho(r_{n_{\epsilon}})}\,<\,\epsilon\). And finally,

\[\nu-\epsilon=\mathcal{A}(w)=\mathcal{A}(r) =\mathcal{A}(r_{n_{\epsilon}})+\frac{\mathcal{A}^{\delta(u)}\big{(}w[n_{\epsilon}..\infty]\big{)}}{\rho(r_{n_{\epsilon}})}\]
\[\geq\mathcal{A}(r_{n_{\epsilon}})-\frac{\big{|}\mathcal{A}^{\delta(u)}(w[n_{\epsilon}..\infty])\big{|}}{\rho(r_{n_{\epsilon}})}>\mathcal{A}(r_{n_{\epsilon}})-\epsilon\geq\mathcal{A}(u)-\epsilon\]

Meaning that \(\nu>\mathcal{A}(u)\) and \(\mathcal{A}\) is not empty(\(<\)) with respect to finite words.

For nonemptiness with respect to finite words and non-strict inequality, we cannot use the construction used in the proof of Theorem 5.4, since its final part is inadequate: It is possible to have an infinite word whose value equals the threshold, while every finite prefix of it has a value strictly greater than the threshold. Yet, when considering _integral_ NMDAs, we can use a different approach for resolving the problem, applying linear programming to calculate the minimal value of a finite run ending in every state.

**Theorem 5.5**.: _The nonemptiness problem of integral NMDAs w.r.t. finite words and nonstrict inequality is in PTIME._

Proof.: Consider an integral NMDA \(\mathcal{A}=\langle\Sigma,Q,\iota,\delta,\gamma,\rho\rangle\) and a threshold \(\nu\). For every finite run \(r\) of \(\mathcal{A}\), we define its normalized difference from \(\nu\) as the accumulated discount factor multiplied by the difference, meaning \(\Delta(r)=\rho(r)\big{(}\mathcal{A}(r)-\nu\big{)}\).
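For example (an illustrative computation), if \(\nu=0\) and \(r\) consists of two transitions with weights \(1\) and \(-3\) and discount factors \(2\) and \(3\), then \(\mathcal{A}(r)=1+\frac{-3}{2}=-\frac{1}{2}\), \(\rho(r)=6\), and hence \(\Delta(r)=6\cdot\big{(}-\frac{1}{2}-0\big{)}=-3\); equivalently, by the recurrence used below, \(\Delta(r)=3\cdot\big{(}2\cdot(1-0)+(-3)\big{)}=-3\).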
For every state \(q\in Q\), we define its minimal normalized difference from \(\nu\) as the minimal normalized difference among all finite runs that end in \(q\), meaning, \(\Delta(q)=\inf\{\Delta(r)\ |\ \delta(r)=q\}=\inf(D_{q})\), where \(D_{q}\) denotes this set of normalized differences. \(\mathcal{A}\) is not empty w.r.t. finite words and non-strict inequality iff there exists a run \(r\) such that \(\Delta(r)\leq 0\). We will show that for every state \(q\in Q\) such that \(\Delta(q)\leq 0\), there exists a finite run \(r\) of \(\mathcal{A}\) ending in \(q\) such that \(\Delta(r)\leq 0\), and combine it with the trivial opposite direction to conclude that \(\mathcal{A}\) is not empty iff there exists \(q\in Q\) such that \(\Delta(q)\leq 0\). Consider a state \(q\in Q\):

* If \(\Delta(q)=-\infty\), then by the definition of \(\Delta(q)\), for every \(x<0\) there exists a run \(r\) ending in \(q\) such that \(\Delta(r)<x\), and in particular a run \(r\) with \(\Delta(r)\leq 0\).
* If \(\Delta(q)=x\in\mathbb{Q}\), then for every \(\epsilon>0\) there exists a run \(r_{\epsilon}\) ending in \(q\) such that \(\epsilon>\Delta(r_{\epsilon})-x\geq 0\). Since we are dealing with integral discount factors, every normalized difference of a run is of the form \(\frac{k}{d}\), where \(k\in\mathbb{Z}\) and \(d\) is the common denominator of the weights in \(\gamma\) and \(\nu\). We will show that the infimum of the set \(D_{q}\) is its minimum, since the elements of \(D_{q}\) can take only discrete values. Let \(k_{x}\in\mathbb{Z}\) be the minimal integer such that \(\frac{k_{x}}{d}\geq x\), meaning \(k_{x}=\lceil x\cdot d\rceil\), and observe that for every run \(r\) ending in \(q\) we have \(\Delta(r)\geq\frac{k_{x}}{d}\), leading to \(\Delta(r)-x\geq\frac{k_{x}}{d}-x\). Since this difference needs to be arbitrarily small, we get that \(\frac{k_{x}}{d}-x=0\). For every run \(r\) ending in \(q\) we have that \(\Delta(r)-x\) is \(0\) or at least \(\frac{1}{d}\). And since this difference needs to be arbitrarily small, it must be \(0\) for some of those runs. Hence, there exists a run \(r\) ending in \(q\) such that \(\Delta(r)=x\).

We will now show a linear program that calculates the value of \(\Delta(q)\) for every \(q\in Q\), or determines that there exists some \(q\in Q\) such that \(\Delta(q)<0\). For simplicity, we assume that all the states in \(\mathcal{A}\) are reachable (since otherwise, one can create in polynomial time an equivalent integral NMDA for which all states are reachable). Let \(Q_{in}\) be the set of all states that have an incoming transition, and \(n\) its size, meaning \(Q_{in}=\{q\in Q\ |\ \exists(p,\sigma,q)\in\delta\}=\{q_{1},\cdots,q_{n}\}\). Our linear program is over the variables \(x_{1},x_{2},\cdots,x_{n}\), such that if there exists a feasible solution to the program, meaning a solution that satisfies all the constraints, then \(\langle\Delta(q_{1}),\Delta(q_{2}),\ldots,\Delta(q_{n})\rangle\) is its maximal solution, and otherwise there exists a state \(q\) such that \(\Delta(q)<0\). For the first case, after finding the minimal normalized difference from \(\nu\) for every state in \(Q_{in}\), we can check if any of them equals \(0\), and for the other case we can immediately conclude that \(\mathcal{A}\) is not empty.

For defining the linear program, we first make the following observations. For every \(t=(q_{i},\sigma,q_{j})\in\delta\) s.t.
\(q_{i}\in\iota\), we have \(\Delta(t)=\rho(t)\cdot\big{(}\gamma(t)-\nu\big{)}\), and for every run \(r\) of length \(|r|=m>1\) we have

\[\Delta(r) =\rho(r)\cdot\big{(}\mathcal{A}(r)-\nu\big{)}\]
\[=\rho\big{(}r(m-1)\big{)}\cdot\Big{(}\Delta\big{(}r[0..m-2]\big{)}+\gamma\big{(}r(m-1)\big{)}\Big{)}\]

Hence, \(\langle x_{1},x_{2},\ldots,x_{n}\rangle=\langle\Delta(q_{1}),\Delta(q_{2}),\ldots,\Delta(q_{n})\rangle\) must satisfy the following system of constraints:

1. \(x_{j}\leq\rho(t)\cdot\big{(}\gamma(t)-\nu\big{)}\) for every \(t=(q_{i},\sigma,q_{j})\in\delta\) s.t. \(q_{i}\in\iota\).
2. \(x_{j}\leq\rho(t)\cdot\big{(}\gamma(t)+x_{i}\big{)}\) for every \(t=(q_{i},\sigma,q_{j})\in\delta\) s.t. \(q_{i}\in Q_{in}\).

These constraints have a single maximal solution \(\langle x_{1}^{*},\cdots,x_{n}^{*}\rangle\) such that for any solution \(\langle a_{1},\cdots,a_{n}\rangle\) and \(1\leq i\leq n\), we have \(x_{i}^{*}\geq a_{i}\). To see that \(\langle\Delta(q_{1}),\ldots,\Delta(q_{n})\rangle\) is indeed the unique maximal solution, if such exists, consider a solution \(\langle a_{1},\cdots,a_{n}\rangle\), a state \(q_{i}\in Q_{in}\) and a run \(r\) such that \(\delta(r)=q_{i}\) and \(\Delta(r)=\Delta(q_{i})\). For every \(0\leq j<|r|\), let \(q_{i_{j}}\) be the target state after the \(j\)-sized prefix of \(r\), meaning \(q_{i_{j}}=\delta\big{(}r[0..j]\big{)}\). We will show by induction on \(j\) that \(a_{i_{j}}\leq\Delta(r[0..j])\), to conclude that \(a_{i}=a_{i_{|r|-1}}\leq\Delta(r[0..|r|-1])=\Delta(r)=\Delta(q_{i})\):

* For the base case, we have \(a_{i_{0}}\leq\rho\big{(}r(0)\big{)}\big{(}\gamma(r(0))-\nu\big{)}=\Delta\big{(}r(0)\big{)}\).
* For the induction step, \[a_{i_{j}} \leq\rho\big{(}r(j)\big{)}\cdot\Big{(}\gamma\big{(}r(j)\big{)}+a_{i_{j-1}}\Big{)}\] \[\leq\rho\big{(}r(j)\big{)}\cdot\Big{(}\gamma\big{(}r(j)\big{)}+\Delta\big{(}r[0..j-1]\big{)}\Big{)}=\Delta\big{(}r[0..j]\big{)}\]

The implicit constraint of non-negative values for the variables of the linear program, meaning \(x_{i}\geq 0\) for every \(1\leq i\leq n\), handles the case of a possible divergence to \(-\infty\). With these constraints, if there exists \(q\in Q\) such that \(\Delta(q)<0\), then the linear program has no feasible solution, and this case will be detected by the algorithm that solves the linear program. Meaning that the problem can be stated as the linear program: maximize \(\sum_{i=1}^{n}x_{i}\) subject to constraints (1) and (2) above and \(x_{i}\geq 0\) for every \(1\leq i\leq n\).
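A minimal sketch of this linear program is given below, with hypothetical transition tuples \((\text{source},\text{letter},\text{target},\text{weight},\text{discount factor})\) and all states assumed reachable; floating-point LP solving is used only for illustration, whereas the actual procedure works over the rationals.

```python
# Sketch of the linear program from the proof of Theorem 5.5
# (finite words, non-strict inequality) for an integral NMDA.
from scipy.optimize import linprog

def integral_nmda_empty_finite_nonstrict(initial, transitions, nu, tol=1e-9):
    q_in = sorted({p for (_q, _s, p, _w, _r) in transitions})  # states with an incoming transition
    idx = {q: i for i, q in enumerate(q_in)}
    n = len(q_in)
    A_ub, b_ub = [], []
    for (q, _sigma, p, weight, rho) in transitions:
        j = idx[p]
        if q in initial:               # constraint (1):  x_j <= rho(t) * (gamma(t) - nu)
            row = [0.0] * n
            row[j] += 1.0
            A_ub.append(row)
            b_ub.append(rho * (weight - nu))
        if q in idx:                   # constraint (2):  x_j - rho(t) * x_i <= rho(t) * gamma(t)
            row = [0.0] * n
            row[j] += 1.0
            row[idx[q]] -= rho
            A_ub.append(row)
            b_ub.append(rho * weight)
    # maximize sum(x) == minimize -sum(x); default bounds give the implicit x_i >= 0
    res = linprog(c=[-1.0] * n, A_ub=A_ub, b_ub=b_ub)
    if not res.success:                # infeasible: some Delta(q) < 0, hence nonempty
        return False
    return all(x > tol for x in res.x)  # empty iff no state has Delta(q) = 0
```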
We continue with the PSPACE-complete problems, to which we first provide hardness proofs, by reductions from the universality problem of NFAs, known to be PSPACE-complete [37]. Notice that the provided hardness results already hold for integral NDAs, and not only for tidy NMDAs. PSPACE-hardness of the containment problem for NDAs with respect to infinite words and non-strict inequalities is shown in [5]. We provide below more general hardness results, considering the equivalence problem, first with respect to finite words and then with respect to infinite words, as well as the exact-value, universality(\(\leq\)) and universality(\(<\)) problems with respect to finite words.

**Lemma 5.6**.: _The equivalence and universality(\(\leq\)) problems of integral NDAs w.r.t. finite words are PSPACE-hard._

Proof.: Given an NFA \(\mathcal{A}=\langle\Sigma,Q,Q_{0},\Delta,F\rangle\), we construct in polynomial time an NDA \(\mathcal{B}\) with discount factor \(2\), such that \(\mathcal{B}\) never gets a negative value, and \(\mathcal{A}\) is universal if and only if \(\mathcal{B}\) is equivalent to a \(0\) NDA, namely to an NDA that gets a value of \(0\) on all finite words. For simplicity, we ignore the empty word and words of length \(1\), whose acceptance is easy to check in \(\mathcal{A}\).

Intuitively, \(\mathcal{B}\) will have the same structure as \(\mathcal{A}\), and the assigned weights on the transitions will guarantee that the value of \(\mathcal{B}\) on every word \(u\) is \(\frac{1}{2^{|u|}}\). In addition, we have in \(\mathcal{B}\) a new "good" state \(q_{acc}\), and for every original transition \(t\) to an accepting state \(q\in F\), we add in \(\mathcal{B}\) a new "good" transition \(t^{\prime}\) to \(q_{acc}\), such that the weight on \(t^{\prime}\) allows \(\mathcal{B}\) to have a value of \(0\) on a word \(u\) on which there is a run ending in \(q\). Finally, we add a "bad" transition out of \(q_{acc}\), such that its weight ensures a total positive value, in the case that \(\mathcal{B}\) continues the run out of \(q_{acc}\). (Example in Figure 23.)

Formally, we construct a \(2\)-NDA (with discount factor \(2\)) \(\mathcal{B}=\langle\Sigma,Q\cup Q_{0}\times\{1\}\cup\{q_{acc}\},Q_{0}\times\{1\},\Delta\cup\delta_{\mathcal{B}},\gamma_{\mathcal{B}}\rangle\), where

* \(\delta_{\mathcal{B}}=\big{\{}\big{(}(q,1),\sigma,q^{\prime}\big{)}\ \big{|}\ (q,\sigma,q^{\prime})\in\Delta\big{\}}\cup\big{\{}(q,\sigma,q_{acc})\ \big{|}\ \text{there exist }q^{\prime}\in F\text{ and }(q,\sigma,q^{\prime})\in\Delta\big{\}}\cup\big{\{}(q_{acc},\sigma,q_{acc})\ \big{|}\ \sigma\in\Sigma\big{\}}\).
* For every \(t=\big{(}(q,1),\sigma,q^{\prime}\big{)}\in\delta_{\mathcal{B}}\), we have \(\gamma_{\mathcal{B}}(t)=\frac{1}{2}\).
* For every \(t\in\Delta\), we have \(\gamma_{\mathcal{B}}(t)=-\frac{1}{2}\).
* For every \(t=(q,\sigma,q_{acc})\in\delta_{\mathcal{B}}\), we have \(\gamma_{\mathcal{B}}(t)=-1\).

Observe that by the construction of \(\mathcal{B}\), for every word \(w\), \(\mathcal{B}(w)\geq 0\). Hence, \(\mathcal{B}\) is equivalent to a \(0\) NDA iff it is universal(\(\leq\)) with respect to the threshold \(0\), meaning that the same reduction shows the PSPACE-hardness of universality(\(\leq\)) with respect to finite words.

Figure 23. An example of the reduction defined in the proof of Lemma 5.6.

**Lemma 5.7**.: _The equivalence and universality(\(\leq\)) problems of integral NDAs w.r.t. infinite words are PSPACE-hard._

Proof.: Similarly to the proof of Lemma 5.6, we construct in polynomial time an NDA \(\mathcal{B}\) with discount factor \(2\), such that the input NFA is universal if and only if \(\mathcal{B}\) is equivalent to a \(0\) NDA with respect to infinite words. Also in this reduction, no negative values of words will be possible, so it is also valid for showing the PSPACE-hardness of the universality(\(\leq\)) problem. The reduction is similar to the one provided in the proof of Lemma 5.6, with intuitively the following adaptations of the constructed NDA \(\mathcal{B}\) to the case of infinite words: We add a new letter \(\#\) to the alphabet, low-weighted \(\#\)-transitions from the accepting states, and high-weighted \(\#\)-transitions from the non-accepting states.
By this construction, the value of \(\mathcal{B}\) on an infinite word \(u\cdot\#\cdot w\), where \(u\) does not contain \(\#\), will be \(0\) if and only if \(\mathcal{A}\) accepts \(u\). Notice that the value of \(\mathcal{B}\) on an infinite word that does not contain \(\#\) is also \(0\), as it is \(\lim_{n\to\infty}\frac{1}{2^{n}}\).

Formally, given an NFA \(\mathcal{A}=\langle Q,\Sigma,\Delta,Q_{0},F\rangle\), we construct a \(2\)-NDA \(\mathcal{B}=\langle\Sigma\cup\{\#\},Q\cup Q_{0}\times\{1\}\cup\{q_{\infty}\},Q_{0}\times\{1\},\Delta\cup\delta_{\mathcal{B}},\gamma_{\mathcal{B}}\rangle\), where

* \(\#\notin\Sigma\) is a new letter.
* \(\delta_{\mathcal{B}}=\big{\{}\big{(}(q,1),\sigma,q^{\prime}\big{)}\ \big{|}\ (q,\sigma,q^{\prime})\in\Delta\big{\}}\cup\big{\{}(q,\#,q_{\infty})\ \big{|}\ q\in Q\big{\}}\cup\big{\{}\big{(}(q,1),\#,q_{\infty}\big{)}\ \big{|}\ q\in Q_{0}\big{\}}\cup\big{\{}(q_{\infty},\tau,q_{\infty})\ \big{|}\ \tau\in\Sigma\cup\{\#\}\big{\}}\).
* \(\gamma_{\mathcal{B}}\):
    * For every \(t=\big{(}(q,1),\sigma,q^{\prime}\big{)}\in\delta_{\mathcal{B}}\), we have \(\gamma_{\mathcal{B}}(t)=\frac{1}{2}\).
    * For every \(t\in\Delta\), we have \(\gamma_{\mathcal{B}}(t)=-\frac{1}{2}\).
    * For every \(t_{1}=(q,\#,q_{\infty})\in\delta_{\mathcal{B}}\) or \(t_{2}=\big{(}(q,1),\#,q_{\infty}\big{)}\in\delta_{\mathcal{B}}\), such that \(q\in F\), we have \(\gamma_{\mathcal{B}}(t_{1})=-1\) and \(\gamma_{\mathcal{B}}(t_{2})=0\). These transitions ensure that for every \(u\in\Sigma^{*}\) that \(\mathcal{A}\) accepts, there exists a run of \(\mathcal{B}\) on \(u\#\), ending in \(q_{\infty}\) with a value of \(0\).
    * For every \(t_{1}=(q,\#,q_{\infty})\in\delta_{\mathcal{B}}\) or \(t_{2}=\big{(}(q,1),\#,q_{\infty}\big{)}\in\delta_{\mathcal{B}}\), such that \(q\in Q\setminus F\), we have \(\gamma_{\mathcal{B}}(t_{1})=0\) and \(\gamma_{\mathcal{B}}(t_{2})=1\).
    * \(\gamma_{\mathcal{B}}\big{(}(q_{\infty},\tau,q_{\infty})\big{)}=0\).

An example of the construction is given in Figure 24.

Observe that for every infinite word \(w\in\Sigma^{\omega}\), we have \(\mathcal{B}(w)=\lim_{n\to\infty}\frac{1}{2^{n}}=0\). In addition, for every finite word \(u\in\Sigma^{*}\) and infinite word \(w\in(\Sigma\cup\{\#\})^{\omega}\), we have \(\mathcal{B}(u\cdot\#\cdot w)=0\Leftrightarrow\) there exists a run of \(\mathcal{B}\) on \(u\cdot\#\) with a final transition \((p,\#,q_{\infty})\) or \(\big{(}(p,1),\#,q_{\infty}\big{)}\) such that \(p\in F\Leftrightarrow\) there exist \(p\in F\) and a run of \(\mathcal{A}\) on \(u\) with \(p\) as the final state \(\Leftrightarrow\)\(u\in L(\mathcal{A})\). Hence \(\mathcal{A}\) is universal iff \(\mathcal{B}\equiv 0\). Also, for every finite word \(u\in\Sigma^{*}\) and infinite word \(w\in(\Sigma\cup\{\#\})^{\omega}\), we have \(\mathcal{B}(u\cdot\#\cdot w)\leq 0\Leftrightarrow u\in L(\mathcal{A})\). Hence \(\mathcal{A}\) is universal iff \(\mathcal{B}\) is universal with respect to the threshold \(0\), non-strict inequality and infinite words.

Figure 24. An example of the reduction defined in the proof of Lemma 5.7.

**Lemma 5.8**.: _The universality(\(<\)) and exact-value problems of integral NDAs w.r.t. finite words are PSPACE-hard._

Proof.: Similarly to the proof of Lemma 5.6, we show a polynomial reduction from the problem of NFA universality to the problems of NDA universality and exact-value. The reduction
is similar to the one provided in the proof of Lemma 5.6, but changing the transition weights in the constructed NDA \(\mathcal{B}\), such that for every finite word \(u\), we have \(\mathcal{B}(u)<0\) if and only if \(\mathcal{A}\) accepts \(u\), and \(\mathcal{B}(u)=0\) otherwise. This provides reductions to both the universality and exact-value problems.

Formally, given an NFA \(\mathcal{A}=\langle Q,\Sigma,\Delta,Q_{0},F\rangle\), we construct a 2-NDA \(\mathcal{B}=\langle\Sigma,Q\cup\{q_{acc},q_{\infty}\},Q_{0},\Delta\cup\delta_{\mathcal{B}},\gamma_{\mathcal{B}}\rangle\) where:

* \(\delta_{\mathcal{B}}=\big{\{}(q,\sigma,q_{acc})\ \big{|}\ \text{there exist }q^{\prime}\in F\text{ and }(q,\sigma,q^{\prime})\in\Delta\big{\}}\cup\big{\{}(q_{acc},\sigma,q_{\infty})\ \big{|}\ \sigma\in\Sigma\big{\}}\cup\big{\{}(q_{\infty},\sigma,q_{\infty})\ \big{|}\ \sigma\in\Sigma\big{\}}\).
* For every \(t\in\Delta\), we have \(\gamma_{\mathcal{B}}(t)=0\).
* For every \(t=(q,\sigma,q_{acc})\in\delta_{\mathcal{B}}\), we have \(\gamma_{\mathcal{B}}(t)=-1\). These transitions ensure that if a word \(w\) is accepted in \(\mathcal{A}\), then there exists a run of \(\mathcal{B}\) on \(w\) with a negative value.
* For every \(\sigma\in\Sigma\), we have \(\gamma_{\mathcal{B}}\big{(}(q_{acc},\sigma,q_{\infty})\big{)}=2\). These transitions ensure that only runs that "exit" the original structure of \(\mathcal{A}\) in the final transition will result in a negative value. The weight of \(2\), reduced by the fixed discount factor of \(2\), exactly compensates for the negative weight that was added in the transition that entered \(q_{acc}\).
* For every \(\sigma\in\Sigma\), we have \(\gamma_{\mathcal{B}}\big{(}(q_{\infty},\sigma,q_{\infty})\big{)}=0\). These transitions ensure that a run entering \(q_{\infty}\) will maintain the exact same value for every suffix walk added to it.

An example of the construction is given in Figure 25. Observe that the only negative weights in \(\mathcal{B}\) are on the transitions entering \(q_{acc}\), and at most one of them can be part of any run. All the runs not entering \(q_{acc}\) have a value of \(0\), and all the runs passing through \(q_{acc}\) in a transition that is not the final one will also have a value of \(0\). For every finite word \(w\in\Sigma^{+}\), we have that \(w\in L(\mathcal{A})\Leftrightarrow\) there exist \(q\in Q\), \(p\in F\) and a run \(r\) of \(\mathcal{A}\) on \(w\) with a final transition \((q,\sigma,p)\Leftrightarrow\) there exist \(q\in Q\) and a run \(r^{\prime}\) of \(\mathcal{B}\) on \(w\) with a final transition \((q,\sigma,q_{acc})\Leftrightarrow\) there exists a run \(r^{\prime}\) of \(\mathcal{B}\) on \(w\) such that \(\mathcal{B}(r^{\prime})<0\Leftrightarrow\mathcal{B}(w)<0\Leftrightarrow\mathcal{B}(w)\neq 0\). Hence \(\mathcal{A}\) is universal iff \(\mathcal{B}\) is universal(\(<\)) with respect to finite words and the threshold \(\nu=0\). Also, \(\mathcal{A}\) is universal iff there is no finite word \(w\) such that \(\mathcal{B}(w)=0\). Another special case left to handle is the empty word \(\varepsilon\), but this can be easily verified before constructing \(\mathcal{B}\) by checking if \(F\cap Q_{0}\neq\emptyset\).

Figure 25. An example of the reduction defined in the proof of Lemma 5.8.

We continue with the PSPACE upper bounds. The containment problem of NDAs was proved in [5] to be in PSPACE, using comparators to reduce the problem to language inclusion between Büchi automata. Our approach for the containment problem of NMDAs is different, and it also improves the complexity provided in [5] for NDAs (having a single discount factor), as we refer to a binary representation of weights, while [5] assumes a unary representation.
Footnote 4: Rational weights are assumed to have a common denominator, both by us and by [5]; in the latter it is stated implicitly, by providing the complexity analysis with respect to transition weights that are natural numbers.

Our algorithm for solving the containment problem between \(\theta\)-NMDAs \(\mathcal{A}\) and \(\mathcal{B}\) is a non-deterministic polynomial space algorithm that determines the opposite, meaning whether there exists a word \(w\) such that \(\mathcal{A}(w)-\mathcal{B}(w)<0\) for containment(\(\geq\)) or \(\mathcal{A}(w)-\mathcal{B}(w)\leq 0\) for containment(\(>\)), to conclude that the problems are in co-NPSPACE and hence in PSPACE. We perform the determinization of \(\mathcal{B}\) on the fly into a DMDA \(\mathcal{D}\), and simulate on the fly a \(\theta\)-NMDA for the difference between \(\mathcal{A}\) and \(\mathcal{D}\). We then non-deterministically guess a run \(r\) that witnesses a negative value of the difference automaton, while ensuring that the entire process only uses space polynomial in the size of the input automata. For meeting this space requirement, after each step of the run \(r\), the algorithm maintains _local data_ consisting of the current state of \(\mathcal{A}\), the current state of \(\mathcal{D}\), and a "normalized difference" between the values of the runs of \(\mathcal{A}\) and \(\mathcal{D}\) on the word generated so far. When the normalized difference goes below \(0\), the generated word \(w\) is a witness for \(\mathcal{A}(w)<\mathcal{D}(w)\); when it gets to \(0\), we have a witness for \(\mathcal{A}(w)=\mathcal{D}(w)\); and when it exceeds a certain _maximal recoverable difference_, which is polynomial in \(|\mathcal{A}|+|\mathcal{B}|\), no suffix can be added to \(w\) for getting a witness.

**Theorem 5.9**.: _For every choice function \(\theta\), the containment problem of \(\theta\)-NMDAs w.r.t. finite words is PSPACE-complete for both strict and non-strict inequalities._

Proof.: PSPACE hardness directly follows from Lemma 5.8 and Lemma 5.6. We provide a PSPACE upper bound. Consider a choice function \(\theta\), and \(\theta\)-NMDAs \(\mathcal{A}=\langle\Sigma,Q_{\mathcal{A}},\iota,\delta_{\mathcal{A}},\gamma_{\mathcal{A}},\rho_{\mathcal{A}}\rangle\) and \(\mathcal{B}\). We have that

\[\forall w.\mathcal{A}(w)>\mathcal{B}(w)\Leftrightarrow\not{\exists}w.\mathcal{A}(w)\leq\mathcal{B}(w)\Leftrightarrow\not{\exists}w.\mathcal{A}(w)-\mathcal{B}(w)\leq 0\]

and

\[\forall w.\mathcal{A}(w)\geq\mathcal{B}(w)\Leftrightarrow\not{\exists}w.\mathcal{A}(w)<\mathcal{B}(w)\Leftrightarrow\not{\exists}w.\mathcal{A}(w)-\mathcal{B}(w)<0\]

We present a nondeterministic algorithm that determines the converse of containment, namely whether there exists a word \(w\) such that \(\mathcal{A}(w)-\mathcal{B}(w)\leq 0\) for containment(\(>\)) or \(\mathcal{A}(w)-\mathcal{B}(w)<0\) for containment(\(\geq\)), while using polynomial space w.r.t. \(|\mathcal{A}|\) and \(|\mathcal{B}|\), to conclude that the problems are in co-NPSPACE and hence in PSPACE. Let \(\mathcal{D}=\langle\Sigma,Q_{\mathcal{D}},\{p_{0}\},\delta_{\mathcal{D}},\gamma_{\mathcal{D}},\rho_{\mathcal{D}}\rangle\) be a \(\theta\)-DMDA equivalent to \(\mathcal{B}\), as per Theorem 4.6.
Observe that the size of \(\mathcal{D}\) can be exponential in the size of \(\mathcal{B}\), but we do not store it in full; rather, we simulate it on the fly, and thus only store a single state of \(\mathcal{D}\) at a time. We will later show that indeed the intermediate data we use in each iteration of the algorithm only requires space polynomial in \(|\mathcal{A}|\) and \(|\mathcal{B}|\).

_Containment(\(\geq\))._ For providing a word \(w\in\Sigma^{+}\) such that \(\mathcal{A}(w)-\mathcal{B}(w)<0\), we nondeterministically generate on the fly a word \(w\), a run \(r_{w}\) of \(\mathcal{A}\) on \(w\), and the single run of \(\mathcal{D}\) on \(w\), such that \(\mathcal{A}(r_{w})-\mathcal{B}(w)=\mathcal{A}(r_{w})-\mathcal{D}(w)<0\). Observe that \(\mathcal{A}(w)\leq\mathcal{A}(r_{w})\), hence such \(w\) and \(r_{w}\) exist if and only if there exists a word \(w\) with \(\mathcal{A}(w)-\mathcal{B}(w)<0\). Let \(M_{\mathcal{A}}\), \(M_{\mathcal{B}}\), and \(M_{\mathcal{D}}\) be the maximal absolute weights in \(\mathcal{A}\), \(\mathcal{B}\), and \(\mathcal{D}\), respectively.

We start by guessing an initial state \(q_{in}\) of \(\mathcal{A}\) and setting a _local data_ storage of \(\langle q_{in},p_{0},0\rangle\). The local data will maintain the current state of \(\mathcal{A}\) and \(\mathcal{D}\) respectively, and a "normalized difference" between the value of the run in \(\mathcal{A}\) generated so far and the value of \(\mathcal{D}\) on the word generated so far, as formalized below. The algorithm iteratively guesses, given local data \(\langle q,p,d\rangle\), a letter \(\sigma\in\Sigma\) and a transition \(t=(q,\sigma,q^{\prime})\in\delta_{\mathcal{A}}(q,\sigma)\), and calculates the _normalized difference_ \(d^{\prime}=\rho_{\mathcal{A}}(t)\big{(}d+\gamma_{\mathcal{A}}(t)-\gamma_{\mathcal{D}}(p,\sigma)\big{)}\) between the values \(\mathcal{A}(r_{w})\) and \(\mathcal{B}(w)\), w.r.t. the word \(w\) and the run \(r_{w}\) generated so far. If \(d^{\prime}\) is bigger than the _maximal recoverable difference_ \(2S\), where \(S=M_{\mathcal{A}}+3M_{\mathcal{B}}\), we abort; if \(d^{\prime}<0\), the generated word \(w\) indeed witnesses that \(\mathcal{A}(w)<\mathcal{D}(w)\) (the _accept condition_ holds); and otherwise we continue and update the local data to \(\langle q^{\prime},\delta_{\mathcal{D}}(p,\sigma),d^{\prime}\rangle\). Observe that by the construction in the proof of Theorem 4.6, for every weight \(W\) in \(\mathcal{D}\) we have that \(|W|\leq 2T+M_{\mathcal{B}}\leq 3M_{\mathcal{B}}\), where \(T\) is the maximal difference between the weights in \(\mathcal{B}\). Hence \(S\geq M_{\mathcal{A}}+M_{\mathcal{D}}\) is polynomial w.r.t. \(|\mathcal{A}|\) and \(|\mathcal{B}|\), and can be calculated in polynomial space w.r.t. \(|\mathcal{A}|\) and \(|\mathcal{B}|\).

We show by induction on the length of the word \(w\) that whenever a word \(w\) and a run \(r_{w}\) are generated, the value \(d\) in the corresponding local data \(\langle q,p,d\rangle\) indeed stands for the normalized difference between \(\mathcal{A}(r_{w})\) and \(\mathcal{D}(w)\), namely

\[d=\rho_{\mathcal{A}}(r_{w})\big{(}\mathcal{A}(r_{w})-\mathcal{D}(w)\big{)} \tag{5.1}\]

For the base case we have a single-letter word \(w=\sigma\), and a single-transition run \(r_{w}=t\). Hence, \(d^{\prime}=\rho_{\mathcal{A}}(t)\big{(}d+\gamma_{\mathcal{A}}(t)-\gamma_{\mathcal{D}}(p,\sigma)\big{)}=\rho_{\mathcal{A}}(r_{w})\big{(}0+\mathcal{A}(r_{w})-\mathcal{D}(w)\big{)}=\rho_{\mathcal{A}}(r_{w})\big{(}\mathcal{A}(r_{w})-\mathcal{D}(w)\big{)}\).
For the induction step, consider an iteration whose initial local data is \(\langle q,p,d\rangle\), for a generated word \(w\) and run \(r_{w}\), that guessed the next letter \(\sigma\) and transition \(t\), and calculated the next local data \(\langle q^{\prime},p^{\prime},d^{\prime}\rangle\). Then we have \(d^{\prime}=\rho_{\mathcal{A}}(t)\big{(}d+\gamma_{\mathcal{A}}(t)-\gamma_{\mathcal{D}}(p,\sigma)\big{)}\). By the induction assumption, we get:

\[d^{\prime} =\rho_{\mathcal{A}}(t)\Big{(}\rho_{\mathcal{A}}(r_{w})\big{(}\mathcal{A}(r_{w})-\mathcal{D}(w)\big{)}+\gamma_{\mathcal{A}}(t)-\gamma_{\mathcal{D}}(p,\sigma)\Big{)}\]
\[=\rho_{\mathcal{A}}(r_{w})\rho_{\mathcal{A}}(t)\Big{(}\mathcal{A}(r_{w})+\frac{\gamma_{\mathcal{A}}(t)}{\rho_{\mathcal{A}}(r_{w})}-\mathcal{D}(w)-\frac{\gamma_{\mathcal{D}}(p,\sigma)}{\rho_{\mathcal{A}}(r_{w})}\Big{)}\]
\[=\rho_{\mathcal{A}}(r_{w}\cdot t)\Big{(}\mathcal{A}(r_{w}\cdot t)-\Big{(}\mathcal{D}(w)+\frac{\gamma_{\mathcal{D}}(p,\sigma)}{\rho_{\mathcal{A}}(r_{w})}\Big{)}\Big{)},\]

and since the discount-factor functions of \(\mathcal{A}\) and \(\mathcal{D}\) both agree with \(\theta\), we have

\[d^{\prime}=\rho_{\mathcal{A}}(r_{w}\cdot t)\Big{(}\mathcal{A}(r_{w}\cdot t)-\Big{(}\mathcal{D}(w)+\frac{\gamma_{\mathcal{D}}(p,\sigma)}{\rho_{\mathcal{D}}(w)}\Big{)}\Big{)}=\rho_{\mathcal{A}}(r_{w}\cdot t)\big{(}\mathcal{A}(r_{w}\cdot t)-\mathcal{D}(w\cdot\sigma)\big{)},\]

which provides the required result of the induction claim.

Next, we show that the accept condition holds iff there exist a finite word \(w\) and run \(r_{w}\) of \(\mathcal{A}\) on \(w\) such that \(\mathcal{A}(r_{w})-\mathcal{D}(w)<0\). Since for every finite word \(w\) we have \(\rho_{\mathcal{A}}(w)>0\), we conclude from Equation 5.1 that if \(d^{\prime}<0\) was reached for a generated word \(w\) and a run \(r_{w}\), we have that \(\mathcal{A}(r_{w})-\mathcal{D}(w)<0\). For the other direction, assume toward contradiction that there exist a finite word \(w\) and a run \(r_{w}\) of \(\mathcal{A}\) on \(w\) such that \(\mathcal{A}(r_{w})-\mathcal{D}(w)<0\), but the algorithm aborts after generating some prefixes \(w[0..i]\) and \(r_{w}[0..i]\). Meaning that \(\rho_{\mathcal{A}}(r_{w}[0..i])\big{(}\mathcal{A}(r_{w}[0..i])-\mathcal{D}(w[0..i])\big{)}>2S\geq 2M_{\mathcal{A}}+2M_{\mathcal{D}}\). Let \(W_{1}=\mathcal{A}(r_{w}[i+1..|r_{w}|-1])\) and \(W_{2}=\mathcal{D}^{\delta_{\mathcal{D}}(w[0..i])}(w[i+1..|r_{w}|-1])\). Observe that

\[0 >\mathcal{A}(r_{w})-\mathcal{D}(w)>\rho_{\mathcal{A}}\big{(}r_{w}[0..i]\big{)}\big{(}\mathcal{A}(r_{w})-\mathcal{D}(w)\big{)}\]
\[=\rho_{\mathcal{A}}\big{(}r_{w}[0..i]\big{)}\mathcal{A}\big{(}r_{w}[0..i]\big{)}+W_{1}-\big{(}\rho_{\mathcal{A}}(r_{w}[0..i])\mathcal{D}(w[0..i])+W_{2}\big{)}\]
\[>2M_{\mathcal{A}}+2M_{\mathcal{D}}+W_{1}-W_{2}\]

But since all the discount factors applied by \(\theta\) are greater than or equal to \(2\), we have that \(|W_{1}|\leq 2M_{\mathcal{A}}\) and \(|W_{2}|\leq 2M_{\mathcal{D}}\), leading to a contradiction.
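A minimal sketch of the single iteration maintained by this algorithm is given below; the on-the-fly simulation of \(\mathcal{D}\) is abstracted behind two hypothetical callbacks, the guessed transition is a hypothetical object with fields `target`, `weight` and `rho`, and exact rational arithmetic is assumed.

```python
# Sketch only: one iteration of the nondeterministic search in the proof of Theorem 5.9.
# d_weight(p, sigma) and d_next(p, sigma) simulate the determinized automaton D on the fly.
from fractions import Fraction

def containment_step(local, sigma, trans, d_weight, d_next, S):
    q, p, d = local                               # A-state, D-state, normalized difference
    d_new = trans.rho * (d + Fraction(trans.weight) - d_weight(p, sigma))
    if d_new < 0:
        return "accept", None                     # witness: A(w) < D(w) = B(w)
    if d_new > 2 * S:
        return "abort", None                      # beyond the maximal recoverable difference
    return "continue", (trans.target, d_next(p, sigma), d_new)
```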
To see that the algorithm indeed only uses space polynomial in \(|\mathcal{A}|\) and \(|\mathcal{B}|\), observe that the first element of the data storage is a state of \(\mathcal{A}\), only requiring space logarithmic in \(|\mathcal{A}|\); the second element is a state of \(\mathcal{D}\), requiring by Theorem 4.6 space polynomial in \(|\mathcal{B}|\); and the third element is a non-negative rational number bounded by \(2S\), whose denominator divides the product of the denominators of the weights in \(\mathcal{A}\) and \(\mathcal{D}\), and, as shown in the proof of Theorem 4.6, also the product of the denominators of the weights in \(\mathcal{A}\) and \(\mathcal{B}\); thus it requires space polynomial in \(|\mathcal{A}|\) and \(|\mathcal{B}|\). Finally, in order to compute this third element, we calculate a weight of a transition in \(\mathcal{D}\), which, by the proof of Theorem 4.6, only requires space polynomial in \(|\mathcal{B}|\).

_Containment(\(>\))._ The algorithm is identical to the one used for the containment(\(\geq\)) problem, changing the accept condition from \(d^{\prime}<0\) to \(d^{\prime}\leq 0\). This condition is met iff there exists a finite word \(w\) such that \(\mathcal{A}(w)-\mathcal{B}(w)\leq 0\). The proof is identical, modifying "\(<0\)" to "\(\leq 0\)" in all of the equations.

The algorithm for determining containment(\(\geq\)) in the infinite-words setting is similar to the one presented for finite words, with the difference that rather than witnessing a finite word \(w\) such that \(\mathcal{A}(w)-\mathcal{B}(w)<0\), we witness a finite prefix \(u\) (of an infinite word \(w\)) for which the normalized difference between \(\mathcal{A}(u)\) and \(\mathcal{B}(u)\) (taking into account the accumulated discount factor on \(u\)) drops below some fixed negative threshold.

**Theorem 5.10**.: _For every choice function \(\theta\), the containment problem of \(\theta\)-NMDAs w.r.t. infinite words and non-strict inequality is PSPACE-complete._

Proof.: PSPACE hardness directly follows from Lemma 5.7. We provide a PSPACE upper bound. Consider a choice function \(\theta\), and \(\theta\)-NMDAs \(\mathcal{A}\) and \(\mathcal{B}\). Analogously to the proof of Theorem 5.9, we present a nondeterministic algorithm that determines whether there exist a word \(w\) and a run \(r_{w}\) of \(\mathcal{A}\) on \(w\), such that \(\mathcal{A}(r_{w})-\mathcal{B}(w)<0\), and thus \(\mathcal{A}(w)-\mathcal{B}(w)<0\). The algorithm uses polynomial space w.r.t. \(|\mathcal{A}|\) and \(|\mathcal{B}|\), which shows that the problem is in co-NPSPACE and hence in PSPACE. The algorithm is identical to the one presented in the proof of Theorem 5.9, with the only difference that the condition for an infinite word \(w\) such that \(\mathcal{A}(w)-\mathcal{B}(w)<0\) is that we generate a finite word \(u\) and a run \(r_{u}\) of \(\mathcal{A}\) on \(u\) that result in local data with normalized difference \(d<-2S\). We will use the same notations as in the proof of Theorem 5.9. Observe that for any infinite word \(w\) and infinite walks \((\psi_{1},\psi_{2})\) of \((\mathcal{A},\mathcal{D})\) on \(w\) from any state in \((\mathcal{A},\mathcal{D})\), we have that \(2S\geq 2M_{\mathcal{A}}+2M_{\mathcal{D}}\geq\mathcal{A}(\psi_{1})-\mathcal{D}(\psi_{2})\).
If \(-2S>d=\rho_{\mathcal{A}}(r_{u})(\mathcal{A}(r_{u})-\mathcal{D}(u))\) was reached for a generated finite word \(u\), and a run \(r_{u}\) of \(\mathcal{A}\) on \(u\), then for any infinite suffix word \(w\) and a walk \(\psi_{1}\) of \(\mathcal{A}\) on \(w\) starting at \(\delta_{\mathcal{A}}(r_{u})\), we have that

\[0=-2S+2S>\Big{(}\rho_{\mathcal{A}}(r_{u})\big{(}\mathcal{A}(r_{u})-\mathcal{D}(u)\big{)}\Big{)}+\Big{(}\mathcal{A}(\psi_{1})-\mathcal{D}(\psi_{2})\Big{)}\]

where \(\psi_{2}\) is the walk of \(\mathcal{D}\) on \(w\) starting at \(\delta_{\mathcal{D}}(u)\). Hence,

\[0 >\mathcal{A}(r_{u})+\frac{\mathcal{A}(\psi_{1})}{\rho_{\mathcal{A}}(r_{u})}-\Big{(}\mathcal{D}(u)+\frac{\mathcal{D}(\psi_{2})}{\rho_{\mathcal{A}}(r_{u})}\Big{)}\geq\mathcal{A}(r_{u}\cdot\psi_{1})-\mathcal{D}(u\cdot w)\]
\[\geq\mathcal{A}(u\cdot w)-\mathcal{D}(u\cdot w)\]

For the other direction, assume that there exists an infinite word \(w\in\Sigma^{\omega}\) such that \(\mathcal{A}(r_{w})-\mathcal{D}(w)=-\epsilon<0\), where \(r_{w}\) is a run of \(\mathcal{A}\) on \(w\) that entails the minimum value. By an observation similar to the one presented in the proof of Theorem 5.9, we conclude that whenever a word prefix \(w[0..i]\) and a run \(r_{w}[0..i]\) are generated, the algorithm does not fulfill the abort condition. It is only left to show that there exist prefixes of \(w\) and \(r_{w}\) that result in \(d<-2S\). Indeed, there exists \(n_{1}\in\mathbb{N}\) such that \(\forall i\geq n_{1}.\mathcal{A}(r_{w}[0..i])-\mathcal{D}(w[0..i])<-\frac{\epsilon}{2}\), and there exists \(n_{2}\in\mathbb{N}\) such that \(\forall i\geq n_{2}.-\frac{\epsilon}{2}<-\frac{2S}{\rho(w[0..i])}\). Hence for \(n=\max\{n_{1},n_{2}\}\) we have \(\mathcal{A}(r_{w}[0..n])-\mathcal{D}(w[0..n])<-\frac{\epsilon}{2}<-\frac{2S}{\rho(w[0..n])}\), meaning that the algorithm will accept when \(w[0..n]\) and \(r_{w}[0..n]\) are generated.

As for the space analysis, the arguments presented in the proof of Theorem 5.9 also apply to the current algorithm, as the only relevant difference is that the third element in the data storage is now a rational number bounded between \(-2S\) and \(2S\), thus requiring at most double the space that was considered in the proof of Theorem 5.9, and hence remaining polynomial in \(|\mathcal{A}|\) and \(|\mathcal{B}|\).

To find a witness for strict non-containment in the infinite-words setting, we adapt the above proof by adding an accept condition for detecting convergence of the difference between the two automata values to the threshold value, which is the existence of a cycle with the same normalized difference.

**Theorem 5.11**.: _For every choice function \(\theta\), the containment problem of \(\theta\)-NMDAs w.r.t. infinite words and strict inequality is in PSPACE._

Proof.: We use the same algorithm as in Theorem 5.10, adding a new accept condition, which will identify the existence of an infinite word \(w\) and a run \(r_{w}\) of \(\mathcal{A}\) on \(w\), such that \(0=\mathcal{A}(r_{w})-\mathcal{B}(w)\). This new condition is reaching the same pair of states in \(\mathcal{A}\) and \(\mathcal{D}\) twice with the same value of normalized difference \(d\). Our NPSPACE algorithm can check this condition by guessing states \(q_{acc}\in Q_{\mathcal{A}}\), \(p_{acc}\in Q_{\mathcal{D}}\) and a normalized difference \(d_{acc}\), setting a flag when \(\langle q_{acc},p_{acc},d_{acc}\rangle\) is reached while the flag was clean, and accepting if it is reached while the flag was set.
If the condition is met after generating some prefix word and a run of \(\mathcal{A}\) on that word, we have cycles in both \(\mathcal{A}\) and \(\mathcal{D}\) for the same suffix word, leading to the same normalized difference. Meaning that there exist finite words \(u\) and \(v\), a run \(r_{u}\) of \(\mathcal{A}\) on \(u\) and a walk \(\psi_{v}\) of \(\mathcal{A}\) on \(v\) starting at \(\delta_{\mathcal{A}}(r_{u})\), such that for every \(i\in\mathbb{N}\), according to Equation 5.1, we have \(\frac{d_{acc}}{\rho(u\cdot v^{i})}=\mathcal{A}(r_{u}\cdot\psi_{v}^{i})-\mathcal{D}(u\cdot v^{i})\). Hence

\[0=\lim_{i\to\infty}\frac{d_{acc}}{\rho(u\cdot v^{i})}=\lim_{i\to\infty}\mathcal{A}(r_{u}\cdot\psi_{v}^{i})-\mathcal{D}(u\cdot v^{i})\]

resulting in \(\mathcal{A}(u\cdot v^{\omega})\leq\mathcal{A}(r_{u}\cdot\psi_{v}^{\omega})=\mathcal{B}(u\cdot v^{\omega})\).

For the other direction, we show that if there exist an infinite word \(w\) and a run \(r_{w}\) of \(\mathcal{A}\) on \(w\) such that \(\mathcal{B}(w)=\mathcal{A}(r_{w})\), then the new accept condition is met for some \(\langle q_{acc},p_{acc},d_{acc}\rangle\). Consider such \(w\) and \(r_{w}\), and observe that, similarly to the analysis shown in the proof of Theorem 5.9, the normalized difference between the value of every prefix of \(r_{w}\) and the value of the same-sized prefix of the single run of \(\mathcal{D}\) on \(w\) never exceeds the maximal recoverable difference. Hence, for every finite prefix \(w[0..i]\) of \(w\), we have \(d_{i}=\rho(w[0..i])\big{(}\mathcal{A}(r_{w}[0..i])-\mathcal{D}(w[0..i])\big{)}\). The representation size of each \(d_{i}\) is bounded by a polynomial in \(|\mathcal{A}|\) and \(|\mathcal{B}|\), hence the \(d_{i}\) values range over a finite set. Also, \(\mathcal{A}\) and \(\mathcal{D}\) have finitely many states, meaning that there exist \(j\neq k\in\mathbb{N}\), such that \(\delta_{\mathcal{A}}(r_{w}[0..j])=\delta_{\mathcal{A}}(r_{w}[0..k])=q_{acc}\), \(\delta_{\mathcal{D}}(w[0..j])=\delta_{\mathcal{D}}(w[0..k])=p_{acc}\), and \(d_{j}=d_{k}=d_{acc}\). Hence the accept condition is met when the \((\max\{j,k\})\)-sized prefixes of \(w\) and \(r_{w}\) are generated.

Combined with the results shown in the proof of Theorem 5.10, we conclude that there exist an infinite word \(w\) and a run \(r_{w}\) of \(\mathcal{A}\) on \(w\), such that \(\mathcal{A}(r_{w})-\mathcal{B}(w)\leq 0\) iff one of the accept conditions is met.

A PSPACE algorithm for equivalence directly follows from the fact that \(\mathcal{A}\equiv\mathcal{B}\) if and only if \(\mathcal{A}\geq\mathcal{B}\) and \(\mathcal{B}\geq\mathcal{A}\).

**Corollary 5.12**.: _The equivalence problem of tidy NMDAs is PSPACE-complete._

We continue with the universality problems, which are special cases of the containment problems.

**Theorem 5.13**.: _The universality problems of tidy NMDAs are in PSPACE. The universality(\(<\)) w.r.t. finite words, universality(\(\leq\)) w.r.t. finite words, and universality(\(\leq\)) w.r.t. infinite words are PSPACE-complete._

Proof.: We will show that the universality problems of tidy NMDAs are in PSPACE. Hardness directly follows from Lemma 5.8 for universality(\(<\)) with respect to finite words, from Lemma 5.6 for universality(\(\leq\)) with respect to finite words, and from Lemma 5.7 for universality(\(\leq\)) with respect to infinite words. Consider a tidy NMDA \(\mathcal{B}\), and a threshold \(\nu\).
The universality(\(<\)) problem is a special case of the containment(\(>\)) problem, replacing the automaton \(\mathcal{A}\) of the containment problem with a constant function that returns \(\nu\). Similarly, the non-strict universality is a special case of the non-strict containment. Accordingly, the algorithms for solving those problems are identical to the ones in the proofs of Theorem 5.9, Theorem 5.10 and Theorem 5.11, replacing all the references to the automaton \(\mathcal{A}\) by a "virtual" automaton implementing the constant function \(\nu\). For that purpose, the local data will be initialized with a normalized difference of \(d=\nu\) (instead of \(0\)), and when updated, we replace the addition of \(\gamma_{\mathcal{A}}(t)\) with \(0\), i.e., having \(d^{\prime}=\rho_{\mathcal{A}}(t)(d+0-\gamma_{\mathcal{D}}(p,\sigma))\). The maximal recoverable difference \(S\) will be calculated using \(M_{\mathcal{A}}=0\). The space requirement analysis is identical to that of Theorem 5.9, omitting the analysis of \(\mathcal{A}\).

**Theorem 5.14**.: _The exact-value problem of tidy NMDAs is in PSPACE (and PSPACE-complete w.r.t. finite words)._

Proof.: Consider a tidy NMDA \(\mathcal{B}\) and a threshold \(\nu\). The procedures for checking the existence of a word \(w\) such that \(\mathcal{B}(w)=\nu\) are similar to the procedures used in Theorem 5.11 and Theorem 5.9 for the containment(\(>\)) problems, replacing the automaton \(\mathcal{A}\) with a "virtual" NMDA for the constant function \(\nu\), as in the proof of Theorem 5.13, and using only the accept conditions that determine \(\nu-\mathcal{B}(w)=0\). For the finite-words case, the accept condition is generating a word whose normalized difference is \(d=0\). An analysis similar to the one shown in the proof of Theorem 5.9, replacing "\(<0\)" in the equations with "\(=0\)", proves the correctness. For the infinite-words case, the accept condition is the one presented in the proof of Theorem 5.11, which determines the convergence to \(\nu\). In the proof of Theorem 5.11 we showed that this accept condition determines the existence of an infinite word \(w\) such that \(\nu-\mathcal{B}(w)=0\). In both problems we also abort if the normalized difference gets below \(-2S\), to preserve the polynomial space usage. Hardness with respect to finite words directly follows from Lemma 5.8.

Considering deterministic automata, all of the above decision problems are in PTIME.

**Theorem 5.15**.: _The non-emptiness, containment, equivalence and universality problems of integral DMDAs are in PTIME for both finite and infinite words._

Proof.: The complexity of the non-emptiness problem directly follows from Theorem 5.4, Theorem 5.3 and Theorem 5.5. We will now show that the containment problems are special cases of the emptiness problems when swapping the strictness of the problem ("\(>\)" becomes "\(\leq\)" and "\(\geq\)" becomes "\(<\)"). Consider integral DMDAs \(\mathcal{A}\) and \(\mathcal{B}\). According to Theorem 4.8, we can construct an integral DMDA \(\mathcal{C}\equiv\mathcal{A}-\mathcal{B}\) in linear time. Observe that for all words \(w\), \(\mathcal{A}(w)>\mathcal{B}(w)\Leftrightarrow\) for all words \(w\), \(\mathcal{C}(w)>0\Leftrightarrow\) there is no word \(w\) s.t. \(\mathcal{C}(w)\leq 0\). Meaning that \(\mathcal{A}\) is contained(\(>\)) in \(\mathcal{B}\) iff \(\mathcal{C}\) is empty(\(\leq\)) with respect to the threshold \(0\).
Similarly, \(\mathcal{A}\) is contained(\(\geq\)) in \(\mathcal{B}\) iff \(\mathcal{C}\) is empty(\(<\)) with respect to the threshold \(0\). Equivalence is a special case of containment(\(\geq\)) as in Corollary 5.12, and the universality problems are special cases of the containment problems when setting \(\mathcal{B}\) to be the input DMDA and \(\mathcal{A}\) to be a constant DMDA that gets the value of the input threshold on every word.

Observe that since Theorem 5.3 and Theorem 5.4 are also valid for general NMDAs, having discount factors that are not necessarily integral, the results of Theorem 5.15 are also valid for general DMDAs, considering all the problems with respect to infinite words, and the problems of non-emptiness(\(<\)), containment(\(\geq\)), universality(\(\leq\)) and equivalence w.r.t. finite words.

## 6. Conclusions and Future Work

The measure functions most commonly used in the field of quantitative verification, whether for describing system properties [13, 21, 36], automata valuation schemes [9, 10, 18, 5], game winning conditions [4, 22, 47], or temporal specifications [1, 6, 20, 42], are the limit-average (mean payoff) and the discounted-sum functions. Limit-average automata cannot always be determinized [18] and checking their (non-strict) universality is undecidable [22]. Therefore, the tendency is to only use deterministic such automata, possibly with the addition of algebraic operations on them [14].

Discounted-sum automata with arbitrary rational discount factors also cannot always be determinized [18] and are not closed under algebraic operations [10]. Yet, with integral discount factors, they do enjoy all of these closure properties and their decision problems are decidable [10]. They thus provide a very interesting automata class for quantitative verification. Yet, they have the main drawback of allowing only a single discount factor.

We define a rich class of discounted-sum automata with multiple integral factors (tidy NMDAs) that strictly extends the expressiveness of automata with a single factor, while enjoying all of the good properties of the latter, including the same complexity of the required decision problems. We thus believe that tidy NMDAs can provide a natural and useful generalization of integral discounted-sum automata in all fields, and especially in quantitative verification of reinforcement learning applications, as novel approaches in this field extend the single discount factor that is used in the calculation of the expected return value to multiple ones [34, 26, 39, 46, 32].

While we show that the containment problem of two tidy NMDAs with the same choice function is decidable, and of general integral NMDAs is undecidable, we leave for future work the question with respect to two tidy NMDAs with different choice functions. Though the problem with respect to two NDAs with different discount factors is decidable in PSPACE [8], we believe that considering two different choice functions requires more involved techniques.
2310.19242
Rainbow Stars and Rota's Basis Conjecture for Graphic Matroids
Let $G$ be a connected multigraph with $n$ vertices, and suppose $G$ has been edge-colored with $n-1$ colors so that each color class induces a spanning tree. Rota's Basis Conjecture for graphic matroids posits that one can find $n-1$ mutually edge-disjoint rainbow spanning trees. In a recent paper, Maezawa and Yazawa have shown that the conjecture holds if one assumes that the color classes induce spanning stars. We delve further into the star case to explore some extreme subcases including: all stars with different centers, the same center, or one of two centers. In addition, we identify the cases in which a graph composed of monochromatic stars can be decomposed into rainbow stars. We also show that the statement is false if one replaces `stars' with `paths'.
Anant Asthana, Shreev Goyal
2023-10-30T03:18:13Z
http://arxiv.org/abs/2310.19242v2
# Rainbow stars and Rota's basis conjecture for graphic matroids

###### Abstract.

Let \(G\) be a connected multigraph with \(n\) vertices, and suppose \(G\) has been edge-colored with \(n-1\) colors so that each color class induces a spanning tree. Rota's Basis Conjecture for graphic matroids posits that one can find \(n-1\) mutually edge-disjoint rainbow spanning trees. In a recent paper, Maezawa and Yazawa have shown that the conjecture holds if one assumes that the color classes induce spanning stars. We delve further into the star case to explore some extreme subcases including: all stars with different centers, the same center, or one of two centers. In addition, we identify the cases in which a graph composed of monochromatic stars can be decomposed into rainbow stars. We also show that the statement is false if one replaces 'stars' with 'paths'.

## 1. Introduction

Rota's basis conjecture is a well-known open question in matroid theory that involves rearranging elements in a given set of bases to produce other 'rainbow' bases. The statement of the conjecture first appeared in a paper of Huang and Rota [1]. For the case of graphic matroids (which is our main focus here), the conjecture can be stated as follows.

**Conjecture 1.1**.: _Let \(G\) be a connected graph on \(n\) vertices, and suppose \(G\) has been edge colored with \(n-1\) colors so that each color class induces a monochromatic spanning tree. Then \(G\) can be decomposed into \(n-1\) disjoint rainbow spanning trees._

We refer to Section 2 for any undefined terms. Note that Rota's Basis Conjecture is vacuous if the underlying graph \(G\) is simple (the relevant number of disjoint spanning trees cannot exist), so here we are always working with graphs that have parallel edges. According to Davies and McDiarmid [2], graphs with no \(K_{4}\) minor satisfy the conjecture. In a recent preprint, Maezawa and Yazawa [3] established Rota's Basis Conjecture for graphic matroids in the special case that each color class induces a monochromatic _spanning star_. They prove that under this assumption \(G\) can be decomposed into \(n-1\) disjoint rainbow trees. Even for this special case, their proof is rather long and technical.

In this note we study special cases of Conjecture 1.1 involving arrangements of monochromatic stars and how one can find resulting rainbow spanning stars or trees. In particular, we consider the case of all stars having different centers, and the case of all stars having their centers among at most two vertices. Our main result states that if the color classes induce monochromatic stars, then one can find a collection of disjoint rainbow _stars_ if and only if all the given stars share the same center or all have different centers. See Theorem 3.6.1. We also consider the case of induced paths and show, via a counterexample, that a similar statement does not hold in this case. See Proposition 4.1 for details. Along the way, we also consider the question of _how many_ collections of rainbow trees (or stars) can be found in a given edge colored graph (Rota's Conjecture states that at least one such collection exists).
### Stars-to-Stars and Stars-to-Trees

Let \(G\) be a graph on \(n\) vertices that has been edge colored with \(n-1\) colors so that each monochromatic subgraph induces a star. Then we say **Stars-to-Stars** holds for \(G\) if one can find a collection of \(n-1\) disjoint rainbow stars.

### All Stars Have Different Centers

Suppose \(G\) is a graph on \(n\) vertices that has been edge colored with \(n-1\) colors so that each monochromatic subgraph induces a spanning star, and suppose the stars have pairwise distinct centers \(v_{1},\ldots,v_{n-1}\); let \(v_{n}\) denote the remaining vertex. Now, we claim that this graph can be decomposed into \(n-1\) rainbow stars. Let \(c_{j}\) denote the color of the monochromatic star with center at \(v_{j}\); we will denote the whole monochromatic star as \(S_{j}\). To construct \(RS_{j}\), the rainbow star with center at vertex \(v_{j}\), we do the following:

1. We choose the edge \(e_{jn}\in S_{j}\) to connect node \(v_{n}\) to \(v_{j}\) such that \(e_{jn}\) has color \(c_{j}\).
2. Then, for any node \(v_{i}\) such that \(i\neq n,j\), we choose the edge \(e_{ji}\in S_{i}\) between \(v_{i}\) and \(v_{j}\) to connect node \(v_{i}\) to \(v_{j}\).
Thus, \[RS_{j}=\{e_{jn}\}\cup\{e_{ji}\mid i\neq n,\ i\neq j,\ e_{ji}\in S_{i}\},\quad\text{where }e_{jn}\in S_{j}.\] For instance, consider the following graph on \(n=4\) vertices: The nodes of this graph have already been rearranged as detailed above. The center vertex has also been colored black to distinguish it from the centers of the monochromatic spanning stars. Notice that this graph is induced by the following 3 monochromatic stars: In addition, the center of each star has been colored with the same color as the edges of that monochromatic star. We follow the algorithm outlined in the proof to dissect the construction into 3 rainbow stars, as shown below. Note that all of these rainbow stars are disjoint and use all the edges of the original graph. In other words, each rainbow star is built by connecting a colored point to the center point with its own color, and then receiving a color from every other colored point. We can guarantee that all edges are used exactly once. Thus, the proof is complete. ### Example with 5 nodes The same process can be used to decompose a graph with 5 vertices (induced by 4 monochromatic spanning stars) to create 4 rainbow stars, as depicted below. Shown above are the 4 monochromatic stars that comprise the graph in the previous figure. Following the algorithm described in the previous proof, we connect the center of each color-induced star to the center vertex using its own color, and we connect that same center to the remaining vertices using the respective colors of the stars whose center is at that remaining vertex. This algorithm can be used for an arbitrary number of points. In the context of Rota's Conjecture, a natural question to ask is _how many_ sets of rainbow spanning objects one can find in certain contexts. In the case of finding rainbow stars from monochromatic stars, we have the following observation. **Proposition 3.1**.: _Suppose \(G\) is a graph on \(n\) vertices that has been edge colored with \(n-1\) colors so that the induced monochromatic subgraphs are all stars with different centers. Then there exists a unique collection of \(n-1\) disjoint spanning rainbow stars._ Proof.: If we take a closer look, we can identify a few points to consider: 1. The center node cannot be the center of a rainbow star, because such a star would use up all the edges incident to the center, and the remaining rainbow stars could then not reach the center. 2. Each vertex that is the center of a colored star has \(n-1\) incident edges of its own color and one edge from each of the other \(n-2\) stars. 3. Each rainbow star is created by combining the lone edge to the center plus the collection of edges from the other stars. This means there is one and only one way to create each rainbow star, so the total number of collections of disjoint spanning rainbow stars is one. ### All Stars Have the Same Center **Theorem 3.2.1**.: _Stars-to-Stars holds if all monochromatic stars have the same center._ Proof.: Say a graph \(G=(V,E)\) satisfies the conditions of the statement. We will construct the resulting rainbow stars. Let \(v_{i}\) denote any given vertex of the graph, with \(v_{0}\) signifying the common center of all the monochromatic stars. For \(1\leq j\leq n-1\), we define the \(j\)th resulting star \(RS_{j}\subset E\) in the following manner: \[RS_{j}=\{e_{v_{0}v_{j+k-1}}\mid e_{v_{0}v_{j+k-1}}\in S_{k},1\leq k\leq n-1\},\] where \(j+k-1\) is taken mod \(n-1\). For instance, in the example below on \(4\) vertices and \(3\) colors, the following trees result. Notice how the colors are merely rotated between trees.
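Both constructions above are concrete enough to check mechanically. The following Python sketch is not part of the original argument, and the vertex and color labels are chosen here purely for convenience: it builds the decomposition for stars with pairwise distinct centers and the rotation construction of Theorem 3.2.1, and verifies that every colored edge is used exactly once and that each resulting star is rainbow and spanning.

```python
def distinct_centers_decomposition(n):
    """Construction from the proof above: star S_j (color j) is centered at v_j for
    j = 1..n-1 and vertex v_n carries no star.  RS_j takes the edge (v_j, v_n) in its
    own color j and the edge (v_j, v_i) in color i for every other center i.
    Edges are encoded as (min endpoint, max endpoint, color) triples."""
    rainbow = []
    for j in range(1, n):
        rs = [(min(j, n), max(j, n), j)]
        rs += [(min(j, i), max(j, i), i) for i in range(1, n) if i != j]
        rainbow.append(rs)
    return rainbow


def same_center_decomposition(n):
    """Rotation construction of Theorem 3.2.1: all stars share the center v_0, and
    RS_j colors the edge (v_0, v_m) with ((m - j) mod (n-1)) + 1."""
    return [[(0, m, ((m - j) % (n - 1)) + 1) for m in range(1, n)]
            for j in range(1, n)]


def is_rainbow_star_decomposition(rainbow_stars, n):
    """Each star uses n-1 distinct colors and spans all n vertices, and no colored
    edge is used twice; (n-1)^2 distinct edges in total means every edge is used."""
    seen = []
    for rs in rainbow_stars:
        if len({c for (_, _, c) in rs}) != n - 1:
            return False
        if len({v for e in rs for v in e[:2]}) != n:
            return False
        seen += rs
    return len(seen) == len(set(seen)) == (n - 1) ** 2


for n in range(2, 8):
    assert is_rainbow_star_decomposition(distinct_centers_decomposition(n), n)
    assert is_rainbow_star_decomposition(same_center_decomposition(n), n)
```

In the distinct-centers case the two parallel copies of an edge between two star centers are distinguished only by their colors, which is exactly how the decomposition assigns one copy to each of the two rainbow stars involved.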
Once again we are interested in how many sets of rainbow spanning trees can be found in this context, and in particular we consider the following. **Question 3.2.2**.: _From a configuration with all monochromatic stars sharing the same center, how many collections of \(n-1\) spanning rainbow stars are possible?_ Returning to the \(n=4\) example where there are \(3\) rainbow stars, there exist exactly \(2\) distinct unordered collections, as illustrated below. For \(n=2,3\), the number of collections is \(1\). For \(n=4\), the number of collections increases to \(2\). For \(n=5\), the number of collections jumps to \(24\). \(n=6\) yields an even larger result that is much more difficult to calculate by hand (see table). In fact, we claim the following: **Theorem 3.2.3**.: _For a graph on \(n\) vertices where all induced spanning monochromatic stars share the same center, the number of collections of \(n-1\) disjoint spanning rainbow stars is_ \[\Omega(n)=\frac{L_{n-1}}{(n-1)!},\] _where \(L_{n-1}\) is the number of Latin squares of size \(n-1\)._ Proof.: The strategy involves representing the set of all edges of all stars as a matrix. Consider each resulting star as a row of \(n-1\) elements, where the \(i\)th element in the row is the color of the edge connecting \(v_{0}\) and \(v_{i}\). Considering all the resulting rainbow stars at once, we can combine \(n-1\) rows/stars of \(n-1\) elements each into a **rainbow star matrix**. In other words, the \(i\)th star would be \(i\)th row in the matrix, and the \(j\)th edge in the \(i\)th star would be the \(j\)th element in the \(i\)th row. For instance, in the example shown below, the leftmost star would represent the row \(\{R,B,G\}\), the next would be \(\{G,R,B\}\), and the last would be \(\{B,G,R\}\), where R=red, B=blue, and G=green. This gives us the matrix: \[\begin{bmatrix}\text{R}&\text{B}&\text{G}\\ \text{G}&\text{R}&\text{B}\\ \text{B}&\text{G}&\text{R}\end{bmatrix}\] Notice that this rainbow star matrix possesses two special properties: 1. Each element within a given row is distinct. 2. Each element within a given column is distinct. The first condition is due to the fact that every edge within a rainbow star must have a different color by definition. The second condition holds because an edge of a given color connecting the same pair of vertices cannot be used in two different rainbow stars (as per the definition of decomposition). Thus, such a rainbow star matrix is really just a Latin square of size \(n-1\) in disguise. Therefore, the number of such rainbow star matrices is \(\text{L}_{n-1}\), i.e. the number of size \(n-1\) Latin squares. However, we ignore the order of the resulting trees, as this would be rearranging stars in a collection which wouldn't count as a distinct collection. If we were to fix a color onto the diagonal of the rainbow star matrix, (in the above case red), then the real number of collections would be \(\text{L}_{n-1}\) divided by the number of ways to permute the number of rows, \((n-1)!\) Thus, the number of ways to decompose a graph consisting of \(n-1\) monochromatic stars sharing a center into rainbow stars is \(\frac{\text{L}_{n-1}}{(n-1)!}\). An explicit formula due to Shao and Wei [4] for \(\text{L}_{n}\) is given below. 
\[\text{L}_{n}=n!\sum_{A\in\text{B}_{n}}(-1)^{\sigma_{0}(A)}\binom{\text{per}\, A}{n} \tag{1}\] Here, \(\text{B}_{n}\) is the set of \(n\times n\) matrices with entries in \(\{0,1\}\), \(\sigma_{0}(A)\) is the number of zero elements in \(A\), and per \(A\) is the permanent of the matrix \(A\). Fitting this into our equation \(\frac{\text{L}_{n-1}}{(n-1)!}\), the \((n-1)!\)s cancel out nicely, leaving us with: \[\Omega(n)=\sum_{A\in\text{B}_{n-1}}(-1)^{\sigma_{0}(A)}\binom{\text{per}A}{n-1} \tag{2}\] Values for \(\Omega(n)\) grow very quickly, as shown below. \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \(\Omega(n)\) & 1 & 1 & 1 & 2 & 24 & 1344 & 1128960 & 12198297600 \\ \hline \end{tabular} ### Extension of 3.2 to Trees **Theorem 3.3.1**.: _Suppose \(G\) has been edge colored so that each color class induces \(n-1\) copies of the same monochromatic spanning tree. Then Rota's Basis Conjecture holds._ Our argument stated in section 3.1 isn't restricted to just stars with the same center case. It can be extended to any collection of monochromatic spanning trees that are all the same. This is because for each edge on a rainbow tree, we pick edges from a set of \(n-1\) edges on \(n-1\) trees. Therefore, the same conversion to a rainbow matrix holds, and the Latin Square argument for identical stars can be used for identical trees (identical except for color, of course). The only difference is that we have to assign each edge to a column in the matrix, giving us a **rainbow tree matrix**. We present an example of how our argument applies to a set of identical trees when \(n=4\). The two resulting collections of rainbow trees would look like this. For the matrix, we let the first column be the left edge, the middle column to be the diagonal edge, and the right column to be the right edge. The corresponding rainbow tree matrices would be the following, where the left matrix represents the top collection and the right matrix represents the bottom matrix. \[\begin{bmatrix}R&B&G\\ G&R&B\\ B&G&R\end{bmatrix}\&\begin{bmatrix}R&G&B\\ B&R&G\\ G&B&R\end{bmatrix}\] This method of counting the number of collections will hold true as long as all monochromatic spanning trees within a collection are identical. ### Stars-to-stars Can Fail We next consider a situation where monochromatic stars do not lead to rainbow stars. Consider a graph \(G\) on \(4\) vertices, which is edge-colored in such a way that two of the induced monochromatic stars share a center. Below, we see that the red and blue stars share the top left corner as their center. We also run into the problem where we can't color our vertex according to its star, so we'll just mix the colors of the stars with internal vertices at that spot, conveniently red and blue make purple. We will mark the bottom left corner with a black vertex. There is only one possible way to decompose G into 3 rainbow trees, ignoring symmetry between the red and blue edges: Notice that only one of the trees above is a rainbow star. Thus, it is impossible to decompose G into rainbow stars. ### All Stars Have a Center that Falls on One of Two Nodes Although an arbitrary arrangement of monochromatic stars does not necessarily lead to a collection of rainbow stars, we can give another proof of Rota's Conjecture in a special case. **Theorem 3.5.1**.: _Suppose a graph G on \(n\) vertices is edge colored such that it consists of \(n-1\) monochromatic spanning stars that have a center on one of two vertices. 
Then, it is possible to decompose G into \(n-1\) disjoint spanning rainbow trees._ Proof.: Suppose we have a graph \(G\) with \(n_{1}\) monochromatic stars centered on a vertex \(j\) and \(n_{2}\) monochromatic stars centered on a vertex \(k\), where \(n_{1}+n_{2}=n-1\). We want to decompose such a graph into \(n-1\) rainbow trees. We will construct these \(n-1\) rainbow trees from scratch. First, some notation: let \(C_{j},C_{k}\) denote the sets of colors that belong to \(j\) and \(k\), respectively. Then \(|C_{j}|=n_{1}\) and \(|C_{k}|=n_{2}\). For the vertices that serve as the centers of stars, we define an arbitrary 'order' of colors. For instance, 'for vertex \(k\), blue comes clockwise after red'. We will also reorder the colors within \(C_{k}\) and \(C_{j}\) to reflect this arbitrary choice of color ordering. Then, construct \(n-1\) trees using the following idea: * For each tree, we choose an edge \(e_{jk}\) between \(j\) and \(k\) whose color is distinct for each tree. * Pick any one of the trees-under-construction. Say that for one given tree, the edge connecting \(k\) and \(j\) is of the form \(e_{jk}\), where \(e_{jk}\in S_{j}\). Then, for the \(n_{1}-1\) remaining colors in \(C_{j}\), we place edges of the form \(e_{j\delta}\), where \(\delta=k+i\) and \(e_{j\delta}\in S_{i}\); this way, the order of the colors of the edges emanating from \(j\) matches the order that was arbitrarily determined previously. * After that, we will be left with \((n-1)-(n_{1})=n_{2}\) points that are still not connected to a vertex. To rescue those points, we will connect them each to \(k\); the order of the colors of the edges will follow the pre-determined order. Thus, we see a 'rotation' motif present throughout such constructions. For instance, say we have a graph on \(n=5\) vertices that consists of the following monochromatic stars. [Figure of the monochromatic stars omitted.] Finally, we follow a similar process for \(j\) but with the remaining vertices and follow the clockwise pre-determined ordering. To see why this idea does not produce any cycles or disconnections, note the following points: * Each vertex \(m\neq j,k\) is connected to **exactly one** of \(j\) or \(k\) (either \(m\) was connected to \(j\) through the second step, or it was one of the remaining \(n_{2}\) vertices that was connected to \(k\) through the third step). Also, every tree, by construction, has an edge connecting \(j\) to \(k\). Therefore, there are no disconnected points in our constructed rainbow trees. * Because there are no repeated edges or edges of the form \(e_{mr}\) in the resulting rainbow trees, where \(m,r\neq k,j\), there are no cycles. We can also guarantee that the collection of rainbow trees is disjoint; if one were given the original set of monochromatic trees and the orderings of the stars within a center, then it would be possible to tell which rainbow tree an edge would belong to. This is possible because for a center, we can place the remaining edges of the other colors emanating from that center based on the ordering of stars on that center, and then place the remaining edges of the stars with the other center based on the ordering of stars on that center. Being able to dictate which tree an edge is in guarantees disjointness. Thus, we can decompose \(G\) into \(n-1\) disjoint spanning rainbow trees.
### When Stars-to-Stars Holds Sections 3.1 and 3.2 suggest that it is possible to make rainbow stars from monochromatic stars with different or same centers, respectively, but a question to ponder is if these are the only cases of monochromatic stars which produce rainbow stars. A natural extension to the star problem addressed above is that of whether a graph induced by monochromatic spanning paths can be decomposed into a set of rainbow spanning paths. **Theorem 3.6.1**.: _Stars-to-stars holds if and only if all monochromatic stars share the same center or all have different centers._ Proof.: The 'if' implication has been established in Theorem 3.1.1 and Theorem 3.2.1. We are left to prove the converse. The statement is trivial for \(n=2\) because there is only \(1\) star consisting of \(1\) edge. For \(n\geq 3\), say we start out with a decomposition of a graph into monochromatic stars. In said decomposition, say there are \(s_{k}\) stars on vertex \(k\). Then the degree of vertex \(k\) is \[D_{k}=s_{k}(n-1)+(n-1)-s_{k}=(s_{k}+1)(n-2)+1.\] The \(s_{k}(n-1)\) comes from the number of stars on that center times the number of edges per star, and the \((n-1)-s_{k}\) comes from the number of stars not on that center times \(1\) for the number of edges that connect to vertex \(k\) per star. The right-hand side is a factored version of the equation. Now, consider a hypothetical decomposition of the same graph into rainbow stars. There must be exactly \(s_{k}\) rainbow stars centered at vertex \(k\). If there were more than \(s_{k}\) rainbow stars on vertex \(k\), we would have at least \((s_{k}+2)(n-2)+1\) edges incident to vertex \(k\), which is larger than \(D_{k}\). Likewise, if there were less than \(s_{k}\) rainbow stars on vertex \(k\), we would have at most \((s_{k})(n-2)+1\) edges incident to vertex \(k\), which is smaller than \(D_{k}\). Now, note that if a monochromatic star is centered at \(k\), then all edges of that color emanate from vertex \(k\). If that monochromatic star is not centered at \(k\), then only one edge of that color connects to vertex \(k\). Since each rainbow star must have exactly one of a certain color, exactly \(0\), \(1\), or \(n-1\) rainbow stars must be centered at vertex \(k\). There being \(0\) stars on a vertex must be included because there are \(n\) vertices and \(n-1\) stars, and by the Pigeonhole Principle, at least one vertex does not have a star centered at it. Therefore, \(s_{k}=0,1,n-1\), proving our theorem. Note, the \(1\) or \(n-1\) rainbow stars being centered at a point corresponds with Theorems 3.1.1 and 3.2.1, respectively. ## 4. Paths-to-Paths _Definition 4.1_.: A **spanning path** on a set of \(n\) vertices is a set of \(n-1\) edges such that 1. All vertices are included in the path (hence the name "spanning"), and 2. For a given spanning path, all vertices have degree at most \(2\). In other words, a spanning path can be drawn in one motion without lifting the pencil off of the paper. _Definition 4.2_.: Suppose \(G\) is a graph on \(n\) vertices that has been edge colored with \(n-1\) colors so that each monochromatic subgraph induces a spanning path. Then we say **Paths-to-Paths** holds for \(G\) if one can find a collection of \(n-1\) disjoint spanning rainbow paths. We next show that Paths-to-Paths is in general not possible. 
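Since spanning paths and rainbow decompositions are finite objects, small instances of Paths-to-Paths can be checked by exhaustive search. The sketch below is ours, not part of the paper; it assumes each edge is encoded as a (u, v, color) triple with u < v, so that parallel copies of an edge are distinguished only by their colors. It enumerates the rainbow spanning paths of an edge-colored multigraph and then looks for a partition of all colored edges into such paths.

```python
from itertools import permutations


def rainbow_spanning_paths(n, edges):
    """All spanning paths on vertices 0..n-1 whose edges carry pairwise distinct
    colors.  `edges` is a list of (u, v, color) triples with u < v; parallel edges
    of different colors may coexist when the monochromatic paths overlap."""
    by_pair = {}
    for u, v, c in edges:
        by_pair.setdefault((u, v), []).append((u, v, c))
    found = set()
    for order in permutations(range(n)):
        if order[0] > order[-1]:          # a path and its reversal are the same object
            continue
        pairs = [tuple(sorted((order[i], order[i + 1]))) for i in range(n - 1)]
        if not all(p in by_pair for p in pairs):
            continue

        def choose(i, chosen, used_colors):
            # pick one colored copy of each consecutive edge, all colors distinct
            if i == n - 1:
                found.add(frozenset(chosen))
                return
            for e in by_pair[pairs[i]]:
                if e[2] not in used_colors:
                    choose(i + 1, chosen + [e], used_colors | {e[2]})

        choose(0, [], set())
    return list(found)


def paths_to_paths_holds(n, edges):
    """Is there a partition of all colored edges into rainbow spanning paths?
    With n-1 colors each inducing a spanning path, such a partition has n-1 parts."""
    candidates = rainbow_spanning_paths(n, edges)
    target = frozenset(edges)

    def search(remaining, start):
        if not remaining:
            return True
        return any(candidates[i] <= remaining
                   and search(remaining - candidates[i], i + 1)
                   for i in range(start, len(candidates)))

    return search(target, 0)
```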
**Proposition 4.1**.: _For the edge coloring of the graph \(G\) on 4 vertices depicted below, there does not exist any collection of 3 disjoint rainbow paths._ Proof.: Suppose \(G\) is the edge colored graph on 4 nodes whose color classes induce the 3 monochromatic spanning paths shown below. Notice that the first vertex has only 3 total edges emanating from it (1 red, 1 blue, and 1 green). Thus, each of the 3 rainbow spanning paths must have the first vertex be either a starting or ending vertex; without loss of generality, we assume the first vertex to be the starting node for all 3 of the rainbow spanning paths. Now, consider constructing the rainbow spanning path that uses the red edge between the first node and the third node. The remaining edges in the rainbow spanning path must be either blue or green. Thus, the two possibilities for the rainbow spanning path are as presented below. However, notice that neither of the two options presented above is a path; both of them are stars. Therefore, the original graph cannot be decomposed into 3 rainbow spanning paths, establishing the claim. ## 5. Further thoughts We end with some discussion and ideas for future research. Further work regarding Conjecture 1.1 would include considering the following questions. * If the color classes induce monochromatic spanning paths, under which circumstances can we find spanning rainbow _paths_ (i.e. when does Paths-to-Paths hold)? * Can we prove Conjecture 1.1 for graphs where the color classes induce monochromatic spanning _paths_ (i.e. does Paths-to-Trees hold)? We also remark that Rota's Basis Conjecture makes sense in the more general context of matroid theory. Recall that a _matroid_ is a pair \((M,E)\) consisting of a finite ground set \(E\) and a set \(M\) of subsets of \(E\), called _independent sets_, satisfying: 1. \(\emptyset\in M\); 2. If \(X\in M\) and \(Y\subset X\) then \(Y\in M\); 3. If \(X,Y\in M\) and \(|X|>|Y|\), then there exists an element \(x\in X\setminus Y\) so that \(Y\cup\{x\}\in M\). The maximal elements of \(M\) are called the _bases_ of \(M\). The _rank_ of \(M\) is the cardinality of any (and hence every) basis. Rota's Conjecture can then be stated in full generality as follows. **Conjecture 5.1**.: _Suppose \(M\) is a rank \(n\) matroid with a collection of disjoint bases \(\mathcal{B}=\{B_{1},\ldots,B_{n}\}\). Then one can arrange the elements of \(\mathcal{B}\) into an \(n\times n\) matrix in such a way that the \(i\)th row consists of the elements of \(B_{i}\) and each column is also a basis of \(M\)._ In other words, if we think of each \(B_{i}\) as a color class, we obtain a collection of _colorful_ bases \(\mathcal{C}=\{C_{1},\ldots,C_{n}\}\) where each \(C_{i}\) contains exactly one element from each \(B_{j}\in\mathcal{B}\). In this paper, we considered matroids coming from a graph \(G=(V,E)\), where the ground set is given by the set \(E\) of edges, and independent sets are given by collections of edges that do not contain a cycle. Conjecture 5.1 has been established for all paving matroids by Geelen and Humphries in [5], and for matroids of rank at most 3 by Chan [6]. It was shown by Wild [7] that a stronger version of the conjecture holds for _strongly base orderable_ matroids. This class is closed under duality and taking minors, and includes all gammoids. In [8] it was shown that for any matroid one can always find \((1/2-o(1))n\) disjoint transversal bases (so that we are 'halfway' to Rota's Conjecture).
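For the graphic matroids considered in this paper, the independence oracle implicit in the discussion above is easy to make concrete: a set of edges is independent exactly when it is acyclic, which a union-find structure detects, and the exchange axiom (3) is what justifies greedily growing any independent set to a basis (a spanning forest). The following sketch is illustrative only and uses our own naming.

```python
class UnionFind:
    """Disjoint-set forest used to detect cycles among edges."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False       # a and b already connected: adding (a, b) closes a cycle
        self.parent[ra] = rb
        return True


def is_independent(n, edge_set):
    """Graphic-matroid independence: a set of edges on vertices 0..n-1 is
    independent iff it contains no cycle."""
    uf = UnionFind(n)
    return all(uf.union(u, v) for (u, v) in edge_set)


def extend_to_basis(n, independent, ground_set):
    """Greedy completion justified by the exchange axiom (3): keep adding any
    edge that preserves acyclicity until no more can be added."""
    basis = list(independent)
    for e in ground_set:
        if is_independent(n, basis + [e]):
            basis.append(e)
    return basis
```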
Recently, Rota's Basis Conjecture for realizable matrices over finite fields has been established in a probabilistic setting by Sauermann [9]. Interest in Rota's Basis Conjecture continues to this day, as evidenced by the recent Polymath project [10]. It would be interesting to consider Rota's Basis conjecture for other classes of matroids. ## Acknowledgements We would like to thank the 2022 Mathworks Honors Summer Math Camp for their support and encouragement. We would like to extend our thanks to Dr. Max Warshauer and Dr. Eugene Curtin of Texas State University for connecting us after said camp. We would finally like to thank Dr. Anton Dochtermann of Texas State University for continually supporting and guiding us throughout the research project and providing a great working environment.
2304.14821
Conditional logic as a short-circuit logic
Both two-valued and three-valued conditional logic (CL), defined by Guzm\'an and Squier (1990) and based on McCarthy's non-commutative connectives, axiomatise a short-circuit logic (SCL) that defines more identities than MSCL (Memorising SCL), which also has a two- and a three-valued variant. This follows from the fact that the definable connective that prescribes full left-sequential conjunction is commutative in CL. We show that in CL, the full left-sequential connectives and negation define Bochvar's three-valued strict logic. In two-valued CL, the full left-sequential connectives and negation define a commutative logic that is weaker than propositional logic because the absorption laws do not hold. Next, we show that the original, equational axiomatisation of CL is not independent and give several alternative, independent axiomatisations.
Jan A. Bergstra, Alban Ponse
2023-04-28T13:04:02Z
http://arxiv.org/abs/2304.14821v2
# Conditional logic as a short-circuit logic ###### Abstract Both two-valued and three-valued conditional logic (CL), defined by Guzman and Squier (1990) and based on McCarthy's non-commutative connectives, axiomatise a short-circuit logic (SCL) that defines more identities than MSCL (Memorising SCL), which also has a two- and a three-valued variant. This follows from the fact that the definable connective that prescribes full left-sequential conjunction is commutative in CL. We show that in CL, the full left-sequential connectives and negation define Bochvar's three-valued strict logic. In two-valued CL, the full left-sequential connectives and negation define a commutative logic that is weaker than propositional logic because the absorption laws do not hold. Next, we show that the original, equational axiomatisation of CL is not independent and give several alternative, independent axiomatisations. **Keywords & phrases:** Conditional logic, short-circuit evaluation, short-circuit logic, left-sequential connectives, Bochvar's logic ###### Contents * 1 Introduction * 2 Proposition algebra and short-circuit logics * 3 Conditional short-circuit logic, the two-valued case * 4 Conditional short-circuit logic, the three-valued case * 5 Independent axiomatisations of CL * 6 Discussion and conclusions
2302.06173
SWIFT: Expedited Failure Recovery for Large-scale DNN Training
As the size of deep learning models gets larger and larger, training takes longer time and more resources, making fault tolerance more and more critical. Existing state-of-the-art methods like CheckFreq and Elastic Horovod need to back up a copy of the model state (i.e., parameters and optimizer states) in memory, which is costly for large models and leads to non-trivial overhead. This paper presents SWIFT, a novel recovery design for distributed deep neural network training that significantly reduces the failure recovery overhead without affecting training throughput and model accuracy. Instead of making an additional copy of the model state, SWIFT resolves the inconsistencies of the model state caused by the failure and exploits the replicas of the model state in data parallelism for failure recovery. We propose a logging-based approach when replicas are unavailable, which records intermediate data and replays the computation to recover the lost state upon a failure. The re-computation is distributed across multiple machines to accelerate failure recovery further. We also log intermediate data selectively, exploring the trade-off between recovery time and intermediate data storage overhead. Evaluations show that SWIFT significantly reduces the failure recovery time and achieves similar or better training throughput during failure-free execution compared to state-of-the-art methods without degrading final model accuracy. SWIFT can also achieve up to 1.16x speedup in total training time compared to state-of-the-art methods.
Yuchen Zhong, Guangming Sheng, Juncheng Liu, Jinhui Yuan, Chuan Wu
2023-02-13T08:17:30Z
http://arxiv.org/abs/2302.06173v1
# SWIFT: Expedited Failure Recovery for Large-scale DNN Training ###### Abstract As the size of deep learning models gets larger and larger, training takes longer time and more resources, making fault tolerance more and more critical. Existing state-of-the-art methods like CheckFreq and Elastic Horovod need to back up a copy of the model state (i.e., parameters and optimizer states) in memory, which is costly for large models and leads to non-trivial overhead. This paper presents Swift, a novel recovery design for distributed deep neural network training that significantly reduces the failure recovery overhead without affecting training throughput and model accuracy. Instead of making an additional copy of the model state, Swift resolves the inconsistencies of the model state caused by the failure and exploits the replicas of the model state in data parallelism for failure recovery. We propose a logging-based approach when replicas are unavailable, which records intermediate data and replays the computation to recover the lost state upon a failure. The re-computation is distributed across multiple machines to accelerate failure recovery further. We also log intermediate data selectively, exploring the trade-off between recovery time and intermediate data storage overhead. Evaluations show that Swift significantly reduces the failure recovery time and achieves similar or better training throughput during failure-free execution compared to state-of-the-art methods without degrading final model accuracy. Swift can also achieve up to 1.16x speedup in total training time compared to state-of-the-art methods. Distributed DNN Training; Failure Resilience ## 1 Introduction Larger and larger deep neural networks (DNNs) have recently emerged for improved model performance [1, 2, 3]. Large DNN model training jobs typically use many accelerators (e.g., GPUs) and have long-running times [4]. For example, training a GPT-3 model [3] on 1024 A100 GPUs is estimated to take more than one month [5]. Job failures are common in a GPU training cluster [4]. For example, machine crashes and network failures happen occasionally, or higher priority jobs take up resources [4]. In these cases, the distributed DNN training job is interrupted, resulting in loss of the DNN model state (i.e., model parameters and optimizer states) and failure of the training job. Failures are more severe for large DNN model training jobs: increasing the number of machines will inevitably lead to an increased chance of failure; training large models takes days to months, making it more likely for failures to happen during the course. Recent works also echo this [6, 7]. Global checkpointing is the _de facto_ method for fault tolerance in deep learning (DL) frameworks [8, 9]. The training job periodically checkpoints the entire model state. All workers restart from the latest checkpoint when the job fails. Depending on the checkpointing frequency, this often results in several hours of lost computation time [10]. CheckFreq [10] achieves more frequent checkpoints by splitting the operation into two phases: first, the model state is copied in the GPU memory, called a _snapshot_, or to the CPU memory if the GPU memory is insufficient; in the second phase, the snapshot is written to the disk asynchronously. Elastic Horovod [11], a framework for elastic training, takes a similar approach, but without the second phase.
The reason is that Elastic Horovod assumes distributed data-parallel training, where each worker maintains a replica of the model state; during failure recovery, one of the surviving workers broadcasts the snapshot to other workers, and all workers restart training from the snapshot. Taking a snapshot is necessary for Elastic Horovod to prevent a corrupted state: if a worker crashes during the parameters update, the other workers are in an awkward situation - some parameters are updated while the others are not. We identify this problem as the _crash-consistency problem_ (SS2.3). However, as we shall see in SS2.2, for large DNN models, both methods can slow down the training due to the overhead of snapshotting. This paper studies a better failure resilience design for distributed DNN training that significantly reduces the recovery overhead without affecting training throughput and final model accuracy. One of our key observations is that many of the optimizers used for model state updates in DNN training are mathematically _invertible_. For example, stochastic gradient descent (SGD) only involves linear operators such as element-wise addition and scalar multiplication, and the inverse operators are straightforward. In case of a crash-consistency problem in distributed data-parallel training, we can restore the model states of the surviving workers to a consistent state by _undoing the update_ of the updated parameters (SS4). Therefore, we do not need to snapshot periodically as CheckFreq and Elastic Horovod do, reducing the overhead during failure-free execution to zero (except for periodic checkpoints). Since this approach exploits replicas of the model state in surviving workers for recovery, we name this recovery method _replication-based recovery_. However, replicas are not always available, even with data parallelism. For instance, some prior works advocate data parallelism only across multiple GPUs on the same machine to leverage high-speed intra-server interconnects such as NVLink [5, 12] to accelerate gradient synchronization. All replicas would be lost in the event of a machine failure. We then investigate another fundamental approach for fault tolerance in distributed systems - logging, which has been widely explored in data processing systems [13, 14, 15, 16]. We introduce _logging-based recovery_ (SS5) for pipeline-parallel training. In pipeline parallelism, workers form a chain topology and pass intermediate activations or gradients to the successor or predecessor worker using point-to-point communication. Figure 1(a) illustrates the One-Forward-One-Backward (1F1B) pipeline schedule [17] (SS2.1). With logging, each worker locally records all outgoing data to the adjacent workers on the other machine. Upon a failure, the replacements of the failed workers retrieve the logging data and replay the computation to recover the lost state. Moreover, we spread the logging data to surviving workers to have them assist in recovery (SS5.2). Logging _limits the recovery scope from the complete computation graph across all workers to the computation graph on the failed workers_, thus reducing the recovery time compared to global checkpointing. An example is given in Figure 1(b). To the best of our knowledge, we are the first to bring logging into distributed DL systems for failure resilience. However, logging-based recovery brings unique challenges. Logging needs to be done constantly during DNN training, and the overhead in runtime and space can be prohibitive.
Once a piece of logged data is missing, the original state cannot be recovered precisely. To reduce the runtime overhead of logging, we only log inter-machine communication data since failures often occur at machines rather than at individual workers on machines. Moreover, we perform logging asynchronously, storing the data in the background. We further utilize workers' idle time (i.e., bubble time in pipeline parallelism) to do logging. This way, logging is _off the critical path_ (SS5.1). To control the space overhead at a manageable level, we devise an algorithm to select only a subset of machines to log intermediate data (SS5.3). Selective logging trades the recovery time for space consumption. We design and implement Swift, including replication-based and logging-based recovery for expedited failure recovery. Our key contributions are summarized below: \(\triangleright\) We propose a novel mechanism called update-undo that resolves model state inconsistencies caused by the failure and enables failure recovery using replicas of the model state in data parallelism without creating additional copies. \(\triangleright\) We propose to use the logging method to achieve expedited failure recovery in pipeline parallelism. We use asynchronous logging, logging during the bubble time, and selective logging to reduce the runtime and space overhead. \(\triangleright\) We implement Swift in PyTorch and demonstrate its benefits on distributed training of large DNN models. For replication-based recovery in training Wide-ResNet-50 [18], Swift reduces recovery time by 98.9%, 98.1%, and 98.1% compared to global checkpointing, CheckFreq and Elastic Horovod, respectively. For logging-based recovery in training BERT [1] and ViT [19], Swift reduces recovery time by 57.3% and 76.3% compared to global checkpointing, respectively. Using traces collected in our experiments, we show that Swift can achieve up to 1.16x speedup in total training time compared to state-of-the-art methods. We have open-sourced Swift at [https://github.com/jasperzhong/swift](https://github.com/jasperzhong/swift). ## 2 Background and Motivation ### _Distributed DNN Training_ We focus on _synchronous_ distributed DNN training, where many workers on multiple machines collectively work on the latest DNN model iteratively. Each training iteration contains a forward computation pass (to compute a loss) and a backward pass (to compute the gradients), and the gradients computed are used for the model update [20]. Synchronous training ensures better model accuracy than asynchronous training and is thus popular for large-scale DNN training [5, 20, 21, 22]. **Data parallelism** is the most widely used paradigm for distributed DNN training [21, 22]. Input data is partitioned across workers. Each worker has a model replica and computes local gradients on a subset of data. Gradient synchronization is performed among workers in each iteration to ensure the consistency of model replicas. **Operator parallelism** is a solution to handle large DNNs by splitting an operator in a DNN model among multiple workers along non-batch axes [23]. Communication is needed to fetch the input data from other workers [2]. **Pipeline parallelism** splits a mini-batch into smaller micro-batches and pipelines them to the DNN model stages hosted on different workers so that workers can process different micro-batches simultaneously [12, 17, 24, 25]. Point-to-point communication is performed between workers hosting neighbor stages to transfer intermediate activations.
Synchronous pipeline parallelism schedules like GPipe [24] and One-Forward-One-Backward (1F1B) [25] flush the pipeline in each iteration, i.e., worker waiting for all in-flight micro-batches of the iteration to complete before moving on to the Fig. 1: Pipeline parallelism and logging example. next iteration. Despite better model accuracy, pipeline flush causes worker idling (i.e., _bubbles_) in pipeline execution [12, 24, 5, 25]. For GPipe and 1F1B, the ratio of the bubble time is \((p-1)/(m+p-1)\), where \(p\) is the number of stages and \(m\) is the number of micro-batches [5]. For the example shown in Figure 1a, the ratio of the bubble time is \(3/7\). This paper adopts 1F1B [25] because it has the same bubble time ratio but lower peak memory usage than GPipe [24]. Note that our approach is not limited to 1F1B. Recent works combine the three parallelism paradigms, called _3D parallelism_[26, 27, 5, 12, 2]. Figure 2 shows a hand-optimized parallelism plan in Megatron-LM [2, 5], a state-of-the-art training system for transformer language models. Although data parallelism is used in this example, the replicas reside on the same machine. If one machine fails, we lose the model state on that failed machine. ### _Problems on Snapshotting Large Models_ CheckFreq [10] and Elastic Horovod [11], state-of-the-art methods for fault tolerance, rely on the _snapshot_ operation. After updating the model state of the iteration, a copy of the model state (called a snapshot) is captured in GPU memory or copied to CPU memory if the GPU cannot hold it. Snapshotting can overlap with the next iteration's forward and backward pass. The next iteration of the update operation does not start until the snapshot operation is completed, leading to a checkpoint stall. However, DNN models have proliferated from millions to billions of parameters in recent years and become too large to fit into a single GPU [28]. It became increasingly difficult to fit a complete snapshot on a single GPU. In that case, the snapshot is copied to the CPU. We experimentally find that snapshotting to CPU memory is costly for large models and reduces training throughput. We train an enlarged Wide-ResNet-50 [18] model with a model state size of 9.8GB using data parallelism on two machines using 8 32GB V100 GPUs. The snapshot operation needs to copy the model state to the CPU. The model setting, the training setting, and the settings of CheckFreq and Elastic Horovod are described in SS7. During training, the GPU memory consumption reaches 30.4 GB, which cannot accommodate a snapshot. Figure 3 shows that at the time of snapshots (iterations 30, 60, and 90), the iteration time is significantly longer with CheckFreq and Elastic Horovod. After the snapshots, CheckFreq's iteration time is longer, showing that writing the snapshot to the disk also affects normal training. But global checkpointing causes large overhead (iteration 100) since it is synchronous. Interestingly, the checkpoint stall is indeed negligibly with CheckFreq, for only 0.2 milliseconds. This experiment shows that the snapshot operation can still incur non-trivial runtime overhead and slow down the training process. ### _Crash-consistency Problem_ Most DL frameworks [29, 8, 9, 26] adopt wait-free model updates, as illustrated in Figure 4. Model state update of a DNN layer can be performed as soon as the gradient of that layer is ready. If a worker crashes during the update, the other workers are in an inconsistent state, where some layers are updated, and the others are not. 
The problem exists not only in data parallelism, but also in pipeline parallelism, where the DNN model is spread across multiple workers. Model state updates occur at different times due to the dependencies of the computation graph, and a worker moves on to the next training iteration once the part of the model state it hosts has been updated. Such inconsistencies can lead to a degradation of the accuracy of the final trained model [6, 30]. Elastic Horovod solves the problem with the snapshot operation. However, it can be costly for large models (SS2.2). Another workaround is to wait for gradients of all layers to be ready before updating the model state at the workers (by adding a barrier before the model update). With this method, the other workers can still complete their model updates and remain consistent even if a worker fails during the update. However, this update method incurs more waiting. In SS4, we propose a better solution to tackle the crash-consistency problem without snapshotting or incurring the waiting. Fig. 4: Crash-consistency problem in layer-wise wait-free update. The number is the layer index. Arrows represent dependency. The red dashed line indicates the failure. Fig. 3: Training throughput of Wide-ResNet-50 during failure-free execution. Fig. 2: A hand-optimized 3D parallelism plan in Megatron-LM, using 16 GPUs on two machines. The DNN model is split into four pipeline-parallel stages, each stage is partitioned onto two GPUs for operator parallelism and each stage has a replica. Replicas of a stage are on the same machine.
We focus on a fail-stop model [32] throughout the paper, which is more common in real-world clusters [4]. A machine in the cluster may crash during training, losing the volatile model states of workers it hosts, i.e., parameters and optimizer states which are mainly stored on the GPUs. Swift decides on a fault tolerance strategy before the training job starts. It always exploits redundancies if available (i.e., if the model state has at least one replica on another machine) because replication-based recovery achieves both low runtime and recovery overhead. When a replica is unavailable, and pipeline parallelism is used, and logging is _worth doing_ (SS5.4), then use logging-based recovery. If none of the above conditions are met, use global checkpointing only. In any case, global checkpointing is performed periodically to ensure that the system remains on track in case of a catastrophic failure (e.g., loss of all replicas or logging data). Replication-based recovery and logging-based recovery can be combined to use, as hybrid parallelism is common for large DNN training, e.g., parts of the model use data parallelism while other parts use pipeline parallelism [27]. A machine failure can be detected by catching communication errors by workers that communicate with the failed machine. After the failure is detected, a replacement machine will be added to the training job. The surviving workers stop training and start the failure recovery procedure. Surviving workers first resolve the model state inconsistency issue (SS2.3) with the _update-undo_ approach (SS4). Recovery is then performed for the replacement of the failed workers. For replication-based recovery, one of the surviving workers which holds the replica broadcasts the model state to the replacement workers (which uses data parallelism with the surviving worker). For logging-based recovery, replacement workers load the most recent checkpoint and then recompute the lost iterations based on the logged data until recovering up to the pre-failure iteration. We also discuss multiple failures and cascading failures in Appendix B. For the example in Figure 2, logging-based recovery can be used since replicas are unavailable and pipeline parallelism is used across the two machines. We record data of inter-machine communication during training (SS5.1): GPU 3 & 7 log the intermediate activations in the forward pass, while GPU 11 & 15 log the gradients in the backward pass. ## 4 Update-undo We propose undoing the update to address the crash-consistency problem (SS2.3). Our idea is simple: if a failure occurs during the model update when some parameters at the workers have been updated and some have not, the surviving workers will _undo_ the update for the updated parameters. In addition to model parameters, optimizer states, such as the momentum, also need to be restored. Finally, all workers return to a consistent version of the model state. We observe that many update operators of optimizers are mathematically _invertible_, i.e., for an operator \(f\), there exists an inverse operator \(f^{-1}\) that undoes the operation of \(f\). For example, linear operators like element-wise addition and scalar multiplication are all invertible [36]. 
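As a minimal illustration of this invertibility (a sketch only, not Swift's implementation; the forward update written here is the standard SGD-with-momentum form that Algorithm 2 below inverts, and all variable names are ours), the update and its undo can be expressed as follows. The round-trip recovers the pre-update state up to floating-point error, which is the caveat discussed below.

```python
import torch


def sgd_momentum_step(x, m, g, lr, mu, weight_decay, dampening):
    """One SGD-with-momentum update (the form whose inverse is Algorithm 2).
    Assumes mu > 0 so that the momentum update can be divided out again."""
    m_next = mu * m + (1 - dampening) * (g + weight_decay * x)
    x_next = x - lr * m_next
    return x_next, m_next


def sgd_momentum_undo(x_next, m_next, g, lr, mu, weight_decay, dampening):
    """Invert the step above: recover (x, m) from (x_next, m_next) and the gradient g."""
    x = x_next + lr * m_next
    m = (m_next - (1 - dampening) * (g + weight_decay * x)) / mu
    return x, m


# Round-trip check: undo(step(state)) returns the original state up to rounding.
x, m, g = torch.randn(1000), torch.full((1000,), 0.1), torch.randn(1000)
args = dict(lr=0.1, mu=0.9, weight_decay=1e-4, dampening=0.0)
x1, m1 = sgd_momentum_step(x, m, g, **args)
x0, m0 = sgd_momentum_undo(x1, m1, g, **args)
assert torch.allclose(x0, x, atol=1e-6) and torch.allclose(m0, m, atol=1e-6)
```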
Table I \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Operator} & SGD & \begin{tabular}{c} Adam \\ [33] \\ \end{tabular} & \begin{tabular}{c} AdamW \\ [34] \\ \end{tabular} & \begin{tabular}{c} LAMB \\ [22] \\ \end{tabular} & \begin{tabular}{c} AMSGrad \\ [35] \\ \end{tabular} \\ \hline \multirow{5}{*}{Inv.} & EW add & ✓ & ✓ & ✓ & ✓ & ✓ \\ \cline{2-6} & scalar mul & ✓ & ✓ & ✓ & ✓ \\ \cline{2-6} & EW mul & & ✓ & ✓ & ✓ \\ \cline{2-6} & EW sqrt & & ✓ & ✓ & ✓ \\ \cline{2-6} & EW div & ✓ & ✓ & ✓ & ✓ \\ \hline Not & EW-max & & & & ✓ \\ \cline{2-6} Inv. & sum & & & ✓ & \\ \hline \end{tabular} \end{table} TABLE I: Operators used in five representative optimizers. EW = element-wise; Inv. = invertible. summarizes operators used in five representative optimizers. Algorithm 2 demonstrates how the update of SGD with momentum (Algorithm 1) can be undone (i.e., from \(x_{t+1}\) to \(x_{t}\) and from \(m_{t}\) to \(m_{t-1}\)). More examples can be found in Appendix A. If an optimizer only has linear operators, then undoing is straightforward. However, some optimizers involve non-linear operators, e.g., LAMB optimizer [22] scales the gradients with the L2 norm of the parameters. For the LAMB optimizer, we can additionally save the L2 norm (a scalar), and recover the previous model state accordingly. For AMSGrad [35], update undo is not applicable. Although the undo algorithms are mathematically correct, the recovered state may slightly differ from the original state due to floating-point errors [37]. Our experiments show that this minor error does not affect trained model accuracy (SS7.2). Figure 5 gives an example of how undoing updates helps replication-based recovery. Two workers train a DNN model using synchronous data parallelism. In training iteration \(t\), worker \(1\) crashes and loses all the volatile states during the backward pass. At this time, worker 2 has already updated the parameters of layer \(N-1\), but not the parameters of the other layers. Worker 2 then undoes layer \(N-1\)'s update to ensure consistency of its state. When the replacement of worker 1 joins the system, worker 2 sends its state to worker 1, and then both continue training from iteration \(t\). ``` 1:Input: learning rate \(\eta_{t}\); weight decay \(\lambda>0\); momentum parameter \(0\leq\mu\leq 1\); dampening for momentum \(0\leq\tau\leq 1\); \(x_{t+1}\in\mathbb{R}^{d}\); \(g_{t}\in\mathbb{R}^{d}\); \(m_{t}\in\mathbb{R}^{d}\). 2:\(x_{t}=x_{t+1}+\eta_{t}m_{t}\) 3:\(m_{t-1}=(m_{t}-(1-\tau)(g_{t}+\lambda x_{t}))/\mu\) ``` **Algorithm 2** Undo SGD with Momentum by chunking the logging file into multiple smaller files. If necessary, the surviving workers will undo the update (SS4) (not shown in the figure). The most significant difference with pure global checkpointing is that surviving workers do not need to load the checkpoint and roll back their training progress for recovery. Only relaunched workers on the replacement machine do. The recovery scope is limited to the local computation graph on the failed machine rather than the whole computation graph, thus expediting the recovery. **Garbage collection.** All earlier logging files are obsoleted after a global checkpointing, and garbage is collected because the system can directly load the latest checkpoint then. Even though the logging size increases as the number of iterations increases, the size is upper bounded due to periodic global checkpointing. 
Therefore, the frequency of global checkpointing determines the upper bound of the logging size. We will discuss more the storage overhead in SS5.3. **Consistency.** We replay the computations in the same order as the pre-failure execution using timestamps, using the same inputs as the pre-failure computation. Note that logging requires the computation to be _deterministic_ (i.e., the same input leads to the same output). Otherwise, we would get different outputs when re-computing with the logged data. We provide the details of achieving determinism in SS 6. ### _Parallel Recovery_ We utilize the surviving workers to assist in recovery of the failed workers. Since the intermediate results of all micro-batches have been logged, we can perform data-parallel training based on the logged data to expedite the re-computation of the lost states. Specifically, all workers, including replacement workers and surviving workers, retrieve the logging files and have a copy of the computation graph on the failed machine. Each worker reads logging data of different micro-batches from the logging files and uses them as input to re-compute gradients, synchronizes gradients with other workers computing other micro-batches and then performs the model update. This way, the micro-batches are re-computed in parallel by multiple workers for the failed machine, accelerating recovery while ensuring logical equivalence to executing these micro-batches sequentially by each replacement worker. If a batch is divided into \(m\) micro-batches and we use \(d\) workers for parallel recovery, each worker is assigned with \(m/d\) micro-batches for re-computation. An example is given in Figure 7, where two machines run a 4-stage training pipeline. Machine 2 (hosting stages 2 and 3) fails, and Machine 1 (hosting stages 0 and 1 in normal training) assists in the recovery computation of the replacement machine. Worker P0 on Machine 1 and worker P2 on the replacement machine re-compute the stage-2 model in a data-parallel manner, each using two micro-batches (0, 2 and 1, 3, respectively); worker P1 and worker P3 re-compute the stage-3 model, using micro-batches 0, 2 and 1, 3, respectively. Note that extra time is needed for gradient synchronization with parallel recovery. The parallel recovery procedures are given in Figure 5(c). Similar to Figure 5(b), at a surviving worker, uncommitted logging data are first flushed and uploaded to the global store (steps 1 to 3, omitted in the figure). Then the surviving workers checkpoint their states (step 4). The replacement workers load their model states from their latest checkpoints and broadcast their states to the surviving workers (step 5). Meanwhile, all workers download logging files from the global store (step 6) and select the logging data of corresponding micro-batches for re-computation (step 7). After the recovery, the surviving workers load their checkpoints to restore their original model parameters and optimizer states (not shown in the figure). Fig. 6: Logging mechanism. Fig. 7: Parallel recovery. The number of micro-batches is 4 (\(m=4\)) and we use 2 workers (\(d=2\)) in each data-parallel recovery group. Each machine has two workers. Suppose machine 2 crashes and is replaced. We decompose the pipeline in Figure 0(a) into data-parallel 2-stage sub-pipelines. ### _Selective Logging_ Logging all cross-machine messages may consume large storage space. We next investigate a trade-off between the storage space and the recovery time with selective logging. 
Our idea is to group machines and log inter-group communication but not intra-group communication. We can consider the original approach as a particular case, where each machine forms a group. In this way, if one machine in a group fails, training on the entire group of machines needs to be rolled back from the latest checkpoint, as we do not record intra-group communication. As a result, and the recovery time will be longer. Thus, selective logging trades recovery time for space overhead. A simple grouping strategy is to have a balanced number of machines in each group. However, due to the often unbalanced model partition in pipeline parallelism [12], this grouping strategy is usually suboptimal. Given a storage capacity constraint for logging data, how do we group machines to minimize the failure recovery time? Suppose we have \(N\) machines and create \(N\) groups initially. We profile the averaged per-iteration computation time \(R(G_{i})\) for each group \(G_{i}\). For each pair of adjacent groups \(G_{i}\) and \(G_{i+1}\) (i.e., hosting adjacent workers in the pipeline), we obtain the transmission size per iteration \(M(G_{i},G_{i+1})\) between them. Then with storage capacity limit \(M_{\text{max}}\), network bandwidth \(B\) (assuming homogeneous bandwidth) and checkpointing interval \(T\) (iterations), we aim at finding a group configuration \(\mathcal{G}=\{G_{1},\ldots,G_{k}\}\) that minimizes the overall recovery time \(R\): \[\min_{\mathcal{G}}R(\mathcal{G})\quad\text{s.t.}\ M(\mathcal{G})\leq M_{ \text{max}},\] where \(M(\mathcal{G})\) denotes the overall storage space needed by the logging data. As discussed in SS5.1, it is determined by the global checkpointing frequency: \[M(\mathcal{G})=T\cdot\sum_{G_{i},G_{i+1}\in\mathcal{G}}M(G_{i},G_{i+1}).\] Suppose we merge two adjacent groups \(G_{i}\) and \(G_{i+1}\), and have the following recovery time for the merged group: \[R(G_{i},G_{i+1})=R(G_{i})+R(G_{i+1})+M(G_{i},G_{i+1})/B,\] where \(M(G_{i},G_{i+1})/B\) is the point-to-point communication time between the two adjacent groups. We ignore the bubble time for simplicity, and derive the change in overall recovery time \(R\) and overall space overhead \(M\): \[\Delta R =R(G_{i},G_{i+1})\cdot\frac{|G_{i}|+|G_{i+1}|}{N}-R(G_{i})\cdot \frac{|G_{i}|}{N}\] \[\quad-R(G_{i+1})\cdot\frac{|G_{i+1}|}{N},\] \[\Delta M =M(G_{i},G_{i+1})\cdot T,\] where \(|G_{i}|\) is the number of machines in \(G_{i}\). \(\Delta R\) is calculated assuming that each machine has an equal failure probability. Note that \(\Delta R\) is always positive. We minimize increased recovery time \(\Delta R\) per unit storage space reduction when merging \(G_{i}\) and \(G_{i+1}\), i.e., minimize \(\Delta R/\Delta M\). To identify the grouping of machines, we iteratively merge two adjacent groups with the smallest \(\Delta R/\Delta M\), until the overall space consumption is less than \(M_{\text{max}}\). Note that it runs for at most \(N-1\) iterations, at which point all machines form a single group and there will be no logging. So the time complexity is at most \(O(N^{2})\). If parallel recovery is used, the recovery of a group \(G_{i}\) is parallelized by at most \(\lfloor N/|G_{i}|\rfloor\) data-parallel groups. For simplicity, we assume it can achieve linear scalability with data parallelism. Thus, we divide the \(R(G_{i})\) with \(\lfloor N/|G_{i}|\rfloor\) in calculation. ### _Use Case_ Not all cases are suitable for logging. 
For example, it would be better to checkpoint a model when the logging size far exceeds the model size. Typically, the intermediate activations for CNN-based models would be massive and unsuitable for logging (even unsuitable for pipeline parallelism) [39]. We can calculate the per-iteration logging size. For transformer-based models, the intermediate activation/gradient size would be micro_batch_size\(\times\)hidden_size\(\times\)sequence_length in a micro-batch [28]. Further, we can calculate the bubble time ratio according to the pipeline schedule (SS2.1). Given the iteration time and PCIe bandwidth, we can determine whether the logging data can be transferred from GPU to CPU within the bubble time. If not, then logging is not worth doing. ## 6 Implementation We implement Swift in PyTorch 1.9.0 [9] with NCCL 2.7.6 [40], using 2.6k LoC in Python. We also add about 400 lines of C++ code for PyTorch and NCCL. **Failure detection.** We launch a background thread on each worker that uses NCCL's ncclCommGetAsyncError() function to keep polling whether a communication failure has occurred. If a failure occurs, the worker first sets a failure flag to true in a global key-value store and then aborts its own NCCL communicators. The global key-value store is co-located with the master machine (rank 0). Other workers' background threads also poll this flag from the global key-value store, and if a worker finds that the flag is set to true, it will abort its own NCCL communicators. **Update-undo.** In data parallelism, we insert a CUDA event after the all-reduce operation for each tensor and query whether it has been completed before updating the gradient's corresponding parameter. If it does not complete, it waits until the all-reduce operation completes. After it completes, the CUDA kernels for the corresponding parameter are launched to update the parameter and optimizer states, and the parameter is marked as updated. Note that even if there is a failure at this point, we need to let these kernels finish executing. Upon a failure, surviving workers undo the update of parameters that are marked updated. In pipeline parallelism, the model parameters on different stages are updated at different points in time due to computational dependency. Therefore, surviving workers need to exchange their current iteration number to determine the consensus pre-failure iteration after a failure occurs. Workers with a greater iteration number than the consensus pre-failure iteration need to undo the update. **Logging.** We use a dedicated CUDA stream to copy logging data from the GPU to the CPU for asynchronous logging. We insert a CUDA event after the copy operation to check if the copy operation is completed. After the main thread launches asynchronous copying operations at bubble time, it sends the CUDA event with the corresponding tensor and metadata to a queue. A background thread keeps reading items from the queue and checks if the asynchronous copy is complete by checking the CUDA event status. If completed, the thread saves the data to a file. For the global store described in SS5.1, we support HDFS [38] and Amazon S3. **Determinism in Logging.** Nondeterminism in DNN training may come from the random number seeds and algorithms themselves. For example, in DNN training, some convolution algorithms in cuDNN are nondeterministic1. We set torch.backend.cudnn.deterministic=True to resolve this issue. 
In addition, for convolutional operations, PyTorch also benchmarks multiple algorithms in the first run, selects the fastest one, and caches this choice so that the same algorithm can be directly selected later. However, there are still some slight differences in the computational results of different deterministic algorithms for the same input. In order to ensure that the worker selects the same convolutional algorithm after failure recovery as before failure, we save the previous benchmark results for failure recovery. Footnote 1: cuDNN reproducibility: [https://docs.nvidia.com/deeplearning/cudnn/developer-guide/index.html#reproducibility](https://docs.nvidia.com/deeplearning/cudnn/developer-guide/index.html#reproducibility) **Usage.** We provide an easy-to-use interface for users. A user only needs to provide a user-defined function (UDF) to train for one iteration and specify fault tolerance and training configurations. Then fault tolerance is in place during the user's model training, and recovery upon a failure can be automatically run without requiring user involvement. ## 7 Evaluation **Testbed.** We experiment on 16 DGX-2 machines, each equipped with eight 32 GB Tesla V100 GPUs (NVLink interconnect), 160 CPUs, 1.5 TB memory, and 3.6 TB NVMe SSD disks. The machines are connected via 40Gbps Ethernet. We build an HDFS cluster on these machines as the global storage. **Benchmark Models.** We evaluate Swift on training large image classification and language models with billions of parameters, as given in Table II. We scale up the original models in their respective papers: for Wide-ResNet-50 [18], we increase the base channel size from 64 to 320; for BERT-Large [1] and ViT-Large/32 [19], we increase the number of transformer layers from 24 to 128, keep the hidden size unchanged and refer to the enlarged models as BERT-128 and ViT-128/32, respectively. We use data parallelism to train Wide-ResNet-50 on two machines and four GPUs on each. To train ViT-128/32 or BERT-128 (with a maximum sequence length of 128), we use a 128-stage pipeline on all 16 machines, with each transformer layer occupying one GPU. We use SGD with momentum for Wide-ResNet-50 and ViT-128/32, and Adam for BERT-128 [19, 1, 1]. We select the micro-batch number to maximize the performance, using 16 and 4 for ViT-128/32 and BERT-128, respectively. Swift applies replication-based recovery to Wide-ResNet-50, and logging-based recovery to ViT-128/32 and BERT-128 (by default 16 machine groups and 8 machine groups in selective logging). We run each experiment for 200 iterations, perform a global checkpoint at the beginning of iteration 100, and kill a machine (rank 1) at the beginning of iteration 150. **Baselines.** We compare Swift with global checkpointing (default in PyTorch), CheckFreq [10] and Elastic Horovod [11]. We use CheckFreq's open-sourced code [42] and replace Elastic Horovod's snapshot implementation with CheckFreq's since it does not implement snapshotting to the CPU. We calculate the optimal snapshot frequency (once per 30 iterations) based on the algorithm suggested by CheckFreq and using the same permissible checkpoint overhead (3.5%) as in CheckFreq's experiments. For logging-based recovery, we only compare with global checkpointing, as its checkpointing overhead is already very low (checkpointing is pipelined in pipeline-parallel training), and the performance of CheckFreq would be similar. Elastic Horovod is not applicable since it only supports data parallelism. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Dataset} & Batch & \#params & \multirow{2}{*}{Parallelism} \\ & & size & (billion) & \\ \hline Wide-ResNet-50 & ImageNet [41] & 256 & 1.23 & DP \\ ViT-128/32 & ImageNet & 4096 & 1.64 & PP \\ BERT-128 & Wikipedia [1] & 512 & 1.11 & PP \\ \hline \hline \end{tabular} \end{table} TABLE II: Benchmark Models. DP = data parallelism. PP = pipeline parallelism.

We also introduce a synchronous logging method (calling torch.save() before sending a tensor) as a baseline to evaluate the effect of our asynchronous logging and logging during bubble time (§5.1).

**Metrics.** We evaluate the training throughput and iteration time during failure-free execution, the throughput during recovery, and the recovery time. Training throughput is calculated as the number of images (or tokens) processed by all workers per training iteration. Initialization time counts from when workers detect the failure to when the replacements of failed workers join the training job. Recovery time is the duration from when the replacements join the training job to the time they recover to the pre-failure iteration.

### _Macro-benchmarks_

**Replication-based recovery.** Figure 7(a) shows that Swift's replication-based recovery incurs less runtime overhead than state-of-the-art methods during failure-free training. The training throughput of CheckFreq and Elastic Horovod degrades compared to normal training (without any checkpoint or snapshot). Figure 7(a) also presents the recovery time upon a machine failure at iteration 150. Global checkpointing takes a long time to recover, as all workers must load the checkpoint and re-compute the lost iterations (50 iterations in this experiment). CheckFreq and Elastic Horovod do frequent snapshotting but still need to re-compute 30 iterations (the last snapshot was captured at iteration 120). With Swift's replication-based recovery, surviving workers resolve inconsistencies by undoing updates and then broadcast replicas to the replacement workers. This reduces the recovery time by 98.9%, 98.1%, and 98.1% compared to global checkpointing, CheckFreq, and Elastic Horovod, respectively.

**Logging-based recovery.** Figure 7(b) and Figure 7(c) show that Swift's logging is slightly slower than global checkpointing during failure-free training of ViT-128/32 and achieves similar throughput for BERT-128. The slight slowdown for ViT-128/32 is because we use a large batch (4096) when training and the logging data size is relatively large. Synchronous logging significantly degrades training throughput, especially when training ViT-128/32, because it logs more data than BERT-128. Swift's asynchronous logging and logging during bubble time (§5.1) take logging off the critical path, leading to throughput similar to global checkpointing. Figure 8(b) and Figure 8(c) show that the recovery time with logging is substantially smaller than with global checkpointing. With 16 machine groups, the recovery time is reduced by 36.0% and 58.5% for ViT-128/32 and BERT-128, respectively. This is because only the 8-stage sub-pipeline on the failed machine needs to be recovered, compared to re-running the whole 128-stage pipeline when using global checkpointing.
Note that logging needs slightly more initialization time because it requires additional initialization operations such as creating a CUDA stream and logging threads.

**Machine group size.** Figures 8(b) and 8(c) also show the impact of different machine group sizes on training throughput and recovery time for logging-based recovery. In Figure 8(b), with 8 machine groups, the throughput is similar to global checkpointing, due to less logging data than with 16 machine groups (§5.3). In Figure 9 and Figure 8(b), we observe that logging with 8 machine groups requires a longer recovery time, due to recovering a 16-stage sub-pipeline on two machines instead of the 8-stage sub-pipeline in the case of 16 machine groups. Table III shows the total logging size per iteration and the average bandwidth taken by logging in bubble time with different models and different numbers of machine groups. This shows the trade-off between recovery time and space overhead with selective logging (§5.3).

**Parallel Recovery.** For the logging-with-parallel-recovery (§5.2) cases in Figures 9, 8(b), and 8(c), we use 16 workers (GPUs) to perform the recovery computation for one failed worker concurrently. We see that parallel recovery significantly improves training throughput for ViT-128/32, from 12.5x to 15x as compared to global checkpointing (similar results for BERT-128), due to reduced recovery time (by 57.3% and 76.3% for ViT-128/32 and BERT-128, respectively). The throughput fluctuation with parallel recovery arises because parallel recovery is so fast that file transfer becomes a bottleneck, i.e., the new logging files are not yet downloaded from HDFS while the replay of the earlier files is already done.

**Space-time trade-off.** Figure 10 further evaluates the trade-off between recovery time and space overhead with selective logging. Given a maximal storage capacity, we use the algorithm in §5.3 to decide how to group machines. For both DNN models, the recovery time becomes longer when we lower the space threshold. In practical usage, good trade-offs can be identified from the plotted curves. The grouping configurations can be found in Appendix C.

\begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Model} & \#Machine & Total logging & Average consumed \\ & groups & size (GB) & bandwidth (GB/s) \\ \hline \multirow{2}{*}{ViT-128/32} & 16 & 24.66 & 0.23 \\ & 8 & 11.51 & 0.11 \\ \multirow{2}{*}{BERT-128} & 16 & 8.05 & 0.075 \\ & 8 & 3.76 & 0.035 \\ \hline \hline \end{tabular} \end{table} TABLE III: Space overhead caused by logging per iteration.

Fig. 8: Failure-free training throughput (top) and recovery time (bottom). PR = parallel recovery.

Fig. 9: Training throughput of ViT-128/32 during failure recovery. Blue dashed line indicates completion of recovery with global checkpointing.

### _End-to-end Training_

We next run end-to-end training to verify that Swift does not affect the trained model accuracy. In Figure 11(a), we finetune BERT-Large [1] with the Adam optimizer on the SQuAD-v1.1 dataset [43], using pipeline parallelism with 8 GPUs on two machines. We disable logging in this experiment but inspect the potential impact of update-undo (§4). We kill one machine at the end of iteration 500, intentionally make an additional update at iteration 500, and then undo this update. We observe that update-undo does not affect the final finetuning accuracy. In Figure 11(b), we finetune ViT-Base/32 [19] using SGD with momentum on the CIFAR-100 dataset for 10,000 iterations, using pipeline parallelism with 12 GPUs on three machines.
We kill the machine hosting stages in the middle of the pipeline (i.e., machine 1, with workers from rank 4 to rank 7) at the end of iteration 500. We do not group the machines for logging, nor do we enable parallel recovery. We see that our logging-based failure recovery has no loss of accuracy compared to the failure-free counterpart.

### _Simulation Study_

We further investigate the effects of Swift on the end-to-end training time through simulations. The simulation settings are given in Table IV (the other settings are the same as the experimental settings in §7.1). We calculate the expected end-to-end training time without failures based on the iteration time measured in the experiments and the total number of training iterations. For Wide-ResNet-50 and ViT-128/32, we assume a checkpoint is stored at the end of each epoch, following common ML practice [18, 19, 44]. For BERT-128, we assume checkpointing is performed once every 5000 iterations, which is 1% of its total number of training iterations. We then inject failures uniformly at random during training, assuming a 17-hour median time between failures (following [6]). We repeat each simulation ten times and present the average results.

**End-to-end training time.** As shown in Table V, Swift can reduce the end-to-end training time significantly for long-running jobs, as compared to global checkpointing. Specifically, Swift speeds up end-to-end training of Wide-ResNet-50 on ImageNet and pretraining of BERT-128 on the Wikipedia dataset by 1.16x and 1.10x, respectively. This translates into saving 77 hours and 48 hours of training time. Short-running jobs like training ViT-128/32 on ImageNet encounter fewer failures and thus benefit less from fast failure recovery. We also compare the end-to-end training time of Wide-ResNet-50 with Elastic Horovod and CheckFreq. We account for the overhead of snapshots in our simulations using the data collected in §7.1 and use the same snapshot frequency as in §7.1. End-to-end training with CheckFreq takes 518.9 hours, and with Elastic Horovod it takes 515.9 hours. Swift is 1.08 and 1.07 times faster than CheckFreq and Elastic Horovod, respectively.

**Effects of checkpoint frequency.** We vary the checkpoint/snapshot frequency to investigate its impact on end-to-end training time. We keep the checkpoint frequency unchanged for replication-based recovery in Swift since it does not require frequent checkpointing. As shown in Figure 12, Swift achieves a shorter training time than the other methods in all cases. An optimal checkpoint frequency, which leads to the shortest training time, can be obtained for each method from the curves. Comparing the optimal cases of each method, for Wide-ResNet-50, Swift saves 11.8 hours, 7.1 hours, and 7.2 hours compared to global checkpointing, CheckFreq, and Elastic Horovod, respectively; for BERT-128, Swift saves 1.3 hours compared to global checkpointing, a limited improvement due to the minimal checkpointing overhead of BERT-128 (0.93 seconds).
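To illustrate how such estimates can be produced, the following is a simplified Monte Carlo sketch for a global checkpoint-restart baseline. The exponential failure model, the constant per-failure recovery cost, and the function name are our own simplifying assumptions, not the paper's simulator; Swift's recovery schemes would replace the rollback-to-checkpoint term with their (much smaller) recovery costs.

```python
import random

def simulate_end_to_end_time(num_iters, iter_time, ckpt_interval, ckpt_cost,
                             recovery_time, mtbf_median, trials=10):
    """Monte Carlo estimate of end-to-end training time under random failures
    for a checkpoint-restart baseline (illustrative sketch, single time unit)."""
    mean_tbf = mtbf_median / 0.6931  # median = ln(2) * mean for an exponential model
    totals = []
    for _ in range(trials):
        elapsed, it, last_ckpt = 0.0, 0, 0
        next_failure = random.expovariate(1.0 / mean_tbf)
        while it < num_iters:
            elapsed += iter_time
            it += 1
            if it % ckpt_interval == 0:
                elapsed += ckpt_cost
                last_ckpt = it
            if elapsed >= next_failure:
                it = last_ckpt                 # roll back and re-compute lost iterations
                elapsed += recovery_time       # detection, initialization, state loading
                next_failure = elapsed + random.expovariate(1.0 / mean_tbf)
        totals.append(elapsed)
    return sum(totals) / len(totals)
```

For example, for the Wide-ResNet-50 workload in Table IV one would call this with num_iters=450360, ckpt_interval=5004, and mtbf_median=17 (hours), with iter_time and the cost terms expressed in hours as well.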
\begin{table} \begin{tabular}{c c c c} \hline \hline Model & Total \# of iterations & Checkpoint interval & End-to-end training time w/o failure \\ \hline Wide-ResNet-50 & 450,360 & 5,004 & 479.4hr \\ ViT-128/32 & 93,600 & 312 & 85.6hr \\ BERT-128 & 500,000 & 5,000 & 461.1hr \\ \hline \hline \end{tabular} \end{table} TABLE IV: Training workload in the simulation study.

Fig. 11: End-to-end training. The red line indicates a failure at iteration 500.

Fig. 10: Trade-off between recovery time and storage space limit. Marker: (recovery time in seconds, storage limit in gigabytes).

\begin{table} \begin{tabular}{c c c c c} \hline \hline Model & \#failures & Global ckpt. & Swift & Speedup \\ \hline Wide-ResNet-50 & 28 & 557.4hr & 480.7hr & 1.16x \\ ViT-128/32 & 5 & 86.4hr & 86.0hr & 1.01x \\ BERT-128 & 27 & 524.2hr & 476.1hr & 1.10x \\ \hline \hline \end{tabular} \end{table} TABLE V: Simulated end-to-end training time with failures.

**Effects of failure frequency.** We further adjust the median time between failures to investigate its effect on end-to-end training time, while fixing the checkpoint/snapshot frequency to the optimal frequencies given by Figure 12. Figure 13 shows that Swift achieves a better speedup when failures are more frequent, and also the shortest training time among all methods when failures are infrequent.

## 8 Related Work

**Elastic training.** Most DL jobs use a static job configuration (e.g., the number of workers). In elastic training, workers can join and leave. The job can scale out to utilize transient idle resources (e.g., spot instances in cloud computing), or scale in to reserve resources for high-priority jobs. Unfortunately, most elastic training works [45, 46, 47] still rely on the checkpoint-restart method to avoid the crash-consistency problem (§2.3). Swift can resolve the inconsistency using update-undo (§4) and thus benefit elastic training (e.g., by broadcasting a worker's state when new workers join).

**Checkpointing in DL systems.** Check-N-Run [7] proposes incremental checkpointing tailored for training DL recommendation models, exploiting the fact that only a fraction of the recommendation model is updated in each iteration. It is complementary to our work because Swift is not limited to recommendation models. The MLP layers in recommendation models are usually trained using data parallelism [6], which can benefit from replication-based recovery. Orpheus [48] also proposes incremental checkpointing, but it stores sufficient vectors of gradients, which are much smaller than the gradients themselves. During recovery, the gradients are reconstructed from the stored sufficient vectors and applied to a checkpoint to recover the model state. Our logging method can be seen as an extension of their approach: we consider the sufficient vectors (e.g., intermediate activations/gradients in pipeline parallelism) of the computation graph on a machine rather than of a single operator, and we log data asynchronously on upstream machines, whereas they require costly synchronous logging for consistency. Recent works also propose partial recovery, which loads the checkpoint of the failed machine only and continues training [6, 30]. Partial recovery avoids global rollback but incurs accuracy loss due to inconsistent model state among workers [6]. In contrast, Swift does not degrade final model accuracy while reducing recovery time.

**Large-scale DNN training.** In addition to parallelism, other complementary techniques for large-scale DNN training include memory optimization [28] and mixed-precision training [49]. Swift can be combined with many of them.
For example, we can combine our replication-based recovery with Fully Sharded Data Parallel (FSDP), a popular memory optimization technique that shards the model state across data-parallel workers [28]: we can maintain two copies of each piece of the sharded model state for failure resilience. Moreover, mixed-precision training can reduce the logging size because intermediate data are kept at lower precision [49].

## 9 Conclusion

This paper presents Swift, a novel design that expedites failure recovery in distributed DNN training. Swift exploits the redundancies in data-parallel training for failure recovery and resolves the crash-consistency problem with update-undo. Swift advocates logging for pipeline-parallel training, which records inter-machine intermediate data at runtime and limits the computation graph to be re-executed to the part residing on the failed workers. We also design parallel recovery to further expedite recovery and explore the trade-off between recovery time and space overhead with selective logging. Extensive evaluations show that, compared to state-of-the-art approaches, Swift significantly accelerates failure recovery without affecting training throughput or model accuracy.
2308.13243
Compact binary systems in Einstein-Aether gravity: Direct integration of the relaxed field equations to 2.5 post-Newtonian order
The Einstein-Aether theory is an alternative theory of gravity in which the spacetime metric is supplemented by a long-range timelike vector field (the "aether" field). Here, for the first time, we apply the full formalism of post-Minkowskian theory and of the Direct Integration of the Relaxed Einstein Equations (DIRE), to this theory of gravity, with the goal of deriving equations of motion and gravitational waveforms for orbiting compact bodies to high orders in a post-Newtonian expansion. Because the aether field is constrained to have unit norm, a naive application of post-Minkowskian theory leads to contributions to the effective energy momentum tensor that are {\em linear} in the perturbative fields. We show that a suitable redefinition of fields using an array of "superpotentials" can eliminate such linear terms to any desired post-Newtonian order, resulting in flat spacetime wave equations for all fields, with sources consisting of matter terms and terms quadratic and higher in the fields. As an initial application of this new method, and as a foundation for obtaining the equations of motion for compact binaries, we obtain explicit solutions of the relaxed equations sufficient to obtain the metric in the near zone through 2.5 post-Newtonian order, or $O[(v/c)^5]$ beyond the Newtonian approximation.
Fatemeh Taherasghari, Clifford M. Will
2023-08-25T08:32:43Z
http://arxiv.org/abs/2308.13243v2
Compact binary systems in Einstein-AEther gravity: Direct integration of the relaxed field equations to 2.5 post-Newtonian order ###### Abstract The Einstein-AEther theory is an alternative theory of gravity in which the spacetime metric is supplemented by a long-range timelike vector field (the "aether" field). Here, for the first time, we apply the full formalism of post-Minkowskian theory and of the Direct Integration of the Relaxed Einstein Equations (DIRE), to this theory of gravity, with the goal of deriving equations of motion and gravitational waveforms for orbiting compact bodies to high orders in a post-Newtonian expansion. Because the aether field is constrained to have unit norm, a naive application of post-Minkowskian theory leads to contributions to the effective energy momentum tensor that are _linear_ in the perturbative fields. We show that a suitable definition of fields using an array of "superpotentials" can eliminate such linear terms to any desired post-Newtonian order, resulting in flat spacetime wave equations for all fields, with sources consisting of matter terms and terms quadratic and higher in the fields. As an initial application of this new method, and as a foundation for obtaining the equations of motion for compact binaries, we obtain explicit solutions of the relaxed equations sufficient to obtain the metric in the near zone through 2.5 post-Newtonian order, or \(O[(v/c)^{6}]\) beyond the Newtonian approximation. ## I Introduction One of the classic approaches to devising a theory of gravity alternative to general relativity (GR) is to postulate, in addition to the spacetime metric, an auxiliary gravitational field. The quintessential example is the 1961 Brans-Dicke theory (which built upon earlier work by Fierz, Pauli and Jordan)[1], in which the added field was a scalar. By proposing a suitable action for the auxiliary field along with a suitable coupling between it and the action for the spacetime metric, one could obtain field equations with reasonable mathematical properties (such as partial differential equations of order no greater than two). In addition, one could automatically abide by very precise tests of the Einstein Equivalence Principle, such as the Eotvos experiment, by ensuring that the coupling to the fields of matter involved only the spacetime metric, a concept called "universal coupling" or "metric coupling". This set of ideas continues to serve as a template for inventing theories of gravity into the present, with a profusion of theories having multiple scalar fields, vector fields, and tensor fields of various ranks (for reviews, see [2; 3; 4; 5; 6; 7; 8]). One of the earliest _vector-tensor_ theories was invented by Will and Nordtvedt [9] and later generalized by Hellings and Nordtvedt [10], motivated by a desire to explore theories that might exhibit "preferred-frame" effects. In general relativity, the gravitational physics of an isolated system does not depend on its velocity relative to the rest of the universe because the asymptotic, or large-distance limit of the metric (which establishes the boundary conditions for solving for the local gravitational physics) can always be transformed to the Minkowski metric, which is independent of the motion of the reference frame in which it is observed. The same is true in scalar-tensor theories because the asymptotic scalar field is also independent of reference frame. 
By contrast, in a theory with a timelike vector field \(K^{\mu}\) that is somehow related to the distribution of mass energy, the asymptotic field that establishes the boundary conditions for that system would be expected to point purely in the time direction (i.e. have components (\(K^{0},\,0,\,0,\,0\))) if the system is at rest relative to the mean rest frame of the cosmic distribution of matter. But if an isolated system were to move relative to that cosmic frame, then the asymptotic vector field in the frame of the system would have the form (\(K^{0},\,K^{1},\,K^{2},\,K^{3}\)), where the spatial part of the vector field is related to the speed and direction of motion relative to the cosmic frame (see Chapter 5 of [11] for a review of alternative theories of gravity), and this would alter the internal structure and dynamics of the isolated system. One defect of these early theories was that the field equation for the vector field was homogeneous and linear in \(K^{\mu}\) with no matter source (by virtue of metric coupling), so that \(K^{\mu}=0\) was an immediate solution unless one forced the asymptotic value of \(K^{0}\) or \(|K^{\mu}|\) to be a non-zero arbitrary constant. As a result, the subject of vector-tensor theories lay somewhat dormant until Jacobson and colleagues proposed the "Einstein-AEther " theory [12; 13; 14; 15; 16]. As before, the goal was to study violations of Lorentz invariance in gravity, now in parallel with similar studies in matter interactions, such as the Standard Model Extension of Kostalecky and Samuel [17]. Another motivation was the notion that such Lorentz violations might be a classical relic of a quantum gravity theory in which there was a fundamental quantum of length. Other theories and generalizations followed, including the Tensor-Vector-Scalar (TeVeS) theory of Bekenstein [18], designed to provide a relativistic foundation for the phenomenological Modified Newtonian Dynamics (MOND) proposal of Milgrom[19]; Khronometric theory, a low-energy limit of "Horava gravity", a proposal for a theory that is power-counting renormalizable [20], shown later to be a singular limit of Einstein-AEther theory [21; 22; 23]; the Scalar-Tensor-Vector (STV, but also called MOG) of Moffat [24], designed to avoid the need for dark matter; and a generalized tensor-vector-scalar theory of Skordis [25; 26], designed mainly for cosmological investigations. At the lowest post-Newtonian (PN) order, the parametrized post-Newtonian (PPN) parameters of Einstein-AEther theory were calculated by Foster and Jacobson [16]; the values were identical to those of general relativity, except for the "preferred-frame" parameters, \(\alpha_{1}\) and \(\alpha_{2}\), which could be non-zero. Foster also derived the leading gravitational radiation damping effects [27; 28] and the PN equations of motion for compact bodies such as neutron stars and black holes [29], later verified by Yagi et al. [30]. Constraints on the parameters of the theory have been placed using binary pulsar data [31; 32; 30]. The detection of gravitational waves from inspiralling binary black holes in 2015 presented new possibilities for testing alternative theories of gravity, and the LIGO-Virgo collaboration has published comprehensive papers detailing a wide range of tests, first using data from the discovery event GW 150914 [33], and subsequently using data from the full catalogue of events through the middle of the third observing run [34]. 
One notable result was the observation of the nearly concident arrival times of the gravitational-wave and gamma-ray signals from the binary neutron star merger event GW170817/GRB170817 [35; 36], which placed an extremely strong bound on the speed of gravitational waves, relative to that of light, \[-3\times 10^{-15}<v_{g}-1<7\times 10^{-16}\,. \tag{1}\] This had the effect of ruling out a significant number of alternative theories of gravity [37; 38; 39; 40], and constraining the Einstein-AEther Theory [41]. The data have also constrained the strong-field dynamical evolution of compact binary mergers, as reflected in the detailed time evolution of the detected waveforms. No deviations from the predictions of general relativity have been found, and constraints have been placed on the coefficients of the terms in a PN expansion of the waveform phase [34]. While these "theory agnostic" constraints are useful and important, they provide only limited information about what theories might be ruled out, simply because very few theories have been analyzed in sufficient detail to provide predictions for these coefficients to an order comparable to what is known for GR. In scalar-tensor theories, considerable effort has gone toward obtaining the coefficents up to 2PN order [42; 43; 44; 45; 46; 47]. However, because of the very strong bound on the scalar-tensor coupling parameter \(\omega\) from solar-system measurements, combined with the fact that, in this class of theories, binary-black hole evolution is indistinguishable from its counterpart in general relativity, it seems unlikely that gravitational-wave measurements will lead to stronger constraints, except possibly via the detection of a favorable black-hole neutron-star merger. What makes the study of gravitational waves in alternative theories intriguing is that they generally predict the existence of _dipole_ and even _monopole_ gravitational radiation, none of which exist in GR. In particular, if the binary source is sufficently asymmetrical, either in mass or composition (eg. a black-hole neutron-star binary), then dipole gravitational radiation can lead to contributions to the energy flux and the waveform evolution that are larger than the conventional quadrupole contributions by a factor of \((c/v)^{2}\), where \(v\) is the orbital velocity. In other words, dipole radiation effects can occur at "-1PN" order, in a hierarchy where quadrupole radiation is denoted by "0PN" order. This is both a blessing and a curse. It is a blessing because it could lead to tighter constraints on the theory than might have been expected a priori. But it is a curse because, in order to calculate the waveform evolution to an order equivalent to the \(n\)PN order of general relativity, one must determine the radiative moments of the auxiliary fields and the equations of motion of the binary system to the \((n+1)\)PN order. These considerations have motivated us to begin an effort to determine the equations of motion for compact binaries and the emitted gravitational-waveform in a post-Newtonian expansion of Einstein-AEther theory beyond the lowest-order dipole and quadrupole contributions, and beyond linearized, which constitute the current state of the art [48; 49]. Because of the significant additional complexity of this class of theories, combined with the "curse" of dipole radiation, our goal will be modest: to obtain the gravitational waveform to 1.5 PN order beyond the conventional quadrupole level. 
This paper is devoted to obtaining the metric to 2.5PN order, while future papers will obtain the equations of motion for compact bodies to 2.5PN order, the far-zone fields to the required order, and finally the energy flux and waveform to 1.5PN order. The results will augment the waveform templates described in [48; 49]. In Sec. II, we review the essentials of Einstein-AEther theory, and impose a condition on one of its four arbitrary parameters that arises from the gravitational-wave speed constraint from GW170817. Section III expresses the theory in the form of "relaxed field equations" of the post-Minkowskian method that has been used in GR and scalar-tensor theory to carry out PN expansions (see eg. [50]). In Sec. IV we note that the presence in Einstein-AEther theory of a vector auxiliary field with unit norm necessitates a change of field variables in order to obtain wave equations for the fields whose sources consists of matter plus field contributions that are quadratic in small quantities, thus enabling a consistent PN expansion. Section V obtains solutions within the near-zone for the fields to orders that permit the construction of the complete spacetime metric to 2.5PN order. In Sec. VI we briefly describe ongoing work and make concluding remarks. ## II Einstein-AEther theory Einstein-AEther theory is defined by the covariant action \[S_{\AE} \equiv\frac{1}{16\pi G_{0}}\int\sqrt{-g}\bigg{[}R-E^{\mu\nu}_{ \alpha\beta}\nabla_{\mu}K^{\alpha}_{\ae}\nabla_{\nu}K^{\beta}_{\ae}\] \[\qquad+\lambda(K_{\ae\,\mu}K^{\mu}_{\ae}+1)\bigg{]}d^{4}x+\int \sqrt{-g}\,\mathcal{L}_{M}d^{4}x\,, \tag{1}\] where \(g\) is the determinant of the metric \(g_{\mu\nu}\), \(R\) is the Ricci scalar, \(\nabla_{\mu}\) is a covariant derivative with respect to the metric, \[E^{\mu\nu}_{\alpha\beta}=c_{1}g^{\mu\nu}g_{\alpha\beta}+c_{2}\delta^{\mu}_{ \alpha}\delta^{\nu}_{\beta}+c_{3}\delta^{\mu}_{\beta}\delta^{\nu}_{\alpha}-c _{4}K^{\mu}_{\ae}K^{\nu}_{\ae}g_{\alpha\beta}\,, \tag{2}\] \(\lambda\) is a Lagrange multiplier designed to enforce the constraint \(K_{\ae\,\mu}K^{\mu}_{\ae}=-1\), and \(\mathcal{L}_{M}\) is the matter Lagrangian. We use units in which the speed of light \(c\) is unity, and the spacetime metric has the signature \((-,+,+,+)\); Greek indices denote spacetime components and Roman indices denote spatial components; parentheses (square brackets) around groups of indices denote symmetrization (antisymmetrization). However, it is well-known that the speed of transverse-traceless gravitational waves in this theory is given by \(v_{g}=(1-c_{1}-c_{3})^{-1/2}\). Because of the extraordinary bound from GW170817, _we will make the assumption that_\(c_{3}=-c_{1}\), reducing the theory to a three-parameter set of theories. The Lagrangian for the AEther field then takes the form \[\mathcal{L}_{\AE}=-\frac{1}{2}c_{1}F_{\mu\nu}F^{\mu\nu}-c_{2}\left(\nabla_{ \mu}K^{\mu}_{\ae}\right)^{2}+c_{4}|\mathbf{a}_{\ae}|^{2}\,, \tag{3}\] where \[F_{\mu\nu}\equiv\partial_{\nu}K_{\ae\,\mu}-\partial_{\mu}K_{\ae\,\nu}\,,\qquad a ^{\mu}_{\ae}\equiv K^{\nu}_{\ae}\nabla_{\nu}K^{\mu}_{\ae}\,. \tag{4}\] The initial studies by Jacobson et al. established the post-Newtonian limit, studied gravitational wave propagation, and analyzed other aspects of the theory. Here we summarize the main results, _but we impose the constraint_\(c_{3}=-c_{1}\)_a priori_. 
The parametrized post-Newtonian (PPN) parameters [11] are given by [16]: \[\gamma =1\,,\quad\beta=1\,,\] \[\alpha_{1} =-4c_{14}\,,\] \[\alpha_{2} =\frac{c_{14}}{c_{2}}\left(\frac{2c_{2}c_{14}+c_{14}-c_{2}}{2-c_{ 14}}\right)\,, \tag{5}\] with \(\xi=\alpha_{3}=\zeta_{1}=\zeta_{2}=\zeta_{3}=\zeta_{4}=0\), where \(c_{14}=c_{1}+c_{4}\). The present value of the gravitational constant is given by \[G=G_{0}\left(1-\frac{c_{14}}{2}\right)^{-1}\,. \tag{6}\] Considerations of positivity of energy impose the constraints \(c_{1}\geq 0\), \(c_{2}\geq 0\), and \(c_{14}\geq 0\). ## III The relaxed field equations in Einstein-AEther theory ### Field equations We begin by deriving the field equations in a form that will be useful for obtaining the so-called "relaxed" field equations, analogous to those in general relativity. Varying the action (1) with respect to the AEther field yields the field equation for \(K^{\mu}_{\ae}\), \[c_{1}\nabla_{\nu}F^{\mu\nu} =8\pi G_{0}T^{\mu}_{\ae}-\lambda K^{\mu}_{\ae}-c_{2}\nabla^{\mu}( \nabla_{\nu}K^{\nu}_{\ae})\] \[\quad-c_{4}\left(a^{\nu}_{\ae}\nabla^{\mu}K_{\ae\,\nu}-a^{\mu}_{ \ae}\nabla_{\nu}K^{\nu}_{\ae}-K^{\nu}_{\ae}\nabla_{\nu}a^{\mu}_{\ae}\right)\,, \tag{7}\] where we define the matter energy-momentum tensor and vector by \[T^{\mu\nu}\equiv\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_{M})}{ \delta g_{\mu\nu}}\,,\quad T_{\ae\mu}\equiv-\frac{1}{\sqrt{-g}}\frac{\delta( \sqrt{-g}\mathcal{L}_{M})}{\delta K^{\mu}_{\ae}}\,. \tag{8}\] In a conventional metric theory of gravity, where the matter Lagrangian couples only to the metric, the quantity \(T_{\ae\mu}\) would vanish. However we anticipate treating compact bodies, where the mass of each body may have an _effective_ dependence on \(K^{\mu}_{\ae}\) via its gravitational binding energy. This idea is based on the original proposal by Eardley [51], with follow-up work by Gralla [52; 53]; see also [54]. This will be addressed in detail in a subsequent paper; for now we will include the effective energy-momentum vector in all our considerations. Contracting Eq. (7) with \(K_{\ae\,\mu}\) and using the constraint that \(K_{\ae\,\mu}K^{\mu}_{\ae}=-1\) yields an expression for the Lagrange multiplier \(\lambda\) \[\lambda =-8\pi G_{0}K_{\ae\,\nu}T^{\nu}_{\ae}-\frac{1}{2}c_{1}\left(F_{\mu \nu}F^{\mu\nu}+2\nabla_{\nu}a^{\nu}_{\ae}\right)\] \[\qquad+c_{2}K^{\nu}_{\ae}\nabla_{\nu}(\nabla_{\mu}K^{\mu}_{\ae}) +2c_{4}|\mathbf{a}_{\ae}|^{2}\,. \tag{9}\] Note that \(\mathbf{K}_{\ae}\cdot\mathbf{a}_{\ae}=0\). Varying the action with respect to the metric and making use of Eq. 
(III.1) to eliminate the Lagrange multiplier \(\lambda\), we obtain the field equations \[G^{\mu\nu}=8\pi G_{0}\left(T^{\mu\nu}-K^{\mu}_{\ae}K^{\nu}_{\ae}K_{\ae\,\alpha}T^ {\alpha}_{\ae}\right)+S^{\mu\nu}\,, \tag{10}\] for the metric, and \[c_{1}\nabla_{\nu}F^{\mu\nu}=8\pi G_{0}\left(T^{\mu}_{\ae}+K^{\mu}_{\ae}K_{\ae\, \nu}T^{\nu}_{\ae}\right)\] \[+\frac{1}{2}c_{1}K^{\mu}_{\rm ae}\left(F_{\alpha\beta}F^{\alpha \beta}+2\nabla_{\nu}a^{\nu}_{\rm ae}\right)\] \[-c_{2}\left[\nabla^{\mu}(\nabla_{\nu}K^{\nu}_{\rm ae})+K^{\mu}_{ \rm ae}K^{\nu}_{\rm ae}\nabla_{\nu}(\nabla_{\alpha}K^{\alpha}_{\rm ae})\right]\] \[-c_{4}\left[a^{\nu}_{\rm ae}\nabla^{\mu}K_{\rm ae\,\nu}-a^{\mu}_ {\rm ae}\nabla_{\nu}K^{\nu}_{\rm ae}\right.\] \[\left.-K^{\nu}_{\rm ae}\nabla_{\nu}a^{\mu}_{\rm ae}+2K^{\mu}_{ \rm ae}|\mathbf{a}_{\rm ae}|^{2}\right]\,, \tag{11}\] for the AEther field, where \[S^{\mu\nu}=c_{1} \bigg{[}2K^{(\mu}_{\rm ae}\nabla_{\alpha}F^{\nu)\alpha}+F_{\alpha} {}^{\mu}F^{\alpha\nu}-K^{\mu}_{\rm ae}K^{\nu}_{\rm ae}(\nabla_{\alpha}a^{ \alpha}_{\rm ae})\] \[-\frac{1}{4}\left(g^{\mu\nu}+2K^{\mu}_{\rm ae}K^{\nu}_{\rm ae} \right)F_{\alpha\beta}F^{\alpha\beta}\bigg{]}\] \[+c_{2}\bigg{[}(g^{\mu\nu}+K^{\mu}_{\rm ae}K^{\nu}_{\rm ae})\,K^{ \beta}_{\rm ae}\nabla_{\beta}(\nabla_{\alpha}K^{\alpha}_{\rm ae})\] \[+\frac{1}{2}g^{\mu\nu}(\nabla_{\alpha}K^{\alpha}_{\rm ae})^{2} \bigg{]}\] \[-c_{4}\Big{[}a^{\mu}_{\rm ae}a^{\nu}_{\rm ae}-\frac{1}{2}\left(g^ {\mu\nu}+4K^{\mu}_{\rm ae}K^{\nu}_{\rm ae}\right)|\mathbf{a}_{\rm ae}|^{2}\] \[-K^{\mu}_{\rm ae}K^{\nu}_{\rm ae}(\nabla_{\alpha}a^{\alpha}_{\rm ae })-2a^{\alpha}_{\rm ae}(\nabla_{\alpha}K^{(\mu}_{\rm ae})K^{\nu}_{\rm ae})\] \[+2a^{(\mu}_{\rm ae}K^{\nu)}_{\rm ae}\nabla_{\alpha}K^{\alpha}_{ \rm ae}+2K^{\alpha}_{\rm ae}(\nabla_{\alpha}a^{(\mu)}_{\rm ae})K^{\nu}_{\rm ae }\bigg{]}\,. \tag{12}\] Note that contracting Eq. (11) with \(K_{\rm ae\,\mu}\) now yields a trivial equality. ### Relaxed Einstein-AEther field equations To recast Eq. (10) into the form of a "relaxed" Einstein-AEther equation, we define the quantities \[\mathfrak{g}^{\mu\nu} \equiv \sqrt{-g}g^{\mu\nu}\,,\] \[H^{\mu\alpha\nu\beta} \equiv \mathfrak{g}^{\mu\nu}\mathfrak{g}^{\alpha\beta}-\mathfrak{g}^{ \alpha\nu}\mathfrak{g}^{\beta\mu}\,, \tag{13}\] and use the identity, valid for any spacetime, \[H^{\mu\alpha\nu\beta}{}_{,\alpha\beta}=(-g)(2G^{\mu\nu}+16\pi t^{\mu\nu}_{LL} )\,, \tag{14}\] where \(t^{\mu\nu}_{LL}\) is the Landau-Lifshitz pseudotensor. We next define the gravitational field \(h^{\mu\nu}\) by the equation \[\mathfrak{g}^{\mu\nu}\equiv\eta^{\mu\nu}-h^{\mu\nu}\,, \tag{15}\] and impose the "Lorenz" or harmonic gauge condition \[h^{\mu\nu}{}_{,\nu}=0\,. \tag{16}\] Substituting Eqs. (10), (15) and (16) into (14), we can recast the field equation (10) into the form \[\Box_{\eta}h^{\mu\nu}=-16\pi G_{0}\tau^{\mu\nu}\,, \tag{17}\] where \(\Box_{\eta}\) is the flat spacetime d'Alembertian with respect to \(\eta_{\mu\nu}\), and where \[\tau^{\mu\nu}=(-g)\left(T^{\mu\nu}-K^{\mu}_{\rm ae}K^{\nu}_{\rm ae \,\alpha}T^{\alpha}_{\rm ae}\right)+(-g)\left(t^{\mu\nu}_{LL}+t^{\mu\nu}_{H}\right)\] \[\qquad\qquad+\frac{1}{8\pi G_{0}}(-g)S^{\mu\nu}\,, \tag{18}\] where \(t^{\mu\nu}_{H}\) is the Harmonic pseudotensor (see Eqs. (12) and (13) of [50] for explicit formulae for \((-g)t^{\mu\nu}_{LL}\) and \((-g)t^{\mu\nu}_{H}\)). ## IV Formal structure of the near-zone fields ### Metric in terms of the fields The next task will be to solve these equations iteratively in a post-Newtonian expansion in the near-zone, i.e. 
within one characteristic gravitational wavelength \(\lambda\) of the center of mass of the system, in terms of a small parameter \(\epsilon\sim v^{2}\sim G_{0}m/r\), where \(v\), \(r\) and \(m\) are the characteristic velocities, separations and masses of the bodies in the system. The strong-field internal gravity effects of each body will be encoded in expressions for the energy-momentum quantities \(T^{\mu\nu}\) and \(T^{\nu}_{\rm ae}\). We follow [55] (hereafter referred to as PWI) by defining a simplified notation for the field \(h^{\mu\nu}\): \[\tilde{N} \equiv h^{00}\sim O(\epsilon)\,,\] \[\tilde{K}^{j} \equiv h^{0j}\sim O(\epsilon^{3/2})\,,\] \[\tilde{B}^{jk} \equiv h^{jk}\sim O(\epsilon^{2})\,,\] \[\tilde{B} \equiv h^{jj}\equiv\sum_{j}h^{jj}\sim O(\epsilon^{2})\,. \tag{19}\] We assume that the coordinate system is at rest with respect to the mean rest frame of the universe that is singled out by the asymptotic value of the AEther field \(K^{\mu}_{\rm ae}\). This implies that the asymptotic values of the spatial components vanish, and that therefore, within the near zone, they behave as \[\tilde{K}^{j}_{\rm ae}\sim O(\epsilon^{3/2})\,. \tag{20}\] Later, when we have the equations of motion and gravitational wave signals in hand, we will be able to transform them to a frame in which the system is at rest, using a suitably expanded Lorentz transformation combined with a gauge transformation (often called a "post-Galilean" transformation). From the constraint on the norm of \(K^{\mu}_{\rm ae}\) it follows that \(K^{0}_{\rm ae}\) can be expressed in terms of the variables of Eqs. (IV.1) and (20): \[K^{0}_{\rm ae} =1+\frac{\epsilon}{4}\tilde{N}+\frac{\epsilon^{2}}{4}\left( \tilde{B}-\frac{3}{8}\tilde{N}^{2}\right)+\frac{\epsilon^{3}}{16}\bigg{[} \tilde{N}\tilde{B}+\frac{7}{8}\tilde{N}^{3}\] \[+8\tilde{K}^{j}_{\rm ae}\tilde{K}^{j}_{\rm ae}-16\tilde{K}^{j} \tilde{K}^{j}_{\rm ae}+4\tilde{K}^{j}\tilde{K}^{j}\bigg{]}+O(\epsilon^{4})\,. \tag{21}\] The harmonic gauge condition becomes \(\tilde{N}_{,0}+\tilde{K}^{j}_{,j}=0\) and \(\tilde{K}^{j}_{,0}+\tilde{B}^{jk}_{,k}=0\). Hereafter we do not distinguish between covariant and contravariant components of spatial indices, which are assumed to be raised or lowered using the Minkowski metric, whose spatial components are \(\delta_{ij}\). In the equations of motion to 2.5PN order, we need to determine the components of the physical metric and \(\tilde{K}^{j}_{\rm ae}\) to the following orders: \(g_{00}\) to \(O(\epsilon^{7/2})\), \(g_{0j}\) to \(O(\epsilon^{3})\), \(g_{jk}\) to \(O(\epsilon^{5/2})\), and \(\tilde{K}^{j}_{\rm ae}\)to \(O(\epsilon^{3})\). From the definitions (3.7) and (3.9), one can invert to find \(g_{\mu\nu}\) in terms of \(h^{\mu\nu}\) and \(K^{j}\) to the appropriate order in \(\epsilon\), as in PWI, Eq. (4.2). Expanding to the required order, we find, \[g_{00} =-1+\frac{\epsilon}{2}\tilde{N}+\frac{\epsilon^{2}}{8}\left(4 \tilde{B}-3\tilde{N}^{2}\right)\] \[\quad+\frac{\epsilon^{3}}{16}\left(5\tilde{N}^{3}-4\tilde{N} \tilde{B}+8\tilde{K}^{j}\tilde{K}^{j}\right)+O(\epsilon^{4})\,,\] \[g_{0j} =-\epsilon^{3/2}\tilde{K}^{j}+\frac{\epsilon^{5/2}}{2}\tilde{N} \tilde{K}^{j}+O(\epsilon^{7/2})\,,\] \[g_{jk} =\delta^{jk}\left\{1+\frac{\epsilon}{2}\tilde{N}-\frac{\epsilon^ {2}}{8}\left(\tilde{N}^{2}+4\tilde{B}\right)\right\}\] \[\quad+\epsilon^{2}\tilde{B}^{jk}+O(\epsilon^{3})\,,\] \[(-g) =1+\epsilon\tilde{N}-\epsilon^{2}\tilde{B}+O(\epsilon^{3})\,. 
\tag{4.4}\] ### Change of field variables We can now use these definitions to express the field equations and the _E_ther equation to the required PN order in terms of \(\tilde{N}\), \(\tilde{K}^{j}\), \(\tilde{B}_{jk}\), \(\tilde{B}\) and \(\tilde{K}^{j}_{\rm ae}\). For example, the components of the combination \(t^{\mu\nu}_{LL}+t^{\mu\nu}_{H}\) have the same form as the components of \(\Lambda^{\mu\nu}\) found in Eq. (4.4) of PWI. However, despite the fact that the _E_ther energy-momentum tensor \(S^{\mu\nu}\) is formally quadratic and higher-order in the fields, the fact that \(K^{0}_{\rm ae}=1\) at lowest order implies that the fields \(\tilde{N}\), \(\tilde{K}^{j}\), \(\tilde{K}^{j}_{\rm ae}\), \(\tilde{B}^{jk}\) and \(\tilde{B}\) can contribute _linearly_ to the effective source of the relaxed field equation. Even worse, the function \(\tilde{N}\) contributes to \(S^{00}\) at _Newtonian_ order. At purely linear order in the fields, the vacuum versions of the relaxed field equations take the form for the metric fields, \[\Box\tilde{N} =\frac{1}{2}c_{14}\left[\nabla^{2}(\tilde{N}+\epsilon\tilde{B})- 4\epsilon(\tilde{K}^{j}_{\rm ae,0j}-\tilde{K}^{j}_{,0j})\right]\,,\] \[\Box\tilde{K}^{j} =\frac{1}{2}c_{2}\left[3\tilde{N}_{,0j}+4\tilde{K}^{k}_{\rm ae,jk}-\epsilon\tilde{B}_{,0j}\right]\,,\] \[\Box\tilde{B}^{jk} =-\frac{1}{2}c_{2}\delta^{jk}\left[3\tilde{N}_{,00}+4\tilde{K}^{ k}_{\rm ae,k0}-\epsilon\tilde{B}_{,00}\right]\,, \tag{4.5}\] and for the _E_ther field, \[c_{1}\left[\nabla^{2}\left(\tilde{K}^{j}_{\rm ae}-\tilde{K}^{j} \right)-\left(\tilde{K}^{k}_{\rm ae}-\tilde{K}^{k}\right)_{,kj}\right]\] \[\quad=-\frac{1}{4}c_{14}\left[\tilde{N}_{,0j}+\epsilon\tilde{B}_{,0j}-4\epsilon\left(\tilde{K}^{j}_{\rm ae}-\tilde{K}^{j}\right)_{,00}\right]\] \[\quad\quad-\frac{1}{4}c_{2}\left[3\tilde{N}_{,0j}+4\tilde{K}^{k} _{\rm ae,kj}-\epsilon\tilde{B}_{,0j}\right]\,. \tag{4.6}\] In Appendix A, we will study the wavelike solutions of these equations as an alternative method to verify the speeds and polarizations of waves derived in the literature [16]. These coupled, linear-order terms complicate the iteration procedure that is part of the post-Minkowskian method in general relativity or scalar-tensor theories, which rely upon the contributions to the right-hand-side of the relaxed equations being quadratic and higher in the small field quantities. However, it turns out that a suitable change of variables eliminates these linear terms to the desired 2.5PN order; details are given in Appendix B. 
This transformation is given by \[\tilde{N} =N+\frac{\epsilon c_{14}}{2-c_{14}}B+\frac{\epsilon}{v_{L}^{2}} \dot{R}+\frac{\epsilon^{2}c_{14}}{2(2-c_{14})v_{L}^{2}}\ddot{X}_{B}\] \[\quad+\frac{\epsilon^{2}}{12v_{L}^{4}}\left(\stackrel{{ (3)}}{{Y}}_{R}-c_{1}v_{W}^{2}\stackrel{{(3)}}{{Y}}_{Kae}\right)+O (\epsilon^{3})\,,\] \[\tilde{K}^{j} =K^{j}-R_{,j}-\frac{\epsilon c_{14}}{2(2-c_{14})}\dot{X}_{B,j}\] \[\quad-\frac{\epsilon}{12v_{L}^{2}}\left(\ddot{Y}_{R,j}-c_{1}v_{L} ^{2}W_{T}\dot{Y}_{Kae,j}\right)+O(\epsilon^{2})\,,\] \[\tilde{B}^{jk} =B^{jk}+\delta^{jk}\bigg{[}\dot{R}+\frac{\epsilon c_{14}}{2(2-c_{1 4})}\ddot{X}_{B}\] \[\quad+\frac{\epsilon}{12v_{L}^{2}}\left(\stackrel{{ (3)}}{{Y}}_{R}-c_{1}v_{L}^{2}W_{T}\stackrel{{(3)}}{{Y}}_{Kae} \right)\bigg{]}+O(\epsilon^{2})\,,\] \[\tilde{K}^{j}_{\rm ae} =K^{j}_{\rm ae}+K^{j}+\frac{1}{2c_{14}}\left(W_{L}R_{,j}+c_{1}W_{T }X_{Kae,j}\right)\] \[\quad+\frac{\epsilon}{4}\frac{W_{L}}{2-c_{14}}\dot{X}_{B,j}+\frac{ \epsilon}{24c_{14}v_{L}^{2}}\left(W_{L}\ddot{Y}_{R,j}\right.\] \[\quad\quad+c_{1}v_{L}^{2}(1-W_{L})W_{T}\ddot{Y}_{Kae,j}\right)+O (\epsilon^{2})\,, \tag{4.7}\] where \[v_{T}^{2} \equiv\frac{c_{1}}{c_{14}}\,,\] \[v_{L}^{2} \equiv\frac{c_{2}(2-c_{14})}{c_{14}(2+3c_{2})}\,, \tag{4.8}\] are the propagation speeds of the transverse and longitudinal waves of the _E_ther field (Appendix A), and where \[W_{T}\equiv 1-\frac{1}{v_{T}^{2}}\,,\quad W_{L}\equiv\left(1-\frac{c_{14}}{2} \right)\left(1-\frac{1}{v_{L}^{2}}\right)\,. \tag{4.9}\] Here and for future use, we define an array of "superpotentials" \(X\), superduperpotentials \(Y\) and "megasuperpotentials" \(Z\), defined by \[\nabla^{2}X_{N} \equiv 2N\,,\quad\nabla^{2}Y_{N}\equiv 12X_{N}\,,\quad\nabla^{2}Z_{N} \equiv 30Y_{N}\,,\] \[\nabla^{2}X_{Kae} \equiv 2K^{k}_{\rm ae,k}\,,\nabla^{2}Y_{Kae}\equiv 12X_{Kae}\,,\] \[\nabla^{2}Z_{Kae}\equiv 30Y_{Kae}\,,\] \[\nabla^{2}X_{B} \equiv 2B\,,\nabla^{2}Y_{B}\equiv 12X_{B}\,,\nabla^{2}Z_{B} \equiv 30Y_{B}\,, \tag{4.10}\] along with the superpotential combination, \[R\equiv\frac{1}{4}c_{14}\dot{X}_{N}-c_{1}X_{Kae}\,, \tag{4.11}\] and its own supersuperpotential defined by \(\nabla^{2}Y_{R}=12R\). The harmonic gauge conditions \(\tilde{N}_{,0}+\tilde{K}^{j}_{,j}=0\) and \(\tilde{K}^{j}_{,0}+\tilde{B}^{jk}_{,k}=0\) imply gauge conditions in the new variables given by \[K^{j}_{,j}+\left(1-\frac{c_{14}}{2}\right)\dot{N} =-2c_{1}K^{j}_{\text{ae},j}-\epsilon c_{1}W_{T}\ddot{X}_{\text{ae} }+O(\epsilon^{2})\,,\] \[\dot{K}^{j}+B^{jk}{}_{,k} =O(\epsilon^{2})\,. \tag{4.12}\] ### Final relaxed Einstein-AEther equations In terms of the new variables, the relaxed Einstein-AEther equations take the form \[\left(1-\frac{1}{2}c_{14}\right)\Box N =-16\pi G_{0}\tau^{00}+O(\rho\epsilon^{3})\,, \tag{4.13a}\] \[\Box K^{j} =-16\pi G_{0}\tau^{0j}+O(\rho\epsilon^{5/2})\,,\] (4.13b) \[\Box B^{jk} =-16\pi G_{0}\tau^{jk}+O(\rho\epsilon^{2})\,,\] (4.13c) \[\Box B =-16\pi G_{0}\tau^{kk}+O(\rho\epsilon^{3})\,,\] (4.13d) \[c_{1}\Box^{*}K^{j}_{\text{ae}} =8\pi G_{0}\tau^{j}_{\text{ae}}+O(\rho\epsilon^{5/2})\,, \tag{4.13e}\] where \(\Box^{*}\equiv\nabla^{2}-v_{T}^{-2}\partial_{0}^{2}\), and where \[\tau^{\mu\nu}\equiv(-g)T^{\mu\nu}_{T}+(16\pi G_{0})^{-1}\Lambda^ {\mu\nu}_{T}\,,\] \[\tau^{j}_{\text{ae}}\equiv T^{j}_{\text{ae}\,T}+(8\pi G_{0})^{-1 }\Lambda^{j}_{\text{ae}}\,. \tag{4.14}\] In obtaining these equations, we made use of the spatial components of the AEther field equation (3.5) to make further simplifications of the \(S^{\mu\nu}\). 
Pulling all the matter contributions together gives the total matter source tensor, \[T^{00}_{T} =T^{00}-(K^{0}_{\text{ae}})^{2}K_{\text{ae}\,\mu}T^{\mu}_{\text{ae}}\] \[\quad+2(\tilde{K}^{j}_{\text{ae}}-\tilde{K}^{j})(T^{j}_{\text{ae} }+\tilde{K}^{j}_{\text{ae}}K_{\text{ae}\,\mu}T^{\mu}_{\text{ae}})\,,\] \[T^{0j}_{T} =T^{0j}+K^{0}_{\text{ae}}T^{j}_{\text{ae}}\,,\] \[T^{jk}_{T} =T^{jk}+2\tilde{K}^{j}_{\text{ae}\,T}(T^{k}_{\text{ae}}+\tilde{ K}^{j}_{\text{ae}}\tilde{K}^{k}_{\text{ae}}K_{\text{ae}\,\mu}T^{\mu}_{\text{ae}}\,,\] \[T^{j}_{\text{ae}\,T} =T^{j}_{\text{ae}}+\tilde{K}^{j}_{\text{ae}}K_{\text{ae}\,\mu}T^{ \mu}_{\text{ae}}\,, \tag{4.15}\] where \(K_{\text{ae}\,\mu}\) is a four vector with components \((K_{\text{ae}\,0},\tilde{K}_{\text{ae}\,j})\). Many of these contributions to \(T^{\mu\nu}_{T}\) are of higher PN order than we need, but we will address this when we introduce our "compact point mass model" for the matter sources. We will also transform all potentials in Eq. (4.15) such as \(\tilde{K}^{j}\) and \(\tilde{K}^{j}_{\text{ae}}\) to our new potentials using Eqs. (4.7). The field contributions to \(S^{\mu\nu}\) have been combined with the Landau-Lifshitz and Harmonic pseudotensors to produce the total \(\Lambda^{\mu\nu}_{T}\) and \(\Lambda^{j}_{\text{ae}}\), given by \[\Lambda^{00}_{T} =-\frac{7}{16}(2-c_{14}){N_{,j}}^{2}\] \[\quad+\frac{7}{16}(2-c_{14})NN{}_{,j}{}^{2}+\frac{1}{4}(1-4c_{14 })N_{,j}B_{,j}-\frac{1}{2}(2-c_{14})B_{jk}N_{,jk}-\frac{1}{4}\left[(2-c_{14})(2 +3c_{14})-c_{14}W_{L}\right]N\ddot{N}\] \[\quad+\frac{5}{16}\left\{(2-c_{14})(1-2c_{14})+c_{14}W_{L}\right\} \dot{N}^{2}+(1+c_{14})N_{,j}\dot{K}^{j}-\frac{1}{2}(4-5c_{14})\dot{N}_{,j}K^{j} +\frac{1}{2}K^{j}_{,k}(K^{j}_{,k}+3K^{k}_{,j})\] \[\quad+\frac{3}{2}c_{14}N_{,j}\dot{K}^{j}_{\text{ae}}+\frac{3}{2}c _{14}\dot{N}_{,j}K^{j}_{\text{ae}}-\frac{1}{2}c_{14}N\nabla^{2}B+2c_{14}K^{k}_{ \text{ae},j}(K^{j}_{,k}+K^{j}_{\text{ae},k})+6c_{1}K^{[j,k]}_{\text{ae}}K^{[j,k]}_{\text{ae}}+\frac{1}{2}c_{14}\nabla^{2}(K^{j}K^{j})\] \[\quad+\frac{1}{2}\frac{c_{14}}{c_{14}}\left(1-2c_{14}-W_{L} \right)(3c_{14}\dot{N}-2c_{1}K^{j}_{\text{ae},j})K^{j}_{\text{ae},j}-2c_{1}(1+ W_{T})K^{j}_{\text{ae}}K^{k}_{\text{ae},kj}-2c_{1}(1+W_{T})K^{j}K^{k}_{\text{ae},kj}\] \[\quad+\frac{1}{4}\left(4(2-c_{14})+3W_{L}\right)\dot{N}_{,j}R_{,j} +\frac{3}{4}c_{1}W_{T}\dot{N}_{,j}X_{Kae,j}-\frac{c_{1}}{c_{14}}W_{L}(1+W_{T})K ^{k}_{\text{ae},kj}R_{,j}\] \[\quad-\frac{c_{1}^{2}}{c_{14}}W_{T}(1+W_{T})K^{k}_{\text{ae},kj}X_ {Kae,j}-\frac{5}{4}W_{L}\dot{R}\nabla^{2}N-\frac{3}{8}c_{1}W_{T}\dot{X}_{Kae} \nabla^{2}N+2R_{,j}\nabla^{2}K^{j}\] \[\quad-(W_{L}R_{,j}+c_{1}W_{T}X_{Kae,j})\nabla^{2}K^{j}_{\text{ae} }-\frac{1}{4}\left(4-2c_{14}-5W_{L}\right)\nabla^{2}(N\dot{R})+\frac{3}{8}c_{1 }W_{T}\nabla^{2}(N\dot{X}_{Kae})\] \[\quad-(2-c_{14})\nabla^{2}\left(K^{j}R_{,j}\right)+\nabla^{2}\left[ \left(K^{j}+K^{j}_{\text{ae}}\right)(W_{L}R_{,j}+c_{1}W_{T}X_{Kae,j})\right]\] \[\quad+\frac{1}{2}(2-c_{14})\nabla^{2}\left(R_{,j}R_{,j}\right)+ \frac{1}{4c_{14}}\nabla^{2}\left[W_{L}R_{,j}+c_{1}W_{T}X_{Kae,j}\right]^{2}+O( \rho\epsilon^{3})\,, \tag{4.16a}\] \[\Lambda^{0j}_{T} =N_{,k}\left[(1-c_{2})K^{k}_{,j}-K^{j}_{,k}\right]+\frac{3}{8}(2+3 c_{2})\dot{N}N_{,j}+\frac{1}{4}c_{2}(6-3c_{14}-W_{L})N\dot{N}_{,j}-\frac{1}{2}c_{14} \nabla^{2}N(K^{j}+K^{j}_{\text{ae}})\] \[\quad+\frac{1}{2}N_{,k}\left[(c_{14}-2c_{2})K^{k}_{\text{ae},j}-c_ {14}K^{j}_{\text{ae},k}\right]-c_{2}N_{,jk}(K^{k}+K^{k}_{\text{ae}})-\frac{1}{2} 
\frac{c_{1}c_{2}}{c_{14}}(3-6c_{14}-2W_{L}-W_{T})NK^{k}_{\text{ae},kj}\] \[\quad-\frac{c_{2}}{2c_{14}}N_{,jk}(W_{L}R_{,k}+c_{1}W_{T}X_{Kae,k} )+\frac{1}{4c_{14}}\left(c_{2}-c_{14}\right)\nabla^{2}N\left(W_{L}R_{,j}+c_ {1}W_{T}X_{Kae,j}\right)\] \[\sigma \equiv T_{T}^{j}\,,\] \[\sigma^{jk} \equiv T_{T}^{jk}\,,\] \[\sigma_{\ae}^{j} \equiv T_{\ae}^{j}\,, \tag{4.17}\] and will express various potentials formally in terms of these densities. Later we will make a PN expansion of them, including compact body sensitivities, and iterate the potentials to include these effects. The gauge conditions (4.12) lead to useful conservation equations. Adding the time derivative of Eq. (4.13a) to the divergence of Eq. (4.13b), and making use of the d'Alembertian of the first of Eqs. (4.12) and the divergence of Eq. (4.13e), we obtain the equations, valid through 2PN order \[\tau^{0\nu}{}_{,\nu}-\tau_{\ae,j}^{j} =0\,,\] \[\tau^{j\nu}{}_{,\nu} =0\,. \tag{4.18}\] ### Near-zone field to 2.5PN order We now solve Eqs. (4.13) for field points within the near-zone. The formal solutions of Eqs. (4.13) consist of integrals of the source divided by \(|\mathbf{x}-\mathbf{x}^{\prime}|\) over the past harmonic "null" cone of the field point. These integrals divide into two distinct integrals, an inner integral out to a boundary where the null cone \(\mathcal{C}\) intersects the near-zone world tube of radius \(\mathcal{R}\sim\lambda\), and an outer integral over the remainder of the null cone. The retarded time of the inner integrals over the region \(\mathcal{N}\) can be expanded in powers of \(|\mathbf{x}-\mathbf{x}^{\prime}|\), leading to bounded integrals over a constant time hypersurface \(\mathcal{M}\), evaluated at the time \(t\) of the field point. The outer integrals over the rest of the null cone \(\mathcal{C}-\mathcal{N}\) are carried out using a special change of integration variables. For a detailed pedagogical description of this method, see Sec. 6.3 of [50]. Both the inner and outer integrals may individually depend on the radius \(\mathcal{R}\), but their sum cannot; in practice this means that one can evaluate each integral, keeping only terms that do not depend explicitly on \(\mathcal{R}\). 
The expansions of the inner integrals of the fields \(N\), \(K^{j}\), \(B^{jk}\) and \(K^{j}_{\rm ae}\) are then given by \[N_{\mathcal{N}}=\frac{2G_{0}}{2-c_{14}}\bigg{[}4\epsilon\int_{\mathcal{M}}\frac{\tau^{00}(t,\mathbf{x}^{\prime})}{|\mathbf{x}-\mathbf{x}^{\prime}|}d^{3}x^{\prime}+2\epsilon^{2}\partial_{t}^{2}\int_{\mathcal{M}}\tau^{00}(t,\mathbf{x}^{\prime})|\mathbf{x}-\mathbf{x}^{\prime}|d^{3}x^{\prime}-\frac{2}{3}\epsilon^{5/2}\stackrel{{(3)}}{{\mathcal{I}}}{}^{kk}(t)+\cdots\bigg{]}\,,\] with analogous expansions for \(K^{j}_{\mathcal{N}}\), \(B^{jk}_{\mathcal{N}}\) and \(K^{j}_{{\rm ae}\,\mathcal{N}}\) in terms of \(\tau^{0j}\), \(\tau^{jk}\) and \(\tau^{j}_{\rm ae}\). To express these solutions we define the Poisson potential to be \[P(f)\equiv\frac{1}{4\pi}\int_{\mathcal{M}}\frac{f(t,{\bf x}^{\prime})}{|{\bf x}-{\bf x}^{\prime}|}d^{3}x^{\prime}\,,\quad\nabla^{2}P(f)=-f\,.
\tag{4.21}\] We also define potentials based on the "densities" \(\sigma\), \(\sigma^{j}\), \(\sigma^{jk}\) and \(\sigma^{j}_{\rm ae}\): \[\Sigma(f) \equiv \int_{\mathcal{M}}\frac{\sigma(t,{\bf x}^{\prime})f(t,{\bf x}^{\prime})}{|{\bf x}-{\bf x}^{\prime}|}d^{3}x^{\prime}=P(4\pi\sigma f)\,,\] \[\Sigma^{j}(f) \equiv \int_{\mathcal{M}}\frac{\sigma^{j}(t,{\bf x}^{\prime})f(t,{\bf x}^{\prime})}{|{\bf x}-{\bf x}^{\prime}|}d^{3}x^{\prime}=P(4\pi\sigma^{j}f)\,,\] \[\Sigma^{jk}(f) \equiv \int_{\mathcal{M}}\frac{\sigma^{jk}(t,{\bf x}^{\prime})f(t,{\bf x}^{\prime})}{|{\bf x}-{\bf x}^{\prime}|}d^{3}x^{\prime}=P(4\pi\sigma^{jk}f)\,,\] \[\Sigma^{j}_{\rm ae}(f) \equiv \int_{\mathcal{M}}\frac{\sigma^{j}_{\rm ae}(t,{\bf x}^{\prime})f(t,{\bf x}^{\prime})}{|{\bf x}-{\bf x}^{\prime}|}d^{3}x^{\prime}=P(4\pi\sigma^{j}_{\rm ae}f)\,, \tag{4.22}\] along with the superpotentials \[X(f) \equiv \int_{\mathcal{M}}\sigma(t,{\bf x}^{\prime})f(t,{\bf x}^{\prime})|{\bf x}-{\bf x}^{\prime}|d^{3}x^{\prime}\,,\] \[Y(f) \equiv \int_{\mathcal{M}}\sigma(t,{\bf x}^{\prime})f(t,{\bf x}^{\prime})|{\bf x}-{\bf x}^{\prime}|^{3}d^{3}x^{\prime}\,, \tag{4.23}\] and their obvious counterparts \(X^{j}\), \(X^{jk}\), \(X^{j}_{\rm ae}\), and so on. Using Eq. (4.21), we can express the superpotentials defined in Eq. (4.10) in the form \[X_{N}=-2P(N)\,,\,Y_{N}=-12P(X_{N})\,,\,Z_{N}=-30P(Y_{N})\,, \tag{4.24}\] and so on. A number of potentials occur sufficiently frequently in the PN expansion that it is useful to define them specifically. There is the "Newtonian" potential, \[U\equiv\int_{\mathcal{M}}\frac{\sigma(t,{\bf x}^{\prime})}{|{\bf x}-{\bf x}^{\prime}|}d^{3}x^{\prime}=P(4\pi\sigma)=\Sigma(1)\,. \tag{4.25}\] The potentials needed for the post-Newtonian limit are: \[V^{j} \equiv \Sigma^{j}(1)\,,\quad V^{j}_{\rm ae}\equiv\Sigma^{j}_{\rm ae}(1)\,,\] \[\Phi^{jk}_{1} \equiv \Sigma^{jk}(1)\,,\quad\Phi_{1}\equiv\Sigma^{jj}(1)\,,\quad\Phi_{2}\equiv\Sigma(U)\,,\] \[X \equiv X(1)=-2P(U)\,,\] \[X^{j}_{\rm ae} \equiv X^{j}_{\rm ae}(1)=-2P(V^{j}_{\rm ae})\,. \tag{4.26}\] Useful 2PN potentials include: \[V^{j}_{2}\equiv\Sigma^{j}(U)\,,\quad V^{j}_{2{\rm ae}}\equiv\Sigma^{j}_{\rm ae}(U)\,,\] \[\Phi^{j}_{2}\equiv\Sigma(V^{j})\,,\quad \Phi^{j}_{2{\rm ae}}\equiv\Sigma(V^{j}_{\rm ae})\,,\] \[X_{1}\equiv X^{jj}(1)=-2P(\Phi_{1})\,,\quad X_{2}\equiv X(U)=-2P(\Phi_{2})\,,\] \[X^{j}\equiv X^{j}(1)=-2P(V^{j})\,,\quad Y\equiv Y(1)\,,\] \[P^{ij}_{2}\equiv P(U^{,i}U^{,j})\,,\quad P_{2}\equiv P^{ii}_{2}=\Phi_{2}-\frac{1}{2}U^{2}\,,\] \[G_{1}\equiv P(\dot{U}^{2})\,,\quad G_{2}\equiv P(U\ddot{U})\,,\] \[G_{3}\equiv-P(\dot{U}^{,k}V^{k})\,,\quad G_{3{\rm ae}}\equiv-P(\dot{U}^{,k}V^{k}_{\rm ae})\,,\] \[G_{4}\equiv P(V^{i,j}V^{j,i})\,,\quad G_{4{\rm ae}}\equiv P(V^{i,j}_{\rm ae}V^{j,i})\,,\] \[G_{4{\rm ae}}^{\rm ae}\equiv P(V^{i,j}_{\rm ae}V^{j,i}_{\rm ae})\,,\] \[G_{5}\equiv-P(\dot{V}^{k}U^{,k})\,,\quad G_{5{\rm ae}}\equiv-P(\dot{V}^{k}_{\rm ae}U^{,k})\,,\] \[G_{6}\equiv P(U^{,ij}\Phi^{ij}_{1})\,,\quad G_{7}^{i}\equiv P(U^{,k}V^{k,i})\,,\quad G_{7{\rm ae}}^{i}\equiv P(U^{,k}V^{k,i}_{\rm ae})\,,\] \[G_{8}^{i}\equiv P(U^{,i}\dot{U})\,,\quad G_{9}^{j}\equiv P(U\dot{U}^{,j})\,,\] \[H\equiv P(U^{,ij}P^{ij}_{2})\,.
\tag{4.27}\] ## V Expansion of near-zone fields to 2.5PN order In evaluating the contributions at each order, we shall use the following notation, \[N = N_{0}+\epsilon N_{1}+\epsilon^{3/2}N_{1.5}+\epsilon^{2}N_{2}+ \epsilon^{5/2}N_{2.5}\] \[\quad+O(\epsilon^{3})\,,\] \[K^{j} = K_{1}^{j}+\epsilon^{1/2}K_{1.5}^{j}+\epsilon K_{2}^{j}+\epsilon^{3 /2}K_{2.5}^{j}+O(\epsilon^{2})\,,\] \[B = B_{1}+\epsilon^{1/2}B_{1.5}+\epsilon B_{2}+\epsilon^{3/2}B_{2.5}+O( \epsilon^{2})\,,\] \[B^{ij} = B_{2}^{ij}+\epsilon^{1/2}B_{2.5}^{ij}+O(\epsilon)\,,\] \[K^{j}_{\rm ae} = K_{\rm ae1}^{j}+\epsilon^{1/2}K_{\rm ae1.5}^{j}+\epsilon K_{\rm ae 2}^{j}+\epsilon^{3/2}K_{\rm ae2.5}^{j}\] \[\quad+O(\epsilon^{2})\,,\] \[R = R_{1}+\epsilon^{1/2}R_{1.5}+\epsilon R_{2}+\epsilon^{3/2}R_{2.5}+O( \epsilon^{2})\,,\] \[X_{\rm Kae} = X_{\rm Kae1}+\epsilon^{1/2}X_{\rm Kae1.5}+\epsilon X_{\rm Kae2}\] \[\quad+\epsilon^{3/2}X_{\rm Kae2.5}+O(\epsilon^{2})\,,\] \[X_{B} = X_{B2}+\epsilon^{1/2}X_{B2.5}+O(\epsilon)\,,\] \[Y_{R} = Y_{R2}+\epsilon^{1/2}Y_{R2.5}+O(\epsilon)\,,\] \[Y_{\rm Kae} = Y_{\rm Kae2}+\epsilon^{1/2}Y_{\rm Kae2.5}+O(\epsilon)\,, \tag{5.1}\] where the subscript on each term indicates the level (1PN, 2PN, 2.5PN, etc.) of its leading contribution to the equations of motion, and where we also include the superpotential functions needed to construct the metric. ### Newtonian, 1PN and 1.5PN solutions At lowest order in the PN expansion, we only need to evaluate \(\tau^{00}=(-g)T^{00}+O(\rho\epsilon)=\sigma+O(\rho\epsilon)\) (recall that \(\sigma^{ii}\sim\epsilon\sigma\)). Since the density has compact support, the outer integral vanishes, and we find \[N_{0}=\frac{8G_{0}U}{2-c_{14}}\,. \tag{5.2}\] The metric to Newtonian order is given by the leading term in Eq. (4.4), \(g_{00}=-1+N/2\). Using Eq. (2.6) to relate \(G_{0}\) to \(G\), we obtain \(N_{0}=4GU\), \(g_{00}=-1+2GU\) and \(-g=1+4GU+O(\epsilon^{2})\). To the next PN order, we obtain, from Eqs. (4.14), (4.16) and (5.2), \[\tau^{00} = \sigma-\sigma^{ii}+4G\sigma U-\frac{7}{8\pi}G\nabla U^{2}+O(\rho \epsilon^{2})\,,\] \[\tau^{0j} = \sigma^{j}+O(\rho\epsilon^{3/2})\,,\] \[\tau^{jj} = \sigma^{ii}-\frac{1}{8\pi}G\nabla U^{2}+O(\rho\epsilon^{2})\,,\] \[\tau^{j}_{\rm ae} = \sigma^{j}_{\rm ae}+O(\rho\epsilon^{3/2})\,. \tag{100}\] Substituting into Eqs. (101), and calculating terms through 1.5PN order (e.g. \(O(\epsilon^{3/2})\) in \(N\)), we obtain, \[N_{1} = 7G^{2}U^{2}-4G\Phi_{1}+2G^{2}\Phi_{2}+2G\ddot{X}\,,\] \[K_{1}^{j} = 4\left(1-\frac{1}{2}c_{14}\right)GV^{j}\,,\] \[B_{1} = \left(1-\frac{1}{2}c_{14}\right)\left[G^{2}U^{2}+4G\Phi_{1}-2G^{ 2}\Phi_{2}\right]\,,\] \[K_{\rm ae1}^{j} = -2\left(1-\frac{1}{2}c_{14}\right)Gc_{1}^{-1}V_{\rm ae}^{j}\,,\] \[R_{1} = c_{14}G\dot{X}+2G\left(1-\frac{1}{2}c_{14}\right)X_{\rm ae,j}^{ j}\,,\] \[X_{K\rm ae1} = -\frac{2G}{c_{1}}\left(1-\frac{1}{2}c_{14}\right)X_{\rm ae,j}^{ j}\,, \tag{101}\] and \[N_{1.5} = -\frac{2}{3}G\stackrel{{(3)}}{{\mathcal{I}}^{kk}}\ - \frac{4}{3}Gx^{k}\ddot{\mathcal{I}}_{\rm ae}^{k}\,,\] \[K_{1.5}^{j} = 0\,,\] \[B_{1.5} = -2G\left(1-\frac{1}{2}c_{14}\right)\left[\stackrel{{ (3)}}{{\mathcal{I}}^{kk}}\ +2\ddot{\mathcal{I}}_{\rm ae}^{kk}\right]\,.\] \[K_{\rm ae1.5}^{j} = \frac{2G}{c_{1}v_{T}}\left(1-\frac{1}{2}c_{14}\right)\dot{ \mathcal{I}}_{\rm ae}^{j}\,,\] \[R_{1.5} = 0\,,\] \[X_{K\rm ae1.5} = 0\,. \tag{102}\] As in the GR case, it is straightforward to show that the outer integrals and surface terms give no \({\cal R}\)-independent terms. We now use Eq. 
(100) to construct the original fields \(\tilde{N}\), \(\tilde{B}\) etc., and then Eq. (100) to construct the metric to 1.5PN order. After applying a gauge transformation, \[x^{\mu^{\prime}}=x^{\mu}+\xi^{\mu}\,, \tag{103}\] with \[\xi_{0} =\frac{1}{2}\left(1+\frac{1}{2}c_{14}(3+v_{L}^{-2})\right)G\dot{X}\] \[\quad+\frac{1}{2}\left(1-\frac{1}{2}c_{14}\right)(3+v_{L}^{-2})GX_{\rm ae,k}^{k}\] \[\quad-\frac{2}{3}G\ddot{\mathcal{I}}^{kk}-G\ddot{\mathcal{I}}_{\rm ae}^{kk}-\frac{1}{3}Gx^{k}\dot{\mathcal{I}}_{\rm ae}^{k}\,,\] \[\xi_{j} =\frac{1}{3}G\mathcal{I}_{\rm ae}^{j}\,, \tag{104}\] we obtain the 1.5PN metric \[g_{00}=-1+2GU-2G^{2}U^{2}\,,\] \[g_{0j} =-4\left(1-\frac{c_{14}}{2}\right)GV^{j}-\frac{1}{2}\left(1-\frac{c_{14}}{2}(1-v_{L}^{-2})\right)G\dot{X}_{,j}\] \[\quad+\frac{1}{2}\left(1-\frac{c_{14}}{2}\right)(1-v_{L}^{-2})GX_{\rm ae,jk}^{k}\,,\] \[g_{jk} =\delta_{jk}\left(1+2GU\right)\,. \tag{105}\] In the absence of self-gravitating bodies, the source of the AEther field vanishes, or if the bodies are weakly self-gravitating, the AEther effects are of one PN order higher; in either case we can set \(X_{\rm ae}^{k}=0\), and read off the PPN parameters \[\gamma =1\,,\quad\beta=1\,,\] \[\alpha_{1} =-4c_{14}\,,\quad\alpha_{2}=-\frac{1}{2}c_{14}\left(1-\frac{1}{v_{L}^{2}}\right)\,, \tag{106}\] with the remaining parameters vanishing (see Eq. (5)). This is in agreement with standard results [11; 16]. The AEther field to 1.5PN order is given by \[\tilde{K}_{\rm ae}^{j} =-2\left(1-\frac{1}{2}c_{14}\right)Gc_{1}^{-1}V_{\rm ae}^{j}+4\left(1-\frac{1}{2}c_{14}\right)GV^{j}\] \[\quad+\frac{1}{2}W_{L}G\dot{X}_{,j}+\frac{1}{c_{14}}\left(1-\frac{c_{14}}{2}\right)(W_{L}-W_{T})\,GX_{\rm ae,jk}^{k}\] \[\quad+\frac{2}{c_{1}v_{T}}\left(1-\frac{c_{14}}{2}\right)G\dot{\mathcal{I}}_{\rm ae}^{j}\,. \tag{107}\] Notice that there are apparently no 1.5PN radiation reaction terms in the metric. As in GR, the 1.5PN terms proportional to \(\dddot{\mathcal{I}}^{kk}\) that appeared in \(N_{1.5}\) are pure gauge; but in addition the dipole and monopole AEther terms \(x^{k}\ddot{\mathcal{I}}_{\rm ae}^{k}\) and \(\ddot{\mathcal{I}}_{\rm ae}^{kk}\) are also pure gauge. This does not imply, however, that there is no dipole radiation in this theory; those effects will enter not via the metric but via the modified geodesic equation for compact self-gravitating bodies [54].
### \(B^{jk}\), \(K^{j}\) and \(K_{\rm ae}^{j}\) to 2.5PN order Substituting our solutions for the fields to 1.5PN order into Eqs. (100) and (101), we obtain \[\tau^{jk}=\sigma^{jk}+\frac{1}{4\pi}G\left(U^{,j}U^{,k}-\frac{1}{2}\delta^{jk} |\nabla U|^{2}\right)\,, \tag{110}\] with the solutions \[B_{2}^{jk} =4G\left(1-\frac{c_{14}}{2}\right)\left[\Phi_{1}^{jk}+GP_{2}^{jk}- \frac{G}{4}\delta^{jk}(2\Phi_{2}-U^{2})\right],\] \[B_{2.5}^{jk} =-2G\left(1-\frac{c_{14}}{2}\right)\left(\stackrel{{ (3)}}{{\mathcal{I}}^{jk}}\ +2\ddot{\mathcal{I}}_{\rm ae}^{(jk)}\right)\,. \tag{111}\] For \(K^{j}\), we substitute the lower-order solutions into \[\tau^{0j}=(1+4GU)\sigma^{j}+(16\pi G_{0})^{-1}\Lambda_{T}^{0j}\,, \tag{5.13}\] and use Eq. (4.19b) to obtain \[K_{2}^{j}= \ 8G^{2}\left(1-\frac{c_{14}}{2}\right)\left[V_{2}^{j}-(1-c_{14}) \Phi_{2}^{j}+UV^{j}+2(1-c_{2})G_{7}^{j}+\frac{1}{c_{1}}\left(c_{2}-\frac{c_{14 }}{2}\right)G_{7ae}^{j}-\frac{1}{4v_{L}^{2}}\left(UV_{\rm ae}^{j}+\Phi_{2ae}^{j }-V_{2ae}^{j}\right)\right.\] \[\ \ \ -2c_{2}P(U_{,jk}V^{k})+\frac{c_{2}}{c_{1}}P(U_{,jk}V_{\rm ae }^{k})+\frac{c_{2}}{4c_{14}}(W_{L}-W_{T})\left(UX_{{\rm ae},jk}^{k}+U_{,j}X_{{ \rm ae},k}^{k}+2P(U_{,j}V_{{\rm ae},k}^{k})\right)\] \[\ \ \ -\frac{1}{4}(W_{L}-W_{T})\left(\Sigma(X_{{\rm ae},jk}^{k})- \frac{c_{2}}{c_{14}}\Sigma_{,j}(X_{{\rm ae},k}^{k})\right)+\frac{c_{2}}{2c_{1 4}}(3-6c_{14}-2W_{L}-W_{T})P(UV_{{\rm ae},jk}^{k})\bigg{]}\] \[\ \ +G^{2}\bigg{[}2(6+9c_{2}+c_{2}W_{L})G_{8}^{j}+4c_{2}(6-3c_{14 }-W_{L})G_{9}^{j}+c_{14}W_{L}\Sigma(\dot{X}_{,j})+c_{2}W_{L}\left(U\dot{X}_{,j }+U_{,j}\dot{X}-\Sigma_{,j}(\dot{X})\right)\bigg{]}\] \[\ \ +2G\left(1-\frac{c_{14}}{2}\right)\ddot{X}^{j}\,,\] \[K_{2.5}^{j}= \ \frac{2}{9}\left(1-\frac{c_{14}}{2}\right)G\bigg{[}3x^{k}\, \overset{(4)}{\mathcal{I}}^{jk}-\overset{(4)}{\mathcal{I}}^{jkk}\ +2\epsilon^{mjk}\overset{(3)}{ \mathcal{J}}^{mk}\ +6x^{k}\,\overset{(3)}{\mathcal{I}}^{(jk)}_{\rm ae}\ -\overset{(3)}{\mathcal{I}}^{jkk} _{\rm ae}\ +\frac{18}{c_{1}v_{T}}G\left(c_{14}U\overset{j}{\mathcal{I}}^{j}_{ \rm ae}+c_{2}X_{,jk}\overset{k}{\mathcal{I}}^{k}_{\rm ae}\right)\bigg{]}\,. \tag{5.14}\] Finally, for \(K_{\rm ae}^{j}\), we substitute the 1.5PN solutions into \[\tau_{\rm ae}^{j}=\sigma_{\rm ae}^{j}+(8\pi G_{0})^{-1}\Lambda_{\rm ae}^{j}\,, \tag{5.15}\] and use Eq. 
(4.19d) to obtain \[K_{\rm ae2}^{j}= \ -\frac{2G^{2}}{c_{1}}\left(1-\frac{c_{14}}{2}\right)\bigg{[}c_{14 }V_{2}^{j}+c_{14}\Phi_{2}^{j}+(6c_{1}-c_{14})UV^{j}+2(3c_{1}-2c_{2})G_{7}^{j}+ \frac{1}{c_{1}}\left(2c_{2}+c_{1}-c_{14}\right)G_{7ae}^{j}\] \[\ \ \ -UV_{\rm ae}^{j}-2\Phi_{2ae}^{j}+3V_{2ae}^{j}+(3c_{1}-c_{14 }-2c_{2})\left(2P(U_{,jk}V^{k})-\frac{1}{c_{1}}P(U_{,jk}V_{\rm ae}^{k})\right)\] \[\ \ \ +\frac{1}{2}(W_{L}-W_{T})\left(\Sigma(X_{{\rm ae},jk}^{k})+ \frac{3c_{1}-c_{14}-2c_{2}}{2c_{14}}\Sigma_{,j}(X_{{\rm ae},k}^{k})\right)\] \[\ \ \ +\frac{1}{2c_{14}}\left(12c_{14}c_{1}+(3c_{1}-c_{14}+2c_{2} )(W_{L}-W_{T})\right)P(UV_{{\rm ae},jk}^{k})\] \[\ \ \ -\frac{1}{2c_{14}}\left(\frac{2c_{14}}{c_{2}}(3c_{1}-c_{14 })+(3c_{1}-c_{14}-2c_{2})(W_{L}-W_{T})\right)P(U_{j}V_{{\rm ae},k}^{k})\] \[\ \ \ +\frac{1}{4c_{14}}(W_{L}-W_{T})\left((3c_{1}-c_{14}+2c_{2} )UX_{{\rm ae},jk}^{k}-(3c_{1}-c_{14}-2c_{2})U_{,j}X_{{\rm ae},k}^{k}\right)\bigg{]}\] \[\ \ -\frac{G^{2}}{4c_{1}}\bigg{[}(8(2-c_{14})(3c_{2}+c_{14}-3c_{1 })+2(3c_{1}-c_{14}-4c_{2})W_{L})\,G_{8}^{j}\] \[\ \ \ +(36c_{2}+12c_{14}-24c_{1}(2-c_{14})+2(3c_{1}-c_{14}+2c_{2} )W_{L})\,G_{9}^{j}\] \[\ \ \ +(3c_{1}-c_{14}+2c_{2})W_{L}U\dot{X}_{,j}-(3c_{1}-c_{14}-2c_{2 })W_{L}\left(U_{,j}\dot{X}-\Sigma_{,j}(\dot{X})\right)+2c_{14}W_{L}\Sigma(\dot{ X}_{,j})\bigg{]}\] \[\ \ -\frac{G}{c_{1}}\left(1-\frac{c_{14}}{2}\right)(1-W_{T})\ddot{X} _{\rm ae}^{j}\,,\] \[K_{{\rm ae}2.5}^{j}= \ \frac{G}{c_{1}}\left(1-\frac{c_{14}}{2}\right)\left[\frac{1}{3v_{T}^{3 }}\left(r^{2}\overset{(3)}{\mathcal{I}}^{j}_{\rm ae}\ -2x^{k}\,\overset{(3)}{\mathcal{I}}^{jk}_{\rm ae}+\overset{(3)}{\mathcal{I}}^{ jkk}_{\rm ae}\ \right)-\frac{1}{c_{1}v_{T}}G\left(6c_{1}U\overset{j}{\mathcal{I}}^{j}_{\rm ae}-(3c_{1}-c_{14 }-2c_{2})X_{,jk}\overset{k}{\mathcal{I}}^{k}_{\rm ae}\right)\right]. \tag{5.16}\] ### \(N\) and \(B\) to 2.5PN order Using the 1.5PN metric (prior to the gauge transformation to the PPN gauge), we find that, to the required order, \[-g=1+4GU+\left[\left(1-\frac{c_{14}}{2}\right)(4G^{2}\Phi_{2}-8G \Phi_{1})+(6+c_{14})G^{2}U^{2}+\left(2-3c_{14}+\frac{c_{14}}{v_{L}^{2}}\right)G \ddot{X}\right.\] \[\qquad-2\left(1-\frac{c_{14}}{2}\right)\left(3-\frac{1}{v_{L}^{2 }}\right)G\dot{X}_{\text{ae},k}^{k}\bigg{]}+\left[\frac{2}{3}(2-3c_{14})G \stackrel{{(3)}}{{\mathcal{I}}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! 
\[+\frac{2}{c_{14}}\left(1-\frac{c_{14}}{2}\right)^{2}(W_{L}-W_{T})G^{2 }\bigg{[}X_{\text{ae},j}^{j}V_{\text{ae},j}^{j}+\Sigma_{\text{ae}}^{j}(X_{\text {ae},jk}^{k})-\Sigma_{\text{ae}}^{j}(X_{\text{ae},k}^{k})_{,j}\bigg{]}+\frac{4}{ c_{1}}\left(1-\frac{c_{14}}{2}\right)^{2}G^{2}\Sigma_{\text{ae}}^{j}(V_{\text{ae}}^{j})\] \[+\left(1-\frac{c_{14}}{2}\right)W_{L}G^{2}\left(\Sigma_{\text{ae} }^{j}(\dot{X}_{,j})-\Sigma_{\text{ae}}^{j}(\dot{X}_{,j})\right)+\frac{8}{c_{1}} \left(1-\frac{c_{14}}{2}\right)^{2}G^{2}\left[P(V_{\text{ae}}^{j}V_{\text{ae},jk}^{k})-2c_{1}P(V^{j}V_{\text{ae},jk}^{k})\right]\] \[+2\left(1-\frac{c_{14}}{2}\right)\left(6-12c_{14}-4W_{L}-W_{T}- \frac{6c_{2}}{c_{14}}(W_{L}-W_{T})\right)G^{2}P(\dot{U}V_{\text{ae},j}^{j})\] \[+\frac{2}{c_{1}}\left(1-\frac{c_{14}}{2}\right)G^{2}((12c_{2}-13 )c_{1}-6c_{2}+c_{14}))P(U\dot{V}_{\text{ae},j}^{j})\] \[+\frac{4}{c_{14}}\left(1-\frac{c_{14}}{2}\right)^{2}G^{2}\left(3 -6c_{14}-2W_{L}-W_{T}\right)P(V_{\text{ae},j}^{j}V_{\text{ae},k}^{k})\,,\] \[R_{2} =-c_{14}\bigg{[}7G^{2}P(U\dot{U})+G\dot{X}_{1}-\frac{1}{2}G^{2} \dot{X}_{2}-\frac{1}{12}G^{\dot{Y}}\bigg{]}-c_{1}X_{\text{ae}2}\,,\] \[c_{1}X_{K\text{ae}2} =2\left(1-\frac{c_{14}}{2}\right)G^{2}\bigg{[}8c_{2}P(U_{,j}V^{j} )-4\frac{c_{2}}{c_{1}}P(U_{,j}V_{\text{ae}}^{j})-\frac{c_{2}}{c_{14}}(W_{L}-W_ {T})UX_{\text{ae},j}^{j}+\frac{1}{2}W_{T}X_{\text{ae}}^{j}(U_{,j})+\frac{1}{2} W_{T}X(V_{\text{ae},j}^{j})\] \[\qquad-\frac{1}{2}(4-W_{T})X_{\text{ae}}^{j}(U)_{,j}-\frac{1}{2}( W_{L}-W_{T})X(X_{\text{ae},jk}^{k})_{,j}-2c_{14}X(V^{j})_{,j}+\frac{c_{14}}{c_{1}}X(V_{ \text{ae}}^{j})_{,j}\] \[\qquad+\frac{1}{c_{1}}(4c_{1}c_{2}+3c_{1}-c_{14}-2c_{2})P(UV_{ \text{ae},j}^{j})+\frac{c_{2}}{c_{14}}(W_{L}-W_{T})\Sigma(X_{\text{ae},j}^{j}) \bigg{]}-\frac{1}{6v_{T}^{2}}\left(1-\frac{c_{14}}{2}\right)G\ddot{Y}_{\text{ ae},j}^{j}\] \[+c_{2}W_{L}G^{2}\left(\Sigma(\dot{X})-U\dot{X}\right)-\frac{1}{2}c _{14}W_{L}G^{2}X(\dot{X}_{,j})_{,j}+4\left((c_{14}-5)c_{2}-c_{14}\right)G^{2}P( U\dot{U})\,,\] \[X_{B2} =-2\left(1-\frac{c_{14}}{2}\right)\left[G^{2}P(U^{2})-2GX_{1}+G^{2 }X_{2}\right],\] \[Y_{R2} =c_{14}G\dot{Y}+2G\left(1-\frac{c_{14}}{2}\right)Y_{\text{ae},j}^ {j}\,,\] \[Y_{K\text{ae}2} =Y_{\text{ae},j}^{j}\,, \tag{5.18}\] and \[N_{2.5} =-\frac{1}{30}G\bigg{\{}(4x^{kl}+2r^{2}\delta^{kl})\stackrel{{ (5)}}{{\mathcal{I}}}_{\text{${}^{kl}$}}\ -4x^{k}\stackrel{{(5)}}{{\mathcal{I}}}_{\text{${}^{klkl}$}}\ +\stackrel{{(5)}}{{\mathcal{I}}}_{\text{${}^{kklkl}$}}\bigg{\}}+ \frac{16}{3}\left(1-\frac{3c_{14}}{2}\right)G^{2}U\stackrel{{(3)}}{{ \mathcal{I}}}_{\text{${}^{kk}$}}^{kk}\] \[-4\left(1-\frac{c_{14}}{2}\right)G^{2}\left(\stackrel{{ (3)}}{{\mathcal{I}}}_{\text{${}^{jk}$}}\ +2\ddot{\mathcal{I}}_{\text{ae}}^{(jk)}\right)X _{,jk}+16(1-c_{14})G^{2}U\ddot{\mathcal{I}}_{\text{ae}}^{kk}-\frac{2}{15}Gr^{2 }x^{k}\stackrel{{(4)}}{{\mathcal{I}}}_{\text{${}^{k}$}}^{k}\ -\frac{6}{v_{T}^{3}}\dot{ \mathcal{I}}_{\text{ae}}^{j}G^{2}\dot{X}_{,j}\] \[-\frac{8}{v_{T}^{3}}\left(1-\frac{c_{14}}{2}\right)G^{2}\dot{ \mathcal{I}}_{\text{ae}}^{j}\left(2V^{j}+\frac{1}{2c_{14}}(1+W_{T})X_{\text{ae},jk}^{k}\right)-\frac{16}{3}G^{2}\ddot{\mathcal{I}}_{\text{ae}}^{j}\Sigma(x^{j}) -\frac{2}{3}\left(7+\frac{9}{v_{T}^{3}}\right)G^{2}\ddot{\mathcal{I}}_{\text{ae} }^{j}X_{,j}\,,\] \[B_{2.5} =-\frac{1}{9}\left(1-\frac{c_{14}}{2}\right)G\bigg{[}3r^{2} \stackrel{{(5)}}{{\mathcal{I}}}_{\text{${}^{kk}$}}\ -2x^{l}\stackrel{{(5)}}{{\mathcal{I}}}_{\text{${}^{kkl}$}}\ -8x^{l}\epsilon^{mkl}\stackrel{{(4)}}{{\mathcal{J}}}_{\text{${}^{mk}$}} 
+6\stackrel{{(3)}}{{M}}_{\text{${}^{kklkl}$}}+6r^{2}\stackrel{{ (4)}}{{\mathcal{I}}}_{\text{${}^{kk}$}}^{kk}\ +6x^{4}\stackrel{{(4)}}{{\mathcal{I}}}_{\text{${}^{kkl}$}}^{(kkl)}\ \bigg{]}\] \[+\left(1-\frac{c_{14}}{2}\right)G^{2}\bigg{[}\frac{2}{c_{1}v_{T}} \left((c_{14}-6c_{2})\left(\dot{\mathcal{I}}_{\text{${}^{2}$}}^{j}\dot{X}_{,j}+ \ddot{\mathcal{I}}_{\text{ae}}^{j}X_{,j}\right)+(2-c_{14})\dot{\mathcal{I}}_{ \text{ae}}^{j}X_{\text{ae},jk}^{k}\right)-\frac{2}{3}\ddot{\mathcal{I}}_{ \text{ae}}^{j}X_{,j}\bigg{]}\,,\] \[R_{2.5} =-\frac{1}{18}c_{14}Gr^{2}\bigg{[}\stackrel{{(4)}}{{ \mathcal{I}}}_{\text{${}^{kk}$}}\ +\frac{6}{5}x^{j}\stackrel{{(3)}}{{ \mathcal{I}}}_{\text{${}^{2}$}}^{j}\bigg{]}-c_{1}X_{K\text{ae}2.5}\,,\] \[c_{1}X_{K\text{ae}2.5} =-\frac{2}{9v_{T}}\left(1-\frac{c_{14}}{2}\right)G\bigg{[}c_{14}r^{2 }\left(\stackrel{{(3)}}{{\mathcal{I}}}_{\text{${}^{kk}$}}^{k}\ -\frac{3}{5}x^{j}\stackrel{{(3)}}{{ \mathcal{I}}}_{\text{${}^{2}$}}^{j}\right)-9G(c_{14}+2c_{2})\dot{\mathcal{I}}_{ \text{ae}}^{j}X_{,j}\bigg{]}\,,\] \[X_{B2.5} =-\frac{2}{3}\left(1-\frac{c_{14}}{2}\right)Gr^{2}\left[\stackrel{{ (3)}}{{\mathcal{I}}}_{\text{${}^{kk}$}}^{(k)}\ +2\ddot{\ Future prospects and concluding remarks We have applied post-Minkowskian theory to the Einstein-AEther theory, and demonstrated that, after a field transformation, the relaxed field equations can be put into a form that parallels that of general relativity, and that is suitable for obtaining solutions to high orders in a post-Newtonian expansion. As an application of the method, we obtained explicit solutions for the fields through 2.5PN order, in terms of Poisson-like potentials and superpotentials constructed from the matter densities. In a forthcoming publication we will use these results to obtain the equations of motion for compact binaries through 2.5PN order. We will use the prescription pioneered by Eardley [51] for treating gravitationally bound bodies in alternative theories of gravity, in which one assumes that each body's mass is a function of an invariant quantity constructed from the auxiliary field(s) of the theory, evaluated at the location of the body. For scalar-tensor theory (Eardley's original motivation) it is the scalar field itself; for Einstein-AEther theory, the conventional choice is the invariant \(\gamma\equiv-\tilde{K}^{\mu}u_{\mu}\), where \(u^{\mu}\) is the four-velocity of the body (the other possible invariant \(\tilde{K}^{\mu}\tilde{K}_{\mu}\) is unity by definition, and thus trivial). This results in a modified geodesic equation for each body, given by (see, eg. [54]) \[u_{A}^{\nu}\nabla_{\nu}\left[m_{A}u_{A\alpha}+m_{A}^{\prime} \tilde{K}^{\mu}\left(g_{\mu\alpha}+u_{A\mu}u_{A\alpha}\right)\right]\] \[\qquad\qquad=m_{A}^{\prime}u_{A\mu}\nabla_{\alpha}\tilde{K}^{\mu}\,, \tag{101}\] where \(m_{A}=m_{A}(\gamma)\), \(m_{A}^{\prime}\equiv dm_{A}/d\gamma\). This paper provides the ingredients needed to obtain the equations of motion to 2.5PN order. ###### Acknowledgements. This work was supported in part by the National Science Foundation, Grants No. PHY 19-09247 and PHY 22-07681. We are grateful for the hospitality of the Institut d'Astrophysique de Paris where part of this work was carried out. ## Appendix A Waelike solutions to the linearized vacuum equations Here we analyze the far-zone waves implied by the linearized equations (4.5) and (4.6) using an extension of the method described in Sec. 11.1 of [50] for decomposing waves in the far-away wave zone in general relativity. 
Far from the source we express each field in the generic form \[A=R^{-1}A_{0}(\tau,\mathbf{n})+O(R^{-2})\,, \tag{102}\] where \(\tau=t-R/v_{g}\), and \(\mathbf{n}=\mathbf{\nabla}R\). Then \[A_{,j}=-n^{j}\dot{A}_{0}/v_{g}R+O(R^{-2})\,,\] \[\square A=-\left(1-v_{g}^{-2}\right)\ddot{A}_{0}/R+O(R^{-2})\,, \tag{103}\] where a dot denotes \(d/d\tau\). We also decompose the various vector and tensor amplitudes into their irreducible pieces (see, eg. Box 5.7 of [50]), \[K_{0}^{j} =K_{0}n^{j}+K_{\rm T}^{j}\,,\] \[K_{\rm ae0}^{j} =K_{\rm ae0}n^{j}+K_{\rm aeT}^{j}\,,\] \[B_{0}^{jk} =\frac{1}{3}\delta^{jk}B_{0}+\left(n^{j}n^{k}-\frac{1}{3}\delta^ {jk}\right)B_{\rm LTF}\] \[\qquad+2n^{(j}B_{\rm T}^{k)}+B_{\rm TT}^{jk}\,, \tag{104}\] where the subscripts denote the transverse (T), longitudinal tracefree (LTF) and transverse traceless (TT) parts. Imposing harmonic gauge \(\dot{N}+K_{,j}^{j}=0\), and \(\dot{K}^{j}+B_{,k}^{jk}=0\), keeping the leading \(1/R\) amplitudes, and decomposing into irreducible parts leads to the four conditions \[K_{0} =v_{g}N_{0}\,,\] \[v_{g}K_{0} =\frac{1}{3}B_{0}+\frac{2}{3}B_{\rm LTF}\,,\] \[v_{g}K_{\rm T}^{j} =B_{\rm T}^{j}\,, \tag{105}\] where henceforth, we drop the dots. Under a gauge transformation \(x^{\alpha}\to x^{\alpha}+\zeta^{\alpha}\) with \[\zeta^{0} =R^{-1}\alpha(\tau,\mathbf{n})+O(R^{-2})\,,\] \[\zeta^{j} =R^{-1}\left[\beta(\tau,\mathbf{n})n^{j}+\beta_{\rm T}^{j}(\tau,\mathbf{ n})\right]+O(R^{-2})\,, \tag{106}\] the amplitudes undergo the changes \[N_{0} \to N_{0}+\dot{\alpha}+v_{g}^{-1}\dot{\beta}\,,\] \[K_{0} \to K_{0}+v_{g}^{-1}\dot{\alpha}+\dot{\beta}\,,\] \[K_{\rm T}^{j} \to K_{\rm T}^{j}+\dot{\beta}_{\rm T}^{j}\,,\] \[B_{0} \to B_{0}+3\dot{\alpha}-v_{g}^{-1}\dot{\beta}\,,\] \[B_{\rm LTF} \to B_{\rm LTF}+2v_{g}^{-1}\dot{\beta}\,,\] \[B_{\rm T}^{j} \to B_{\rm T}^{j}+v_{g}^{-1}\dot{\beta}_{\rm T}^{j}\,,\] \[B_{\rm TT}^{jk} \to B_{\rm TT}^{jk}\,,\] \[K_{\rm ae0} \to K_{\rm ae0}+\dot{\beta}\,,\] \[K_{\rm aeT}^{j} \to K_{\rm aeT}^{j}+\dot{\beta}_{\rm T}^{j}\,. \tag{107}\] The time component of the AEther field, \(K_{\rm ae}^{0}\) is gauge invariant to linear order. Substituting Eqs. (102) - (104) (but not the harmonic gauge conditions) into Eqs. (4.5) and (4.6) and decomposing into irreducible parts, we obtain the system of nine equations: \[(1-v_{g}^{2})N_{0}=\frac{c_{14}}{2}\left[N_{0}+B_{0}+4v_{g}(K_{\rm ae0}-K_{0}) \right]\,, \tag{108a}\] \[(1-v_{g}^{2})K_{0}=\frac{c_{14}}{2}v_{g}\left[N_{0}+B_{0}+4v_{g}(K_{ \rm ae0}-K_{0})\right]\,, \tag{46a}\] \[(1-v_{g}^{2})K_{\rm T}^{j}=2(c_{14}v_{g}^{2}-c_{1})(K_{\rm aeT}^{j} -K_{\rm T}^{j})\,,\] (46b) \[(1-v_{g}^{2})B_{0}=-\frac{3}{2}c_{2}\left[v_{g}^{2}(3N_{0}-B_{0}) -4v_{g}K_{\rm ae0}\right]\,,\] (46c) \[(1-v_{g}^{2})B_{\rm LTF}=0\,,\] (46d) \[(1-v_{g}^{2})B_{\rm JT}^{j}=0\,,\] (46e) \[(1-v_{g}^{2})B_{\rm TT}^{j}=0\,,\] (46f) \[(1-v_{g}^{2})B_{\rm TT}^{j}=0\,,\] (46g) \[(c_{1}-v_{g}^{2}c_{14})(K_{\rm aeT}^{j}-K_{\rm T}^{j})=0\,,\] (46h) \[c_{14}v_{g}\left[N_{0}+B_{0}+4v_{g}(K_{\rm ae0}-K_{0})\right]\] \[\qquad\qquad=-c_{2}\left[v_{g}(3N_{0}-B_{0})-4K_{\rm ae0}\right]\,. \tag{46i}\] It is straightforward to show that this system has three distinct eigenvalues for \(v_{g}^{2}\). _Case 1:_\(v_{g}=1\). In this case, \(B_{\rm LTF}\), \(B_{\rm T}^{j}\) and \(B_{\rm TT}^{jk}\) are unconstrained, and \(K_{\rm aeT}^{j}-K_{\rm T}^{j}=0\) (unless \(c_{4}=0\)). 
Combining the scalar parts of the gauge conditions (46) and the wave equations (46) we find that \(N_{0}=K_{0}=(B_{0}+2B_{\rm LTF})/3\) and \(K_{\rm ae0}=(3N_{0}-B_{0})/4\) and thus that \(2K_{\rm ae0}-B_{\rm LTF}=0\) We can then choose \(\alpha\) and \(\beta\) so that \(N_{0}\), \(K_{0}\) and \(B_{0}\) all vanish, and thus so that \(K_{\rm ae0}\) and \(B_{\rm LTF}\) vanish. Also we have that \(K_{\rm T}^{j}=K_{\rm aeT}^{j}=B_{\rm T}^{j}\); we can choose \(\beta_{\rm T}^{j}\) to make them all vanish. In the end, only the gauge invariant \(B_{\rm TT}^{jk}\) is unconstrained. This is a pure transverse traceless metric gravitational wave, with speed unity. It was the observational constraint on the speed of gravitational waves set by the event GW170817 and GRB170817 that led us to impose the constraint \(c_{1}+c_{3}=0\) in the first place. _Case 2:_\(v_{g}=(c_{1}/c_{14})^{1/2}=v_{T}\). In this case, \(K_{\rm aeT}^{j}\) is unconstrained, while \(B_{\rm LTF}=B_{\rm T}^{j}=B_{\rm TT}^{jk}=K_{\rm T}^{j}=0\). Examining the four scalar wave equations (46a), (46b), (46d) and (46i), we observe that the determinant of the linear system does not vanish, so that \(N_{0}=K_{0}=B_{0}=K_{\rm ae0}=0\). This is a pure transverse vector wave, with no metric perturbation, to linear order. _Case 3:_ For this final case, we must consider the five non-transverse scalar wave equations (46a), (46b), (46d), (46e) and (46i). Requiring the determinant of this system to vanish yields \(v_{g}=1\) (Case 1) plus a solution with speed \(v_{g}=v_{L}\) given by \[v_{L}^{2}=\frac{c_{2}(2-c_{14})}{c_{14}(2+3c_{2})}\,. \tag{47}\] The solutions are \(B_{\rm LTF}=B_{\rm T}^{j}=B_{\rm TT}^{jk}=K_{\rm T}^{j}=K_{\rm aeT}^{j}=0\), along with \[N_{0} =\frac{4c_{14}}{2-c_{14}}\frac{v_{L}}{1-v_{L}^{2}}K_{\rm ae0}\,,\] \[K_{0} =v_{L}N_{0}\,,\] \[B_{0} =3v_{L}K_{0}\,, \tag{48}\] with \(K_{\rm ae0}\) the unconstrained amplitude. This is a longitudinal \(\Delta\)Ether wave with accompanying longitudinal metric perturbations. ## Appendix B Transformation to new variables In this Appendix, we derive the transformation (4.7) that eliminates all terms linear in the fields \(\tilde{N}\), \(\tilde{K}^{j}\), \(\tilde{B}^{jk}\) and \(\tilde{K}^{j}_{\rm ae}\) apart from terms that consist of a leading d'Alembertian of the fields. Those linear terms are displayed in Eqs. (4.5) and (4.6). It is known from earlier work on Einstein-AEther theory that the d'Alembertian of \(N\) appears in the combination \((1-c_{14}/2)\Box N\), so that the coupling constant \(G_{0}\) is renormalized by that prefactor. That will be a constraint on the solution. From the structure of Eqs. (4.5) and (4.6) it is clear that the combination \(\tilde{K}^{j}_{\rm ae}-\tilde{K}^{j}\) is prevalent, so we will define \(\tilde{K}^{j}_{\rm ae}=K^{j}_{\rm ae}+K^{j}+\dots\). We want to remove all offending linear terms in the field equations through 2PN order. Finally, we will want to investigate the forms taken by the harmonic gauge conditions \(\tilde{N}_{,0}+\tilde{K}^{j}_{,j}=0\) and \(\tilde{K}^{j}_{,0}+\tilde{B}^{j}_{,k}=0\) in the new variables. Because the transformation of \(\tilde{N}\) will go through 2PN order, or to relative order \(\epsilon^{2}\), we will want to include terms at relative order \(\epsilon^{2}\) in the transformation of \(\tilde{K}^{j}\), even though that is a PN order higher in \(\tilde{K}^{j}\) than we actually need for the equations of motion; for completeness, we will also transform \(\tilde{K}^{j}_{\rm ae}\) to the same relative order. 
The second gauge condition does not impose additional conditions on the transformations. Accordingly we try a linear transformation of the form: \[\tilde{N} =N+\epsilon\big{(}a_{2}B+a_{3}\ddot{X}_{N}+a_{4}\dot{X}_{K}+a_{5} \dot{X}_{K\rm ae}\big{)}\] \[\quad+\epsilon^{2}\big{(}a_{6}\ddot{X}_{B}+a_{7}\stackrel{{ (\ref{eq:K_a_2})}}{{Y}}+a_{8}\stackrel{{(\ref{eq:K_a_2})}}{{Y}}+a_{9} \stackrel{{(\ref{eq:K_a_2})}}{{Y}}K_{\rm ae}\big{)}\,,\] \[\tilde{B}^{jk} =B^{jk}+\delta^{jk}\Big{[}b_{3}\ddot{X}_{N}+b_{4}\dot{X}_{K}+b_{5} \dot{X}_{K\rm ae}\] \[\quad+\epsilon\big{(}b_{6}\ddot{X}_{B}+b_{7}\stackrel{{ (\ref{eq:K_a_2})}}{{Y}}+b_{8}\stackrel{{(\ref{eq:K_a_2})}}{{Y}}+b_{9} \stackrel{{(\ref{eq:K_a_2})}}{{Y}}K_{\rm ae}\big{)}\bigg{]}\,,\] \[\tilde{K}^{j} =K^{j}+d_{3}\dot{X}_{N,j}+d_{4}{X}_{K,j}+d_{5}X_{K\rm ae},j\] \[\quad+\epsilon\big{(}d_{6}\dot{X}_{B,j}+d_{7}\stackrel{{ (\ref{eq:K_a_2})}}{{Y}}+d_{8}\ddot{Y}_{K,j}+d_{9}\ddot{Y}_{K\rm ae,j}\big{)}\] \[\quad+\epsilon^{2}\big{(}d_{10}\stackrel{{(\ref{eq:K_a 2})}}{{Y}}_{B,j}+d_{11}\stackrel{{(\ref{eq:K_a_2})}}{{Z}}_{N,j}+d_{ 12}\stackrel{{(\ref{eq:K_a_2})}}{{Z}}_{K,j}+d_{13}\stackrel{{ (\ref{eq:K_a_2})}}{{Z}}_{K\rm ae,j}\big{)}\] \[\tilde{K}^{j}_{\rm ae} =K^{j}_{\rm ae}+K^{j}+e_{3}\dot{X}_{N,j}+e_{4}{X}_{K,j}+e_{5}{X}_ {K\rm ae,j}\] \[\quad+\epsilon\big{(}e_{6}\dot{X}_{B,j}+e_{7}\stackrel{{ (\ref{eq:K_a_2})}}{{Y}}_{N,j}+e_{8}\ddot{Y}_{K,j}+e_{9}\ddot{Y}_{K\rm ae,j} \big{)}\] \[\quad+\epsilon^{2}\big{(}e_{10}\stackrel{{(\ref{eq:K_a 2})}}{{Y}}_{B,j}+e_{11}\stackrel{{(\ref{eq:K_a_2})}}{{Z}}_{N,j}+e_{12} \stackrel{{(\ref{eq:K_a_2})}}{{Z}}_{K,j}+e_{13}\stackrel{{ (\ref{eq:K_a_2})}}{{Z}}_{K\rm ae,j}\big{)}\,, \tag{49}\] where the various superpotentials are defined by Eqs. (4.10) \(\Box K^{j}=0\), and \(\Box^{*}K^{j}_{\rm ae}=0\), where \(\Box^{*}\equiv\nabla^{2}-v_{T}^{-2}\partial_{0}^{2}\). The resulting solution is given by Eq. (4.7). Should one wish to go to higher PN order, it is straightforward to extend the linear transformation (at the cost of introducing even more exotic superpotentials), to push the offending linear terms to even higher PN orders. In terms of the new variables, the harmonic gauge conditions become \[K^{j}_{,j} =-\left(1-\frac{c_{14}}{2}\right)\dot{N}-2c_{1}K^{j}_{\rm ae,j}\] \[\quad+\epsilon c_{1}\left(1-\frac{1}{v_{T}^{2}}\right)\left(\ddot {X}_{K\rm ae}+\frac{1}{12}\epsilon\stackrel{{(\ref{eq:2})}}{{Y}} _{K\rm ae}\right)\,,\] \[\dot{K}^{j}+B^{jk}_{,k}=0\,. \tag{101}\] The first gauge condition can be used to eliminate \(K^{j}_{,j}\) and its various superpotentials from the problem. By applying the inverse Laplacian to this equation and iterating, we obtain, to the required 2PN order, \[X_{K} =-(1-\frac{c_{14}}{2})\dot{X}_{N}-2c_{1}X_{Kae}+\epsilon\frac{c_{1 }}{6}W_{T}\ddot{Y}_{Kae}\,,\] \[Y_{K} =-(1-\frac{c_{14}}{2})\dot{Y}_{N}-2c_{1}Y_{Kae}\,. \tag{102}\] These relations have been used to eliminate \(K^{j}_{,j}\), \(X_{K}\) and \(Y_{K}\) from the transformations shown in Eq. (4.7). ## Appendix C Properties of Poisson Potentials Here we summarize some useful properties of Poisson potentials and superpotentials, defined in Sec. IV.5. 
These rely upon the general result, which can be obtained by integration by parts, \[P(\nabla^{2}g)=-g+\mathcal{B}_{P}(g)\,, \tag{103}\] where \(\mathcal{B}_{P}(g)\) denotes the boundary term, given by \[\mathcal{B}_{P}(g) \equiv\frac{1}{4\pi}\oint_{\partial\mathcal{M}}\biggl{[}\frac{g(t,\mathbf{x}^{\prime})}{|\mathbf{x}-\mathbf{x}^{\prime}|}\partial^{\prime}_{r} \ln(g(t,\mathbf{x}^{\prime})|\mathbf{x}-\mathbf{x}^{\prime}|)\biggr{]}_{r^{ \prime}=\mathcal{R}}\] \[\qquad\times\mathcal{R}^{2}d\Omega^{\prime}\,. \tag{104}\] The boundary terms must be carefully evaluated case by case to determine if any \(\mathcal{R}\)-independent terms survive. All \(\mathcal{R}\)_-dependent_ terms can be discarded. At 2.5PN order, none of these surface terms contribute. Some useful formulae that result from this include: \[P(|\nabla g|^{2}) =-\frac{1}{2}\{g^{2}+2P(g\nabla^{2}g)\}\,,\] \[P(\nabla g\cdot\nabla f) =-\frac{1}{2}\{fg+P(f\nabla^{2}g)+P(g\nabla^{2}f)\}\,,\] \[P(U) =-\frac{1}{2}X\,,\quad P(X)=-\frac{1}{12}Y\,,\] \[P(|\nabla U|^{2}) =-\frac{1}{2}U^{2}+\Phi_{2}\,,\] \[P(\nabla U\cdot\nabla\ddot{X}) =-\frac{1}{2}\{U\ddot{X}-\Sigma(\ddot{X})+2G_{2}\}\,. \tag{105}\] Other useful identities include \[\Sigma(x^{i}) =x^{i}U-X^{.i}\,,\] \[P(1) =-\frac{1}{6}r^{2}\,,\] \[P(x^{k}) =-\frac{1}{10}x^{k}r^{2}\,. \tag{106}\]
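The last identities are easy to check directly, since any Poisson potential satisfies \(\nabla^{2}P(f)=-f\). The short script below is an illustrative sanity check added here (it is not part of the original calculation and assumes only that the sympy library is available): it verifies that \(-r^{2}/6\) and \(-x^{k}r^{2}/10\) have the required Laplacians, and that \(\nabla^{2}(mr)=2m/r\), the point-mass analogue of the relation \(P(U)=-\tfrac{1}{2}X\).

```python
# Sketch (not from the paper): verify the closed-form Poisson identities
# quoted above, using nabla^2 P(f) = -f in flat 3D space.
import sympy as sp

x, y, z, m = sp.symbols('x y z m', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

def laplacian(f):
    """Flat-space Laplacian in Cartesian coordinates."""
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

# P(1) = -r^2/6  and  P(x^k) = -x^k r^2/10:
assert sp.simplify(laplacian(-r**2 / 6) + 1) == 0
assert sp.simplify(laplacian(-x * r**2 / 10) + x) == 0

# For a point mass, U = m/r and X = m r; Laplacian(X) = 2U is the relation
# behind P(U) = -X/2 (up to homogeneous solutions).
assert sp.simplify(laplacian(m * r) - 2 * m / r) == 0
```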
2310.09051
Bots, Elections, and Controversies: Twitter Insights from Brazil's Polarised Elections
From 2018 to 2023, Brazil experienced its most fiercely contested elections in history, resulting in the election of far-right candidate Jair Bolsonaro followed by the left-wing, Lula da Silva. This period was marked by a murder attempt, a coup attempt, the pandemic, and a plethora of conspiracy theories and controversies. This paper analyses 437 million tweets originating from 13 million accounts associated with Brazilian politics during these two presidential election cycles. We focus on accounts' behavioural patterns. We noted a quasi-monotonic escalation in bot engagement, marked by notable surges both during COVID-19 and in the aftermath of the 2022 election. The data revealed a strong correlation between bot engagement and the number of replies during a single day ($r=0.66$, $p<0.01$). Furthermore, we identified a range of suspicious activities, including an unusually high number of accounts being created on the same day, with some days witnessing over 20,000 new accounts and super-prolific accounts generating close to 100,000 tweets. Lastly, we uncovered a sprawling network of accounts sharing Twitter handles, with a select few managing to utilise more than 100 distinct handles. This work can be instrumental in dismantling coordinated campaigns and offer valuable insights for the enhancement of bot detection algorithms.
Diogo Pacheco
2023-10-13T12:18:23Z
http://arxiv.org/abs/2310.09051v1
# Bots, Elections, and Controversies: Twitter Insights from Brazil's Polarised Elections ###### Abstract From 2018 to 2023, Brazil experienced its most fiercely contested elections in history, resulting in the election of far-right candidate Jair Bolsonaro followed by the left-wing, Lula da Silva. This period was marked by a murder attempt, a coup attempt, the pandemic, and a plethora of conspiracy theories and controversies. This paper analyses 437 million tweets originating from 13 million accounts associated with Brazilian politics during these two presidential election cycles. We focus on accounts' behavioural patterns. We noted a quasi-monotonic escalation in bot engagement, marked by notable surges both during COVID-19 and in the aftermath of the 2022 election. The data revealed a strong correlation between bot engagement and the number of replies during a single day (\(r=0.66\), \(p<0.01\)). Furthermore, we identified a range of suspicious activities, including an unusually high number of accounts being created on the same day, with some days witnessing over 20,000 new accounts and super-prolific accounts generating close to 100,000 tweets. Lastly, we uncovered a sprawling network of accounts sharing Twitter handles, with a select few managing to utilise more than 100 distinct handles. This work can be instrumental in dismantling coordinated campaigns and offer valuable insights for the enhancement of bot detection algorithms. ## I Introduction Brexit in Europe, Trump in the U.S., and Bolsonaro in Brazil exemplify the escalating polarisation characterising political discourse worldwide [13]. Simultaneously, the pivotal role of online social platforms as primary mediums for campaigns, debates, and recruitment has come to the forefront [4; 46; 47]. The presence of bots in electoral campaigns has seen a year-on-year increase, coinciding with a growing academic focus [6]. This expanding body of literature delves into elections worldwide, encompassing the 2016 U.S. elections [5], the 2017 electoral contests in Germany [20] and in France [15], Italy in 2018 [35], Spain in 2019 [32], elections across numerous African countries during 2017-2018 [29], the Asia-Pacific region in 2019-2020 [44], and the 2019 European Parliament elections [33], to name a few. However, a more pressing concern emerges as the diffusion of misinformation disproportionately affects accounts depending on their political affiliations [5; 8; 22]. In 2016, Brazil experienced political turmoil with the impeachment of Dilma Rousseff. Subsequently, the years 2018 and 2022 witnessed Brazil's most hotly contested elections in its history, culminating in the elections of far-right candidate Jair Bolsonaro, followed by the left-wing figure Lula da Silva. This era was overshadowed by a murder attempt, a coup attempt, the pandemic, and a profusion of conspiracy theories and controversies, creating fertile ground for misinformation. Notably, Brazilian datasets have been at the forefront of developing computational methods for detecting propaganda [2], countering misinformation in advertisements [40], identifying low-credibility Brazilian websites [11], and fact-checking images [36]. WhatsApp groups, immensely popular in Brazil, played a pivotal role in monitoring misinformation spread during the 2018 elections [24]. Furthermore, substantial criticism has emerged regarding the use of misinformation as a political weapon during the COVID-19 pandemic [37] and culminating in Bolsonaro's ineligibility until 2030 [39].
In this study, we harness social media data and network analysis to discern and illuminate population-level political behaviour in Brazil. Our analysis tracks the evolution of political groups, from contentious competitors during campaigns to government and opposition blocks after elections. Our findings illuminate a transition from a pre-election phase marked by numerous polarised groups to a post-election phase in which these factions coalesce into government and opposition clusters. Our investigation uncovers a sprawling network of coordinated accounts that share Twitter handles. We also observe a pronounced surge in bot engagement, with noteworthy peaks during the pandemic and in the aftermath of the 2022 election. Furthermore, our data underscores a strong correlation between bot engagement and the number of replies. Finally, we identify anomalous days characterised by an unexpectedly high number of account creations. We employed the Twitter streaming API to monitor fourteen Brazilian presidential candidates during the 2018 elections, and thirteen candidates and twenty-seven political parties during the 2022 cycle. The data collection spanned from August 30, 2018, to March 14, 2023. The period encompasses 1,657 days, and the collection process remained active for 94% of this time. This comprehensive effort resulted in the acquisition of a vast dataset comprising 437 million tweets originating from 13 million distinct accounts. ## II Dissecting Twitter accounts ### Dynamics of political engagement The Twitter timeline depicted in Figure 1A reveals discernible shifts in political engagement. In 2018, there is a notable surge in activity leading up to the election day, followed by a decline in the period between the release of election results and the inauguration day (January 1, 2019). Subsequently, the volume of tweets and active users stabilises, punctuated by occasional peaks corresponding to significant events. The volume of tweets remained relatively low throughout 2020 until the onset of COVID. Subsequently, a series of peaks emerged, driven by discussions surrounding both the pandemic and political developments. The most significant surge occurred at the beginning of 2022, building steadily until the election day. A pattern akin to 2018 repeats as there is a decline between the election and the inauguration. Notably, 2022 also witnessed an abrupt surge coinciding with the coup attempt on January 8, 2023. Despite the somewhat consistent daily number of accounts engaging in the conversation, Figure 1B reveals an intriguing trend wherein more than three thousand new accounts join the Brazilian political discourse each day. This observation hints at an account churn rate of approximately 5%. Importantly, the introduction of new terms into the data collection on July 1, 2022, does not appear to have significantly influenced the influx of new accounts. In forthcoming research endeavours, we intend to delve deeper into the dynamics of accounts exiting the conversation. One plausible interpretation for this is that it may be driven by a substantial presence of bots within the Twitter ecosystem. These bots could potentially be replaced by new ones as they are suspended by the platform for policy violations. However, it is essential to note that in our current analysis, we did not assess bot activity among the incoming accounts. 
Figure 1: **A Political Tale from Tweets — (A) Timelines spanning two election cycles (2018/22), covering 1657 days and involving 437 million tweets (in orange) from 13 million accounts (in purple). Daily tweet counts are represented by the lighter lines, while the 30-day moving average is depicted in bold. Key events, such as the 2018 (#1) and 2022 (#7) election days, are highlighted for reference. (B) The cumulative plot illustrates a continuous increase in the number of distinct accounts joining the political conversation for the first time. The red vertical lines (1/July/22) indicate the beginning of the 2022 election cycle, while the red dashed line marks the election day.** ### Accounts' heterogeneous characteristics Consistent with observations in various social systems, our dataset underscores the presence of accounts exhibiting heavy-tailed properties. Figure 2 illustrates this phenomenon, wherein the majority of accounts contribute relatively few tweets, while a select few manage to produce an exceptionally high volume, nearing 100,000 tweets within the specified timeframe. It is worth noting that, despite Twitter's imposed limit of 2,400 tweets per day, some accounts employ strategies to circumvent this restriction, often through the adoption of abusive deletion behaviours [43]. The skewness observed in the distribution of tweet volume is mirrored in the distribution of active days. While the majority of accounts engage for just a few days, there exists a subset of accounts that remain active on a daily basis. However, it is imperative to acknowledge that the figures presented herein may be underrepresented, as they pertain exclusively to the tweets captured by our data collection. These extremes in user behaviour raise suspicions of automation. Prior research has highlighted the association of multiple handles (i.e., screen names) used by a single account or shared among multiple accounts with potentially malicious activities [25] and coordinated campaigns [31]. Figure 2 further elucidates this trend by illustrating the distribution of the number of distinct names employed by the accounts within our dataset. It is noteworthy that some accounts exhibit the use of more than a hundred distinct handles, amplifying concerns of potentially deceptive practices. Figure 2: **Heterogeneous Behaviour of Accounts** — (A) Distribution of the number of tweets per account, showcasing a long-tail pattern where a small number of accounts post close to 100k tweets. (B) Distribution of the number of active days, revealing exceptional accounts that tweet every day. (C) Distribution of the number of distinct screen names used by accounts, with some accounts utilizing more than 100 different names. ### Coordinated accounts Pacheco et al. [31] introduced a framework for identifying coordinated campaigns on Twitter, focusing on the presence of shared handles among multiple accounts. This entails different accounts, signified by distinct user_id, adopting the same perceived identity, denoted by identical screen_name. Importantly, this methodology enables the detection of coordinated groups of accounts, regardless of their automation level, extending the scope beyond bots. Figure 3 displays the outcomes of the coordination detection [31] within our dataset. The figure unveils numerous connected components, representing the coordinated groups, which vary in size.
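A minimal sketch of this handle-sharing construction is shown below. It is not the authors' implementation; it only illustrates the idea under the assumption that the collection yields (user_id, screen_name) observations together with per-account tweet counts: accounts become nodes, an edge joins two accounts that were seen with the same handle, and the connected components are the candidate coordinated groups, which can then be filtered by size or total tweet volume as in Figure 3.

```python
# Sketch (illustrative only): group accounts that shared a screen_name.
from collections import defaultdict
import networkx as nx

# Hypothetical inputs: (user_id, screen_name) pairs observed in the stream,
# and the number of tweets collected per account.
observations = [("u1", "patriot_br"), ("u2", "patriot_br"), ("u2", "lula13"),
                ("u3", "lula13"), ("u4", "neutral_user")]
tweets_per_account = {"u1": 12000, "u2": 300, "u3": 50, "u4": 7}

users_by_handle = defaultdict(set)
for user_id, handle in observations:
    users_by_handle[handle].add(user_id)

G = nx.Graph()
G.add_nodes_from(tweets_per_account)          # one node per account
for handle, users in users_by_handle.items():
    users = sorted(users)
    for i in range(len(users)):
        for j in range(i + 1, len(users)):    # connect accounts sharing a handle
            G.add_edge(users[i], users[j], handle=handle)

# Candidate coordinated groups: connected components, filtered as in Figure 3
# (at least 10 accounts, or more than 10,000 tweets in total).
for group in nx.connected_components(G):
    volume = sum(tweets_per_account.get(u, 0) for u in group)
    if len(group) >= 10 or volume > 10_000:
        print(sorted(group), volume)
```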
Notably, the figure emphasises the most suspicious groups, filtered either due to their size, with more than ten accounts involved, or because of their prolific engagement in the conversation, generating over ten thousand tweets. It is important to highlight that, in this study, we did not uncover groups involved in name squatting or hijacking, as previously reported in the literature [25; 31]. This discrepancy can be attributed to the distinctions between our dataset, which is domain-specific, and the datasets employed in prior research, which were domain-agnostic. Furthermore, our analysis refrains from delving into the specifics of the campaigns undertaken by these groups or their overall impact on the broader discourse. These facets will be addressed in future research. ### Bots engagement In this section, we explore the temporal evolution of bot engagement by utilising BotometerLite [49], a tool designed for the assessment of bot-like activities within social media data. It is essential to acknowledge that while bot detection algorithms are valuable, they are far from infallible [12]. These algorithms have faced criticism on various fronts, including concerns about their lack of transparency [16], the presence of elevated numbers of false negatives and false positives [18; 28], and issues of historical data [9]. To mitigate some of these criticisms, our analysis focuses on bot activity as a broad trend, avoiding specific account-level scrutiny or rigid threshold definitions. BotometerLite [49] operates by assessing a single tweet, specifically the _user profile object_ within a tweet, to assign a _botscore_ to the account responsible for that tweet. The botscore, Figure 3: **Screen Name Sharing Network** — Each node in the network represents a Twitter account, and they are connected if they share a common handle (_screen_name_). The size of each node is proportional to the number of tweets posted, and different colours represent various suspicious coordinated groups (connected components). For clarity, we display only groups consisting of at least 10 accounts or those responsible for producing more than 10,000 tweets. which can range from zero to one, serves as an indicator of the extent to which an account's features resemble those of a human versus automated account (bot) activities. It is important to note that the _user profile features_ used for this assessment are subject to change over time, meaning that even two consecutive tweets from the same account may yield different botscores. For our analysis, we define the _daily botscore_ of an account as the average botscore derived from all of its tweets within a given day. Additionally, we establish the concept of _bot engagement_ or _content botscore_, denoting the average botscore calculated from all tweets collectively. Figure 4: **Increasing Bot Activity —** (A) The daily average botscore derived from tweets exhibits a continuous upward trend, reaching peaks in the days following the 2022 election. (B) The percentage of accounts exhibiting bot-like behaviour remains relatively stable in the dataset, with notable increases observed after the initial wave of the pandemic and following the 2022 election. As illustrated in Figure 4A, we present the evolving landscape of bot engagement within the discourse surrounding Brazilian politics. While the daily engagement displays noticeable fluctuations, the moving average reveals a pronounced upward trajectory that has persisted since the commencement of our data collection in 2018. 
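The two averages just defined translate into a short aggregation over per-tweet scores. The sketch below is illustrative only and is not the original pipeline; the tabular layout, with one row per collected tweet carrying its date, user_id and BotometerLite score, is an assumption.

```python
# Sketch (illustrative only): daily botscore per account and content botscore.
import pandas as pd

# Hypothetical per-tweet records: one BotometerLite score per collected tweet.
tweets = pd.DataFrame({
    "date":     ["2022-10-02", "2022-10-02", "2022-10-02", "2022-10-03"],
    "user_id":  ["u1", "u1", "u2", "u1"],
    "botscore": [0.81, 0.77, 0.10, 0.65],
})

# Daily botscore of an account: average over its tweets on that day.
daily_account_botscore = tweets.groupby(["date", "user_id"])["botscore"].mean()

# Bot engagement (content botscore): average over *all* tweets of a day.
content_botscore = tweets.groupby("date")["botscore"].mean()

print(daily_account_botscore)
print(content_botscore)
```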
Notably, we observe a significant surge in bot engagement commencing in March 2020 during the pandemic, and this trend further intensifies in the aftermath of the 2022 elections. This observed trend aligns with findings reported by academics and media outlets, which have highlighted the escalating dissemination of disinformation in Brazil. Of particular concern are unsubstantiated claims, often attributed to Bolsonaro, regarding the e-voting system [7; 17; 38]. The recent acquisition of Twitter has sparked considerable controversy regarding the prevalence of bots on the platform [21]. In 2017, Varol et al. [45] proposed a method for conducting a census of Twitter accounts and found that approximately 9% to 15% of accounts were likely to be social bots. In this study, we refrain from providing a specific numerical estimate and instead examine trends surrounding the presence of bots within our dataset. Figure 4B portrays the temporal evolution of the daily percentage of what we term _suspicious users_. In our analysis, we categorise _suspicious_ accounts as those with a botscore exceeding 0.5. The percentage of bots within this category remains relatively stable, fluctuating between 15% and 20%. It is worth noting that varying thresholds for suspicious accounts would result in different quantities of bots, but the overall stable trend persists. Noteworthy spikes in bot activity occur during COVID and in the aftermath of the 2022 elections, with specific days registering a particularly high proportion, exceeding 50% of the total accounts. The convergence of two key results, namely the escalating content botscore and the sustained proportion of bots, offers compelling evidence that bots are progressively intensifying their involvement in the ongoing conversations. This observation raises important questions about the effectiveness of existing measures aimed at countering and mitigating bot activities. It hints at the possibility that current initiatives designed to combat and block bots may not be sufficient to curtail their presence and influence. Figure 5: **Relationship Between Retweet and Reply, and Bot Engagement — The percentage of _replies_ in a day exhibits a significant positive correlation (\(r=0.66\)) with bot engagement, while the percentage of _retweets_ demonstrates a negative correlation (\(r=-0.55\)) with bot activity.** Figure 6: **Evolution of Tweet Types Over Time — While _retweets_ remain the dominant type of tweets, there is a noticeable increase in the popularity of _replies_ over time.** ### Replying bots Mbona and Eloff [27] employed a combination of Benford's Law, Principal Component Analysis (PCA), and random forest techniques to identify discriminative features for bot detection. Notably, their findings underscored that the number of retweets serves as an effective discriminator, whereas the number of replies did not exhibit the same discriminatory power. In contrast, Pozzana and Ferrara [34] demonstrated that both the fraction of retweets and replies tend to be more prevalent in human interactions compared to bot-driven activities. Finally, Mazza et al. [26] distinguished between trolls and social bots, revealing that the latter tend to employ a higher volume of replies than human users. In this section, we delve into the intricate relationships among these engagement metrics and the overarching _content botscore_, as defined in Section II.4. 
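A minimal outline of that comparison, not the original analysis code, is sketched below: tweets are aggregated per day into the share of replies, the share of retweets and the content botscore, and the daily series are then correlated with Pearson's r, as reported in Figure 5. The column names and tweet-type labels are assumptions.

```python
# Sketch (illustrative only): daily tweet-type shares vs. content botscore.
import pandas as pd
from scipy.stats import pearsonr

def daily_shares_and_correlation(tweets: pd.DataFrame):
    """tweets: one row per tweet with columns 'date', 'type'
    (e.g. 'retweet', 'reply', 'quote' or 'original') and 'botscore'."""
    grouped = tweets.groupby("date")
    per_day = pd.DataFrame({
        "content_botscore": grouped["botscore"].mean(),
        "pct_replies":  grouped["type"].apply(lambda t: (t == "reply").mean() * 100),
        "pct_retweets": grouped["type"].apply(lambda t: (t == "retweet").mean() * 100),
    })
    # Pearson correlations between daily tweet-type shares and bot engagement.
    r_reply, p_reply = pearsonr(per_day["pct_replies"], per_day["content_botscore"])
    r_retweet, p_retweet = pearsonr(per_day["pct_retweets"], per_day["content_botscore"])
    return per_day, (r_reply, p_reply), (r_retweet, p_retweet)
```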
A lower content botscore signifies a scenario in which the majority of tweets originate from accounts that exhibit human-like characteristics, while a higher content botscore indicates a greater involvement of automated accounts in the conversation. Figure 5 showcases the distributions and correlation patterns among the percentage of retweets, percentage of replies, and the content botscore. Our results shed light on the distinct tendencies of bots, particularly their propensity for engaging through replies, offering valuable insights into the interplay of these engagement metrics. Figure 6 offers a chronological perspective on the proportions of different tweet types. Throughout this timeline, retweets consistently dominate the landscape, maintaining a prominent presence. However, notable shifts in tweet composition are observed. For instance, there is a discernible 10% decline in the retweet rate, plummeting from 71% prior to the second round of the 2018 election to 61% following the inauguration day. In stark contrast, the number of replies more than doubled during the same period, surging from 14% to 30%. These changes may reflect two distinct behavioural patterns: the prevalence of propaganda-oriented content during election campaigns, juxtaposed with an emphasis on discourse and debate in the post-election mandate period. This phenomenon, characterised by shifts in the composition of tweet types, was not unique to the 2018 election cycle but recurred during the 2022 cycle as well at a lower scale. ### Uncovering accounts "birthdays" Contrary to the zodiac, online "dates of birth" can reveal a wealth of information beyond just an account's age. Prior research, such as Tardelli et al. [42], has demonstrated that financial social bots often share similar creation dates. Jones [19] successfully detected bots exploiting the Gulf crisis primarily by analysing account creation dates. Similarly, Takacs and McCulloh [41] utilised creation dates to identify dormant bots during the 2018 US Senate election. These bots [41] were not particularly active, yet they attempted to exert influence based on their substantial number of followers. In our investigation, we embark on a quest for days marked by a substantial influx of "newborn" accounts. We scrutinised the creation dates of each of the 13 million accounts actively participating in the discourse surrounding Brazilian politics and categorised them accordingly. Figure 7 visually depicts the distribution of the number of "newborns" per day. This distribution exhibits a fat-tailed pattern, characterised by a decline in the probability of multiple accounts being created on the same day up to around 1,000 creations daily. Subsequently, the probability rises, peaking at approximately 3,000 accounts per day, before sharply declining once more. Figure 7: The distribution of the number of accounts created per day. Figure 8 provides a timeline of account creation counts for those accounts born on the same date. Notably, the plot expectedly reveals that numerous accounts were established long before our data collection began, with some dating back to the inception of Twitter itself. However, the plot also unveils peculiar and anomalous peaks, primarily concentrated in 2020, with a maximum day in 2022 registering the creation of over 20,000 accounts. Of particular note is an outlier peak on 1st January 1970, which we chose to omit from the plot. 
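A hedged sketch of this "birthday" analysis is given below (illustrative only; the input format, the burst threshold and the launch-date constant are assumptions): accounts are grouped by the day of their creation timestamp, and days with an unusually large number of newborns, as well as creation dates that precede the platform's existence, are flagged.

```python
# Sketch (illustrative only): accounts created per day, with anomaly flags.
import pandas as pd

TWITTER_LAUNCH = pd.Timestamp("2006-03-21")   # approximate start of the platform

def birthday_twins(created_at: pd.Series, burst_threshold: int = 20_000):
    """created_at: one creation timestamp per account (hypothetical input).
    Returns newborns per day, burst days and impossible creation dates."""
    per_day = (pd.to_datetime(created_at)
                 .dt.floor("D")
                 .value_counts()
                 .sort_index())
    bursts = per_day[per_day >= burst_threshold]          # e.g. the >20,000 days
    impossible = per_day[per_day.index < TWITTER_LAUNCH]  # e.g. 1 January 1970
    return per_day, bursts, impossible
```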
This anomaly, marked by the creation of 41 accounts on a date preceding the existence of the Twitter platform itself, is unequivocally suspicious. The existence of accounts with creation dates preceding the platform's inception, as well as those exhibiting multiple creation dates, presents a puzzling phenomenon. We remain uncertain whether this issue is an innocuous glitch or a deliberately orchestrated malicious activity. Although we have not encountered official reports on this matter, it has garnered attention on social media [11]. In future research, we plan to delve deeper into the analysis and characterisation of these enigmatic accounts, shedding light on their origins and potential significance. Figure 8: **Accounts Created on the Same Date (“Birthday Twins”)** — This figure displays the count of accounts created per day, highlighting the age diversity of Twitter accounts in the dataset. Unusual peaks are also observed, including accounts created as far back as the early days of Twitter. ## III Discussions _Bot Detection Challenges_The significant increase in bot engagement, as evidenced by our findings, underscores the escalating concerns about our capacity to effectively combat fringe actors. Our use of BotometerLite, reliant on historical data and a model trained in 2020, might not fully encapsulate the evolving nature of bot behaviour, especially during critical events like elections. Research has illuminated the adaptive tactics of bots during elections and the formidable challenges in detecting automated accounts [23]. Detecting bots has become a more intricate task, and the proliferation of misinformation may be greatly exacerbated by innovations like GPT [14; 48]. _Dataset Bias_While our analysis is grounded in a substantial sample of online users, it remains uncertain how representative Twitter data is of the broader Brazilian political spectrum. It is crucial to acknowledge that no dataset is entirely free from bias. Many research efforts rely on datasets constructed using dynamic keyword-based approaches, which involve continually updating tracking terms to adapt to the evolving online environment. For example, some researchers employ snowball techniques to harvest new hashtags, resulting in datasets that are tailored to current trends. In contrast, our approach was distinct. We aimed to minimise changes to tracking terms. For instance, we retained hashtags primarily associated with campaign periods throughout our study duration (see Tables 1 and 2). Similarly, we continued tracking all presidential candidates even after the elections. Notably, a substantial proportion of these candidates remained actively engaged within the Brazilian political landscape. A striking 46% of the 2022 candidates were participants in the 2018 cycle. Some of these candidates joined coalitions to support new contenders, while others assumed leadership roles in political parties or government. Maintaining a stable list of tracked individuals allowed us to consistently monitor Brazilian politics without introducing additional bias stemming from trending topics. _Twitter's Evolution_The transformation of Twitter has gone beyond a mere re-branding to \(X\). The new API introduces significant limitations on data collection, which have the potential to hamper the monitoring capacity of academics and open new avenues for exploitation by malicious actors. However, it is not yet clear whether certain behaviours observed on "old" Twitter have ceased to exist on \(X\). 
Consequently, it is imperative to continue exploring datasets from the older version of Twitter. Furthermore, it's unlikely that researchers can reconstruct such a comprehensive dataset as the one presented here. Although we are unable to directly share our data, we are actively seeking collaborations to expand and extend this research. Future Research DirectionsFuture work should delve into the dynamics of accounts joining the political discourse. Who constitutes the persistent core of participants? Does the churn rate only capture isolated instances of engagement? Do accounts engage periodically or based on specific topics? The coordinated suspicious groups and accounts created on the same day warrant further investigation, including characterisation efforts to identify who these actors are and the subjects they discuss. Additionally, it is imperative to measure the impact of their actions on the overall conversation and trace back groups involved in the coup attempt. Despite the lingering questions, we anticipate that this work will play a pivotal role in dismantling coordinated campaigns and offer valuable insights to enhance bot detection algorithms. In summary, our research has revealed the alarming growth in bot engagement, raising concerns about our ability to combat fringe actors effectively. While our study is not without limitations, such as the evolving nature of bot behaviour and dataset biases, it has provided valuable insights into the landscape of Brazilian politics. As we confront the challenges of evolving social media platforms and advancing technologies, it is imperative to continue probing these issues and collaborating to develop effective solutions. ## IV Data Collection and Context This paper examines a dataset consisting of 437 million tweets generated by 13 million accounts associated with Brazilian politics between 2018 and 2023. Before delving into the specifics of data collection, it is essential to provide a contextual overview of the Brazilian electoral process. Brazil operates as a federal presidential representative democratic republic with a multi-party system, comprising 27 federal units (states and a federal district). Voting in Brazil is mandatory for individuals aged between 18 and 70 years, while it is optional for those under 18, over 16, or over 70. Elected Brazilian politicians generally serve four-year terms, and the population is required to select their representatives in general elections every two years, alternating between federal and local elections. For instance, the years 2018 and 2022 constituted federal elections for the positions of president, governor, and federal congressmen, while 2016 and 2020 featured elections for mayor, state deputies, and city councillors. Elections in Brazil are conducted on a single day, during which all votes must be cast in person, typically on a Sunday in October, between 8 AM and 5 PM. In cases where no candidate secures an absolute majority of the valid votes (more than 50%), a second round of voting is held, featuring the two leading candidates. Since 1996, Brazil has employed electronic voting machines, which have eliminated paper-based fraud and enabled rapid result tabulation. Despite increasing concerns regarding the system's security, it undergoes regular audits and testing by representatives from all political parties and various organisations, including cybersecurity experts. So far, there has been no concrete evidence of corruption in the system [1; 3; 50]. 
There are currently 30 parties registered at the Superior Electoral Court (TSE) [10]. Each party is assigned a unique identification number, which is used as part of a candidate's ID. For most positions (e.g., president and governor), each party can field at most one candidate, and their ID corresponds to the party number itself. Candidate IDs are prominently featured in campaign materials, as voters must type them to cast their e-vote. Our dataset was compiled using the Twitter streaming API. Data collection commenced in August 2018 and continued until the API's termination in March 2023. We focused on the presidential elections and, for each candidate, monitored (i) the official Twitter account, (ii) the official campaign hashtag (often following the pattern "#⟨last name⟩⟨candidate ID⟩"), and (iii) the candidate's full name. We also tracked the Twitter account of the Superior Electoral Court (TSE). Table 1 provides an overview of the keywords employed during the 2018 election cycle. In July 2022, the TSE officially released the updated list of candidates for the 2022 elections. This event prompted the sole adjustment to the set of keywords over the five-year period. Table 2 features the revised list of candidates and associated keywords. Additionally, we initiated monitoring of the official accounts of Brazilian political parties and the Supreme Court (STF). Table 3 presents the parties' accounts added for the 2022 cycle.

## Acknowledgements

I would like to express my gratitude for the valuable insights and fruitful conversations with Filippo Menczer and Alessandro Flammini during the early stages of this investigation in 2018-19 at Indiana University Bloomington. Their guidance and expertise significantly contributed to the development of this research. I am also indebted to Marcos Oliveira at the University of Exeter for his thoughtful comments and reviews of the manuscript, which greatly enhanced the quality of this work.
2310.06362
InfoCL: Alleviating Catastrophic Forgetting in Continual Text Classification from An Information Theoretic Perspective
Continual learning (CL) aims to constantly learn new knowledge over time while avoiding catastrophic forgetting on old tasks. We focus on continual text classification under the class-incremental setting. Recent CL studies have identified the severe performance decrease on analogous classes as a key factor for catastrophic forgetting. In this paper, through an in-depth exploration of the representation learning process in CL, we discover that the compression effect of the information bottleneck leads to confusion on analogous classes. To enable the model to learn more sufficient representations, we propose a novel replay-based continual text classification method, InfoCL. Our approach utilizes fast-slow and current-past contrastive learning to perform mutual information maximization and better recover the previously learned representations. In addition, InfoCL incorporates an adversarial memory augmentation strategy to alleviate the overfitting problem of replay. Experimental results demonstrate that InfoCL effectively mitigates forgetting and achieves state-of-the-art performance on three text classification tasks. The code is publicly available at https://github.com/Yifan-Song793/InfoCL.
Yifan Song, Peiyi Wang, Weimin Xiong, Dawei Zhu, Tianyu Liu, Zhifang Sui, Sujian Li
2023-10-10T07:00:13Z
http://arxiv.org/abs/2310.06362v1
# InfoCL: Alleviating Catastrophic Forgetting in Continual ###### Abstract Continual learning (CL) aims to constantly learn new knowledge over time while avoiding catastrophic forgetting on old tasks. We focus on continual text classification under the class-incremental setting. Recent CL studies have identified the severe performance decrease on analogous classes as a key factor for catastrophic forgetting. In this paper, through an in-depth exploration of the representation learning process in CL, we discover that the compression effect of the information bottleneck leads to confusion on analogous classes. To enable the model learn more sufficient representations, we propose a novel replay-based continual text classification method, InfoCL. Our approach utilizes fast-slow and current-past contrastive learning to perform mutual information maximization and better recover the previously learned representations. In addition, InfoCL incorporates an adversarial memory augmentation strategy to alleviate the overfitting problem of replay. Experimental results demonstrate that InfoCL effectively mitigates forgetting and achieves state-of-the-art performance on three text classification tasks. The code is publicly available at [https://github.com/Yifan-Song793/InfoCL](https://github.com/Yifan-Song793/InfoCL). ## 1 Introduction Continual learning (CL) enables conventional static natural language processing models to constantly gain new knowledge from a stream of incoming data (Sun et al., 2020; Biesialska et al., 2020). In this paper, we focus on continual text classification, which is formulated as a class-incremental problem, requiring the model to learn from a sequence of class-incremental tasks (Huang et al., 2021). Figure 1 gives an illustrative example of continual text classification. The model needs to learn to distinguish some new classes in each task and is eventually evaluated on all seen classes. Like other CL systems, the major challenge of continual text classification is catastrophic forgetting: after new tasks are learned, performance on old tasks may degrade dramatically (Lange et al., 2022). The earlier work in the CL community mainly attributes catastrophic forgetting to the corruption of the learned representations as new tasks arrive and various methods have been introduced to retain or recover previously learned representations (Kirkpatrick et al., 2017; Rebuffi et al., 2017; Mallya and Lazebnik, 2018; Lange et al., 2022). Recently, some studies (Wang et al., 2022; Zhao et al., 2023) find that, under the class-incremental setting, the severe performance decay among analogous classes is the key factor of catastrophic forgetting. To improve the performance of distinguishing analogous classes, Wang et al. (2022) exploit a heuristic adversarial class augmentation and Zhao et al. (2023) propose a sophisticated memory-insensitive prototype mechanism. However, due to a lack of thorough investigation into the underlying cause of confusion in similar classes, previous empirical methods may not be universally effective and are Figure 1: Illustration for continual text classification with three tasks where each task involves two new classes. \(X_{i}\), \(Y_{i}\), and \(Z_{i}\) denote input sentences, new classes and learned representations for \(i\)-th task \(\mathcal{T}_{i}\) respectively. Although the representations \(Z_{i}\) learned in \(\mathcal{T}_{i}\) are sufficient for classifying \(Y_{i}\), they are insufficient to distinguish all seen classes \(Y\) in the final test. 
unable to offer guidance for further improvements. In this paper, for the first time we present an in-depth analysis of the analogous class confusion problem from an information theoretic perspective. We investigate the impact of the information bottleneck (IB) on the representation learning process of CL, specifically how it compresses the mutual information between the representation and the input. Through formal analysis and empirical study, we find that within each task, current CL models tend to discard features irrelevant to current task due to the compression effect of IB. While the acquired representations are locally sufficient for current task, they may be globally insufficient for classifying analogous classes in the final test. We refer to this phenomenon as **representation bias** and aim to enhance the CL model's ability to learn more comprehensive representations. Based on our analysis, we propose a replay-based continual text classification method InfoCL. Using contrastive learning as the core mechanism, we enable the model to learn more comprehensive representations by maximizing the mutual information between representation and the original input via InfoNCE. Specifically, we design fast-slow and current-past contrast strategies. First, from the IB theory, the representations in the early stage of optimization preserves more information. Hence, when learning for new classes in current task, we leverage MoCo framework He et al. (2020) and conduct fast-slow contrastive learning to facilitate the learned representations to retain more information about the input. On the other hand, to further alleviate representation corruption, when conducting memory replay, we leverage current-past contrastive learning to ensure the learned representations do not undergo significant changes. Due to the limited budget of memory, the performance of current-past contrastive learning is hindered by the over-fitting problem. To this end, InfoCL incorporates adversarial data augmentation to generate more training instances for replay. Our contributions are summarized as follows: (1) We formally analyze the analogous class confusion problem in CL from an information theoretic perspective and derive that the representation bias led by the compression effect of IB is the underlying cause of forgetting. (2) We propose a novel replay-based continual text classification method InfoCL, which exploits fast-slow and current-past constrastive learning to capture more comprehensive representations. (3) Experimental results on several text classification datasets show that InfoCL learns more effective representations and outperforms state-of-the-art methods. ## 2 Related Work Continual LearningContinual Learning (CL) studies the problem of continually learning knowledge from a sequence of tasks Lange et al. (2022) while avoiding catastrophic forgetting. Previous CL work mainly attributes catastrophic forgetting to the corruption of learned knowledge and can be divided into three major families. _Replay-based_ methods Rebuffi et al. (2017); Prabhu et al. (2020) save a few previous task instances in a memory module and retrain on them while training new tasks. _Regularization-based_ methods Kirkpatrick et al. (2017); Aljundi et al. (2018) introduce an extra regularization loss to consolidate previous knowledge. _Parameter-isolation_ methods Mallya and Lazebnik (2018) dynamically expand the network and dedicate different model parameters to each task. 
Recent studies have identified the confusion among analogous classes as a key factor of catastrophic forgetting. In this paper, we discover the representation bias is the underlying cause of such confusion and design InfoCL to mitigate it. Contrastive LearningContrastive learning aims to learn representations by contrasting positive pairs against negative pairs Chen et al. (2020). Recently, contrastive learning has made great progress in both unsupervised and supervised settings Chen et al. (2020); He et al. (2020); Khosla et al. (2020); Barbano et al. (2022). The success of contrastive learning can be partially attributed to that the commonly used objective, InfoNCE, maximizes the mutual information between representations and inputs van den Oord et al. (2018). Previous continual learning work has already integrated contrastive learning to alleviate the catastrophic forgetting. Cha et al. (2021) and Zhao et al. (2022) use supervised contrastive learning to learn more consistent representations. Hu et al. (2022) design a prototypical contrastive network to alleviate catastrophic forgetting. However, due to the lack of in-depth analysis of the representations learned in continual learning, these approaches fail to harness the full potential of contrastive learning1. In this paper, we investigate the representation learning process in CL and propose fast-slow and current-past contrastive learning to enable the model learn more comprehensive representations and further mitigate the representation corruption problem. ## 3 Task Formulation In this work, we focus on continual learning for a sequence of \(k\) class-incremental text classification tasks \((\mathcal{T}_{1},\mathcal{T}_{2},...,\mathcal{T}_{k})\). Each task \(\mathcal{T}_{i}\) has its dataset \(\mathcal{D}_{i}=\{(x_{n},y_{n})\}_{n=1}^{N_{i}}\), where \((x_{n},y_{n})\) is an instance of current task and is sampled from an individually i.i.d. distribution \(p(X_{i},Y_{i})\). Different tasks \(\mathcal{T}_{i}\) and \(\mathcal{T}_{j}\) have disjoint label sets \(Y_{i}\) and \(Y_{j}\). The goal of CL is to continually train the model on new tasks to learn new classes while avoiding forgetting previously learned ones. From another perspective, if we denote \(X=\cup_{i}X_{i}\) and \(Y=\cup_{i}Y_{i}\) as the input and output space of the entire CL process respectively, continual learning aims to approximate a holistic distribution \(p(Y|X)\) from a non-i.i.d data stream. The text classification model \(F\) is usually composed of two modules: the encoder \(f\) and the classifier \(\sigma\). For an input \(x\), we get the corresponding representation \(\mathbf{z}=f(x)\), and use the logits \(\sigma\left(\mathbf{z}\right)\) to compute loss and predict the label. ## 4 Representation Bias in CL Previous work Wang et al. (2022); Zhao et al. (2023) reveals that the severe performance degradation on analogous classes is the key factor of catastrophic forgetting. In this section, we investigate the representation learning process of continual learning from an information theoretic perspective and find that the representation bias is the underlying cause of confusion in analogous classes. ### Information Bottleneck We first briefly introduce the background of information bottleneck in this section. Information bottleneck formulates the goal of deep learning as an information-theoretic trade-off between representation compression and preservation Tishby and Zaslavsky (2015); Shwartz-Ziv and Tishby (2017). 
Given the input \(\mathcal{X}\) and the label set \(\mathcal{Y}\), one model is built to learn the representation \(\mathcal{Z}=\mathcal{F}(\mathcal{X})\), where \(\mathcal{F}\) is the encoder. The learning procedure of the model is to minimize the following Lagrangian: \[I(\mathcal{X};\mathcal{Z})-\beta I(\mathcal{Z};\mathcal{Y}), \tag{1}\] where \(I(\mathcal{X};\mathcal{Z})\) is the mutual information (MI) between \(\mathcal{X}\) and \(\mathcal{Z}\), quantifying the information retained in the representation \(\mathcal{Z}\). \(I(\mathcal{Z};\mathcal{Y})\) quantifies the amount of information in \(\mathcal{Z}\) that enables the identification of the label \(\mathcal{Y}\). \(\beta\) is a trade-off hyperparameter. With information bottleneck, the model will learn _minimal sufficient representation_\(\mathcal{Z}^{*}\)Achille and Soatto (2018) of \(\mathcal{X}\) corresponding to \(\mathcal{Y}\): \[\mathcal{Z}^{*}=\arg\min_{\mathcal{Z}}I(\mathcal{X};\mathcal{Z}) \tag{2}\] \[\mathrm{s.t.}\;I(\mathcal{Z};\mathcal{Y})=I(\mathcal{X};\mathcal{ Y}). \tag{3}\] Minimal sufficient representation is important for supervised learning, because it retains as little about input as possible to simplify the role of the classifier and improve generalization, without losing information about labels. ### Representation Learning Process of CL Continual learning is formulated as a sequence of individual tasks \((\mathcal{T}_{1},\mathcal{T}_{2},...,\mathcal{T}_{k})\). For \(i\)-th task \(\mathcal{T}_{i}\), the model aims to approximate distribution of current task \(p(Y_{i}|X_{i})\). According to IB, if the model \(F=\sigma\circ f\) converges, the learned hidden representation \(Z_{i}=f(X_{i})\) will be local minimal sufficient for \(\mathcal{T}_{i}\): \[Z_{i}=\arg\min_{Z_{i}}I\left(X_{i};Z_{i}\right) \tag{4}\] \[\mathrm{s.t.}\;I\left(Z_{i};Y_{i}\right)=I(X_{i};Y_{i}), \tag{5}\] which ensures the performance and generalization ability of the current task. Nevertheless, the local minimization of the compression term \(I\left(X_{i};Z_{i}\right)\) will bring potential risks: features that are useless in the current task but crucial for other tasks will be discarded. The goal of CL is to classify all seen classes \(Y=\cup_{i}Y_{i}\). For the entire continual learning task with the holistic target distribution \(p(Y|X)\), the necessary condition to perform well is that the representation \(Z\) is globally sufficient for \(Y\): \(I(Z;Y)=I(X;Y)\). However, as some crucial features are compressed, the combination of local minimal sufficient representations for each task \(Z=\cup_{i}Z_{i}\) may be globally insufficient: \[I\left(Z;Y\right)<I(X;Y). \tag{6}\] We name this phenomenon as **representation bias**: due to the compression effect of IB, the learned representations in each individual task may be insufficient for the entire continual task. Then the underlying cause of the performance decrease of analogous classes is obvious. Take two cross-task analogous classes \(y_{a}\) and \(y_{b}\) as an example. Under sequential task setting of CL, the model is unable to co-training on instances from \(y_{a}\) and \(y_{b}\). It means that the representations of these two classes can exclusively be learned within their respective tasks. When learning \(y_{a}\), the local sufficient representations to identify \(y_{a}\) are insufficient to differentiate with \(y_{b}\). Hence, the appearance of \(y_{b}\) will lead to a dramatically performance decrease of \(y_{a}\), resulting in what is known as catastrophic forgetting. 
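The empirical study in the next subsection quantifies these mutual-information terms with a neural estimator (MINE; Belghazi et al., 2018). As a rough illustration of how such an estimator works, the following is a minimal sketch based on the Donsker-Varadhan bound; the network width, optimizer, and the assumption that \(x\) and \(z\) arrive as pre-computed feature vectors are illustrative choices, not the authors' exact configuration.

```python
import math
import torch
import torch.nn as nn

# Minimal sketch of a MINE-style estimator for I(X; Z) (Donsker-Varadhan bound).
# Hidden width and the use of fixed feature vectors are illustrative assumptions.
class MINE(nn.Module):
    def __init__(self, dim_x: int, dim_z: int, hidden: int = 256):
        super().__init__()
        self.T = nn.Sequential(
            nn.Linear(dim_x + dim_z, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def lower_bound(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # Joint samples pair x_i with its own z_i; marginal samples pair x_i
        # with a z drawn from another example (a within-batch shuffle).
        joint = self.T(torch.cat([x, z], dim=-1)).squeeze(-1)
        z_perm = z[torch.randperm(z.size(0))]
        marginal = self.T(torch.cat([x, z_perm], dim=-1)).squeeze(-1)
        # I(X; Z) >= E_joint[T] - log E_marginal[exp(T)]
        return joint.mean() - (torch.logsumexp(marginal, dim=0) - math.log(marginal.numel()))

# Training maximizes the bound; the converged value is the MI estimate (in nats).
# mine = MINE(dim_x=768, dim_z=768)
# opt = torch.optim.Adam(mine.parameters(), lr=1e-4)
# loss = -mine.lower_bound(x_batch, z_batch); loss.backward(); opt.step()
```

For a discrete label set \(Y\), one simple option is to feed \(y\) as a one-hot vector when estimating \(I(Z;Y)\).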
### Empirical Results To confirm our analysis, here we directly measure the mutual information among \(X\), \(Y\) and \(Z\). Since the representations learned by supervised learning is always globally sufficient, i.e., \(I\left(Z;Y\right)=I(X;Y)\), we use supervised learning on all data as the baseline, and compare it with several strong CL methods. Concretely, we use MINE Belghazi et al. (2018) as the MI estimator and conduct experiments on FewRel and MAVEN datasets2. Footnote 2: See Section 6.1 for details of CL baselines and datasets. First, we measure \(I(X;Z)\) to quantify the features preserved in the representation \(Z\). However, previously learned representations will be corrupted once the model learns new tasks, leading to inaccurate estimation. To exclude the impact of representation corruption, we instead estimate \(I(X_{1};Z_{1})\) on \(\mathcal{T}_{1}\)'s test set. Second, to assess whether learned representations are sufficient for the entire continual task, we compare \(I(Z;Y)\) on the final test set with all classes. As shown in Table 1, both \(I(X_{1};Z_{1})\) and \(I(Z;Y)\) of three CL models are significantly lower than supervised learning, indicating that the CL model tends to compress more information due to the individual task setting and the representations learned in CL are insufficient for the entire continual task. ## 5 Methodology From the formal analysis and empirical verification, we establish that representation bias plays a crucial role in the performance decline of analogous classes. Consequently, in this section, we propose a novel replay-based CL method, InfoCL, which is able to maximize \(I(X_{i};Z_{i})\) and help the model learn more comprehensive representations. ### Overall Framework The objective of contrastive learning, specifically InfoNCE, serves as a proxy to maximize the mutual information \(I(X_{i};Z_{i})\)van den Oord et al. (2018). Therefore, we utilize contrastive learning as the core mechanism of InfoCL to acquire more comprehensive representations. Concretely, we design fast-slow and current-past contrastive learning for training new data and memory data. The overall framework of InfoCL is depicted in Figure 2. For a new task \(\mathcal{T}_{k}\), we first train the model on \(\mathcal{D}_{k}\) to learn this task. We perform fast-slow contrastive learning to help the model capture more sufficient representations and mitigate the representation bias. Then we store a few typical instances for each class \(y\in Y_{k}\) into the memory \(\mathcal{M}\), which contains instances of all seen classes. To alleviate representation decay, we next conduct memory replay with current-past contrastive learning. As the performance of representation recovery is always hindered by the limited size of memory, we incorporates adversarial augmentation to alleviate overfitting. ### Fast-Slow Contrastive Learning In the representation learning process of a task \(\mathcal{T}_{i}\), the compression effect of IB will minimize the mutual information \(I(X_{i};Z_{i})\), leading to globally insufficient representations. Intuitively, in the early phase of optimization, \(I(X_{i};Z_{i})\) is larger and the representations preserve more information about the inputs. 
However, directly adopting an early \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{2}{c}{**FewRel**} & \multicolumn{2}{c}{**MAVEN**} \\ \cline{2-5} & \(I(X_{1};Z_{1})\) & \(I(Z;Y)\) & \(I(X_{1};Z_{1})\) & \(I(Z;Y)\) \\ \hline Supervised & 2.42 & 2.45 & 3.50 & 2.42 \\ \hline RP-CRE & 2.08 & 2.21 & 3.15 & 2.31 \\ CRL & 2.12 & 2.18 & 3.12 & 2.30 \\ CRECL & 2.20 & 2.31 & 3.01 & 2.36 \\ \hline \hline \end{tabular} \end{table} Table 1: Mutual information comparison between supervised learning and strong CL baselines on FewRel and MAVEN datasets. We use \(I(X;Z)\) to measure how much features of input \(X\) representation \(Z\) preserves. To exclude the impact of representation corruption, we instead estimate \(I(X_{1};Z_{1})\) after CL models finish \(\mathcal{T}_{1}\). \(I(Z;Y)\) measures whether the learned representation is sufficient for the entire continual task. stop strategy is not feasible, as the representation compression is essential for generalization. Instead, we try to pull the learned representations and early representations of the same class together, facilitating the preservation of more comprehensive information in the final representations. Inspired by He et al. (2020), we employ a momentum contrast which consists of a fast encoder and a momentum updated slow encoder. The representations from the slowly-updated branch will preserve more information about the input sentences. The fast-slow contrast can "distill" these information from the slow branch to fast branch to learn more comprehensive representations. Figure 2 (b) depicts the architecture. The fast model is updated by gradient descent, while the slow model truncates the gradient and is updated with the momentum mechanism during training. Formally, denoting the parameters of the fast and slow models as \(\theta\) and \(\theta^{\prime}\), \(\theta^{\prime}\) is updated by: \[\theta^{\prime}\leftarrow\eta\theta^{\prime}+(1-\eta)\theta, \tag{7}\] where \(\eta\) is the momentum coefficient which is relatively large to ensure the slow update (e.g., \(\eta=0.99\)). For the slow encoder, we also maintain a representation queue \(Q\) to increase the number of negative instances beyond the batch size, enabling InfoNCE to more effectively maximize MI. The queue is updated with the output of slow model by first-in-first-out strategy. We denote \(\mathbf{z}\) for the representations from the fast model and \(\widetilde{\mathbf{z}}\) for the slow model. Then the slow representations \(\widetilde{\mathbf{z}}\) preserve more information than \(\mathbf{z}\). We use InfoNCE to perform fast-slow contrast: \[\mathcal{L}_{\mathrm{fs}}=-\frac{1}{|B|}\sum_{i\in I}\sum_{p\in P(i)}\log\frac {\exp(\mathbf{z}_{i}\cdot\widetilde{\mathbf{z}}_{p}/\tau_{1})}{\sum_{j\in J} \exp(\mathbf{z}_{i}\cdot\widetilde{\mathbf{z}}_{j}/\tau_{1})}, \tag{8}\] where \(I=\{1,2,...,|B|\}\) is the set of indices of batch \(B\). \(J=\{1,2,...,|B\cup Q|\}\) denotes the indices set of instances in the batch or the queue. \(P(i)=\{p\in J:y_{p}=y_{i}\}\) is the indices of instances which have the same label as \(\mathbf{z}_{i}\) from the batch or the queue. \(\tau_{1}\) is the temperature hyperparameter. 
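Before stating the combined objective, the following is a minimal sketch of how Eqs. (7)-(8) could be realised, assuming `encoder` maps a batch of inputs to sentence representations. The queue size, momentum, temperature, and the per-anchor averaging over positives (a common supervised-contrastive normalisation) are illustrative choices, not the authors' released implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of fast-slow contrast (Eqs. 7-8); hyperparameters are assumptions.
class FastSlow(nn.Module):
    def __init__(self, encoder, feat_dim, queue_size=1024, eta=0.99, tau=0.1):
        super().__init__()
        self.fast = encoder
        self.slow = copy.deepcopy(encoder)          # momentum ("slow") branch
        for p in self.slow.parameters():
            p.requires_grad_(False)
        self.eta, self.tau = eta, tau
        # Queue entries start with label -1, so they only act as negatives until filled.
        self.register_buffer("queue_z", torch.zeros(queue_size, feat_dim))
        self.register_buffer("queue_y", torch.full((queue_size,), -1, dtype=torch.long))
        self.ptr = 0

    @torch.no_grad()
    def momentum_update(self):
        # Eq. (7): theta' <- eta * theta' + (1 - eta) * theta
        for ps, pf in zip(self.slow.parameters(), self.fast.parameters()):
            ps.mul_(self.eta).add_(pf, alpha=1.0 - self.eta)

    def loss(self, x, y):
        z = F.normalize(self.fast(x), dim=-1)               # fast representations
        with torch.no_grad():
            self.momentum_update()
            z_tilde = F.normalize(self.slow(x), dim=-1)     # slow representations
        cand_z = torch.cat([z_tilde, self.queue_z], dim=0)  # batch + queue candidates
        cand_y = torch.cat([y, self.queue_y], dim=0)
        logits = z @ cand_z.t() / self.tau                  # (B, B + Q)
        pos = (y.unsqueeze(1) == cand_y.unsqueeze(0)).float()
        log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
        loss = -(pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)
        # Enqueue slow representations (first-in-first-out).
        b = z_tilde.size(0)
        idx = (self.ptr + torch.arange(b, device=z_tilde.device)) % self.queue_z.size(0)
        self.queue_z[idx] = z_tilde
        self.queue_y[idx] = y
        self.ptr = (self.ptr + b) % self.queue_z.size(0)
        return loss.mean()
```

In training, the cross-entropy classification loss is added to this contrastive term to form the final objective described next.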
The final optimization objective in new task training is the combination of cross entropy loss \(\mathcal{L}_{\mathrm{ce}}\) and the contrastive loss \(\mathcal{L}_{\mathrm{fs}}\): \[\mathcal{L}_{1}=\mathcal{L}_{\mathrm{ce}}+\lambda_{1}\mathcal{L}_{\mathrm{fs}}, \tag{9}\] where \(\lambda_{1}\) is the factor to adjust the loss weight. ### Memory Selection After the initial training stage, we select and store typical instances for each class for replay. Since the primary focus of our paper is to address the representation bias problem, we adopt the memory sampling strategy employed in prior work Cui et al. (2021); Zhao et al. (2022) to ensure a fair comparison. Specifically, for each class, we use K-means to cluster the corresponding representations, and the instances closest to the centroids are stored in memory \(\mathcal{M}\). Then we use the instances Figure 2: (a) A demonstration for InfoCL. We design fast-slow and current-past contrastive learning for initial training and memory replay, respectively. (b) Fast-slow contrastive learning. The slowly progressing model generates representations preserving more information. (c) Current-past contrastive learning with adversarial augmentation. Contrasting with old model from \(\mathcal{T}_{k-1}\) further alleviates representation corruption. of all seen classes from memory \(\mathcal{M}\) to conduct the memory replay stage. ### Current-Past Contrastive Learning When performing memory replay, we also employ contrastive learning to enable the model to learn more comprehensive representations for all previously seen classes. Additionally, to further enhance representation recovery in memory replay stage, we propose current-past contrastive learning which explicitly aligns current representations to the previous ones. As shown in Figure 2 (c), after the model finishs \(\mathcal{T}_{k-1}\), we store the representations \(\bar{\mathbf{z}}\) of the instances from memory \(\mathcal{M}\). Then we use InfoNCE loss to pull current representations \(\mathbf{z}\) and past representations \(\bar{\mathbf{z}}\) of the same class together: \[\mathcal{L}_{\mathrm{cp}}=-\frac{1}{|B|}\sum_{i\in I}\sum_{p\in P(i)}\log\frac {\exp(\mathbf{z}_{i}\cdot\bar{\mathbf{z}}_{p}/\tau_{2})}{\sum_{m\in M}\exp( \mathbf{z}_{i}\cdot\bar{\mathbf{z}}_{m}/\tau_{2})}, \tag{10}\] where \(I=\{1,2,...,|B|\}\) is the set of indices of batch \(B\). \(M=\{1,2,...,|\mathcal{M}|\}\) denotes the indices set of instances in memory \(\mathcal{M}\). \(P(i)=\{p\in M:y_{p}=y_{i}\}\) is the indices of instances which have the same label as \(\mathbf{z}_{i}\) from the memory. \(\tau_{2}\) is the temperature hyperparameter. The optimization objective in the memory replay stage is \[\mathcal{L}_{2}=\mathcal{L}_{\mathrm{ce}}+\lambda_{2}\mathcal{L}_{\mathrm{cp}}, \tag{11}\] where \(\mathcal{L}_{\mathrm{ce}}\) is cross entropy loss and \(\lambda_{2}\) is the factor to adjust the loss weight. ### Adversarial Memory Augmentation Due to the constrained memory budgets, the performance of current-past contrastive learning in the memory replay stage is hindered by the overfitting problem. To alleviate overfitting and enhance the effect of representation recovery, we incorporate adversarial data augmentation Zhu et al. 
(2020): \[\mathcal{L}_{\mathrm{adv}}= \tag{12}\] \[\min_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{M}}\left[\frac{1}{K} \sum_{t=0}^{K-1}\max_{|\delta_{t}|\leq\epsilon}\mathcal{L}_{2}\left(F(x+ \delta_{t}),y\right)\right].\] Intuitively, it performs multiple adversarial attack iterations to craft adversarial examples, which is equivalent to replacing the original batch with a \(K\)-times larger adversarial augmented batch. Please refer to Appendix B for details about Eq. 12. ## 6 Experiments ### Experiment Setups DatasetsTo fully measure the ability of InfoCL, we conduct experiments on 4 datasets for 3 different text classification tasks, including relation extraction, event classification, and intent detection. For relation extraction, following previous work Han et al. (2020); Cui et al. (2021); Zhao et al. (2022), we use **FewRel**Han et al. (2018) and **TA-CRED**Zhang et al. (2017). For event classification, following Yu et al. (2021) and Wu et al. (2022), we use **MAVEN**Wang et al. (2020) to build our benchmark. For intent detection, following Liu et al. (2021), we choose **HWU64**Liu et al. (2019) dataset. For the task sequence, we simulate 10 tasks by randomly dividing all classes of the dataset into 10 disjoint sets, and the number of new classes in each task for FewRel, TACRED, MAVEN and HWU64 are 8, 4, 12, 5 respectively. For a fair comparison, the result of baselines are reproduced on the same task sequences as our method. Please refer to Appendix C for details of these four datasets. Following previous work Hu et al. (2022); Wang et al. (2022), we use the average accuracy (Acc) on all seen tasks as the metric. BaselinesWe compare InfoCL against the following baselines: IDBR Huang et al. (2021), KCN Cao et al. (2020), KDRK Yu et al. (2021), EMAR Han et al. (2020), RP-CRE Cui et al. (2021), CRL Zhao et al. (2022), CRECL Hu et al. (2022), ACA Wang et al. (2022) and CEAR Zhao et al. (2023). See Appendix D for details of the baselines. Some baselines are originally proposed to tackle one specific task. For example, RP-CRE is designed for continual relation extraction. We adapt these baselines to other tasks and report the corresponding results. Since ACA and CEAR consist of data augmentation specially designed for relation extraction, they cannot be adapted to other tasks. Implementation DetailsFor InfoCL, we use BERT\({}_{\mathrm{base}}\)Devlin et al. (2019) as the encoder following previous work Cui et al. (2021); Wang et al. (2022). The learning rate of InfoCL is set to 1e-5 for the BERT encoder and 1e-3 for other modules. Hyperparameters are tuned on the first three tasks. The memory budget for each class is fixed at 10 for all methods. For all experiments, we use NVIDIA A800 GPUs and report the average result of 5 different task sequences. More implementation details can be found in Appendix E. ### Main Results Table 2 shows the performance of InfoCL and baselines on four datasets for three text classification tasks. Due to space constraints, we only illustrate results on the last three tasks. The complete accuracy and standard deviation of all 10 tasks can be found in Appendix G. As shown, on FewRel, MAVEN and HWU64, our proposed InfoCL consistently outperforms all baselines and achieves new state-of-the-art results. These experimental results demonstrate the effectiveness and universality of our proposed method. Regarding the TACRED dataset, it has been noticed that a large fraction of the examples are mislabeled, thus compromising the reliability of the evaluation Alt et al. 
(2020); Stoica et al. (2021). Here we strongly advocate for a more reliable evaluation on other high-quality text classification datasets. ## 7 Analysis ### Ablation Study We conduct an ablation study to investigate the effectiveness of different components of InfoCL. The results are shown in Table 3. We find that the three core mechanisms of InfoCL, namely fast-slow contrast, current-past contrast, and adversarial memory augmentation, are conducive to the model performance. Furthermore, the fast-slow contrastive learning performed in the new task training stage seems to be more effective than the other components, indicating that learning comprehensive representations for the new task is more essential to mitigate representation bias problem. ### Performance on Analogous Classes We reveal that the representation bias problem is the underlying cause of confusion on analogous classes in CL. Since InfoCL aims to learn more sufficient representations and mitigate the bias, we also conduct experiment to explore this point. Following Wang et al. (2022), we use the cosine distance of the average embedding of the instances as a metric to identify analogous classes. Specifically, we select 16 and 24 classes (20% of all of the classes) for FewRel and MAVEN, which are most likely to be confused with other classes. The list of these classes are shown in Appendix F. If \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{2}{c}{**FewRel**} & \multicolumn{2}{c}{**MAVEN**} \\ \cline{2-4} & Accuracy & Drop & Accuracy & Drop \\ \hline CRL & 75.3 & 13.3 & 59.8 & 21.2 \\ CRECL & 74.9 & 13.6 & 59.2 & 21.9 \\ InfoCL & **78.6** & **11.8** & **61.3** & **20.7** \\ \hline \hline \end{tabular} \end{table} Table 4: Average Accuracy (%) and accuracy drop (%) on analogous classes. For each dataset, we select 20% classes which are most likely to be confused with other classes. \begin{table} \begin{tabular}{l|c c c|c c c|c c c|c c} \hline \hline **Datasets** & \multicolumn{3}{c|}{**FewRel**} & \multicolumn{3}{c|}{**TACRED**} & \multicolumn{3}{c|}{**MAVEN**} & \multicolumn{3}{c}{**HWU64**} \\ \hline **Models** & \(\mathcal{T}_{8}\) & \(\mathcal{T}_{9}\) & \(\mathcal{T}_{10}\) & \(\mathcal{T}_{8}\) & \(\mathcal{T}_{9}\) & \(\mathcal{T}_{10}\) & \(\mathcal{T}_{8}\) & \(\mathcal{T}_{9}\) & \(\mathcal{T}_{10}\) & \(\mathcal{T}_{8}\) & \(\mathcal{T}_{9}\) & \(\mathcal{T}_{10}\) \\ \hline IDBR Huang et al. (2021) & 73.7 & 71.7 & 68.9 & 64.2 & 63.8 & 60.1 & 64.4 & 60.2 & 57.3 & 80.2 & 78.0 & 76.2 \\ KCN Cao et al. (2020) & 80.3 & 78.8 & 76.0 & 72.1 & 72.2 & 70.6 & 68.4 & 67.7 & 64.4 & 83.7 & 82.7 & 81.9 \\ KDRK Yu et al. (2021) & 81.6 & 80.2 & 78.0 & 72.9 & 72.1 & 70.8 & 69.6 & 68.9 & 65.4 & 85.1 & 82.5 & 81.4 \\ EMAR Han et al. (2020) & 86.1 & 84.8 & 83.6 & 76.6 & 76.8 & 76.1 & 76.8 & 75.7 & 73.2 & 85.5 & 83.9 & 83.1 \\ RP-CRE Cui et al. (2021) & 85.8 & 84.4 & 82.8 & 76.1 & 75.0 & 75.3 & 77.1 & 76.0 & 73.6 & 84.5 & 83.8 & 82.7 \\ CRL Zhao et al. (2022) & 85.6 & 84.5 & 83.1 & 79.1 & 79.0 & 78.0 & 76.8 & 75.9 & 73.7 & 83.1 & 81.3 & 81.5 \\ CRECL Hu et al. (2022) & 84.6 & 83.6 & 82.7 & **81.4** & 79.3 & 78.5 & 75.9 & 75.1 & 73.5 & 83.1 & 81.9 & 81.1 \\ ACA Wang et al. (2022) & 87.0 & 86.3 & 84.7 & 78.6 & 78.8 & 78.1 & – & – & – & – & – & – \\ CEAR Zhao et al. 
(2023) & 86.9 & 85.6 & 84.2 & 81.1 & **80.1** & **79.1** & – & – & – & – & – \\ \hline InfoCL (Ours) & **87.8** & **86.8** & **85.4** & 79.7 & 78.4 & 78.2 & **78.2** & **77.1** & **75.3** & **86.3** & **85.3** & **84.1** \\ \hline \hline \end{tabular} \end{table} Table 2: Accuracy (%) on all seen classes after learning the last three tasks. We report the average result of \(5\) different runs. The best results are in **boldface**. ACA and CEAR is specially designed for continual relation extraction and cannot be adapted to other tasks. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Models** & **Few.** & **TAC.** & **MAV.** & **HWU.** \\ \hline InfoCL & 85.4 & 78.2 & 75.3 & 84.1 \\ \hline w/o f-s con. & 84.9 & 77.6 & 74.7 & 83.4 \\ w/o c-p con. & 85.0 & 78.1 & 75.1 & 83.7 \\ w/o adv. aug. & 84.8 & 77.9 & 75.0 & 83.6 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study of InfoCL. “f-s con.” and “c-p con.” denote fast-slow and current-past contrastive learning. “adv. aug.” denotes adversarial memory augmentaion mechanism. these selected classes are in the former five tasks, we record the average final accuracy and the accuracy drop on them. As shown in Table 4, the performance on analogous classes of our model is superior and drops the least, demonstrating that our model succeeds in alleviating confusion among analogous classes. ### Comprehensive Representation Learning InfoCL employs contrastive learning to learn comprehensive representations via mutual information maximization. To assess the efficiency of representation learning process of our method, we directly compute the mutual information of the representations. Same as Section 4.3, we measure \(I(X_{1};Z_{1})\) on \(\mathcal{T}_{1}\) and \(I(Z;Y)\) after the final task with all classes. For InfoCL, \(I(X_{1};Z_{1})=2.32\) and \(I(Z;Y)=2.38\) on FewRel, \(I(X_{1};Z_{1})=3.20\) and \(I(Z;Y)=2.37\) on MAVEN. Compared with baselines in Table 1, both mutual information metrics are higher, indicating our method can learn more sufficient representation to mitigate the representation bias. To provide an intuitive demonstration of the effective representations acquired by InfoCL, we conducted a case study. We select two analogous classes from FewRel, P57 ("director") and P58 ("screenwriter"). We use t-SNE to visualize the representations of these two classes after the model learned them. As illustrated in Figure 4, for both methods, the accuracy of P57 reaches 100% after learning it. However, due to the representation bias, when P58 appears, the accuracy of CRL dramatically declined. In contrast, InfoCL maintains a relatively stable performance. Notably, even without training for P58, the representations of two classes in our method exhibit a noticeable level of differentiation, highlighting InfoCL's capacity to acquire more comprehensive representations. ### Influence of Memory Size Memory size is the number of stored instances for each class, which is an important factor for the performance of replay-based CL methods. Therefore, in this section, we study the impact of memory size on InfoCL. As Table 2 has reported the results with memory size 10, here we compare the performance of InfoCL with strong baselines on FewRel and MAVEN under memory sizes 5 and 20. Table 3 demonstrates that InfoCL consistently outperforms strong baselines across various memory sizes. 
Surprisingly, even with a memory size of 5, InfoCL achieves comparable performance to the baselines with a memory size of 20, highlighting its superior capability. As the memory size decreases, the performance of all models degrades, showing the importance of memory for replay-based methods. InfoCL, in contrast, maintains relatively stable performance, showcasing its robustness even in extreme scenarios.

Figure 4: The representations of instances from P57 (“director”) and P58 (“screenwriter”). P57 and P58 are in different tasks and P57 is learned before P58 appears.

Figure 3: Accuracy (%) w.r.t. different memory sizes of different methods.

## 8 Conclusion

In this paper, we focus on continual learning for text classification in the class-incremental setting. We formally investigate the representation learning process of CL and discover that the representation bias leads to catastrophic forgetting on analogous classes. Based on our analysis, we propose InfoCL, which utilizes fast-slow and current-past contrastive learning to learn more comprehensive representations and alleviate representation corruption. An adversarial augmentation strategy is also employed to further enhance representation recovery. Experimental results show that InfoCL learns effective representations and outperforms the latest baselines.

### Limitations

Our paper has several limitations: (1) Our proposed InfoCL utilizes fast-slow and current-past contrastive learning to learn more comprehensive representations, which introduces extra computational overhead and is less efficient than other replay-based CL methods; (2) We only focus on the catastrophic forgetting problem in continual text classification. How to encourage knowledge transfer in CL is not explored in this paper.

### Ethics Statement

Our work complies with the ACL Ethics Policy. Since text classification is a standard task in NLP and all datasets we used are publicly available, we have not identified any significant ethical considerations associated with our work.

## Acknowledgement

We thank the anonymous reviewers for their helpful comments on this paper. This work was partially supported by National Key R&D Program of China (No. 2022YFC3600402) and National Social Science Foundation Project of China (21&ZD287). The corresponding author of this paper is Sujian Li.
2301.10241
K-Planes: Explicit Radiance Fields in Space, Time, and Appearance
We introduce k-planes, a white-box model for radiance fields in arbitrary dimensions. Our model uses d choose 2 planes to represent a d-dimensional scene, providing a seamless way to go from static (d=3) to dynamic (d=4) scenes. This planar factorization makes adding dimension-specific priors easy, e.g. temporal smoothness and multi-resolution spatial structure, and induces a natural decomposition of static and dynamic components of a scene. We use a linear feature decoder with a learned color basis that yields similar performance as a nonlinear black-box MLP decoder. Across a range of synthetic and real, static and dynamic, fixed and varying appearance scenes, k-planes yields competitive and often state-of-the-art reconstruction fidelity with low memory usage, achieving 1000x compression over a full 4D grid, and fast optimization with a pure PyTorch implementation. For video results and code, please see https://sarafridov.github.io/K-Planes.
Sara Fridovich-Keil, Giacomo Meanti, Frederik Warburg, Benjamin Recht, Angjoo Kanazawa
2023-01-24T18:59:08Z
http://arxiv.org/abs/2301.10241v2
# \(K\)-Planes: Explicit Radiance Fields in Space, Time, and Appearance ###### Abstract We introduce \(k\)-planes, a white-box model for radiance fields in arbitrary dimensions. Our model uses \(\binom{d}{2}\) ("\(d\)-choose-\(2\)") planes to represent a \(d\)-dimensional scene, providing a seamless way to go from static (\(d=3\)) to dynamic (\(d=4\)) scenes. This planar factorization makes adding dimension-specific priors easy, e.g. temporal smoothness and multi-resolution spatial structure, and induces a natural decomposition of static and dynamic components of a scene. We use a linear feature decoder with a learned color basis that yields similar performance as a nonlinear black-box MLP decoder. Across a range of synthetic and real, static and dynamic, fixed and varying appearance scenes, \(k\)-planes yields competitive and often state-of-the-art reconstruction fidelity with low memory usage, achieving \(1000\)x compression over a full 4D grid, and fast optimization with a pure PyTorch implementation. For video results and code, please see [https://sarafridov.github.io/K-Planes](https://sarafridov.github.io/K-Planes). * equal contribution ## 1 Introduction Recent interest in dynamic radiance fields demands representations of 4D volumes. However, storing a 4D volume directly is prohibitively expensive due to the curse of dimensionality. Several approaches have been proposed to factorize 3D volumes for static radiance fields, but these do not easily extend to higher dimensional volumes. We propose a factorization of 4D volumes that is simple, interpretable, compact, and yields fast training and rendering. Specifically, we use six planes to represent a 4D volume, where the first three represent space and the last three represent space-time changes, as illustrated in Fig. 1(d). This decomposition of space and space-time makes our model interpretable, dynamic objects are clearly visible in the space-time planes, whereas static objects only appear in the space planes. This interpretability enables dimension-specific priors in time and space. More generally, our approach yields a straightforward, prescriptive way to select a factorization of any dimension with 2D planes. For a \(d\)-dimensional space, we use \(k=\binom{d}{2}\) ("\(d\)-_choose_-2_") _k-planes_, which represent every pair of dimensions -- for example, our model uses \(\binom{4}{2}=6\)_hexplanes_ in 4D and reduces to \(\binom{3}{2}=3\)_tri-planes_ in 3D. Choosing any other set of planes would entail either using more than \(k\) planes and thus occupying unnecessary memory, or using fewer planes and thereby forfeiting the ability to represent some potential interaction between two of the \(d\) dimensions. We call our model \(k\)-planes; Fig. 1 illustrates its natural application to both static and dynamic scenes. Most radiance field models entail some black-box components with their use of MLPs. Instead, we seek a simple model whose functioning can be inspected and understood. We find two design choices to be fundamental in allowing \(k\)-planes to be a white-box model while maintaining reconstruction quality competitive with or better than previous black-box models [16, 30]: (1) Features from our \(k\)-planes are _multiplied_ together rather than added, as was done in prior work [5, 6], and (2) our linear feature decoder uses a learned basis for view-dependent color, enabling greater adaptivity including the ability to model scenes with variable appearance. 
We show that an MLP decoder can be replaced with this linear feature decoder only when the planes are multiplied, suggesting that the former is involved in both view-dependent color and determining spatial structure. Our factorization of 4D volumes into 2D planes leads to a high compression level without relying on MLPs, using \(200\) MB to represent a 4D volume whose direct representation at the same resolution would require more than 300 GB, a compression rate of three orders of magnitude. Furthermore, despite not using any custom CUDA kernels, \(k\)-planes trains orders of magnitude faster than prior implicit models and on par with concurrent hybrid models. In summary, we present the first white-box, interpretable model capable of representing radiance fields in arbitrary dimensions, including static scenes, dynamic scenes, and scenes with variable appearance. Our \(k\)-planes model achieves competitive performance across reconstruction quality, model size, and optimization time across these varied tasks, without any custom CUDA kernels. ## 2 Related Work \(K\)-planes is an interpretable, explicit model applicable to static scenes, scenes with varying appearances, and dynamic scenes, with compact model size and fast optimization time. Our model is the first to yield all of these attributes, as illustrated in Tab. 1. We further highlight that \(k\)-planes satisfies this in a simple framework that naturally extends to arbitrary dimensions. Spatial decomposition.NeRF [24] proposed a fully implicit model with a large neural network queried many times during optimization, making it slow and essentially a black-box. Several works have used geometric representations to reduce the optimization time. Plenoxels [10] proposed a fully explicit model with trilinear interpolation in a 3D grid, which reduced the optimization time from hours to a few minutes. However, their explicit grid representation of 3D volumes, and that of DVGO [33], grows exponentially with dimension, making it challenging to scale to high resolution and completely intractable for 4D dynamic volumes. Hybrid methods [6, 25, 33] retain some explicit geometric structure, often compressed by a spatial decomposition, alongside a small MLP feature decoder. Instant-NGP [25] proposed a multiresolution voxel grid encoded implicitly via a hash function, allowing fast optimization and rendering with a compact model. TensoRF [6] achieved similar model compression and speed by replacing the voxel grid with a tensor decomposition into planes and vectors. In a generative setting, EG3D [5] proposed a similar spatial decomposition into three planes, whose values are added together to represent a 3D volume. Our work is inspired by the explicit modeling of Plenoxels as well as these spatial decompositions, particularly the triplane model of [5], the tensor decomposition of [6], and the multiscale grid model of [25]. 
We also draw inspiration from Extreme MRI [26], which uses a multiscale low \begin{table} \begin{tabular}{l|c c c c c c} \hline \hline & & & & & & & \\ & & & & & & & \\ \hline NeRF & ✓ & ✗ & ✗ & ✗ & ✓ & ✗ \\ NeRF-W & ✓ & ✓ & ✗ & ✗ & ✓ & ✗ \\ DVGO & ✓ & ✗ & ✗ & ✓ & ✗ & ✗ \\ Plenoxels & ✓ & ✗ & ✗ & ✓ & ✗ & ✓ \\ Instant-NGP, TensoRF & ✓ & ✗ & ✗ & ✓ & ✓ & ✗\({}^{1}\) \\ DyNeRF, D-NeRF & – & ✗ & ✓ & ✗ & ✓ & ✗ \\ TiNeuVox, Tensor4D & – & ✗ & ✓ & ✓ & ✓ & ✗ \\ MixVoxels, V4D & – & ✗ & ✓ & ✓ & ✗ & ✗ \\ NeRFplayer & – & ✗ & ✓ & ✓ & ✓\({}^{2}\) & ✗ \\ \hline \(K\)-planes hybrid (Ours) & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ \\ \(K\)-planes explicit (Ours) & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \({}^{1}\) TensoRF offers both hybrid and explicit versions, with a small quality gap \({}^{2}\) NerffPlayer offers models at different sizes, the smallest of which has \(<100\) million parameters but the largest of which has \(>300\) million parameters \end{table} Table 1: **Related work overview.** The \(k\)-planes model works for a diverse set of scenes and tasks (static, varying appearance, and dynamic). It has a low memory usage (compact) and fast training and inference time (fast). Here “fast” includes any model that can optimize within a few (\(<6\)) hours on a single GPU, and “compact” denotes models that use less than roughly 100 million parameters. “Explicit” denotes white-box models that do not rely on MLPs. rank decomposition to represent 4D dynamic volumes in magnetic resonance imaging. These spatial decomposition methods have been shown to offer a favorable balance of memory efficiency and optimization time for static scenes. However, it is not obvious how to extend these factorizations to 4D volumes in a memory-efficient way. \(K\)-planes defines a unified framework that enables efficient and interpretable factorizations of 3D and 4D volumes and trivially extends to even higher dimensional volumes. Dynamic volumes.Applications such as Virtual Reality (VR) and Computed Tomography (CT) often require the ability to reconstruct 4D volumes. Several works have proposed extensions of NeRF to dynamic scenes. The two most common schemes are (1) modeling a deformation field on top of a static _canonical_ field [8, 9, 17, 30, 36, 43, 27], or (2) directly learning a radiance field conditioned on time [12, 16, 17, 28, 41]. The former makes decomposing static and dynamic components easy [43, 40], but struggles with changes in scene topology (e.g. when a new object appears), while the latter makes disentangling static and dynamic objects hard. A third strategy is to choose a representation of 3D space and repeat it at each timestep (e.g. NeRFlayer [32]), resulting in a model that ignores space-time interactions and can become impractically large for long videos. Further, some of these models are fully implicit [16, 30] and thus suffer from extremely long training times (e.g. DyNeRF used 8 GPUs for 1 week to train a single scene), as well as being completely black-box. Others use partially explicit decompositions for video [9, 11, 14, 18, 19, 31, 32, 37], usually combining some voxel or spatially decomposed feature grid with one or more MLP components for feature decoding and/or representing scene dynamics. Most closely related to \(k\)-planes is Tensor4D [31], which uses 9 planes to decompose 4D volumes. \(K\)-planes is less redundant (_e.g._ Tensor4D includes two \(yt\) planes), does not rely on multiple MLPs, and offers a simpler factorization that naturally generalizes to static and dynamic scenes. 
Our method combines a fully explicit representation with a built-in decomposition of static and dynamic components, the ability to handle arbitrary topology and lighting changes over time, fast optimization, and compactness.

Appearance embedding. Reconstructing large environments from photographs taken with varying illumination is another domain in which implicit methods have shown appealing results, but hybrid and explicit approaches have not yet gained a foothold. NeRF-W [20] was the first to demonstrate photorealistic view synthesis in such environments. They augment a NeRF-based model with a learned global appearance code per frame, enabling it to explain away changes in appearance, such as time of day. With Generative Latent Optimization (GLO) [4], these appearance codes can further be used to manipulate the scene appearance by interpolation in the latent appearance space. Block-NeRF [34] employs similar appearance codes. We show that our \(k\)-planes representation can also effectively reconstruct these unbounded environments with varying appearance. We similarly extend our model - either the learned color basis in the fully explicit version, or the MLP decoder in the hybrid version - with a global appearance code to disentangle global appearance from a scene without affecting geometry. To the best of our knowledge, ours is both the first fully explicit and the first hybrid method to successfully reconstruct these challenging scenes.

## 3 K-planes model

We propose a simple and interpretable model for representing scenes in arbitrary dimensions. Our representation yields low memory usage and fast training and rendering. The \(k\)-planes factorization, illustrated in Fig. 2, models a \(d\)-dimensional scene using \(k=\binom{d}{2}\) planes representing every combination of two dimensions. For example, for static 3D scenes, this results in _tri-planes_ with \(\binom{3}{2}=3\) planes representing \(xy\), \(xz\), and \(yz\). For dynamic 4D scenes, this results in _hex-planes_, with \(\binom{4}{2}=6\) planes including the three space-only planes and three space-time planes \(xt\), \(yt\), and \(zt\). Should we wish to represent a 5D space, we could use \(\binom{5}{2}=10\) _deca-planes_. In the following section, we describe the 4D instantiation of our \(k\)-planes factorization.

Figure 2: **Method overview.** (a) Our \(k\)-planes representation factorizes 4D dynamic volumes into six planes, three for space and three for spatiotemporal variations. To obtain the value of a 4D point \(\mathbf{q}=(x,y,z,t)\), we first project the point into each plane, in which we (b) do multiscale bilinear interpolation. (c) The interpolated values are multiplied and then concatenated over the \(S\) scales. (d) These features are decoded either with a small MLP or our explicit linear decoder. (e) We follow the standard volumetric rendering formula to predict ray color and density. The model is optimized by (f) minimizing the reconstruction loss with simple regularization in space and time.

### Hex-planes

The hex-planes factorization uses six planes. We refer to the space-only planes as **P\({}_{xy}\)**, **P\({}_{xz}\)**, and **P\({}_{yz}\)**, and the space-time planes as **P\({}_{xt}\)**, **P\({}_{yt}\)**, and **P\({}_{zt}\)**. Assuming symmetric spatial and temporal resolution \(N\) for simplicity of illustration, each of these planes has shape \(N\times N\times M\), where \(M\) is the size of stored features that capture the density and view-dependent color of the scene.
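To make this combinatorial structure concrete, the following minimal sketch (our own illustration, not the authors' released code) enumerates the \(\binom{d}{2}\) coordinate pairs and allocates one feature grid per pair; the resolution \(N\), feature length \(M\), and random initialization are placeholder choices.

```python
from itertools import combinations
import numpy as np

def allocate_planes(d: int, N: int = 64, M: int = 32, seed: int = 0) -> dict:
    """Allocate one N x N x M feature plane per pair of coordinates.

    d=3 gives the 3 tri-planes (xy, xz, yz); d=4 gives the 6 hex-planes
    (xy, xz, xt, yz, yt, zt); d=5 would give 10 deca-planes.
    """
    rng = np.random.default_rng(seed)
    axis_names = "xyzt"[:d] if d <= 4 else [f"a{i}" for i in range(d)]
    planes = {}
    for a, b in combinations(range(d), 2):
        name = f"{axis_names[a]}{axis_names[b]}"
        planes[name] = 0.1 * rng.standard_normal((N, N, M))
    return planes

hex_planes = allocate_planes(d=4)
print(len(hex_planes), sorted(hex_planes))  # 6 ['xt', 'xy', 'xz', 'yt', 'yz', 'zt']
```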
We obtain the features of a 4D coordinate \(\mathbf{q}=(i,j,k,\tau)\) by normalizing its entries between \([0,N)\) and projecting it onto these six planes \[f(\mathbf{q})_{c}=\psi\big{(}\textbf{P}_{c},\pi_{c}(\mathbf{q})\big{)}, \tag{1}\] where \(\pi_{c}\) projects \(\mathbf{q}\) onto the \(c\)'th plane and \(\psi\) denotes bilinear interpolation of a point into a regularly spaced 2D grid. We repeat Eq. (1) for each plane \(c\in C\) to obtain feature vectors \(f(\mathbf{q})_{c}\). We combine these features over the six planes using the Hadamard product (elementwise multiplication) to produce a final feature vector of length \(M\) \[f(\mathbf{q})=\prod_{c\in C}f(\mathbf{q})_{c}. \tag{2}\] These features will be decoded into color and density using either a linear decoder or an MLP, described in Sec. 3.3. Why Hadamard product?In 3D, \(k\)-planes reduces to the tri-plane factorization, which is similar to [5] except that the elements are multiplied. A natural question is why we multiply rather than add, as has been used in prior work with tri-plane models [5, 29]. Fig. 3 illustrates that combining the planes by multiplication allows \(k\)-planes to produce spatially localized signals, which is not possible with addition. This selection ability of the Hadamard product produces substantial rendering improvements for linear decoders and modest improvement for MLP decoders, as shown in Tab. 2. This suggests that the MLP decoder is involved in both view-dependent color and determining spatial structure. The Hadamard product relieves the feature decoder of this extra task and makes it possible to reach similar performance using a linear decoder solely responsible for view-dependent color. ### Interpretability The separation of space-only and space-time planes makes the model interpretable and enables us to incorporate dimension-specific priors. For example, if a region of the scene never moves, its temporal component will always be \(1\) (the multiplicative identity), thereby just using the features from the space planes. This offers compression benefits since a static region can easily be identified and compactly represented. Furthermore, the space-time separation improves interpretability, _i.e_. we can track the changes in time by visualizing the elements in the time-space planes that are not \(1\). This simplicity, separation, and interpretability make adding priors straightforward. Multiscale planes.To encourage spatial smoothness and coherence, our model contains multiple copies at different spatial resolutions, for example \(64\), \(128\), \(256\), and \(512\). Models at each scale are treated separately, and the \(M\)-dimensional feature vectors from different scales are concatenated together before being passed to the decoder. The red and blue squares in Fig. 2a-b illustrate bilinear interpolation with multiscale planes. Inspired by the multiscale hash mapping of Instant-NGP [25], this representation efficiently encodes spatial features at different scales, allowing us to reduce the number of features stored at the highest resolution and thereby further compressing our model. Empirically, we do not find it necessary to represent our time dimension at multiple scales. 
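As a concrete illustration of Eqs. (1) and (2), the sketch below (ours, with made-up resolutions, feature sizes, and a simplified normalization to \([0,1)\)) projects a point onto each plane, bilinearly interpolates, combines the per-plane features with a Hadamard product, and concatenates the result over scales; in the real model the planes are learnable parameters rather than fixed random arrays.

```python
import numpy as np
from itertools import combinations

def bilinear(plane, u, v):
    """Bilinearly interpolate an (N, N, M) grid at continuous coordinates (u, v)."""
    N = plane.shape[0]
    u, v = np.clip(u, 0, N - 1), np.clip(v, 0, N - 1)
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, N - 1), min(v0 + 1, N - 1)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * plane[u0, v0] + du * (1 - dv) * plane[u1, v0]
            + (1 - du) * dv * plane[u0, v1] + du * dv * plane[u1, v1])

def kplanes_features(q, scales):
    """q: d-dim point with entries in [0, 1); scales: list of {(a, b): plane} dicts."""
    per_scale = []
    for planes in scales:
        f = None
        for (a, b), plane in planes.items():
            N = plane.shape[0]
            # Eq. (1): project q onto the (a, b) plane and interpolate.
            f_c = bilinear(plane, q[a] * (N - 1), q[b] * (N - 1))
            # Eq. (2): combine planes with the elementwise (Hadamard) product.
            f = f_c if f is None else f * f_c
        per_scale.append(f)
    # Concatenate the M-dimensional feature over the S scales.
    return np.concatenate(per_scale)

rng = np.random.default_rng(0)
pairs = list(combinations(range(4), 2))              # xy, xz, xt, yz, yt, zt
scales = [{p: rng.random((N, N, 4)) for p in pairs} for N in (16, 32)]
q = np.array([0.3, 0.7, 0.1, 0.5])                   # (x, y, z, t), normalized
print(kplanes_features(q, scales).shape)             # (8,): M=4 features x 2 scales
```

Swapping the product for a sum in the line marked Eq. (2) would reproduce the additive plane combination ablated in Tab. 2.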
\begin{table} \begin{tabular}{c c c c} \hline \hline Plane Combination & Explicit & Hybrid & \# params \(\downarrow\) \\ \hline Multiplication & 35.29 & 35.75 & 33M \\ Addition & 28.78 & 34.80 & 33M \\ \hline \hline \end{tabular} \end{table} Table 2: **Ablation study over Hadamard product.** Multiplication of plane features yields a large improvement in PSNR \(\uparrow\) for our explicit model, whereas our hybrid model can use its MLP decoder to partially compensate for the less expressive addition of planes. This experiment uses the static _Lego_ scene [24] with 3 scales: \(128\), \(256\), and \(512\), and \(32\) features per scale.

Figure 3: **Addition versus Hadamard product.** Elementwise addition of plane features (left) compared to multiplication (right), in a triplane example. A single entry in each plane is positive and the rest are zero, selecting a single 3D point by multiplication but producing intersecting lines by addition. This selection ability of multiplication improves the expressivity of our explicit model.

Total variation in space. Spatial total variation regularization encourages sparse gradients (with L1 norm) or smooth gradients (with L2 norm), encoding priors over edges being either sparse or smooth in space. We encourage this in 1D over the spatial dimensions of each of our space-time planes and in 2D over our space-only planes: \[\mathcal{L}_{TV}(\textbf{P})=\frac{1}{|C|n^{2}}\sum_{c,i,j}\big{(}\|\textbf{P}_{c}^{i,j}-\textbf{P}_{c}^{i-1,j}\|_{2}^{2}+\|\textbf{P}_{c}^{i,j}-\textbf{P}_{c}^{i,j-1}\|_{2}^{2}\big{)}, \tag{3}\] where \(i,j\) are indices on the plane's resolution. Total variation is a common regularizer in inverse problems and was used in Plenoxels [10] and TensoRF [6]. We use the L2 version in our results, though we find that either L2 or L1 produces similar quality.

Smoothness in time. We encourage smooth motion with a 1D Laplacian (second derivative) filter \[\mathcal{L}_{smooth}(\textbf{P})=\frac{1}{|C|n^{2}}\sum_{c,i,t}\lVert\textbf{P}_{c}^{i,t-1}-2\textbf{P}_{c}^{i,t}+\textbf{P}_{c}^{i,t+1}\rVert_{2}^{2}, \tag{4}\] to penalize sharp "acceleration" over time. We only apply this regularizer on the time dimension of our space-time planes. Please see the appendix for an ablation study.

Sparse transients. We want the static part of the scene to be modeled by the space-only planes. We encourage this separation of space and time by initializing the features in the space-time planes as \(1\) (the multiplicative identity) and using an \(\ell_{1}\) regularizer on these planes during training: \[\mathcal{L}_{sep}(\textbf{P})=\sum_{c}\lVert\textbf{1}-\textbf{P}_{c}\rVert_{1},\qquad c\in\{xt,yt,zt\}. \tag{5}\] In this way, the space-time plane features of the \(k\)-planes decomposition will remain fixed at \(1\) if the corresponding spatial content does not change over time.

### Feature decoders

We offer two methods to decode the \(M\)-dimensional temporally- and spatially-localized feature vector \(f(\mathbf{q})\) from Eq. (2) into density, \(\sigma\), and view-dependent color, **c**.

Learned color basis: a linear decoder and explicit model. Plenoxels [10], Plenoctrees [42], and TensoRF [6] proposed models where spatially-localized features are used as coefficients of the spherical harmonic (SH) basis, to describe view-dependent color. Such SH decoders can give both high-fidelity reconstructions and enhanced interpretability compared to MLP decoders.
However, SH coefficients are difficult to optimize, and their expressivity is limited by the number of SH basis functions used (often limited 2nd degree harmonics, which produce blurry specular reflections). Instead, we replace the SH functions with a learned basis, retaining the interpretability of treating features as coefficients for a linear decoder yet increasing the expressivity of the basis and allowing it to adapt to each scene, as was proposed in NeX [39]. We represent the basis using a small MLP that maps each view direction \(\mathbf{d}\) to red \(b_{R}(\mathbf{d})\in\mathbb{R}^{M}\), green \(b_{G}(\mathbf{d})\in\mathbb{R}^{M}\), and blue \(b_{B}(\mathbf{d})\in\mathbb{R}^{M}\)_basis vectors_. The MLP serves as an adaptive drop-in replacement for the spherical harmonic basis functions repeated over the three color channels. We obtain the color values \[\mathbf{c}(\mathbf{q},\mathbf{d})=\bigcup_{i\in\{R,G,B\}}f(\mathbf{q})\cdot b_{i}(\mathbf{d}), \tag{6}\] where \(\cdot\) denotes the dot product and \(\cup\) denotes concatenation. Similarly, we use a learned basis \(b_{\sigma}\in\mathbb{R}^{M}\), independent of the view direction, as a linear decoder for density: \[\sigma(\mathbf{q})=f(\mathbf{q})\cdot b_{\sigma}. \tag{7}\] Predicted color and density values are finally forced to be in their valid range by applying the sigmoid to \(\mathbf{c}(\mathbf{q},\mathbf{d})\), and the exponential (with truncated gradient) to \(\sigma(\mathbf{q})\). MLP decoder: a hybrid model.Our model can also be used with an MLP decoder like that of Instant-NGP [25] and DVGO [33], turning it into a hybrid model. In this version, features are decoded by two small MLPs, one \(g_{\sigma}\) that maps the spatially-localized features into density \(\sigma\) and additional features \(\hat{f}\), and another \(g_{RGB}\) that maps \(\hat{f}\) and the embedded view direction \(\gamma(\mathbf{d})\) into RGB color \[\sigma(\mathbf{q}),\hat{f}(\mathbf{q}) =g_{\sigma}(f(\mathbf{q})) \tag{8}\] \[\mathbf{c}(\mathbf{q},\mathbf{d}) =g_{RGB}(\hat{f}(\mathbf{q}),\gamma(\mathbf{d})).\] As in the linear decoder case, the predicted density and color values are finally normalized via exponential and sigmoid, respectively. Global appearance.We also show a simple extension of our \(k\)-planes model that enables it to represent scenes with consistent, static geometry viewed under varying lighting or appearance conditions. Such scenes appear in the Phototourism [15] dataset of famous landmarks photographed at different times of day and in different weather. To model this variable appearance, we augment \(k\)-planes with an \(M\)-dimensional vector for each training image \(1,\dots,T\). Similar to NeRF-W [20], we optimize this per-image feature vector and pass it as an additional input to either the MLP learned color basis \(b_{R},b_{G},b_{B}\), in our explicit version, or to the MLP color decoder \(g_{RGB}\), in our hybrid version, so that it can affect color but not geometry. ### Optimization details Contraction and normalized device coordinates.For forward-facing scenes, we apply normalized device coordinates (NDC) [24] to better allocate our resolution while enabling unbounded depth. We also implement an \(\ell_{\infty}\) version (rather than \(\ell_{2}\)) of the scene contraction proposed in Mip-NeRF 360 [2], which we use on the unbounded Phototourism scenes. **Proposal sampling.** We use a variant of the proposal sampling strategy from Mip-NeRF 360 [2], with a small instance of \(k\)-planes as density model. 
Proposal sampling works by iteratively refining density estimates along a ray, to allocate more points in the regions of higher density. We use a two-stage sampler, resulting in fewer samples that must be evaluated in the full model and in sharper details by placing those samples closer to object surfaces. The density models used for proposal sampling are trained with the histogram loss [2]. **Importance sampling.** For multiview dynamic scenes, we implement a version of the importance sampling based on temporal difference (IST) strategy from DyNeRF [16]. During the last portion of optimization, we sample training rays proportionally to the maximum variation in their color within 25 frames before or after. This results in higher sampling probabilities in the dynamic region. We apply this strategy after the static scene has converged with uniformly sampled rays. In our experiments, IST has only a modest impact on full-frame metrics but improves visual quality in the small dynamic region. Note that importance sampling cannot be used for monocular videos or datasets with moving cameras. ## 4 Results We demonstrate the broad applicability of our planar decomposition via experiments in three domains: static scenes (both bounded \(360^{\circ}\) and unbounded forward-facing), dynamic scenes (forward-facing multi-view and bounded \(360^{\circ}\) monocular), and Phototourism scenes with variable appearance. For all experiments, we report the metrics PSNR (pixel-level similarity) and SSIM1[38] (structural similarity), as well as approximate training time and number of parameters (in millions), in Tab. 3. Blank entries in Tab. 3 denote baseline methods for which the corresponding information is not readily available. Full per-scene results may be found in the appendix. Footnote 1: Note that among prior work, some evaluate using an implementation of SSIM from MipNeRF [1] whereas others use the scikit-image implementation, which tends to produce higher values. For fair comparison on each dataset we make a best effort to use the same SSIM implementation as the relevant prior work. ### Static scenes We first demonstrate our triplane model on the bounded, \(360^{\circ}\), synthetic scenes from NeRF [24]. We use a model with three symmetric spatial resolutions \(N\in\{128,256,512\}\) and feature length \(M=32\) at each scale; please see the appendix for ablation studies over these hyperparameters. The explicit and hybrid versions of our model perform similarly, within the range of recent results on this benchmark. Fig. 4 shows zoomed-in visual results on a small sampling of scenes. We also present results of our triplane model on the unbounded, forward-facing, real scenes from LLFF [23]. Our results on this dataset are similar to the synthetic static scenes; both versions of our model match or exceed the prior state-of-the-art, with the hybrid version achieving slightly higher metrics than the fully explicit version. Fig. 5 shows zoomed-in visual results on a small sampling of scenes. ### Dynamic scenes We evaluate our hexplane model on two dynamic scene datasets: a set of synthetic, bounded, \(360^{\circ}\), monocular videos from D-NeRF [30] and a set of real, unbounded, forward-facing, multiview videos from DyNeRF [16]. The D-NeRF dataset contains eight videos of varying duration, from 50 frames to 200 frames per video. Each timestep has a single training image from a different viewpoint; the camera "teleports" between adjacent timestamps [13]. 
Standardized test views are from novel camera positions at a range of timestamps throughout the video. Both our explicit and hybrid models outperform D-NeRF in both quality metrics and training time, though they do not surpass very recent hybrid methods TiNeuVox [9] and V4D [11], as shown in Fig. 7. The DyNeRF dataset contains six 10-second videos recorded at 30 fps simultaneously by 15-20 cameras from a range of forward-facing view directions; the exact number of cameras varies per scene because a few cameras produced miscalibrated videos. A central camera is reserved for evaluation, and training uses frames from the remaining cameras. Both our methods again produce similar quality metrics to prior state-of-the-art, including recent hybrid method MixVoxels [37], with our hybrid method achieving higher quality metrics. See Fig. 6 for a visual comparison.

Figure 4: **Zoomed qualitative results on static NeRF scenes.** Visual comparison of \(k\)-planes, TensoRF [6], and the ground truth, on _ship_ (top) and _hotdog_ (bottom).

Figure 5: **Zoomed qualitative results on static LLFF scenes.** Visual comparison of \(k\)-planes, TensoRF [6], and the ground truth, on _orchids_ (top) and _T-rex_ (bottom).

Figure 6: **Qualitative video results.** Our hexplane model rivals the rendering quality of state-of-the-art neural rendering methods. Our renderings were obtained after at most 4 hours of optimization on a single GPU whereas DyNeRF trained for a week on 8 GPUs. MixVoxels frame comes from a slightly different video rendering, and is thus slightly shifted.

Figure 7: **Zoomed qualitative results on scenes from D-NeRF [30].** Visual comparison of \(k\)-planes, D-NeRF [30], TiNeuVox [9] and V4D [11], on _t-rex_ (top) and _hook_ (bottom).

#### 4.2.1 Decomposing time and space

One neat consequence of our planar decomposition of time and space is that it naturally disentangles dynamic and static portions of the scene. The static-only part of the scene can be obtained by setting the three time planes to one (the multiplicative identity). Subtracting the static-only rendered image from the full rendering (i.e. with the time plane parameters not set to \(1\)), we can reveal the dynamic part of the scene. Fig. 9 shows this decomposition of time and space. This natural volumetric disentanglement of a scene into static and dynamic regions may enable many applications across augmented and virtual reality [3]. We can also visualize the time planes to better understand where motion occurs in a video. Fig. 8 shows the averaged features learned by the _\(xt\)_ plane in our model for the _flame salmon_ and _cut beef_ DyNeRF videos, in which we can identify the motions of the hands in both space and time. The \(xt\) plane learns to be sparse, with most entries equal to the multiplicative identity, due to a combination of our sparse transients prior and the true sparsity of motion in the video. For example, in the left side of Fig. 8 one of the cook's arms contains most of the motion, while in the right side both arms move. Having access to such an explicit representation of time allows us to add time-specific priors.
\begin{table} \begin{tabular}{l c c c c} \hline \hline  & PSNR \(\uparrow\) & SSIM \(\uparrow\) & Train Time \(\downarrow\) & \# Params \(\downarrow\) \\ \hline \multicolumn{5}{c}{NeRF [24] (static, synthetic)} \\ \hline Ours-explicit & 32.21 & 0.960 & 38 min & 33M \\ Ours-hybrid & 32.36 & 0.962 & 38 min & 33M \\ Plenoxels [10] & 31.71 & 0.958 & 11 min & \(\sim\)500M \\ TensoRF [6] & 33.14 & 0.963 & 17 min & 18M \\ I-NGP [25] & 33.18 & - & 5 min & \(\sim\) 16M \\ \hline \multicolumn{5}{c}{LLFF [23] (static, real)} \\ \hline Ours-explicit & 26.78 & 0.841 & 33 min & 19M \\ Ours-hybrid & 26.92 & 0.847 & 33 min & 19M \\ Plenoxels & 26.29 & 0.839 & 24 min & \(\sim\)500M \\ TensoRF & 26.73 & 0.839 & 25 min & 45M \\ \hline \multicolumn{5}{c}{D-NeRF [30] (dynamic, synthetic)} \\ \hline Ours-explicit & 31.05 & 0.97 & 52 min & 37M \\ Ours-hybrid & 31.61 & 0.97 & 52 min & 37M \\ D-NeRF & 29.67 & 0.95 & 48 hrs & 1-3M \\ TiNeuVox [9] & 32.67 & 0.97 & 30 min & \(\sim\)12M \\ V4D [11] & 33.72 & 0.98 & 4.9 hrs & 275M \\ \hline \multicolumn{5}{c}{DyNeRF [16] (dynamic, real)} \\ \hline Ours-explicit & 30.88 & 0.960 & 3.7 hrs & 51M \\ Ours-hybrid & 31.63 & 0.964 & 1.8 hrs & 27M \\ DyNeRF [16] & \({}^{1}\)29.58 & - & 1344 hrs & 7M \\ LLFF [23] & \({}^{1}\)23.24 & - & - & - \\ MixVoxels-L [37] & 30.80 & 0.960 & 1.3 hrs & 125M \\ \hline \multicolumn{5}{c}{Phototourism [15] (variable appearance)} \\ \hline Ours-explicit & 22.25 & 0.859 & 35 min & 36M \\ Ours-hybrid & 22.92 & 0.877 & 35 min & 36M \\ NeRF-W [20] & 27.00 & 0.962 & 384 hrs & \(\sim\)2M \\ NeRF-W (public)\({}^{2}\) & 19.70 & 0.764 & 164 hrs & \(\sim\)2M \\ LearnIt [35] & 19.26 & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 3: **Results.** Averaged metrics over all scenes in the respective datasets. Note that Phototourism scenes use MS-SSIM (multiscale structural similarity) instead of SSIM. \(K\)-planes timings are based on a single NVIDIA A30 GPU. Please see the appendix for per-scene results and the website for video reconstructions.

### Variable appearance

Our variable appearance experiments use the Phototourism dataset [15], which includes photos of well-known landmarks taken by tourists with arbitrary view directions, lighting conditions, and transient occluders, mostly other tourists. Our experimental conditions parallel those of NeRF-W [20]: we train on more than a thousand tourist photographs and test on a standard set that is free of transient occluders. Like NeRF-W, we evaluate on test images by optimizing our per-image appearance feature on the left half of the image and computing metrics on the right half. Visual comparison to prior work is shown in the appendix. Also similar to NeRF-W [4, 20], we can interpolate in the appearance code space. Since only the color decoder (and not the density decoder) takes the appearance code as input, our approach is guaranteed not to change the geometry, regardless of whether we use our explicit or our hybrid model. Fig. 10 shows that our planar decomposition with a 32-dimensional appearance code is sufficient to accurately capture global appearance changes in the scene.
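The left-half/right-half evaluation protocol described above can be summarized in a short sketch. This is our own illustration, not the authors' code: `render` stands in for the trained (frozen) \(k\)-planes field, and all shapes, step counts, and learning rates are placeholders.

```python
import torch

def fit_appearance_code(render, rays, target, code_dim=32, steps=100, lr=1e-2):
    """Optimize a per-image appearance code on the left half of a test image.

    render: frozen callable (rays, code) -> predicted RGB, stand-in for the
            trained k-planes model; rays and target have shape (H, W, ...).
    """
    W = target.shape[1]
    left = slice(0, W // 2)            # fit on the left half only
    code = torch.zeros(code_dim, requires_grad=True)
    opt = torch.optim.Adam([code], lr=lr)
    for _ in range(steps):
        pred = render(rays[:, left], code)
        loss = ((pred - target[:, left]) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return code.detach()

@torch.no_grad()
def right_half_psnr(render, rays, target, code):
    W = target.shape[1]
    right = slice(W // 2, W)           # score on the held-out right half
    mse = ((render(rays[:, right], code) - target[:, right]) ** 2).mean()
    return -10.0 * torch.log10(mse)

# Toy usage with a linear stand-in renderer (illustration only).
H, W, code_dim = 8, 16, 32
weights = torch.randn(3 + code_dim, 3) * 0.1
render = lambda r, c: torch.cat([r, c.expand(*r.shape[:-1], -1)], -1) @ weights
rays = torch.rand(H, W, 3)             # placeholder ray encodings
target = torch.rand(H, W, 3)           # held-out test photograph
code = fit_appearance_code(render, rays, target)
print(float(right_half_psnr(render, rays, target, code)))
```

Because only the appearance code (and not the field) receives gradients, this test-time fitting cannot alter geometry, mirroring the guarantee discussed above.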
## 5 Conclusions We introduced a simple yet versatile method to decompose a \(d\)-dimensional space into \(\binom{d}{2}\) planes, which can be optimized directly from indirect measurements and scales gracefully in model size and optimization time with increasing dimension, without any custom CUDA kernels. We demonstrated that the proposed \(k\)-planes decomposition applies naturally to reconstruction of static 3D scenes as well as dynamic 4D videos, and with the addition of a global appearance code can also extend to the more challenging task of unconstrained scene reconstruction. \(K\)-planes is the first explicit, simple model to demonstrate competitive performance across such varied tasks. **Acknowledgments.** Many thanks to Matthew Tancik, Ruilong Li, and other members of KAIR for helpful discussion and pointers. We also thank the DyNeRF authors for their response to our questions about their method. Figure 8: **Visualization of a time plane.** The \(xt\) plane highlights the dynamic regions in the scene. The wiggly patterns across time correspond to the motion of the person’s hands and cooking tools, in the _flame salmon_ scene (left) where only one hand moves and the _cut beef_ scene (right) where both hands move. Figure 10: **Appearance interpolation**. Like NeRF-W [20], we can interpolate our appearance code to alter the visual appearance of landmarks. We show three test views from the _Trevi fountain_ with appearance codes corresponding to day and night. Figure 9: **Decomposition of space and time.**\(K\)-planes (left) naturally decomposes a 3D video into static and dynamic components. We render the static part (middle) by setting the time planes to the identity, and the remainder (right) is the dynamic part. Top shows the _flame salmon_ multiview video [16] and bottom shows the _jumping jacks_ monocular video [30].
2301.06423
On a generalization of the spectral Mantel's theorem
Mantel's theorem is a classical result in extremal graph theory which gives the maximum number of edges of a triangle-free graph of order $n$. In 1970, E. Nosal obtained a spectral version of Mantel's theorem which gave the maximum spectral radius of a triangle-free graph of order $n$. In this paper, the clique tensor of a graph $G$ is proposed and the spectral Mantel's theorem is extended via the clique tensor. Furthermore, a sharp upper bound on the number of cliques in $G$ via the spectral radius of the clique tensor is obtained. We also show that the results of this paper imply a result of Erd\H{o}s [Magyar Tud. Akad. Mat. Kutat\'{o} Int. K\"{o}zl. 7 (1962)] under certain conditions.
Chunmeng Liu, Changjiang Bu
2023-01-16T13:21:20Z
http://arxiv.org/abs/2301.06423v1
# On a generalization of the spectral Mantel's theorem ###### Abstract Mantel's theorem is a classical result in extremal graph theory which gives the maximum number of edges of a triangle-free graph of order \(n\). In 1970, E. Nosal obtained a spectral version of Mantel's theorem which gave the maximum spectral radius of a triangle-free graph of order \(n\). In this paper, we consider generalized spectral extremal problems. The clique tensor of a graph \(G\) is proposed and the spectral Mantel's theorem is extended via the clique tensor. Furthermore, a sharp upper bound on the number of cliques in \(G\) via the spectral radius of the clique tensor is obtained. We also show that the results of this paper imply a result of Erdos [Magyar Tud. Akad. Mat. Kutato Int. Kozl. 7 (1962)] under certain conditions. keywords: Mantel's theorem, Spectral radius, Cliques, Tensor _AMS classification (2020):_ 05C35, 05C50

## 1 Introduction

The graphs in this paper are undirected and simple. Let \(G\) be a graph with the set of vertices \(V(G)=\{1,2,\ldots,n\}\) and the set of edges \(E(G)=\{e_{1},...,e_{m}\}\); the number of edges of \(G\) is denoted by \(e(G)\). For a vertex \(i\in V(G)\), the neighborhood of \(i\) is the set of all vertices adjacent to \(i\), denoted by \(N(i)\). A clique is a subset of vertices of a graph such that its induced subgraph is complete, and a clique of size \(t\) is called a \(t\)-clique, denoted by \(c_{t}\). The set of all \(t\)-cliques in \(G\) is denoted by \(C_{t}(G)\) and the number of \(t\)-cliques of \(G\) is denoted by \(c_{t}(G)\). Let \(T_{r}(n)\) be the complete \(r\)-partite graph with partitions of sizes \(n_{1}\leq n_{2}\leq\cdots\leq n_{r}\) and \(n_{r}-n_{1}\leq 1\); the graph \(T_{r}(n)\) is called the \(r\)_-partite Turan graph_. Mantel's theorem, established by Willem Mantel [1] in 1907, is one of the early results in extremal graph theory and states that if \(G\) is a triangle-free graph of order \(n\), then \(e(G)\leq e(T_{2}(n))\), with equality if and only if \(G=T_{2}(n)\). In 1941, P. Turan [2] extended Mantel's theorem and obtained the well-known Turan theorem. Since then, Turan-type extremal problems have attracted extensive attention [3]. Let \(\rho(G)\) be the spectral radius of the adjacency matrix of a graph \(G\). The spectral extremal problem is an important topic in extremal graph theory. Up to now, there have been many results on spectral extremal graph theory [4, 5]. In 1970, E. Nosal [6] obtained the maximum spectral radius of a triangle-free graph \(G\) of order \(n\); this conclusion is called the spectral form of Mantel's theorem.

**Theorem 1.1**.: _(Spectral Mantel's theorem [6]) Suppose that \(G\) is a triangle-free graph of order \(n\), then_ \[\rho(G)\leq\rho(T_{2}(n)),\] _with equality if and only if \(G=T_{2}(n)\)._

**Remark 1**.: _V. Nikiforov [7] established the relationship between Theorem 1.1 and Mantel's theorem. By the inequality \(\frac{2e(G)}{n}\leq\rho(G)\) and Theorem 1.1, for a triangle-free graph \(G\) on \(n\) vertices,_ \[e(G)\leq\left\lfloor\frac{n}{2}\rho(G)\right\rfloor\leq\left\lfloor\frac{n}{2}\rho(T_{2}(n))\right\rfloor=\left\lfloor\frac{n}{2}\right\rfloor\left\lceil\frac{n}{2}\right\rceil=e(T_{2}(n)).\] _This shows that Theorem 1.1 implies Mantel's theorem._

V. Nikiforov [8] generalized Theorem 1.1 to \(K_{r+1}\)-free graphs and gave the spectral Turan theorem.
**Theorem 1.2**.: [8] _If \(G\) is a \(K_{r+1}\)-free graph of order \(n\), then_ \[\rho(G)\leq\rho(T_{r}(n)),\] _equality holds if and only if \(G=T_{r}(n)\)._ In addition, Theorem 1.1 is improved in terms of the number of edges of a graph in [9]. For further results, see [10, 11, 12]. In this paper, the clique tensor of a graph is proposed and Theorem 1.1 is extended via the clique tensor. Let \(\mathbb{C}^{n}\) be the set of \(n\)-dimensional vectors over complex number field \(\mathbb{C}\). An order \(m\) dimension \(n\) complex tensor \(\mathcal{A}=(a_{i_{1}i_{2}\cdots i_{m}})\) is a multidimensional array with \(n^{m}\) entries, where \(i_{j}=1,2,\ldots,n\), \(j=1,2,\ldots,m\). For vectors \(x=(x_{1},\ldots,x_{n})^{T}\in\mathbb{C}^{n}\) the \(\mathcal{A}x^{m-1}\) is a vector in \(\mathbb{C}^{n}\) whose \(i\)-th component is \(\sum_{i_{2},\ldots,i_{m}=1}^{n}a_{ii_{2}\cdots i_{m}}x_{i_{2}}\cdots x_{i_{m}}\)[13]. In 2005, Qi [13] and Lim [14] posed the eigenvalues of tensors, respectively. If there exist \(\lambda\in\mathbb{C}\) and nonzero vector \(x=(x_{1},\ldots,x_{n})^{T}\in\mathbb{C}^{n}\) satisfy \[\mathcal{A}x^{m-1}=\lambda x^{[m-1]}, \tag{1}\] then \(\lambda\) is called an _eigenvalue_ of \(\mathcal{A}\) and \(x\) an _eigenvector_ of \(\mathcal{A}\) corresponding to \(\lambda\), where \(x^{[m-1]}=(x_{1}^{m-1},\ldots,x_{n}^{m-1})^{T}\). The maximal modulus of all eigenvalues of \(\mathcal{A}\) is called the spectral radius of \(\mathcal{A}\), denoted by \(\rho(\mathcal{A})\). The tensor and its eigenvalues have been applied to many fields, such as signal processing [15], automatica [16], polynomial optimization [17], network analysis [18], solving multilinear systems [19, 20], spectral hypergraph theory [21, 22, 23, 24, 25], etc. The authors [26] improved the upper bound of Wilf's chromatic number in terms of the spectral radius of tensors and obtained a formula of the number of cliques of fixed size which involve the spectrum of tensors. The \(t\)_-clique tensor_ of a graph \(G\) is defined as follows. **Definition 1.3**.: _Let \(G\) be a graph with \(n\) vertices. An order \(t\) and dimension \(n\) tensor \(\mathcal{A}(G)=(a_{i_{1}i_{2}\cdots i_{t}})\) is called the \(t\)-clique tensor of \(G\), if_ \[a_{i_{1}i_{2}\cdots i_{t}}=\begin{cases}\frac{1}{(t-1)!},&\{i_{1},\ldots,i_{t} \}\in C_{t}(G).\\ 0,&\text{otherwise}.\end{cases}\] The eigenvalue of \(\mathcal{A}(G)\) is called the \(t\)_-clique eigenvalue_ of \(G\). The spectral radius of \(\mathcal{A}(G)\) is called the \(t\)_-clique spectral radius_ of \(G\), denoted by \(\rho_{t}(G)\). When \(t=2\), the \(2\)-clique tensor \(\mathcal{A}(G)\) is the adjacency matrix of \(G\) and \(\rho_{2}(G)\) is the spectral radius of the adjacency matrix of \(G\). In this paper, Theorem 1.1 is generalized. **Theorem 1.4**.: _Let \(G\) be a graph on \(n\) vertices. If \(G\) is \(K_{r+1}\)-free, then_ \[\rho_{r}(G)\leq\rho_{r}(T_{r}(n)),\] _equality holds if and only if \(G\) is the \(r\)-partite Turan graph \(T_{r}(n)\)._ Obviously, Theorem 1.4 is the spectral Mantel's theorem in the case when \(r=2\). Similar to Remark 1, a natural question is that does Theorem 1.4 imply the maximum number of \(r\)-cliques in \(K_{r+1}\)-free graphs of order \(n\)? Therefore, we study the maximum number of cliques of fixed size in graphs in terms of spectral radius of tensors and obtain a following sharp upper bound of \(c_{r}(G)\) in terms of \(\rho_{r}(G)\). 
**Theorem 1.5**.: _For the \(r\)-clique spectral radius \(\rho_{r}(G)\) of a graph \(G\) on \(n\) vertices, then_ \[c_{r}(G)\leq\frac{n}{r}\rho_{r}(G). \tag{2}\] _Furthermore, if the number of \(r\)-cliques containing \(i\) is equal for all \(i\in V(G)\), then equality holds in (2)._ For a \(K_{r+1}\)-free graph \(G\) of order \(n\), P. Erdos [27] proved that \[c_{r}(G)\leq c_{r}(T_{r}(n)) \tag{3}\] with equality holds if and only if \(G\) is the \(r\)-partite Turan graph \(T_{r}(n)\), and [28; 29] gave the same conclusion through different methods. In Section 3, we will show that Theorem 1.4 implies (3) in the case when \(r\mid n\). The remainder of the paper is organized as follows. In Section 2, we introduce some of the definitions and lemmas required for proofs. In Section 3, we give the proofs of Theorem 1.4 and Theorem 1.5. ## 2 Preliminaries The tensor \(\mathcal{A}\) is called symmetric if its entries are invariant under any permutation of their indices. If all elements of a tensor \(\mathcal{A}\) are nonnegative, then \(\mathcal{A}\) is called the nonnegative tensor. For the symmetric nonnegative tensor, Qi [30] gave the following conclusion. **Lemma 2.1**.: [30] _Suppose that \(\mathcal{A}\) is an order \(m\) dimension \(n\) symmetric nonnegative tensor, with \(m\geq 2\). Then_ \[\rho(\mathcal{A})=\max\left\{x^{T}\mathcal{A}x^{m-1}:\sum_{i=1}^{n}x_{i}^{m}=1,x=(x_{1},\ldots,x_{n})^{T}\in\mathbb{R}_{+}^{n}\right\},\] _where \(\mathbb{R}_{+}^{n}\) is the set of \(n\)-dimensional nonnegative real vectors._ The upper and lower bounds of spectral radius of nonnegative tensors are obtained in [31]. **Lemma 2.2**.: [31] _Let \(\mathcal{A}=(a_{i_{1}i_{2}\cdots i_{m}})\) be an order \(m\) dimension \(n\) nonnegative tensor. Then_ \[\min_{1\leq i\leq n}\sum_{i_{2},\ldots,i_{m}=1}^{n}a_{ii_{2}\cdots i_{m}}\leq \rho(\mathcal{A})\leq\max_{1\leq i\leq n}\sum_{i_{2},\ldots,i_{m}=1}^{n}a_{ii_{ 2}\cdots i_{m}}.\] For an order \(m\) dimension \(n\) nonnegative tensor \(\mathcal{A}=(a_{i_{1}i_{2}\cdots i_{m}})\), let \(G_{\mathcal{A}}=(V(G_{\mathcal{A}}),E(G_{\mathcal{A}}))\) be the digraph of the tensor \(\mathcal{A}\) with vertex set \(V(G_{\mathcal{A}})=\{1,2,\ldots,n\}\) and arc set \(E(G_{\mathcal{A}})=\{(i,j)|a_{ii_{2}\cdots i_{m}}\neq 0,j\in\{i_{2},\ldots,i_{m} \}\}\). The nonnegative tensor \(\mathcal{A}\) is weakly irreducible if the corresponding directed graph \(G_{\mathcal{A}}\) is strongly connected. Otherwise, the tensor \(\mathcal{A}\) is weakly reducible (see [32]). The Perron-Frobenius theorem for weakly irreducible nonnegative tensors is given in [32]. **Lemma 2.3**.: [32] _If a nonnegative tensor \(\mathcal{A}\) is weakly irreducible, then \(\rho(\mathcal{A})\) is the unique positive eigenvalue of \(\mathcal{A}\), with the unique positive eigenvector \(x\), up to a positive scaling coefficient._ Let \(\mathcal{A}\) and \(\mathcal{B}\) be two order \(m\) dimension \(n\) nonnegative tensors. If \(\mathcal{B}-\mathcal{A}\) is nonnegative, we write \(\mathcal{A}\leq\mathcal{B}\). **Lemma 2.4**.: [33] _Suppose \(0\leq\mathcal{A}\leq\mathcal{B}\), then \(\rho(\mathcal{A})\leq\rho(\mathcal{B})\). Furthermore, if \(\mathcal{B}\) is weakly irreducible and \(\mathcal{A}\neq\mathcal{B}\), then \(\rho(\mathcal{A})<\rho(\mathcal{B})\)._ Shao et al [34] obtained the following result on the "weakly reducible canonical form" which is a generalization of the corresponding "reducible canonical form" theorem for matrices. 
**Lemma 2.5**.: [34] _Let \(\mathcal{A}\) be an order \(m\) dimension \(n\) tensor. Then there exists positive integers \(r\geq 1\) and \(n_{1},\ldots,n_{r}\) with \(n_{1}+\cdots+n_{r}=n\) such that \(\mathcal{A}\) is permutational similar to some \((n_{1},\ldots,n_{r})\)-lower triangular block tensor, where all the diagonal blocks \(\mathcal{A}_{1},\ldots,\mathcal{A}_{r}\) are weakly irreducible. Furthermore, we have \(\rho(\mathcal{A})=\max\{\rho(\mathcal{A}_{1}),\ldots,\rho(\mathcal{A}_{r})\}\)._ ## 3 Proofs of Theorems A walk is a sequence of edges \(e_{1},e_{2},\ldots,e_{m}\) in which \(e_{i}\) and \(e_{i+1}\) are incident with a common vertex for \(i=1,\ldots,m-1\), and a graph is connected if any two of its vertices are joined by a walk. The adjacency matrix of a graph \(G\) is irreducible if and only if \(G\) is connected (see [35]). A \(t\)_-clique walk_ of a graph \(G\) is proposed to determine the \(t\)-clique tensor of \(G\) is weakly irreducible, that is, a sequence of \(t\)-cliques \(c_{t}^{1},c_{t}^{2},\ldots,c_{t}^{m}\) in which and \(c_{t}^{i+1}\) have at least one vertex in common for \(i=1,\ldots,m-1\). A graph \(G\) is called \(t\)_-clique connected_ if any two of its vertices are joined by a \(t\)-clique walk. For the \(t\)-clique tensor of a graph \(G\), we have the following conclusion. **Lemma 3.1**.: _The \(t\)-clique tensor of a graph \(G\) is weakly irreducible if and only if \(G\) is \(t\)-clique connected._ Proof.: Let \(\mathcal{A}(G)=(a_{i_{1}i_{2}\cdots i_{t}})\) be the \(t\)-clique tensor of \(G\). Firstly, we prove that \(\mathcal{A}(G)\) is weakly irreducible if \(G\) is \(t\)-clique connected. Let \(i_{1},i_{2},\ldots,i_{t}\) be the \(t\) vertices of \(G\). Suppose that vertices \(i_{1},i_{2},\ldots,i_{t}\) compose a \(t\)-clique in \(G\), we obtain \(a_{i_{1}i_{2}\cdots i_{t}}=a_{\sigma(i_{1}i_{2}\cdots i_{t})}=\frac{1}{(t-1)!}\) by Definition 1.3, where \(\sigma\) denotes a permutation of \(\{i_{1},i_{2},\ldots,i_{t}\}\). By the definition of the digraph \(G_{\mathcal{A}(G)}\) of \(\mathcal{A}(G)\), vertices \(i_{1},i_{2},\ldots,i_{t}\) form a complete digraph of \(t\) vertices in \(G_{\mathcal{A}(G)}\). Since \(G\) is \(t\)-clique connected, any two of its vertices are joined by a \(t\)-clique walk. Thus, we obtain \(G_{\mathcal{A}(G)}\) is strongly connected imply that the \(t\)-clique tensor \(\mathcal{A}(G)\) of \(G\) is weakly irreducible. Next, we prove that \(\mathcal{A}(G)\) is weakly reducible if \(G\) is not \(t\)-clique connected. Since \(G\) is not \(t\)-clique connected, there exist two vertices \(i\) and \(j\) can not join by any \(t\)-clique walks. Therefore, vertices \(i\) and \(j\) can not join by any walks in the digraph \(G_{\mathcal{A}(G)}\) of \(\mathcal{A}(G)\), implying that \(G_{\mathcal{A}(G)}\) is not strongly connected. That is, the \(t\)-clique tensor \(\mathcal{A}(G)\) is weakly reducible. The proof of Theorem 1.4 is divided into three parts. Let \(G^{\prime}\) be a \(K_{r+1}\)-free graph on \(n\) vertices with the largest \(r\)-clique spectral radius. Firstly, we show that \(G^{\prime}\) is \(r\)-clique connected. Secondly, we prove that \(G^{\prime}\) is a complete \(r\)-partite graph. Lastly, for all the complete \(r\)-partite graphs on \(n\) vertices, we show that the graph with the maximum \(r\)-clique spectral radius is the \(r\)-partite Turan graph \(T_{r}(n)\). **Lemma 3.2**.: _The graph \(G^{\prime}\) is \(r\)-clique connected._ Proof.: Let \(\mathcal{A}(G^{\prime})\) be the \(r\)-clique tensor of \(G^{\prime}\). 
Suppose to the contrary that \(G^{\prime}\) is not \(r\)-clique connected, then \(\mathcal{A}(G^{\prime})\) is weakly reducible by Lemma 3.1. Since \(\mathcal{A}(G^{\prime})\) is weakly reducible, there exists a subgraph \(H\) of \(G^{\prime}\) such that \(H\) is an \(r\)-clique component (an \(r\)-clique connected subgraph that is not part of any larger \(r\)-clique connected subgraph) attaining the \(r\)-clique spectral radius of \(G^{\prime}\) by Lemma 2.5, that is \(\rho_{r}(G^{\prime})=\rho_{r}(H)\). The graph \(G^{\prime}\) is not \(r\)-clique connected also implies that there exist two vertices \(i\) and \(j\) can not join by any \(r\)-clique walks. For the vertex \(i\), we discuss it in two cases. Case 1: The vertex \(i\in V(H)\). Then the vertex \(j\notin V(H)\), that is, the vertex \(j\) does not form an \(r\)-clique with vertices in \(H\). Therefore, the vertex \(j\) is adjacent to at most \(r-2\) vertices of any \(r\)-clique in \(H\). Take an \(r\)-clique \(c^{\prime}_{r}=\{v_{1},\ldots,v_{r}\}\) of \(H\) with the most vertices adjacent to \(j\). Suppose that vertices \(v_{1},\ldots,v_{s}\) are adjacent to \(j\), we have \(s\leq r-2\). Let \(H^{\prime}\) be the graph with \(V(H^{\prime})=V(H)\cup\{j\}\) and obtained by adding edges between the vertices \(v_{s+1},\ldots,v_{r-1}\) and \(j\). Thus, the vertices \(v_{1},\ldots,v_{r-1},j\) form an \(r\)-clique in \(H^{\prime}\). Since \(G^{\prime}\) is \(K_{r+1}\)-free, the graph \(H\) is \(K_{r+1}\)-free implies that \(H^{\prime}\) is also \(K_{r+1}\)-free. Otherwise, there exists a vertex \(k\) such that \(v_{1},\ldots,v_{r-1},j,k\) form an \((r+1)\)-clique in \(H^{\prime}\). That is, the vertices \(v_{1},\ldots,v_{r-1},k\) form an \(r\)-clique in \(H\) and \(j\) is adjacent to \(v_{1},\ldots,v_{s},k\). This contradicts the choice of \(c^{\prime}_{r}\). The graph \(H^{\prime}\) is \(r\)-clique connected and \(H\) is an \(r\)-clique connected subgraph of \(H^{\prime}\) with \(|V(H)|<|V(H^{\prime})|\). Add an isolated vertex to \(H\) such that the resulting graph \(\hat{H}\) has the same number of vertices as \(H^{\prime}\), and we have \(\rho_{r}(H)=\rho_{r}(\hat{H})\). Since \(H^{\prime}\) is \(r\)-clique connected, the \(r\)-clique tensor \(\mathcal{A}(H^{\prime})\) of \(H^{\prime}\) is weakly irreducible by Lemma 3.1. The \(r\)-clique tensor \(\mathcal{A}(\hat{H})<\mathcal{A}(H^{\prime})\) implies that \(\rho_{r}(H)=\rho_{r}(\hat{H})<\rho_{r}(H^{\prime})\) by Lemma 2.4. Thus \(\rho_{r}(G^{\prime})=\rho_{r}(H)<\rho_{r}(H^{\prime})\), this contradicts the choice of \(G^{\prime}\). Case 2: The vertex \(i\notin V(H)\). Similar to Case 1, we can construct a \(K_{r+1}\)-free graph \(H^{\prime\prime}\) through the vertex \(i\) and \(H\) such that \(\rho_{r}(G^{\prime})=\rho_{r}(H)<\rho_{r}(H^{\prime\prime})\). It is also contradictory. **Lemma 3.3**.: _The graph \(G^{\prime}\) is a complete \(r\)-partite graph._ Proof.: Let \(\mathcal{A}(G^{\prime})\) be the \(r\)-clique tensor of \(G^{\prime}\). From Lemma 3.2, the graph \(G^{\prime}\) is \(r\)-clique connected. Thus, the \(r\)-clique tensor \(\mathcal{A}(G^{\prime})\) is weakly irreducible by Lemma 3.1. By Lemma 2.3, the \(r\)-clique spectral radius \(\rho_{r}(G^{\prime})\) is the unique positive eigenvalue of \(\mathcal{A}(G^{\prime})\), with the unique positive eigenvector \(x=(x_{1},\ldots,x_{n})^{T}\). 
For the \(r\)-clique spectral radius \(\rho_{r}(G^{\prime})\), we have \[\rho_{r}(G^{\prime})=x^{T}\mathcal{A}(G^{\prime})x^{r-1}=r\sum_{\{i_{1},\ldots,i_{r}\}\in C_{t}(G^{\prime})}x_{i_{1}}\cdots x_{i_{r}}.\] In order to prove that \(G^{\prime}\) is a complete \(r\)-partite graph, suppose to the contrary that \(G^{\prime}\) is not a complete \(r\)-partite graph which implies that \(G^{\prime}\) is a complete \(s\)-partite graph for \(s<r\) or there exist two non adjacent vertices \(i,j\in V(G^{\prime})\) such that \(N(i)\neq N(j)\). We discuss it in the following two cases. Case 1. The graph \(G^{\prime}\) is a complete \(s\)-partite graph for \(s<r\), then the \(r\)-clique does not exist in \(G^{\prime}\). This contradicts that \(G^{\prime}\) is \(r\)-clique connected. Case 2. There exist two non adjacent vertices \(i,j\in V(G^{\prime})\) such that \(N(i)\neq N(j)\), suppose that there exists a vertex \(k\in V(G^{\prime})\) adjacent to \(i\) and not adjacent to \(j\). For vertex \(v\in V(G^{\prime})\) and the eigenvector \(x=(x_{1},\ldots,x_{n})^{T}\) corresponding to \(\rho_{r}(G^{\prime})\), we obtain \[\rho_{r}(G^{\prime})x_{v}^{r-1}=\sum_{\{v,i_{2},\ldots,i_{r}\}\in C_{t}(G^{\prime })}x_{i_{2}}\cdots x_{i_{r}}.\] Let \(WS_{G^{\prime}}(v,x)=\sum_{\{v,i_{2},\ldots,i_{r}\}\in C_{t}(G^{\prime})}x_{i_{2 }}\cdots x_{i_{r}}\). For \(WS_{G^{\prime}}(i,x)\), \(WS_{G^{\prime}}(j,x)\) and \(WS_{G^{\prime}}(k,x)\), it will be discussed on two cases. Case i. \(WS_{G^{\prime}}(j,x)<WS_{G^{\prime}}(i,x)\) or \(WS_{G^{\prime}}(j,x)<WS_{G^{\prime}}(k,x)\). If \(WS_{G^{\prime}}(j,x)<WS_{G^{\prime}}(i,x)\), we delete all edges incident to \(j\) and connect \(j\) to all vertices in \(N(i)\). After such a graph operation, we get a new graph \(G^{\prime\prime}\), and \(G^{\prime\prime}\) is still \(K_{r+1}\)-free. For the graph \(G^{\prime\prime}\) and the positive eigenvector \(x=(x_{1},\ldots,x_{n})^{T}\) corresponding to \(\rho_{r}(G^{\prime})\), by Lemma 2.1, we have \[\rho_{r}(G^{\prime\prime}) \geq r\sum_{\{i_{1},\ldots,i_{r}\}\in C_{t}(G^{\prime\prime})}x_{i_{1}} \cdots x_{i_{r}}\] \[=r\sum_{\{i_{1},\ldots,i_{r}\}\in C_{t}(G^{\prime})}x_{i_{1}} \cdots x_{i_{r}}-rx_{j}WS_{G^{\prime}}(j,x)+rx_{j}WS_{G^{\prime}}(i,x)\] \[>r\sum_{\{i_{1},\ldots,i_{r}\}\in C_{t}(G^{\prime})}x_{i_{1}} \cdots x_{i_{r}}=\rho_{r}(G^{\prime}).\] This contradicts the choice of \(G^{\prime}\). When \(WS_{G^{\prime}}(j,x)<WS_{G^{\prime}}(k,x)\), the proof is similar. Case ii. \(WS_{G^{\prime}}(j,x)\geq WS_{G^{\prime}}(i,x)\) and \(WS_{G^{\prime}}(j,x)\geq WS_{G^{\prime}}(k,x)\). In this case, we delete all edges incident to \(i\) and connect \(i\) to all vertices in \(N(j)\). Delete all edges incident to \(k\) and connect \(k\) to all vertices in \(N(j)\). Then we get a graph \(G^{\prime\prime\prime}\) and \(G^{\prime\prime\prime}\) is also a \(K_{r+1}\)-graph. 
Similar to Case i, we have \[\rho_{r}(G^{\prime\prime\prime}) \geq r\sum_{\{i_{1},\ldots,i_{r}\}\in C_{t}(G^{\prime\prime\prime})}x _{i_{1}}\cdots x_{i_{r}}\] \[=r\sum_{\{i_{1},\ldots,i_{r}\}\in C_{t}(G^{\prime})}x_{i_{1}} \cdots x_{i_{r}}-rx_{i}WS_{G^{\prime}}(i,x)-rx_{k}WS_{G^{\prime}}(k,x)\] \[+rx_{i}WS_{G^{\prime}}(j,x)+rx_{k}WS_{G^{\prime}}(j,x)+r\sum_{\{i, k,i_{3},\ldots,i_{r}\}\in C_{t}(G^{\prime})}x_{i}x_{k}x_{i_{3}}\cdots x_{i_{r}}.\] Since \(i\) and \(k\) are adjacency in \(G^{\prime}\) and each edge of \(G^{\prime}\) is contained in an \(r\)-clique, we obtain \[r\sum_{\{i,k,i_{3},\ldots,i_{r}\}\in C_{t}(G^{\prime})}x_{i}x_{k}x_{i_{3}} \cdots x_{i_{r}}>0.\] Hence, we have \(\rho_{r}(G^{\prime\prime\prime})>\rho_{r}(G^{\prime})\). This also contradicts the choice of \(G^{\prime}\). To sum up, the graph \(G^{\prime}\) is a complete \(r\)-partite graph. **Lemma 3.4**.: _For the \(r\)-clique spectral radius of the complete \(r\)-partite graph on \(n\) vertices, we have_ \[\rho_{r}(T_{r}(n))=\max\{\rho_{r}(H):\text{$H$ is a complete $r$-partite graph on $n$ vertices}\},\] _where \(T_{r}(n)\) is the \(r\)-partite Turan graph._ Proof.: Let \(H\) be the complete \(r\)-partite graph on \(n\) vertices with partitions of sizes \(n_{1},n_{2},\ldots,n_{r}\). And let \(\rho_{r}(H)\) be the \(r\)-clique spectral radius of \(H\) and \(x=(x_{1},\ldots,x_{n})^{T}\) be the eigenvector corresponding to \(\rho_{r}(H)\). Since \(H\) is \(r\)-clique connected, the \(r\)-clique tensor \(\mathcal{A}(H)\) is weakly irreducible. By Lemma 2.3, we obtain \(\rho_{r}(H)>0\) with the positive eigenvector \(x=(x_{1},\ldots,x_{n})^{T}\). The vertices in each part of \(H\) have the same neighborhood implies that the components of the eigenvectors corresponding to these vertices are equal. Let components of the eigenvectors corresponding to the vertices in each part of \(H\) be \(x_{n_{s}}\), \(s=1,2,\ldots,r\). Hence, for the characteristic equations of \(\rho_{r}(H)\), we have \[\begin{cases}\rho_{r}(H)x_{n_{1}}^{r-1}=n_{2}n_{3}\cdots n_{r}x_{n_{2}}x_{n_{ 3}}\cdots x_{n_{r}}\\ \rho_{r}(H)x_{n_{2}}^{r-1}=n_{1}n_{3}\cdots n_{r}x_{n_{1}}x_{n_{3}}\cdots x_{n _{r}}\\ \cdots\\ \rho_{r}(H)x_{n_{r}}^{r-1}=n_{1}n_{2}\cdots n_{r-1}x_{n_{1}}x_{n_{2}}\cdots x _{n_{r-1}}\end{cases} \tag{4}\] Multiply \(r\) equations in (4), we get \[(\rho_{r}(H))^{r}x_{n_{1}}^{r-1}x_{n_{2}}^{r-1}\cdots x_{n_{r}}^{r-1}=(n_{1}n _{2}\cdots n_{r})^{r-1}x_{n_{1}}^{r-1}x_{n_{2}}^{r-1}\cdots x_{n_{r}}^{r-1}.\] Since \(x_{n_{1}},x_{n_{2}},\ldots,x_{n_{r}}\) are all positive, the \(r\)-clique spectral radius \(\rho_{r}(H)=(n_{1}n_{2}\cdots n_{r})^{\frac{r-1}{r}}\). Thus, the \(r\)-clique spectral radius \(\rho_{r}(H)\) is maximal if and only if the \(n_{1},n_{2},\ldots,n_{r}\) are as equal as possible, that is \(H\simeq T_{r}(n)\). Next, we give the proof of Theorem 1.5 in the following. Proof of Theorem 1.5.: By the definition of symmetric tensor, the entries of the \(r\)-clique tensor \(\mathcal{A}(G)\) of \(G\) are invariant under any permutation of their indices, thus \(\mathcal{A}(G)\) is a symmetric tensor. For the \(r\)-clique tensor \(\mathcal{A}(G)=(a_{i_{1}i_{2}\cdots i_{r}})\), we have \[\sum_{i_{2},\ldots,i_{r}=1}^{n}a_{ii_{2}\cdots i_{r}}\text{ equals the number of $r$-cliques containing $i$ (i=1,\ldots,n)$}. \tag{5}\] Let \(\rho_{r}(G)\) be the spectral radius of the \(r\)-clique tensor \(\mathcal{A}(G)\). 
Since \(\mathcal{A}(G)\) is a symmetric nonnegative tensor, and by Lemma 2.1, we obtain \[\rho_{r}(G)=\max\left\{x^{T}\mathcal{A}x^{r-1}:\sum_{i=1}^{n}x_{i}^{r}=1,x=(x_ {1},\ldots,x_{n})^{T}\in\mathbb{R}_{+}^{n}\right\}. \tag{6}\] Let \(x_{i}=n^{-\frac{1}{r}}\) (\(i=1,\ldots,n\)) in (6). Then \[\rho_{r}(G) \geq x^{T}\mathcal{A}(G)x^{r-1}=\sum_{i_{1},\ldots,i_{r}}^{n}a_{ i_{1}\cdots i_{r}}x_{i_{1}}\cdots x_{i_{r}}\] \[=\frac{\sum_{i=1}^{n}\sum_{i_{2},\ldots,i_{r}=1}^{n}a_{ii_{2} \cdots i_{r}}}{n}\] \[=\frac{r\cdot c_{r}(G)}{n}.\] Therefore, the inequality (2) is obtained. If the number of \(r\)-cliques containing \(i\) is equal for all \(i\in V(G)\), and by (5), then \[\sum_{i_{2},\ldots,i_{r}=1}^{n}a_{ii_{2}\cdots i_{r}}=\frac{r\cdot c_{r}(G)}{n }\text{ }(i=1,\ldots,n).\] For the spectral radius \(\rho_{r}(G)\), by Lemma 2.2, we have \[\rho_{r}(G)=\frac{r\cdot c_{r}(G)}{n}.\] Next, we show that the relation between Theorem 1.4 and inequality (3). Through Theorem 1.4 and Theorem 1.5, for an \(n\) vertices \(K_{r+1}\)-free graph \(G\), we obtain \[c_{r}(G)\leq\left\lfloor\frac{n}{r}\rho_{r}(G)\right\rfloor\leq\left\lfloor\frac {n}{r}\rho_{r}(T_{r}(n))\right\rfloor=\left\lfloor\frac{n}{r}\left(\prod_{s=0 }^{r-1}\left\lfloor\frac{n+s}{r}\right\rfloor\right)^{\frac{r-1}{r}}\right\rfloor.\] When \(r\mid n\), we have \(\left\lfloor\frac{n}{r}\left(\prod_{s=0}^{r-1}\left\lfloor\frac{n+s}{r}\right \rfloor\right)^{\frac{r-1}{r}}\right\rfloor=\left(\frac{n}{r}\right)^{r}=c_{r }(T_{r}(n))\). Thus, Theorem 1.4 implies (3) in the case when \(r\mid n\). When \(r\nmid n\), the following example is considered. For 3-partite Turan graph with 28 vertices, the number of triangles in \(T_{3}(28)\) is \(c_{3}(T_{3}(28))=9\times 9\times 10=810<\left\lfloor\frac{28}{3}\left(\prod_{s=0 }^{2}\left\lfloor\frac{28+s}{3}\right\rfloor\right)^{\frac{2}{3}}\right]=811\). ## Acknowledgement This work is supported by the National Natural Science Foundation of China (No. 11801115, No. 12071097 and No. 12042103), the Natural Science Foundation of the Heilongjiang Province (No. QC2018002) and the Fundamental Research Funds for the Central Universities.
2308.05421
Progressive Spatio-temporal Perception for Audio-Visual Question Answering
Audio-Visual Question Answering (AVQA) task aims to answer questions about different visual objects, sounds, and their associations in videos. Such naturally multi-modal videos are composed of rich and complex dynamic audio-visual components, where most of which could be unrelated to the given questions, or even play as interference in answering the content of interest. Oppositely, only focusing on the question-aware audio-visual content could get rid of influence, meanwhile enabling the model to answer more efficiently. In this paper, we propose a Progressive Spatio-Temporal Perception Network (PSTP-Net), which contains three modules that progressively identify key spatio-temporal regions w.r.t. questions. Specifically, a temporal segment selection module is first introduced to select the most relevant audio-visual segments related to the given question. Then, a spatial region selection module is utilized to choose the most relevant regions associated with the question from the selected temporal segments. To further refine the selection of features, an audio-guided visual attention module is employed to perceive the association between audio and selected spatial regions. Finally, the spatio-temporal features from these modules are integrated for answering the question. Extensive experimental results on the public MUSIC-AVQA and AVQA datasets provide compelling evidence of the effectiveness and efficiency of PSTP-Net. Code is available at: \href{https://github.com/GeWu-Lab/PSTP-Net}{https://github.com/GeWu-Lab/PSTP-Net}
Guangyao Li, Wenxuan Hou, Di Hu
2023-08-10T08:29:36Z
http://arxiv.org/abs/2308.05421v1
# Progressive Spatio-temporal Perception for Audio-Visual Question Answering ###### Abstract. Audio-Visual Question Answering (AVQA) task aims to answer questions about different visual objects, sounds, and their associations in videos. Such naturally multi-modal videos are composed of rich and complex dynamic audio-visual components, where most of which could be unrelated to the given questions, or even play as interference in answering the content of interest. Oppositely, only focusing on the question-aware audio-visual content could get rid of influence, meanwhile enabling the model to answer more efficiently. In this paper, we propose a Progressive Spatio-Temporal Perception **N**etwork (**PSTP-Net**), which contains three modules that progressively identify key spatio-temporal regions _w.r.t._ questions. Specifically, a _temporal segment selection module_ is first introduced to select the most relevant audio-visual segments related to the given question. Then, a _spatial region selection module_ is utilized to choose the most relevant regions associated with the question from the selected temporal segments. To further refine the selection of features, an _audio-guided visual attention module_ is employed to perceive the association between audio and selected spatial regions. Finally, the spatio-temporal features from these modules are integrated for answering the question. Extensive experimental results on the public MUSIC-AVQA and AVQA datasets provide compelling evidence of the effectiveness and efficiency of **PSTP-Net**. Audio-visual, Question Answering, Scene Understanding

## 1. Introduction
We are constantly surrounded by visual and auditory information in our daily life, and humans perceive the world by simultaneously processing and integrating visual and auditory inputs (Luo et al., 2018; Li et al., 2018). As a widely used medium for recording and reflecting reality, natural scene videos typically convey important event information through visual and auditory streams. Audio-visual learning has therefore emerged as one of the key components in the multimedia community, garnering significant interest from researchers in the past decade (Bang et al., 2016; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018). In particular, the Audio-Visual Question Answering (AVQA) task (Li et al., 2018; Li et al., 2018; Li et al., 2018) involves answering questions related to intricate audio-visual scenes, and has gained much attention from researchers as a valuable and challenging task. Yun _et al._ (Yun et al., 2018) proposed the Pano-AVQA dataset, which comprises 360-degree videos and corresponding question-answer pairs. The Pano-AVQA dataset covers two types of question-answer pairs, spherical spatial relations and audio-visual relations, enabling the exploration of panoramic scene understanding. Li _et al._ (Li et al., 2018) introduced the MUSIC-AVQA dataset, a large-scale dataset that focuses on promoting spatio-temporal reasoning in dynamic and long-term audio-visual scenes, such as music performances. They first ground the sounding region visually, then perform spatio-temporal reasoning with attention mechanisms for effective question answering. To address the complexity of audio-visual relationships in real-life scenarios and the diverse range of audio-visual daily activities, Yang _et al._ [(39)] proposed a large-scale AVQA dataset for multi-modal understanding of audio-visual objects and activities in realistic scenes in videos. These works provide benchmark platforms for evaluating complex audio-visual scene understanding and reasoning, and have achieved significant progress in advancing the task.

Figure 1. Locating the relevant temporal audio-visual snippets and spatial sounding objects associated with a given question is critical for audio-visual scene understanding. For instance, when dealing with an input video, the process involves: (a) locating temporal segments relevant to the question; (b) identifying the spatial region most relevant to the question on the selected segments; (c) utilizing the selected audio features to perform sound-aware perception.

Despite the significant progress made in AVQA, there are still several challenges that need to be addressed. **Firstly**, the task involves audio-visual understanding over long videos, which suffer from heavy information redundancy and huge computational cost. Existing AVQA explorations typically use a uniform sampling strategy to reduce redundancy and computational cost, which may lose some valuable information. **Secondly**, localizing the regions of the video relevant to the question is still challenging. Some VQA methods use pretrained object detection models to localize key objects, but since some special categories (_e.g._, _sound_) in AVQA-related datasets are not included in the datasets used for pretraining these models, they are still unable to effectively locate the relevant regions for the AVQA task.
**Thirdly**, the lack of supervised information on spatial visual objects and sound poses a challenge for the model to associate visual targets with sound in the video, making it difficult to find potential sound-aware regions. **Therefore**, identifying video segments relevant to the question, extracting relevant visual regions from these segments, and determining whether they produce sound are crucial for exploring the AVQA task. For instance, as shown in Fig. 1, when answering the audio-visual question "_Which clarinet makes the sound first?_" for an instrumental ensemble, one needs to focus on the temporal segment related to "_first_", locate the object "_clarinet_", and determine which clarinet produced the "_sound_". To address the above challenges, we propose a **P**rogressive **S**patio-**T**emporal **P**erception **N**etwork (PSTP-Net) to progressively explore critical temporal segments and sound-aware regions in complex audio-visual scenarios. Firstly, the content related to the question is usually scattered across a few segments of the video instead of the whole sequence. Hence, we propose a **T**emporal **S**egment **S**election **M**odule (TSSM) that utilizes cross-modal attention to identify the several temporal segments most relevant to the given question. Secondly, identifying visual regions that are pertinent to the question within the selected key segments can aid in understanding spatial semantics. To achieve this, we propose the **S**patial **R**egion **S**election **M**odule (SRSM) to select the most relevant patches by using an attention-based ranking strategy. It enables more effective comprehension of the spatial context presented in the video. Thirdly, the sound and the location of its visual source can reflect the spatial relationship between the audio and visual modalities, which can help to learn audio-visual associations in complex scenarios. We propose an **A**udio-guided **V**isual **A**ttention **M**odule (AVAM) to model such cross-modal associations between audio signals and the patches selected by the SRSM. Finally, we fuse the selected audio and visual features and obtain a joint representation for question answering. Extensive experimental results demonstrate that our proposed PSTP-Net can achieve precise spatio-temporal perception and outperforms previous methods with only a limited number of learnable parameters and FLOPs. Our proposed approach takes a step toward more effective and efficient audio-visual scene understanding. Our contributions can be summarized as follows: * The proposed PSTP-Net excels in perceiving key temporal segments, identifying visual regions relevant to the posed question, and modeling cross-modal associations between audio signals and visual patches, which facilitates scene understanding over the audio, visual, and text modalities. * The PSTP-Net comprises several modules that progressively perceive key spatio-temporal regions, effectively reducing redundant information and computational cost. * Extensive experiments and ablation studies on the MUSIC-AVQA and AVQA benchmarks demonstrate the effectiveness and efficiency of the proposed framework. ## 2. Related Works ### Audio-visual scene understanding Audio-visual scene understanding tasks focus on exploring audio-visual joint perception (Shen et al., 2017).
Several tasks fall under this domain, including sound source localization (Shen et al., 2017; Liu et al., 2017; Wang et al., 2017), action recognition (Liu et al., 2017), event localization (Chen et al., 2017; Wang et al., 2017), video parsing (Wang et al., 2017; Wang et al., 2017), segmentation (Wang et al., 2017; Wang et al., 2017), dialog (Chen et al., 2017; Wang et al., 2017), question answering (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), _etc._ To achieve effective audio-visual scene understanding, it is essential to establish associations between the audio and visual modalities. The sound source localization task (Chen et al., 2017; Wang et al., 2017; Wang et al., 2017) and the audio-visual segmentation task (Wang et al., 2017) aim to locate visual objects relevant to sound, usually learning with annotations of sounding objects. However, such annotations are usually not provided in the AVQA task. To address this, researchers have experimented with constructing positive and negative pairs (Wang et al., 2017) or designing different multi-modal fusion models (Wang et al., 2017; Wang et al., 2017) to enhance audio-visual correlation learning. Furthermore, some adapter-inserted transformer models have been proposed to enhance model representation capabilities. However, despite these explorations and efforts, the fundamental problem of audio-visual correlation still remains unresolved. In this work, we propose to calculate the similarity between the audio input and its corresponding visual patch-level features. This enables the identification of the visual patches that are most relevant to the sound, as well as accurate localization of the visual sound source position. ### Question answering As a typical video understanding task, question answering explores fine-grained scene understanding, including Audio Question Answering (AQA) (Chen et al., 2017), Visual Question Answering (VQA) (Chen et al., 2017), and Audio Visual Question Answering (AVQA) (Chen et al., 2017). The uni-modal question answering tasks represented by AQA (Chen et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) have been extensively studied. However, they are restricted to a single modality of either sound or visual scenes, making it difficult to perceive and reason about natural audio-visual video content. Yun _et al._ (Yun et al., 2017) proposed the Pano-AVQA dataset, which includes 360-degree videos and their corresponding question-answer pairs, aimed at exploring the understanding of panoramic scenes. Li _et al._ (Li et al., 2017) presented the MUSIC-AVQA dataset, a large-scale dataset with rich audio-visual components and interactions, designed to promote research on spatio-temporal reasoning in dynamic and long-term audio-visual scenes. Considering that real-life scenarios contain more complex relationships between audio-visual objects and a greater variety of audio-visual daily activities, Yang _et al._ [(39)] proposed a large-scale AVQA dataset to facilitate multi-modal understanding of audio-visual objects and activities in real scenes captured in videos. Although the AVQA task has garnered significant attention [(4; 25; 24)], existing works are still in the preliminary exploration stage. Our work progressively identifies temporal segments and localizes spatial sound-aware patches related to the given question to explore complex audio-visual scene understanding. ## 3. Method
In this section, we will present our proposed **P**rogressive **S**patio-**T**emporal **P**erception **N**etwork (PSTP-Net), which achieves precise perception of relevant temporal segments and spatial sound sources, resulting in better scene understanding than its uni-modal counterparts utilizing audio or video alone. An overview of the proposed framework is illustrated in Fig. 2. ### Input Representation Given an input video sequence containing both visual and audio tracks, we first divide it into \(K\) non-overlapping audio and visual segment pairs \(\{V_{k},A_{k}\}_{k=1}^{K}\), where each segment is \(T\) seconds long. \(V_{k}\) and \(A_{k}\) denote the visual and audio content of the same segment, respectively. The \(k\)-th audio-visual segment pair contains the snippets \(\{v_{t},a_{t}\}_{t=1}^{T}\), where each snippet is one second long. Subsequently, we partition each visual frame into \(M\) patches and append a special \([CLS]\) token to the beginning of the first patch. The question sentence \(Q\) is tokenized into \(N\) individual words \(\{q_{n}\}_{n=1}^{N}\). Note that we assume a frame sampling rate of \(1fps\) for formula clarity. **Audio Representation**. For each audio snippet \(a_{t}\), we use the pre-trained VGGish [(12)] model to extract the audio feature as \(f_{a}^{t}\in\mathbb{R}^{D}\), where \(D\) is the feature dimension. The pre-trained VGGish model is a VGG-like 2D CNN trained on AudioSet [(12)], operating on transformed audio spectrograms. The audio representation is extracted offline and the model is not fine-tuned. Then, the \(k\)-th audio segment features are extracted as \(F_{a}^{k}=\{f_{a}^{1},f_{a}^{2},...,f_{a}^{T}\}\), where \(F_{a}^{k}\in\mathbb{R}^{T\times D}\). Finally, the audio features can be formulated as \(F_{A}=\{F_{a}^{1},F_{a}^{2},...,F_{a}^{K}\}\). **Visual Representation**. For each visual snippet \(v_{t}\), we sample a fixed number of frames. Then we apply the pre-trained CLIP [(30)] model, with frozen parameters, to extract both frame-level and patch-level features as \(f_{v}^{t}\) and \(f_{p}^{t}\) on video frames, respectively, where \(f_{v}^{t}\in\mathbb{R}^{D}\), \(f_{p}^{t}\in\mathbb{R}^{M\times D}\), and \(M\) is the number of patches per frame. For the \(k\)-th segment, its **frame-level** features are extracted as \(F_{v}^{k}=\{f_{v}^{1},f_{v}^{2},...,f_{v}^{T}\}\), where \(F_{v}^{k}\in\mathbb{R}^{T\times D}\), and its **patch-level** features are extracted as \(F_{p}^{k}=\{f_{p}^{1},f_{p}^{2},...,f_{p}^{T}\}\), where \(F_{p}^{k}\in\mathbb{R}^{T\times M\times D}\). Finally, the visual frame-level and patch-level features can be formulated as \(F_{V}=\{F_{v}^{1},F_{v}^{2},...,F_{v}^{K}\}\) and \(F_{P}=\{F_{p}^{1},F_{p}^{2},...,F_{p}^{K}\}\), respectively. Figure 2. Our proposed PSTP-Net model has a simple yet effective pipeline. Firstly, the video is divided into \(K\) segments, and we use pre-trained models to extract audio, visual, and question features. Then, we calculate the similarity between the temporal segment features and the input question feature to highlight the \(Top_{k}\) segments most relevant to the given question. Next, we choose the \(Top_{m}\) patches most relevant to the question from the selected segment features. Afterwards, we perform audio-guided attention to perceive potential sound regions through the selected patches. Finally, we aggregate the extracted audio, visual, and question features through multi-modal fusion to predict the answer to the input question. (Ⓠ and Ⓐ represent _question-guided_ and _audio-guided_, respectively.)
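To make the tensor shapes above concrete, the following PyTorch sketch (not the authors' released code) shows one way the offline-extracted VGGish and CLIP features could be grouped into the segment-level tensors \(F_{A}\), \(F_{V}\), and \(F_{P}\). The class name and the example values of \(\tau\) and \(M\) are illustrative assumptions; the 128-to-512 audio projection follows the implementation details reported later in the paper.

```python
import torch
import torch.nn as nn

class SegmentFeatureOrganizer(nn.Module):
    """Sketch: group pre-extracted per-second features into K segments.

    Assumes offline-extracted features: VGGish audio (tau x 128), CLIP
    frame-level (tau x D) and patch-level (tau x M x D) features, with
    tau = K * T seconds, as in the input representation described above.
    """

    def __init__(self, audio_in_dim: int = 128, d_model: int = 512):
        super().__init__()
        # Linear layer mapping 128-D VGGish features to the common dimension D.
        self.audio_proj = nn.Linear(audio_in_dim, d_model)

    def forward(self, audio, frames, patches, num_segments: int):
        # audio:   (tau, 128)   per-second VGGish features
        # frames:  (tau, D)     per-second CLIP frame-level features
        # patches: (tau, M, D)  per-second CLIP patch-level features
        tau = audio.shape[0]
        T = tau // num_segments  # seconds (snippets) per segment
        F_A = self.audio_proj(audio).reshape(num_segments, T, -1)   # (K, T, D)
        F_V = frames.reshape(num_segments, T, frames.shape[-1])     # (K, T, D)
        F_P = patches.reshape(num_segments, T, *patches.shape[1:])  # (K, T, M, D)
        return F_A, F_V, F_P

# Example with K = 20 segments, D = 512, and an assumed M = 50 patches per frame.
org = SegmentFeatureOrganizer()
F_A, F_V, F_P = org(torch.randn(60, 128), torch.randn(60, 512),
                    torch.randn(60, 50, 512), num_segments=20)
```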
**Question Representation.** Given an asked question \(Q=\{q_{n}\}_{n=1}^{N}\), we first map each word \(q_{n}\) to a fixed-length vector with word embeddings, which is then fed into the pre-trained CLIP (Wang et al., 2018) model to obtain the question-level feature \(F_{Q}\), where \(F_{Q}\in\mathbb{R}^{1\times D}\). Note that first-token pooling is used for extracting question features. ### Temporal Segment Selection To highlight the \(Top_{k}\) key segments that are closely associated with the question, we propose a Temporal Segment Selection Module (TSSM), which is designed to attend to critical temporal segments among all \(K\) segments. An overview of the proposed TSSM is illustrated in Fig. 3. To simultaneously capture multi-modal temporal contexts, we use an audio-visual fusion strategy, named \(AVF\), which is designed to provide temporal feature interactions. **For the \(k\)-th segment**, an \(AVF\) is learned from the audio and visual features \(\{f_{a}^{t},f_{v}^{t}\}_{t=1}^{T}\) to update \(f_{a}^{t}\) and \(f_{v}^{t}\), respectively. Firstly, a transformer encoder is employed to aggregate both within-modality and cross-modality information as: \[\phi_{sa}(f_{a}^{t},F_{a}^{k},F_{a}^{k})=Softmax(\frac{f_{a}^{t}{F_{a}^{k}}^{\top}}{\sqrt{d}})F_{a}^{k}, \tag{1}\] \[\phi_{ca}(f_{a}^{t},F_{v}^{k},F_{v}^{k})=Softmax(\frac{f_{a}^{t}{F_{v}^{k}}^{\top}}{\sqrt{d}})F_{v}^{k}, \tag{2}\] where the scaling factor \(d\) is equal to the audio feature dimension, and \(\phi_{sa}(\cdot)\) and \(\phi_{ca}(\cdot)\) are the self-attention and cross-modal attention functions, respectively. Then we aggregate these representations to update the audio and visual features \(\hat{f}_{a}^{t},\hat{f}_{v}^{t}\): \[\hat{f}_{a}^{t}=f_{a}^{t}+\phi_{sa}(f_{a}^{t},F_{a}^{k},F_{a}^{k})+\phi_{ca}(f_{a}^{t},F_{v}^{k},F_{v}^{k}),\] \[\hat{f}_{v}^{t}=f_{v}^{t}+\phi_{sa}(f_{v}^{t},F_{v}^{k},F_{v}^{k})+\phi_{ca}(f_{v}^{t},F_{a}^{k},F_{a}^{k}), \tag{3}\] where \(F_{a}^{k}=\{f_{a}^{1},f_{a}^{2},...,f_{a}^{T}\}\) and \(F_{v}^{k}=\{f_{v}^{1},f_{v}^{2},...,f_{v}^{T}\}\). Afterward, the \(k\)-th audio and visual segment features are updated to \(\hat{F}_{a}^{k}\) and \(\hat{F}_{v}^{k}\), where \(\hat{F}_{a}^{k}=\{\hat{f}_{a}^{1},\hat{f}_{a}^{2},...,\hat{f}_{a}^{T}\}\) and \(\hat{F}_{v}^{k}=\{\hat{f}_{v}^{1},\hat{f}_{v}^{2},...,\hat{f}_{v}^{T}\}\). Then we perform a pooling operation on \(\hat{F}_{a}^{k},\hat{F}_{v}^{k}\) in the temporal dimension to obtain the \(k\)-th segment feature representations \(\overline{F}_{a}^{k},\overline{F}_{v}^{k}\), where \(\overline{F}_{a}^{k}\in\mathbb{R}^{1\times D}\) and \(\overline{F}_{v}^{k}\in\mathbb{R}^{1\times D}\). Then, \(\overline{F}_{a}^{k}\) and \(\overline{F}_{v}^{k}\) are concatenated, and an \(FC\) and activation layer are used to generate a joint representation, denoted as \(\overline{F}_{av}^{k}\in\mathbb{R}^{1\times D}\). Finally, all segment-level features \(\overline{F}_{av}^{k}\) are aggregated into \(\overline{F}_{AV}=\{\overline{F}_{av}^{1},\overline{F}_{av}^{2},...,\overline{F}_{av}^{K}\}\), where \(\overline{F}_{AV}\in\mathbb{R}^{K\times D}\). Given the segment features \(\overline{F}_{AV}\) and the input question feature \(F_{Q}\), we first perform temporal cross-modal attention on \(\overline{F}_{AV}\) and \(F_{Q}\) after using a linear projection layer, formulated as: \[\bar{F}_{AV},W_{AV}=MultiHead(F_{Q},\overline{F}_{AV},\overline{F}_{AV}), \tag{4}\] where \(\bar{F}_{AV}\) is updated from \(\overline{F}_{AV}\).
\(W_{AV}\) is the average of the attention weights over all heads, \(W_{AV}\in(0,1)^{K}\). Then we conduct \(Top_{k}\) feature selection over the \(K\) segments. To be specific, we employ a selection operation, denoted as \(\Psi_{TSSM}\), to pick out the segments with the highest attention weights and their indices as follows: \[F_{TSSM},\Omega_{TSSM}=\Psi_{TSSM}(\bar{F}_{AV},W_{AV},Top_{k}), \tag{5}\] where \(F_{TSSM}\) is the selected temporal feature, \(F_{TSSM}\in\mathbb{R}^{Top_{k}\times T\times D}\), and \(\Omega_{TSSM}\) contains the index positions corresponding to the \(Top_{k}\) highest weights in \(W_{AV}\), \(\Omega_{TSSM}\in\{0,1,...,K\text{-}1\}^{Top_{k}}\). Furthermore, we select the visual patch-level features \(F_{TSSM}^{P}\in\mathbb{R}^{Top_{k}\times T\times M\times D}\) and audio features \(F_{TSSM}^{A}\in\mathbb{R}^{Top_{k}\times T\times D}\) from \(F_{P}\) and \(F_{A}\), respectively, according to \(\Omega_{TSSM}\). For formula clarity, we record the indices of the snippets corresponding to these selected segments and sort them from \(1\) to \(\Gamma\), where \(\Gamma=T\times Top_{k}\). Then \(F_{TSSM}^{P}\) and \(F^{A}_{TSSM}\) are redescribed as: \[F^{P}_{TSSM}=\{f^{1}_{p},f^{2}_{p},...,f^{\Gamma}_{p}\},\ F^{P}_{TSSM}\in\mathbb{R}^{\Gamma\times M\times D}, \tag{7}\] \[F^{A}_{TSSM}=\{f^{1}_{a},f^{2}_{a},...,f^{\Gamma}_{a}\},\ F^{A}_{TSSM}\in\mathbb{R}^{\Gamma\times D}. \tag{8}\] In the next sections, we utilize the question feature to identify key spatial regions from the selected patch-level features \(F^{P}_{TSSM}\), and use the selected audio features \(F^{A}_{TSSM}\) to perform sound-aware perception.

Figure 3. The pipeline of the designed TSSM. Firstly, TSSM performs self-attention and cross-modal perception on the audio and visual feature sequences in each segment. Then, we perform pooling and concatenation operations on both modalities' features in the temporal dimension to obtain a segment-level joint representation. Finally, we calculate the similarity between the question and the fused segments to find the ones most relevant to the question.

### Spatial Region Selection To identify visual regions that are pertinent to the question, we design a Spatial Region Selection Module (SRSM) to choose the \(Top_{m}\) regions most relevant to the question from the selected frame patch-level features, as shown in the top of Fig. 4. **For the \(\gamma\)-th frame**, its patch-level feature is \(f^{\gamma}_{p}\in\mathbb{R}^{M\times D}\). We perform spatial cross-modal attention over \(f^{\gamma}_{p}\) and \(F_{Q}\) after using a linear projection layer, which can be formulated as: \[\tilde{f}^{\gamma}_{p},W_{P}=MultiHead(F_{Q},f^{\gamma}_{p},f^{\gamma}_{p}), \tag{9}\] where \(W_{P}\in(0,1)^{M}\) is the average of the attention weights over all heads. Analogously to Eq. (5), the patches corresponding to the \(Top_{m}\) highest weights in \(W_{P}\) are selected, yielding the question-aware patch features. These selected patch features are subsequently associated with the selected audio features by the audio-guided visual attention module to perceive potential sounding regions, and complemented with global cues by the lightweight global perception module. Finally, the resulting audio, visual, and question features are fused into a joint representation and fed to a classifier to output probabilities \(p\in(0,1)^{C}\) for candidate answers.
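As a compact illustration of the attention-based selection shared by TSSM and SRSM (Eqs. (4), (5), and (9)), the following PyTorch sketch scores candidates with question-guided multi-head attention, averages the attention weights, and keeps the \(Top_{k}\) entries. It is a simplified sketch rather than the authors' implementation: the function name is an assumption, and the attention layer would normally be a trained sub-module of the network rather than instantiated on the fly.

```python
import torch
import torch.nn as nn

def question_guided_topk(question, candidates, top_k, num_heads=4):
    # question:   (1, D) question feature F_Q
    # candidates: (K, D) fused segment features (or (M, D) patch features for SRSM)
    d = question.shape[-1]
    attn = nn.MultiheadAttention(d, num_heads, batch_first=True)  # trained module in practice
    q = question.unsqueeze(0)            # (1, 1, D) batch of one query
    kv = candidates.unsqueeze(0)         # (1, K, D)
    updated, weights = attn(q, kv, kv)   # weights: (1, 1, K), averaged over heads
    scores = weights.reshape(-1)         # (K,) one attention weight per candidate
    top_idx = torch.topk(scores, top_k).indices
    return candidates[top_idx], top_idx  # selected features and their indices

# Example: keep the Top_k = 7 of K = 20 segment features of dimension D = 512.
feats, idx = question_guided_topk(torch.randn(1, 512), torch.randn(20, 512), top_k=7)
```

In the full model, the returned indices play the role of \(\Omega_{TSSM}\) and would be used to gather the corresponding patch-level and audio features before the subsequent modules.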
With the predicted probability vector and the corresponding ground-truth label, we can optimize our network by a cross-entropy loss. ## 4. Experiments ### Datasets The **MUSIC-AVQA** dataset (Wang et al., 2017) contains 9,288 videos covering 22 different instruments, with a total duration of over 150 hours and 45,867 Question-Answering (QA) pairs. Each video contains around 5 QA pairs on average. The questions are designed under multi-modal scenes containing 33 question templates covering 9 types. The MUSIC-AVQA dataset is well-suited for studying spatio-temporal reasoning in dynamic and long-term audio-visual scenes. The **AVQA** dataset (Wang et al., 2017) is designed for audio-visual question answering on general real-life scenario videos. It contains 57,015 videos from daily audio-visual activities, along with 57,335 QA pairs specially designed to rely on clues from both modalities, where information from a single modality is insufficient or ambiguous. For both datasets, we adopt the official split of the two benchmarks into training, evaluation, and test sets. ### Implementation Details For the visual stream, we divide the video into 1-second snippets and sample the corresponding frames at \(1fps\). The pre-trained CLIP-ViT-B/32 (Wang et al., 2017) is used as the visual feature extractor to generate frame-level and patch-level 512-D feature vectors for each visual snippet. For each 1-second long audio snippet, we use a linear layer to process the extracted 128-D VGGish feature into a 512-D feature vector. For each question sentence, we extract its feature by the pre-trained CLIP-ViT-B/32 (Wang et al., 2017) and obtain a 512-D feature vector. In all experiments, we use the Adam optimizer with an initial learning rate of \(1e-4\), which is decayed by a factor of 0.1 every 10 epochs. We split all videos into 20 segments (\(K=20\)) and, in the proposed modules, set \(Top_{k}=7\) and \(Top_{m}=20\). Batch size and number of epochs are set to 64 and 30, respectively. We use the _thop_ library in PyTorch to calculate the model's parameters and FLOPs. Our model is trained on an NVIDIA GeForce RTX 3090 and implemented in PyTorch. ### Quantitative Results To evaluate the effectiveness of the proposed PSTP-Net, we compare it with existing methods on the MUSIC-AVQA benchmark, including AVSD (Wang et al., 2017), Pano-AVQA (Wang et al., 2017), AVST (Wang et al., 2017), _etc_. Tab 1 indicates that our method outperforms all comparison methods. Our method shows significant improvements in the subtask types of _Audio-visual_, including _Counting_, _Localization_, _Comparative_, and _Temporal_. Specifically, compared to AVST (Wang et al., 2017), PSTP-Net achieves remarkable improvements of 3.35%, 7.56%, 7.12%, and 3.18% in the above-mentioned four complex question types, respectively. Additionally, the PSTP-Net shows a performance boost of 3.63% and 2.09% in the _Counting_ and _Localization_ subtasks of the visual modality, respectively, when compared to AVST (Wang et al., 2017). The significant performance improvements indicate that our proposed PSTP-Net is effective in accurately identifying crucial temporal segments and spatial regions in videos and in performing the corresponding sound-aware perception. We observe that audio-only questions suffer from performance degradation; this could be because the visual content has a negative influence when answering audio-only questions.
For example, the visual content tends to play as noise when answering the audio-only comparative-type question _"Is the <Object1> louder than the <Object2>7"_. Such phenomenon can be also found in some other methods(Beng et al., 2017; Wang et al., 2017). This intriguing observation motivates us to explore strategies in future work that can achieve better performance on both single-modality and multi-modality aspects. In a word, the proposed PSTP-Net is capable of comprehending complex audio-visual scenes, especially in scenarios where combining audio and visual information is necessary to answer questions. In Tab 2, we present the efficiency of our proposed PSTP-Net model by evaluating it using two metrics, parameters and FLOPs, and comparing it with AVST (Wang et al., 2017). Comparison results indicate that the proposed PSTP-Net achieves a significant reduction in training parameters (18.480M _vs_. 4.297M) and FLOPs (3.188G _vs_. 1.223G) compared to AVST, with a decrease of 76.73% and 63.68% in parameters and FLOPs, respectively. The main reason for this phenomenon is the _spatial grounding module_ designed for AVST, which aims to enhance audio-visual association by incorporating extra positive and negative sample pairs. However, this module heavily relies on _FC_ layers and spatial matrix calculations, resulting in a significant increase in the model's parameters and FLOPs. In contrast, the PSTP-Net effectively reduces the parameters and FLOPs by identifying critical temporal segments and spatial regions through its various designed modules, thus minimizing redundancy. Notably, this reduction in computational requirements is accompanied by an improvement in accuracy compared to AVST, indicating the efficiency of our model. As illustrated in the Tab 2, the PSTP-Net achieves better performance with lower parameters and FLOPs, \begin{table} \begin{tabular}{c|c c c|c c c|c c c c c|c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**Audio**} & \multicolumn{3}{c|}{**Visual**} & \multicolumn{3}{c|}{**Audio-Visual**} & \multirow{2}{*}{**Avg**} \\ & **Count** & **Comp** & **Avg** & **Count** & **Local** & **Avg** & **Exist** & **Count** & **Local** & **Comp** & **Temp** & **Avg** \\ \hline FCNLSTM (Wang et al., 2017) & 70.80 & 65.66 & 68.90 & 64.58 & 48.08 & 56.23 & 82.29 & 59.92 & 46.20 & 62.94 & 47.45 & 60.42 & 60.81 \\ GRU (Chen et al., 2018) & 71.29 & 63.13 & 68.28 & 66.08 & 68.08 & 67.09 & 80.67 & 61.03 & 51.74 & 62.85 & 57.79 & 63.03 & 65.03 \\ HCAH (Wang et al., 2017) & 70.80 & 54.71 & 64.87 & 63.49 & 67.10 & 65.32 & 79.48 & 59.84 & 48.80 & 56.31 & 56.33 & 60.32 & 62.45 \\ MCAN (Wang et al., 2017) & **78.07** & 57.74 & 70.58 & 71.76 & 71.76 & 71.76 & 80.77 & 65.22 & 54.57 & 56.77 & 46.84 & 61.52 & 65.83 \\ PSAC (Wang et al., 2017) & 75.02 & 66.84 & 72.00 & 68.00 & 70.78 & 69.41 & 79.76 & 61.66 & 55.22 & 61.13 & 59.85 & 63.00 & 66.62 \\ HME (Chen et al., 2018) & 73.65 & 63.74 & 69.89 & 67.42 & 70.20 & 68.83 & 80.87 & 63.64 & 54.89 & 63.03 & 60.58 & 64.78 & 66.75 \\ HCRN (Chen et al., 2018) & 71.29 & 50.67 & 63.69 & 65.33 & 64.98 & 65.15 & 54.15 & 53.28 & 41.74 & 51.04 & 46.72 & 49.82 & 56.34 \\ AVSD (Wang et al., 2017) & 72.47 & 62.46 & 68.78 & 66.00 & 74.53 & 70.31 & 80.77 & 64.03 & 57.93 & 62.85 & 61.07 & 65.44 & 67.32 \\ Pano-AVQA (Wang et al., 2017) & 75.71 & 65.99 & 72.13 & 70.51 & 75.76 & 73.16 & 82.09 & 65.38 & 61.30 & 63.67 & 62.04 & 66.97 & 69.53 \\ AVST (Wang et al., 2017) & 77.78 & **67.17** & **73.87** & 73.52 & 75.27 & 74.40 & **82.49** & 69.88 & 64.24 & 64.67 
& 65.82 & 69.53 & 71.59 \\ \hline **PSTP-Net** & 73.97 & 65.59 & 70.91 & **77.15** & **77.36** & **77.26** & 76.18 & **73.23** & **71.80** & **71.79** & **69.00** & **72.57** & **73.52** \\ \hline \hline \end{tabular} \end{table} Table 1. Results of different methods on the test set of MUSIC-AVQA. The top-2 results are highlighted. showing its superior efficiency. In summary, the PSTP-Net offers significant improvements over existing approaches and has the potential to advance the field of audio-visual scene understanding. ### Ablation Study In this subsection, we investigate how different configuration choices of our model affect the performance on the MUSIC-AVQA dataset. **Effect of components in PSTP-Net.** We ablate key modules in PSTP-Net, _i.e._, TSSM, SRSM, AVAM, and LGPM, denoted as: * **PSTP-Net w/o. all.** All modules are removed except for the fusion of the input audio, video, and question features to verify whether the improved performance of PSPT-Net is due to the replacement of the ResNet-18 pre-trained model with CLIP-ViT-B/32. Table 3 shows that there is little performance difference when using two different backbones in the model _PSTP-Net w/o. all._ However, compared to PSTP-Net, its performance drops to 69.47% (4.05%), indicating that the performance improvement is not solely attributed to the replacement of ResNet-18 with CLIP-ViT-B/32 for feature extraction. It is worth noting that we extract input features offline (_i.e._, extract input features before training) and ensure their output dimension is the same as ResNet-18, the parameters and FLOPs of the PSTP-Net remain unchanged. * **PSTP-Net w/o. TSSM.** The temporal segment selection module is removed, which enables the model to locate key regions and develops sound-aware perception in all video frames. This results in significant temporal redundancy, leading to a decrease in model performance and an increase in FLOPs. As shown in Tab 3, the performance of the PSTP-Net drop to 72.98%, with FLOPs increased by 60% compared to PSTP-Net. * **PSTP-Net w/o. SRSM.** The spatial region selection module is removed, causing the model to rely solely on the audio feature selected by TSSM to identify potential sound-aware areas in its corresponding video frames, which limits its ability to prioritize spatial areas relevant to the question. As shown in Tab 3, compared with PSTP-Net, the performance of this model decreased to 72.92%, indicating the importance of the spatial region selection module in improving performance. * **PSTP-Net w/o. AVAM.** The AVAM aims to perceive sound sources within the selected spatio-temporal regions and learn the association between audio and visual modalities. It is crucial for answering questions that require the perception of audiovisual correlations in the spatial context. For example, the accuracy of answering audio-visual localization questions is improved by 6.37% when the AVAM is performed (71.80% vs. 65.43%). Additionally, the AVAM is a lightweight module, it only incur a small additional computational cost (10.3% FLOPs of the PSTP-Net). * **PSTP-Net w/o. LGPM.** The lightweight global perception module is removed, results in the model only attending to selected temporal segments and their corresponding regions. This caused the model to ignore global cues and makes it difficult to answer certain questions that require access to the entire video. As shown in Tab 3, compared to the model w/o. LGPM, the performance of PSTP-Net improves by 0.85%, with a low extra cost. 
Overall, each module contributes to better performance. When all modules are present, the proposed PSTP-Net achieves the best performance on the MUSIC-AVQA dataset. **Effects of different PSTP-Net configurations.** We also explore the impact of different configurations on model performance, including the number of temporal segments \(K\), selected segments \(Top_{K}\), selected patches \(Top_{m}\), and audio-visual fusion layers. We conducted all experiments on MUSIC-AVQA and selected the best configuration on the validation set. As indicated in Fig 5(a), the model's performance first increases and then decreases with increasing \(K\). This phenomenon can be attributed to the fact that events in long videos occur for a considerable duration. If the segment duration is too short, an event may be divided into fragments, making it incomplete and difficult to capture semantic information. Meanwhile, if the segment duration is too long, it will result in too much redundant noise accompanying the event. Fig 5(b) shows that the performance initially improves and then declines with the increase of \(Top_{k}\), displaying slight fluctuations. This could be because too few segments may overlook essential question-related clues, while too many may introduce irrelevant redundant information. Fig 5(c) displays the impact of the selected number of patches on the model. As the number of patches increases, the performance decreases, suggesting that the relevant target region occupies only a portion of space. Moreover, we analyzed the impact of the number of audio-visual fusion layers on the performance. As shown in Fig 5(d), more transformer layers cannot achieve better performance while leading to an increase in FLOPs. We set the number of layers to 1 in the experiment. In general, different configurations may cause slight performance fluctuations, but most results remain above 73.00%, demonstrating the stability of the proposed PSTP-Net. \begin{table} \begin{tabular}{l|c c|c} \hline \hline **Method** & **Training Param (M)\(\downarrow\)** & **FLOPs (G)\(\downarrow\)** & **Avg** \\ \hline AVST [(21)] & 18.480 & 3.188 & 71.59 \\ PSTP-Net & 4.297 & 1.223 & 73.52 \\ \hline \hline \end{tabular} \end{table} Table 2. Parameter and FLOPs of PSTP-Net and AVST. Figure 5. Effects of different PSTP-Net configurations. ### Experiments on AVQA dataset To achieve more promising results, we conduct experiments with the proposed PSTP-Net on the AVQA dataset. Specifically, we split all videos into 5 segments (\(K=5\)), and select the \(Top_{3}\) temporal segments and \(Top_{25}\) spatial regions most relevant to the question, respectively. Then we aggregate these updated features with the features obtained from the different modules to obtain a joint representation for answering questions. Results are presented in Tab 4; the model achieves the best results compared to the ensemble \(HCRN+HAVF\). Obviously, the proposed PSTP-Net has significant potential for improving performance on the AVQA dataset. ### Visualization Results We visualize some cases from the MUSIC-AVQA dataset in Fig 6. Visualization results show that the PSTP-Net can effectively select video temporal segments, frame-level regions, and their corresponding sounding-aware patches that are relevant to the questions. ## 5. Conclusion and Discussion In this work, we propose a novel Progressive Spatio-Temporal Perception Network (PSTP-Net) for the audio-visual question answering task.
The PSTP-Net first utilizes a temporal segment selection module to identify key temporal segments relevant to the given question. Then, a spatial region selection module is designed to select regions most relevant to the question. Next, an audio-guided visual attention module is introduced to enable sound-aware perception on the selected regions. Additionally, a lightweight global perception module is performed to capture global information. Finally, we fuse features extracted from the above modules to obtain a joint representation for answering questions. Extensive evaluations on the MUSIC-AVQA and AVQA datasets demonstrate the effectiveness and efficiency of the proposed PSTP-Net. **Discussion**. The proposed PSTP-Net is distinct from AVST (Krizhevsky et al., 2015) and VALOR (Beng et al., 2015), as they rely on constructing additional positive-negative pairs or using extra-large audio-visual datasets to enhance sound source visual localization accuracy. In contrast, we adopt a progressive strategy on a limited dataset to identify the most relevant temporal segments and corresponding key regions to the question, followed by sound-aware perception on these key regions. Our approach enables us to achieve good performance at a lower cost. Additionally, we use pre-trained models to extract features offline, eliminating the need to train model parameters during the training process, unlike LAVISH (Liu et al., 2017), which significantly reduces the demand for computational resources. Meanwhile, we notice that fine-tuning pre-trained transformer-based models (_e.g._, SwinV2-L (Krizhevsky et al., 2015)) can improve model performance (Liu et al., 2017). This suggests that fine-tuning large pre-trained models are beneficial for downstream AVQA tasks, but their finetuning strategy has not been sufficiently explored so far. Moreover, Large Language Models (LLMs) have shown their great potential in many tasks (Zhu et al., 2018), especially in demonstrating strong reasoning abilities in QA tasks. These challenges provide a wide scope for further exploration on AVQA task. ## 6. Acknowledgments This research was supported by National Natural Science Foundation of China (NO.62106272), the Young Elite Scientists Sponsorship Program by CAST (2021QNRC001), in part by the Research Funds of Renmin University of China (NO. 21XNLG17) and Public Computing Cloud, Renmin University of China. \begin{table} \begin{tabular}{c|c|c} \hline **Method** & **Ensemble** & **Total Accuracy (\%)** \\ \hline HME (Wang et al., 2017) & HAVF (Wang et al., 2017) & 85.0 \\ PSAC (Wang et al., 2017) & HAVF (Wang et al., 2017) & 87.4 \\ LADNet(Liu et al., 2017) & HAVF (Wang et al., 2017) & 84.1 \\ ACRTransformer (Liu et al., 2017) & HAVF (Wang et al., 2017) & 87.8 \\ HGA (Krizhevsky et al., 2015) & HAVF (Wang et al., 2017) & 87.7 \\ HCRN (Wang et al., 2017) & HAVF (Wang et al., 2017) & 89.0 \\ **PSTP-Net** & – & 90.2 \\ \hline \end{tabular} \end{table} Table 4. Results of different methods on the test of AVQA. Figure 6. Visualization results of the PSTP-Net. Based on the selection results of our method, the question-related area, the sounding area, and key timestamps are highlighted in the spatial and temporal dimensions, respectively, which indicates that our method can effectively model the spatio-temporal association across different modalities.
2306.13937
A Dynamic Data Structure for Representing Timed Transitive Closures on Disk
Temporal graphs represent interactions between entities over time. These interactions may be direct, a contact between two vertices at some time instant, or indirect, through sequences of contacts called journeys. Deciding whether an entity can reach another through a journey is useful for various applications in complex networks. In this paper, we present a disk-based data structure that maintains temporal reachability information under the addition of new contacts in a non-chronological order. It represents the \emph{timed transitive closure} (TTC) by a set of \emph{expanded} R-tuples of the form $(u, v, t^-, t^+)$, which encodes the existence of journeys from vertex $u$ to vertex $v$ with departure at time $t^-$ and arrival at time $t^+$. Let $n$ be the number of vertices and $\tau$ be the number of timestamps in the lifetime of the temporal graph. Our data structure explicitly maintains this information in linear arrays using $O(n^2\tau)$ space so that sequential accesses on disk are prioritized. Furthermore, it adds a new unsorted contact $(u, v, t)$ accessing $O\left(\frac{n^2\tau}{B}\right)$ sequential pages in the worst case, where $B$ is the size of a page on disk; it answers whether there is a journey from a vertex $u$ to a vertex $v$ within a time interval $[t_1, t_2]$ accessing a single page; it answers whether all vertices can reach each other in $[t_1, t_2]$; and it reconstructs a valid journey that validates the reachability from a vertex $u$ to a vertex $v$ within $[t_1, t_2]$ accessing $O\left(\frac{n\tau}{B}\right)$ pages. Our experiments show that our novel data structure is better than the best known approach for the majority of cases using synthetic and real-world datasets.
Luiz F. Afra Brito, Marcelo Keese Albertini, Bruno A. N. Travençolo
2023-06-24T10:58:23Z
http://arxiv.org/abs/2306.13937v1
# A Dynamic Data Structure for Representing Timed Transitive Closures on Disk ###### Abstract Temporal graphs represent interactions between entities over time. These interactions may be direct, a contact between two vertices at some time instant, or indirect, through sequences of contacts called journeys. Deciding whether an entity can reach another through a journey is useful for various applications in complex networks. In this paper, we present a disk-based data structure that maintains temporal reachability information under the addition of new contacts in a non-chronological order. It represents the _timed transitive closure_ (TTC) by a set of _expanded_ R-tuples of the form \((u,v,t^{-},t^{+})\), which encodes the existence of journeys from vertex \(u\) to vertex \(v\) with departure at time \(t^{-}\) and arrival at time \(t^{+}\). Let \(n\) be the number of vertices and \(\tau\) be the number of timestamps in the lifetime of the temporal graph. Our data structure explicitly maintains this information in linear arrays using \(O(n^{2}\tau)\) space so that sequential accesses on disk are prioritized. Furthermore, it adds a new unsorted contact \((u,v,t)\) accessing \(O\left(\nicefrac{{n^{2}\tau}}{{B}}\right)\) sequential pages in the worst case, where \(B\) is the size of a page on disk; it answers whether there is a journey from a vertex \(u\) to a vertex \(v\) within a time interval \([t_{1},t_{2}]\) accessing a single page; it answers whether all vertices can reach each other in \([t_{1},t_{2}]\); and it reconstructs a valid journey that validates the reachability from a vertex \(u\) to a vertex \(v\) within \([t_{1},t_{2}]\) accessing \(O\left(\nicefrac{{n\tau}}{{B}}\right)\) pages. Our experiments show that our novel data structure is better than the best known approach for the majority of cases using synthetic and real-world datasets. ## 1 Introduction Temporal graphs represent interactions between entities over time. These interactions often appear as contacts at specific timestamps. Entities can also interact indirectly with each other by chaining several contacts. For example, in a communication network, devices that are physically connected can send new messages or propagate received ones; thus, by first sending a new message and repeatedly propagating messages over time, remote entities can communicate indirectly. Time-respecting paths in temporal graphs are known as temporal paths, or simply _journeys_, and when a journey exists from one vertex to another, we say that the first can _reach_ the second. In a computational environment, it is often useful to check whether entities can reach each other [19, 6, 21, 23, 22, 3, 15, 5]. Beyond sole reachability, some applications also require the ability to reconstruct a journey if one exists [23, 11, 25, 13, 14, 16]. In standard graphs, the problem of updating reachability information is known as _dynamic connectivity_ and has been extensively studied [17, 12, 9, 26, 18, 20]. In temporal graphs, fewer studies have addressed this problem. For instance, in [2, 21], the authors assume that the input is chronologically ordered and give worst-case complexities. In [24], the authors assume non-chronological input, whereas their strategy is optimized for the average case. Of particular interest to us, in [4], the authors considered the _only-incremental_ problem, which supports only the addition of unsorted contacts.
Their data structure supports the following four operations, where, by convention, \(\mathcal{G}\) is a temporal graph, \(u\) and \(v\) are vertices of \(\mathcal{G}\), and \(t,t_{1}\), and \(t_{2}\) are timestamps: 1. add_contact(u,v,t), which updates information based on a contact from \(u\) to \(v\) at time \(t\); 2. can_reach(\(u\),\(v\),\(t_{1}\),\(t_{2}\)), which returns true if \(u\) can reach \(v\) within the interval \([t_{1},t_{2}]\); 3. is_connected(\(t_{1}\),\(t_{2}\)), which returns true if \(\mathcal{G}\) restricted to the interval \([t_{1},t_{2}]\) is temporally connected, _i.e._, all vertices can reach each other within the interval \([t_{1},t_{2}]\); and 4. reconstruct_journey(\(u\),\(v\),\(t_{1}\),\(t_{2}\)), which returns a journey (if one exists) from \(u\) to \(v\) occurring within the interval \([t_{1},t_{2}]\). Their update algorithm maintains a _timed transitive closure_ (ttc), a concept that generalizes the transitive closure for temporal graphs based on _reachability tuples_ (r-tuples), in the form \((u,v,t^{-},t^{+})\), representing journeys from vertex \(u\) to \(v\) departing at \(t^{-}\) and arriving at \(t^{+}\). Their data structure uses \(O\left(n^{2}\tau\right)\) space while supporting add_contact, can_reach, is_connected, and reconstruct_journey, respectively, in \(O\left(n^{2}\log\tau\right)\), \(O\left(\log\tau\right)\), \(O\left(n^{2}\log\tau\right)\), and \(O\left(k\log\tau\right)\) worst-case time, where \(n\) is the number of vertices of the temporal graph, \(\tau\) is the number of time instances, and \(k\) is the length of the resulting journey. However, they keep their data structure in primary memory and the cost of storing and maintaining large ttcs is prohibitive. We conducted a simple experiment to show how much space is necessary for temporal reachability. First, we generated random temporal graphs using the Edge-Markovian Evolving Graph (EMEG) model [8]. In this model, if an edge is active at time \(t-1\), then it has probability \(p\) of disappearing at time \(t\), otherwise, it has probability \(q\) of appearing at time \(t\). We represented temporal graphs in memory using adjacency matrices storing, in each cell, timestamps at which edges are active. Then, we built the corresponding ttcs using the approach described in [4]. In this experiment, we varied the number of vertices \(n\) and the number of time instances \(\tau\) while fixing \(p=0.1\) and \(q=0.3\). In Table 1, we see, for example, that a temporal graph with 512 vertices and \(\tau=64\) produced by the EMEG model has 2.8 million contacts, and we needed around 33 MBs of space to store it in memory. Besides, we needed around 156 MBs of space to store the corresponding ttc, which, in this case, it is almost five times the space needed to store the temporal graph. Next, we built a linear regression model with the data presented in Table 1 in order to extrapolate the input parameters. Consider, for example, the scenario in which one million people use a bluetooth device that registers when and who gets close to each other and sends this information to a centralized server. Consider also that each individual makes in average 30 contacts per day. In this setting, by using our model, we could check that a centralized server would require at least 100 GBs of space in less than a year to store just the plain contacts as a temporal graph. If one needs to support reachability queries by using a ttc, it would be necessary roughly 600 GBs of space. 
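For concreteness, the following Python sketch reproduces the EMEG contact-generation rule stated above: an edge active at time \(t-1\) disappears at time \(t\) with probability \(p\), while an inactive edge appears with probability \(q\). It is a minimal sketch under stated assumptions (directed vertex pairs and an empty initial snapshot, with assumed names), not necessarily the exact generator of [8] used in the experiment above.

```python
import random

def emeg_contacts(n, tau, p=0.1, q=0.3, seed=0):
    """Generate the contacts (u, v, t) of a random temporal graph with the
    Edge-Markovian Evolving Graph (EMEG) rule: an edge active at t-1
    disappears at t with probability p; an inactive edge appears with
    probability q. Directed pairs and an empty initial snapshot are assumed."""
    rng = random.Random(seed)
    active = set()                      # edges active at the previous timestamp
    contacts = []
    for t in range(1, tau + 1):
        nxt = set()
        for u in range(n):
            for v in range(n):
                if u == v:
                    continue
                if (u, v) in active:
                    if rng.random() >= p:   # survives with probability 1 - p
                        nxt.add((u, v))
                else:
                    if rng.random() < q:    # appears with probability q
                        nxt.add((u, v))
        contacts.extend((u, v, t) for (u, v) in nxt)
        active = nxt
    return contacts

# Small example; the experiment above fixed p = 0.1 and q = 0.3.
print(len(emeg_contacts(n=32, tau=16)))
```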
\begin{table} \begin{tabular}{r r r r r} \hline \(n\) & \(\tau\) & \(|C|\) & \(data({\cal G})\) & \(data({\rm TTC})\) \\ \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 1: Space for storing temporal graphs with \(n\) vertices, \(\tau\) time instances and \(|C|\) contacts, and their corresponding TTCs. Columns \(data({\cal G})\) and \(data({\rm TTC})\) represent, respectively, the space in megabytes of the generated temporal graphs and their TTCs. Motivated by such scenarios, we investigate the problem of maintaining ttcs on disk. A simple, but not efficient, approach would be to naively implement a data structure for disk based on the
We attribute this behavior to the fact that as new contacts are inserted, our data structure updates on average only a few cells of both arrays \(M_{out}\) and \(M_{in}\). We organized this paper as follows. In Section 2, we present the definitions used throughout this paper. In Section 3, we define our expanded set of r-tuples, introduce our new data structure to represent ttcs on disk, and provide low-level primitives for manipulating them. In Section 4, we describe our algorithms for each operation using our data structure along with their complexities in terms of number of disk accesses. In Section 5, we investigate the execution of our algorithms by comparing them with our implementation using B\({}^{+}\)-trees. Finally, Section 6 concludes with some remarks and open questions. ## 2 Definitions Following the definition in [7], a temporal graph is a tuple \(\mathcal{G}=(V,E,\mathcal{T},\rho,\zeta)\). Sets \(V\) and \(E\subseteq V\times V\) represent the vertices and the edges of the underlying standard graph. Interval \(\mathcal{T}=[1,\tau]\subset\mathbb{N}\) describes the lifetime of the temporal graph. We consider in this paper that \(E\) is a set of directed edges. Functions \(\rho:E\times\mathcal{T}\rightarrow\{0,1\}\) and \(\zeta:E\times\mathcal{T}\mapsto\mathbb{N}\) are, respectively, the _presence function_ and the _latency function_. The presence function expresses whether an edge is present at a time instant. We also call \((u,v,t)\) a _contact_ in \(\mathcal{G}\) if \(\rho((u,v),t)=1\). The latency function expresses the duration of an interaction for an edge at a time. Here, we use a constant latency function, namely \(\zeta=\delta\), where \(\delta\) is any fixed positive integer. We define reachability in temporal graphs in a time-respecting way, by requiring that a path travels along non-decreasing (\(\delta=0\)) or increasing (\(\delta\geq 1\)) times. These paths are called temporal paths or journeys interchangeably. **Definition 1** (Journey).: _A journey from \(u\) to \(v\) in \(\mathcal{G}\) is a sequence of contacts \(\mathcal{J}=\langle c_{1},c_{2},\ldots,c_{k}\rangle\), whose sequence of underlying edges form a valid time-respecting path from \(u\) to \(v\). For each contact \(c_{i}=(u_{i},v_{i},t_{i})\), it holds that \(\rho((u_{i},v_{i}),t_{i})=1\), \(v_{i}=u_{i+1}\), and \(t_{i+1}\geq t_{i}+\delta\) for \(i\in[1,k-1]\). We say that \(departure(\mathcal{J})=t_{1}\), \(arrival(\mathcal{J})=t_{k}+\delta\) and \(duration(\mathcal{J})=arrival(\mathcal{J})-departure(\mathcal{J})\). A journey is trivial if it comprises a single contact._ **Definition 2** (Reachability).: _A vertex \(u\) can reach a vertex \(v\) within time interval \([t_{1},t_{2}]\) iff there is a journey \(\mathcal{J}\) from \(u\) to \(v\) in \(\mathcal{G}\) that departs at \(departure(\mathcal{J})\geq t_{1}\) and arrives at \(arrival(\mathcal{J})\leq t_{2}\)._ Just as the number of paths in a standard graph, the number of journeys in a temporal graph could be too large to be stored explicitly (typically, factorial in \(n\)). To avoid this problem, r-tuples capture the fact that a vertex can reach another one within a certain time interval without storing the corresponding journeys [4]. **Definition 3** (r-tuple).: _A r-tuple is a tuple \(r=(u,v,t^{-},t^{+})\), where \(u\) and \(v\) are vertices in \(\mathcal{G}\), and \(t^{-}\) and \(t^{+}\) are timestamps in \(\mathcal{T}\). 
It encodes the fact that vertex \(u\) can reach vertex \(v\) through a journey \(\mathcal{J}\) such that \(departure(\mathcal{J})=t^{-}\) and \(arrival(\mathcal{J})=t^{+}\). If several such journeys exist, then they are all represented by the same R-tuple._ Lastly, given a temporal graph \(\mathcal{G}\), the timed transitive closure (TTC) of \(\mathcal{G}\) is a directed multigraph on the same set of vertices, whose edges correspond to the minimal, _i.e._, non-redundant, set of r-tuples of \(\mathcal{G}\). The purpose of tTCs is to encode reachability information among vertices, parametrized by time intervals, so that one can subsetuently decide if a new contact can be composed with existing journeys. In paper [4], the authors showed that there are \(O\left(n^{2}\tau\right)\) non-redundant r-tuples in a temporal graph \(\mathcal{G}\) and it comprises those whose intervals do not include each other for the same pair of vertices, _i.e._, only information regarding the fastest journeys. ## 3 Disk-Based Timed Transitive Closure In this section, we describe our novel approach to maintain tTCs in secondary memory. First, in Section 3.1, we define the concept of an _expanded_ set of representative r-tuples and show that it has size \(\Theta\left(n^{2}\tau\right)\). Then, in Section 3.2, we introduce our new data structure that uses this expanded set in order to improve the maintenance of data in non-uniform access storages and provide direct access to reachability information. ### Expanded Reachability Tuples (Expanded r-tuples) The data structure introduced in [4] spreads the minimal set of r-tuples into multiple BSTs, each one concerning a unique pair of vertices. The authors store these BSTs in separated regions of memory and, therefore, the organization of data is not optimal when working with storages that have non-uniform access time. In order to mitigate this problem, we define an expanded set of r-tuples \((u,v,t^{-},t^{+})\) that is easier to maintain sequentially, since we can use continuous arrays indexed by \(t^{-}\) or \(t^{+}\). First, we define the _left_ and _right_ expansion of a single r-tuple. **Definition 4** (Left and right expansion).: _The left expansion of a R-tuple \(r=(u,v,t^{-},t^{+})\) is the set containing all R-tuples \((u,v,t,t^{+})\) for \(1\leq t\leq t^{-}\). Similarly, the right expansion of \(r\) is the set containing all R-tuples \((u,v,t^{-},t)\) for \(t^{+}\leq t\leq\tau+\delta\)._ The r-tuples produced by the left expansion of a r-tuple \(r\) are valid because a source vertex departing earlier can simply wait until the departure time of \(r\), and take the original journey described by \(r\). Similarly, the r-tuples produced by the right expansion of \(r\) are valid because, after taking the original journey described by \(r\), a destination vertex can simply wait until the arrival time of the new r-tuple. Applying both expansions to each R-tuple in a set \(\mathcal{R}\) and taking the union of the sets produced by the same expansion creates two separated expanded sets, the _left-expanded_ set \(\mathcal{R}_{\text{left}}\), and the _right-expanded_ set \(\mathcal{R}_{\text{right}}\). For each expanded set, we define an inclusion operator. 
**Definition 5** (Left and right inclusion).: _Given any two r-tuples \(r_{1}=(u_{1},v_{1},t_{1}^{-},t_{1}^{+})\) and \(r_{2}=(u_{2},v_{2},t_{2}^{-},t_{2}^{+})\) in \(\mathcal{R}_{\text{left}}\), \(r_{1}\subseteq_{\text{left}}r_{2}\) if and only if \(u_{1}=u_{2}\), \(v_{1}=v_{2}\), \(t_{1}^{-}=t_{2}^{-}\), and \(t_{1}^{+}\leq t_{2}^{+}\). Similarly, if \(r_{1}\) and \(r_{2}\) are in \(\mathcal{R}_{\text{right}}\), \(r_{1}\subseteq_{\text{right}}r_{2}\) if and only if \(u_{1}=u_{2}\), \(v_{1}=v_{2}\), \(t_{1}^{-}\geq t_{2}^{-}\), and \(t_{1}^{+}=t_{2}^{+}\)._ However, r-tuples produced by expansion can share redundant information. For example, consider the r-tuples \(r_{1}=(a,b,2,7)\) and \(r_{2}=(a,b,2,9)\). Both r-tuples represent journeys that depart from vertex \(a\) at time \(2\) and arrive at vertex \(b\), one at time \(7\) and the other at time \(9\). In this case, \(r_{2}\) can be safely discarded since we can take a journey represented by \(r_{1}\) ending at time \(7\) and wait at vertex \(b\) until time \(9\). Redundancy of r-tuples in \(\mathcal{R}_{\text{left}}\) and \(\mathcal{R}_{\text{right}}\) is treated differently using their corresponding inclusion operators. **Definition 6** (Left and right redundancy).: _Let \(r\in\mathcal{R}_{\text{left}}\); \(r\) is called left-redundant in \(\mathcal{R}_{\text{left}}\) if there is \(r^{\prime}\in\mathcal{R}_{\text{left}}\) such that \(r^{\prime}\subseteq_{\text{left}}r\). Similarly, if \(r\in\mathcal{R}_{\text{right}}\), \(r\) is called right-redundant in \(\mathcal{R}_{\text{right}}\) if there is \(r^{\prime}\in\mathcal{R}_{\text{right}}\) such that \(r^{\prime}\subseteq_{\text{right}}r\). A set \(\mathcal{R}_{\text{left}}^{*}\) with no left-redundant r-tuple is called left non-redundant and a set \(\mathcal{R}_{\text{right}}^{*}\) with no right-redundant r-tuple is called right non-redundant._ **Lemma 1**.: _The maximum size of a left non-redundant or right non-redundant set of r-tuples for \(\mathcal{G}\) is \(\Theta\left(n^{2}\tau\right)\)._ Proof.: It suffices to prove that the maximum number of pairwise incomparable r-tuples in \(\mathcal{R}_{\text{left}}\) and \(\mathcal{R}_{\text{right}}\) is \(O\left(n^{2}\tau\right)\), since some graphs already induce \(\Theta\left(n^{2}\tau\right)\) incomparable r-tuples from unexpanded sets, see [4]. We prove that the sizes of \(\mathcal{R}_{\text{left}}\) and \(\mathcal{R}_{\text{right}}\) are \(O\left(n^{2}\tau\right)\) as follows. There are \(\Theta\left(n^{2}\right)\) ordered pairs of vertices. Thus, it is enough to show that for each pair \((u,v)\), the number of incomparable r-tuples in \(\mathcal{R}_{\text{left}}^{u,v}\) and \(\mathcal{R}_{\text{right}}^{u,v}\), whose source vertex is \(u\) and destination vertex is \(v\), is \(O\left(\tau\right)\). Let \(\mathcal{R}_{\text{left}}^{u,v}\) be a left non-redundant set of such r-tuples; as every pair of incomparable r-tuples has different departure timestamps, \(|\mathcal{R}_{\text{left}}^{u,v}|\leq\tau\). Similarly, let \(\mathcal{R}_{\text{right}}^{u,v}\) be a right non-redundant set of such r-tuples; as every pair of incomparable r-tuples has different arrival timestamps, \(|\mathcal{R}_{\text{right}}^{u,v}|\leq\tau\). 
### Encoding the TTC on Disk
We encode the TTC using two 3-dimensional arrays, \(M_{out}[u,t^{-},v]=t^{+}\) and \(M_{in}[v,t^{+},u]=t^{-}\), both with dimensions \(n\times\tau\times n\), representing expanded sets of r-tuples. 
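To make the encoding concrete, the following minimal, in-memory sketch (a hypothetical Python illustration; vertices and timestamps are 0-indexed for convenience and the time dimension is deliberately oversized, unlike the tighter on-disk layout described below) shows how a single r-tuple and its left and right expansions populate the two arrays.

```
INF = float("inf")

def make_ttc(n, tau, delta):
    # Time dimension sized tau + delta + 1 so timestamps 0 .. tau + delta index directly;
    # this is only for illustration, the on-disk layout described below is tighter.
    T = tau + delta + 1
    m_out = [[[INF] * n for _ in range(T)] for _ in range(n)]    # earliest arrivals
    m_in = [[[-INF] * n for _ in range(T)] for _ in range(n)]    # latest departures
    return m_out, m_in

def store_rtuple(m_out, m_in, u, v, t_minus, t_plus):
    # Left expansion: departing at any t <= t_minus, u still arrives at v by t_plus.
    for t in range(0, t_minus + 1):
        m_out[u][t][v] = min(m_out[u][t][v], t_plus)
    # Right expansion: arriving at any t >= t_plus, v may have left u as late as t_minus.
    for t in range(t_plus, len(m_in[v])):
        m_in[v][t][u] = max(m_in[v][t][u], t_minus)

m_out, m_in = make_ttc(n=3, tau=6, delta=1)
store_rtuple(m_out, m_in, u=0, v=1, t_minus=2, t_plus=4)
print(m_out[0][1][1], m_in[1][5][0])   # 4 (arrive by 4 when leaving at 1), 2 (leave as late as 2)
```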
Each cell in \(M_{out}\) represents a r-tuple in \(\mathcal{R}^{*}_{\text{left}}\) by storing the earliest possible arrival time \(t^{+}\) at which a vertex \(u\) departing at time \(t^{-}\) can reach a vertex \(v\) through a journey. If there is a cell \(M_{out}[u,t^{-},v]=t^{+}\), then all cells \(M_{out}[u,t,v]\), for \(t\in[1,t^{-}-1]\) must have an arrival time \(t_{\text{left}}\leq t^{+}\), since a journey from \(u\) departing at a time \(t<t^{-}\) can simply wait at vertex \(u\) until time \(t^{-}\) and then use the remaining path already described by \(M_{out}[u,t^{-},v]=t^{+}\). Similarly, each cell in \(M_{in}\) represents a r-tuple in \(\mathcal{R}^{*}_{\text{right}}\) by storing the latest possible departure time \(t^{-}\) at which a vertex \(v\) can arrive at time \(t^{+}\) to a vertex \(u\) through a journey. If there is a cell \(M_{in}[v,t^{+},u]=t^{-}\), then all cells \(M_{in}[v,t,u]\), for \(t\in[t^{+}+1,\tau+\delta]\), must have a departure time \(t_{\text{right}}\geq t^{+}\), since a journey to \(v\) arriving at a time \(t>t^{+}\) can use the path already described by \(M_{in}[v,t^{+},u]=t^{-}\) and then simply wait at vertex \(v\) until time \(t\). During the creation of a ttc, \(M_{out}\) cells are initialized with \(\infty\) and \(M_{in}\) cells with \(-\infty\). Figure 1 illustrates both \(M_{out}\) and \(M_{in}\). Internally, we represent \(M_{out}\) and \(M_{in}\) as one-dimensional arrays using, respectively, the mapping functions \(F_{out}\colon(u,t^{-},v)\mapsto n(u\tau+\tau-(t^{-}+1))+v+1\) and \(F_{in}\colon(v,t^{+},u)\mapsto n(v\tau+t^{+}-\delta)+u+1\). Observing Figure 1, \(F_{out}\) arranges the cells of \(M_{out}\) by row (left to right) and, for each source vertex, later departures come first. \(F_{in}\) also arranges \(M_{in}\) by row but, in contrast, for each destination vertex, earlier arrivals come first. By subtracting \(\delta\) from \(t^{+}\) in \(F_{in}\), we ensure all \(t^{+}\) values fit in \(M_{in}\). Thus, reading sequentially the range \([F_{out}(u,t^{-},1),F_{out}(u,t^{-},n)]\) from \(M_{out}\) gives direct access to the earliest arrival times to reach all vertices when departing from \(u\) at time \(t^{-}\). Similarly, reading sequentially the range \([F_{in}(v,t^{+},1),F_{in}(v,t^{+},n)]\) from \(M_{in}\) gives direct access to the latest Figure 1: Temporal graph and its associated reachability data structure. In (a), we show a temporal graph with three vertices. Numbers on edges represent the time in which edges are active. Edges with the same color form a journey from vertex \(u\) to vertex \(v\). In (b), we show the corresponding arrays \(M_{out}\) and \(M_{in}\) considering \(\delta=1\). Both arrays are depicted as 2-dimensional arrays by grouping their first two dimensions. For instance, \(M_{out}[u,2,w]=M_{out}[(u,2),w]=3\). Cells have the same color as the contacts, _i.e._, the edge at a timestamp, that originated the update. \(M_{out}\) stores the minimum possible arrival timestamps to destinations and \(M_{in}\) stores the maximum possible departure timestamps from origins. departure times to leave all vertices when arriving at \(v\) at time \(t^{+}\). Finally, assuming a general function \(F\) that maps to \(F_{out}\), whether accessing \(M_{out}\), or \(F_{in}\), whether accessing \(M_{in}\), we provide the following low-level operations for manipulating our data structures on disk: 1. read_cell(\(M,w_{1},t,w_{2}\)), which returns the value of \(M\) (\(M_{out}\) or \(M_{in}\)) at position \(F(w_{1},t,w_{2})\); 2. 
write_cell(\(M,w_{1},t_{1},w_{2},t_{2}\)), which replaces the value of \(M\) at position \(F(w_{1},t_{1},w_{2})\) with \(t_{2}\); 3. read_adjacency(\(M,w,t\)), which returns a list containing the values of \(M\) in the interval \([F(w,t,1),F(w,t,n)]\), _i.e._, the minimum possible timestamps to arrive at any vertex while departing from \(w\) at timestamp \(t\); 4. write_adjacency(\(M,w,t,L\)), which replaces the values of \(M\) in the interval \([F(w,t,1),F(w,t,n)]\) with the values of the list \(L\), _i.e._, the maximum possible timestamps to depart from any vertex while arriving at \(w\) at timestamp \(t\). Operations (1) and (2) access \(O\left(1\right)\) pages on disk, while operations (3) and (4) access \(O\left(n/B\right)\) pages, where \(B\) is the page size. 
## 4 TTC Operations
In this section, we describe algorithms for the operations described in [4]: the update operation add_contact(u,v,t); the query operations can_reach(\(u\),\(v\),\(t_{1}\),\(t_{2}\)) and is_connected(\(t_{1}\),\(t_{2}\)); and the reconstruction operation reconstruct_journey(\(u\),\(v\),\(t_{1}\),\(t_{2}\)). In Section 4.1, we present our algorithm for add_contact(u,v,t), which receives a contact and adds to our data structure the reachability information related to the new available journeys passing through it. In Section 4.2, we briefly describe the algorithms for can_reach(\(u\),\(v\),\(t_{1}\),\(t_{2}\)) and is_connected(\(t_{1}\),\(t_{2}\)) since, as reachability information can be directly accessed, they are straightforward. Finally, in Section 4.3, we detail our algorithm for reconstruct_journey(\(u\),\(v\),\(t_{1}\),\(t_{2}\)), which reconstructs a valid journey by concatenating one contact at a time. 
### Update operation
An algorithm to perform add_contact(u,v,t) must first add the reachability information regarding the new trivial journey \(\mathcal{J}_{triv}\) from vertex \(u\) to vertex \(v\) departing at time \(t\) and arriving at time \(t+\delta\). Next, for all vertices \(w^{+}\) that \(v\) can reach when departing at time \(t+\delta\) or later, the algorithm updates the reachability information from \(u\) to \(w^{+}\) if the new available journey passing through \(\mathcal{J}_{triv}\) has an earlier arrival time. Then, for all vertices \(w^{-}\) that can reach \(u\) when arriving at time \(t\) or earlier, the algorithm updates the reachability information from \(w^{-}\) to \(v\) if the new available journey passing through \(\mathcal{J}_{triv}\) has a later departure time. Finally, the algorithm must consider all new available journeys from vertices \(w^{-}\) to vertices \(w^{+}\) that pass through \(\mathcal{J}_{triv}\) and update the current reachability information if necessary. Algorithm 1 describes the maintenance of both arrays \(M_{out}\) and \(M_{in}\) when inserting a new contact. In line 1, the algorithm checks if the structure already has the information of the new contact \((u,v,t)\). If it does not, in line 2, it retrieves the latest departure timestamps of journeys departing from vertices \(w^{-}\) and arriving at vertex \(u\) at timestamp \(t\) as an array \(T^{-}\). In line 3, the algorithm retrieves the earliest arrival timestamps of journeys departing from vertex \(v\) at timestamp \(t+\delta\) and arriving at vertices \(w^{+}\) as an array \(T^{+}\). In lines 4 and 5, it sets the reachability information about the new trivial journey \(\mathcal{J}_{triv}=(u,v,t)\), which departs at timestamp \(t\) and arrives at \(t+\delta\). 
From lines 6 to 14, the algorithm eagerly updates all cells \(M_{out}[w^{-},t^{\prime},w^{+}]=t^{+}\) for \(t^{-}\geq t^{\prime}\geq 1\). In this part, the algorithm proceeds by first iterating through all vertices \(w^{-}\), _i.e._, those that reach \(u\) at or before timestamp \(t\), and retrieving their departure timestamps \(t^{-}\). Then, it progressively retrieves the current arrival timestamps to reach vertices \(w^{+}\) when departing at timestamp \(t^{\prime}\), by reading the range \([F_{out}(w^{-},t^{\prime},1),F_{out}(w^{-},t^{\prime},n)]\), and updates them if the new journeys passing through \(\mathcal{J}_{triv}\) have earlier arrival timestamps. Note that vertices that cannot reach \(u\) at or before timestamp \(t\) have their departure timestamp in \(T^{-}\) equal to \(-\infty\); therefore, they are not considered in the while loop starting at line 8. This process continues until the current reachability information in the whole range does not change or \(t^{\prime}<1\). Similarly, from lines 15 to 23, the algorithm eagerly updates all cells \(M_{in}[w^{+},t^{\prime\prime},w^{-}]=t^{-}\) for \(t^{+}\leq t^{\prime\prime}\leq\tau+\delta\). The algorithm proceeds by first iterating through all vertices \(w^{+}\), _i.e._, those that \(v\) can reach when departing at or after timestamp \(t+\delta\), and retrieving their arrival timestamps \(t^{+}\). Then, it progressively retrieves the current departure timestamps at which vertices \(w^{-}\) depart, stored in the range \([F_{in}(w^{+},t^{\prime\prime},1),F_{in}(w^{+},t^{\prime\prime},n)]\), and updates them if the new available journeys passing through \(\mathcal{J}_{triv}\) have later departure timestamps. This process continues until the current reachability information in the range does not change or \(t^{\prime\prime}>\tau+\delta\). Figure 2 illustrates the addition of new contacts to a temporal graph along with the maintenance of the arrays \(M_{out}\) and \(M_{in}\). **Theorem 2**.: _Algorithm 1 accesses \(O\left(n^{2}\tau/B\right)\) pages on disk._ Proof.: The read_cell operation in line 1 accesses a single page. The two read_adjacency operations in lines 2 and 3 access \(O\left(n/B\right)\) sequential pages each. In lines 4 and 5, the algorithm writes the reachability of the new trivial journey \(J_{triv}=\{(u,v,t)\}\) in main memory. The for loop starting at line 6 iterates over \(n\) vertices \(w^{-}\) and the while loop starting at line 8 iterates through \(O(\tau)\) timestamps \(t^{\prime}\). At each of the \(O(n\tau)\) iterations, it calls read_adjacency in order to read \(n\) cells, and then (possibly) calls write_adjacency to write the \(n\) cells back, accessing, in each operation, \(O\left(n/B\right)\) sequential pages. Due to our mapping function \(F_{out}\), at every timestamp \(t^{\prime}\), the algorithm will read a page that is arranged sequentially on disk. The loop from lines 15 to 23 does a similar computation. 
### Reachability and Connectivity Queries
Both algorithms for can_reach(\(u\),\(v\),\(t_{1}\),\(t_{2}\)) and is_connected(\(t_{1}\),\(t_{2}\)) are straightforward. The algorithm to perform can_reach(\(u\),\(v\),\(t_{1}\),\(t_{2}\)) comprises testing whether read_cell(\(M_{out},u,t_{1},v)\leq t_{2}\) while accessing only a single page from disk. 
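As an illustration, reusing the in-memory arrays from the earlier sketch, the reachability test amounts to a single cell lookup (on disk, a single read_cell access):

```
def can_reach(m_out, u, v, t1, t2):
    # True iff u can reach v through a journey departing at or after t1 and arriving by t2;
    # the cell already stores the earliest arrival over all departures >= t1.
    return m_out[u][t1][v] <= t2

print(can_reach(m_out, 0, 1, 1, 5), can_reach(m_out, 0, 1, 3, 5))  # True, False
```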
The algorithm to perform is_connected(\(t_{1}\),\(t_{2}\)), for each origin vertex \(u\in V\), calls \(tmp\leftarrow\)read_adjacency(\(M_{out},u,t_{1}\)) and then for each destination vertex \(v\in V\), it checks whether \(tmp[v]\leq t_{2}\). As soon as a check is negative, the answer is false; otherwise, it is true. Therefore, while the algorithm reads all cells, it accesses \(O\left({}^{n^{2}}\!/\!B\right)\) pages on disk. ### Journey Reconstruction For the reconstruct_journey(\(u\),\(v\),\(t_{1}\),\(t_{2}\)) query, we need to augment each cell of \(M_{in}\) with the first successor vertex of the corresponding journeys. Algorithm 1 can be trivially modified to include ``` 1:\(u,v\in V\) with \(u\neq v\), \(n=|V|\), \(t\in\mathcal{T}\), \(\tau\), \(\delta\), \(M_{out}\), \(M_{in}\) 2:ifread_cell(\(M_{out},u,t,v\)) \(\neq t+\delta\)then\(\triangleright\) check whether \((u,v,t)\) was inserted 3:\(T^{-}\leftarrow\textsc{read\_adjacency}(M_{in},u,t)\) 4:\(T^{+}\leftarrow\textsc{read\_adjacency}(M_{out},v,t+\delta)\) 5:\(T^{-}[u]\gets t\)\(\triangleright\) add the new trivial journey information 6:\(T^{+}[v]\gets t+\delta\) 7:for\(w^{-}\) from \(1\) up to \(n\)do\(\triangleright\) will update \(M_{out}\) with new journeys from \(w^{-}\) 8:\(t^{\prime}\gets T^{-}[w^{-}]\) 9:while\(t^{\prime}\neq-\infty\) and \(t^{\prime}\geq 1\)do\(\triangleright\) loop for \(t^{-}\geq t^{\prime}\geq 1\) 10:\(T^{+}_{cur}\leftarrow\textsc{read\_adjacency}(M_{out},w^{-},t^{\prime})\) 11:\(T^{+}_{cur}[w^{+}]\leftarrow\min(T^{+}_{cur}[w^{+}],T^{+}[w^{+}])\) for \(w^{+}\in[1,n]\) 12:if\(T^{+}_{cur}\) has not changed then 13:break 14:write_adjacency(\(M_{out},w^{-},t^{\prime},T^{+}_{cur}\)) 15:\(t^{\prime}\gets t^{\prime}-1\) 16:for\(w^{+}\) from \(1\) up to \(n\)do\(\triangleright\) will update \(M_{in}\) with new journeys to \(w^{+}\) 17:\(t^{\prime\prime}\gets T^{+}[w^{+}]\) 18:while\(t^{\prime\prime}\neq\infty\) and \(t^{\prime\prime}\leq\tau+\delta\)do\(\triangleright\) loop for \(t^{+}\leq t^{\prime\prime}\leq\tau+\delta\) 19:\(T^{-}_{cur}\leftarrow\textsc{read\_adjacency}(M_{in},w^{+},t^{\prime\prime})\) 20:\(T^{-}_{cur}[w^{-}]\leftarrow\max(T^{-}_{cur}[w^{-}],T^{-}[w^{-}])\) for \(w^{-}\in[1,n]\) 21:if\(T^{-}_{cur}\) has not changed then 22:break 23:write_adjacency(\(M_{in},w^{+},t^{\prime\prime},T^{-}_{cur}\)) 24:\(t^{\prime\prime}\gets t^{\prime\prime}+1\) ``` **Algorithm 1**add_contact(u,v,t) 
Figure 2: Addition of new contacts to a temporal graph and the corresponding maintenance of the arrays \(M_{out}\) and \(M_{in}\).
this information. For instance, the successor vertex of a trivial journey from a contact \((u,v,t)\) is the vertex \(v\) since it is the first successor of \(u\). Thus, while composing previous r-tuples in our update algorithm, one would need to read the previous reachability information and compose them appropriately, considering also the successor vertex present in each cell of \(M_{in}\). Algorithm 2 gives the details to process the reconstruct_journey(\(u\),\(v\),\(t_{1}\),\(t_{2}\)) query. Its goal is to reconstruct a journey by unfolding the intervals and successor fields. In line 1, it initializes an empty journey \(\mathcal{J}\). In line 2, it retrieves the earliest timestamp \(t^{+}\) at which a journey from vertex \(u\) departing at timestamp \(t_{1}\) can arrive at vertex \(v\) by reading on disk the entry \(M_{out}[u,t_{1},v]\). If \(t^{+}\leq t_{2}\), it starts reconstructing the resulting journey; otherwise, it returns an empty journey since there is no journey entirely within the interval \([t_{1},t_{2}]\). From lines 4 to 10, it reconstructs the resulting journey by: first, in lines 4 and 5, initializing the successor vertex \(succ\) to \(u\) and accessing on disk all the entries \(M_{in}[v,t^{+},w]\) for \(w\in V\); then, from lines 6 to 10, the journey is reconstructed by iteratively accessing the next departure timestamp \(t^{-}\) and the corresponding successor vertex \(next\_succ\) that reaches \(v\) at timestamp \(t^{+}\), while concatenating the contact \((succ,next\_succ,t^{-})\) at the end of \(\mathcal{J}\) and updating the current successor vertex. ``` 1:\([t_{1},t_{2}]\subset\mathcal{T},u,v\in V\) with \(u\neq v\) 2:\(\mathcal{J}\leftarrow\{\}\) 3:\((t^{+},\_)\leftarrow\textsc{read\_cell}(M_{out},u,t_{1},v)\) 4:if\(t^{+}\leq t_{2}\)then 5:\(succ\gets u\) 6:\(in\leftarrow\textsc{read\_adjacency}(M_{in},v,t^{+})\) 7:while\(succ\neq v\)do 8:\(t^{-}\gets in[succ].t\) 9:\(next\_succ\gets in[succ].succ\) 10:\(\mathcal{J}\leftarrow\mathcal{J}\cup(succ,next\_succ,t^{-})\) 11:\(succ\gets next\_succ\) 12:return\(\mathcal{J}\) ``` **Algorithm 2** reconstruct_journey(\(u\),\(v\),\(t_{1}\),\(t_{2}\)) **Theorem 3**.: _Algorithm 2 sequentially accesses \(O\left(n/B\right)\) pages on disk, where \(n\) is the number of vertices and \(B\) is the page size._ Proof.: The algorithm accesses one page by calling read_cell(\(M_{out},u,t_{1},v\)) in line 2. After that, it is known whether a journey exists or not. If a journey exists, it sequentially accesses \(O\left(n/B\right)\) pages by calling read_adjacency(\(M_{in},v,t^{+}\)) in line 5. The result of this call has all the information needed to reconstruct a valid journey. 
Finally, in the loop from line 6 to line 8, the algorithm extends the resulting journey by one contact at each iteration using information already in memory. Thus, the number of pages accessed is dominated by the call read_adjacency(\(M_{in},v,t^{+}\)). 
## 5 Experiments
In this section, we present experiments comparing our novel data structure based on sequential arrays with the approach we adapted from [4] using B\({}^{+}\)-trees as a replacement for self-balanced binary search trees (BSTs). Briefly, the approach introduced in [4] stores, in a matrix \(n\times n\), pointers to BSTs containing time intervals. In each BST, only non-redundant intervals are kept, _i.e._, those that do not contain another interval in the same tree. The authors proposed to use _join-based_ operations in order to remove sequences of non-redundant intervals in \(\log\tau\) time. These operations can be found in Appendix A for B\({}^{+}\)-trees. In the following, we present two experiments in Sections 5.1 and 5.2. In the first one, we inserted unsorted contacts from complete temporal graphs, incrementally, into both data structures using the operation add_contact(u,v,t). In the second one, we inserted shuffled contacts from real-world datasets. 
### Experiments with Synthetic Data
In this first experiment, we generated complete temporal graphs with the number of vertices fixed to 100 and varied the number of timestamps \(\tau\) from 10 to 10000. Then, we inserted their shuffled contacts in both data structures using the add_contact(u,v,t) operation. The time to preallocate and initialize the arrays on disk for our array-based data structure was not considered in the total time. We note that this extra cost can be high for large parameters; therefore, one should consider it whenever applicable. 
Figure 3: Cumulative wall-clock time to maintain data structures for reachability queries on synthetic data. We inserted shuffled contacts from complete temporal graphs into the data structures varying the number of timestamps \(\tau\) while fixing the number of vertices to 100. Red lines represent our novel data structure based on sequential arrays. Blue lines represent our adaptation of the approach introduced in [4] using B\({}^{+}\)-trees as self-balanced BSTs. 
Figure 3 shows the mean cumulative wall-clock time, averaged over 10 executions, to maintain both data structures as new unsorted contacts were inserted. We see that our novel data structure performs better for all configurations. Even though the worst-case complexity of our algorithm for the add_contact(u,v,t) operation is linear in \(\tau\) instead of logarithmic, it runs much faster using synthetic data. We attribute this behavior to the fact that, as new contacts are inserted, the probability of composing better r-tuples, _i.e._, journeys, decreases rapidly and, thus, our data structure updates on average only a few cells per contact insertion. Next, we argue why the run time of our algorithm decreases as contacts are added. Each pair of vertices \((u,v)\) is associated with a set \(\mathcal{I}\) containing intervals \([t^{-},t^{+}]\subseteq[1,\tau]\) in which \(u\) can reach \(v\) departing at \(t^{-}\) and arriving at \(t^{+}\). For a particular pair of vertices, when an algorithm inserts a new interval \(I\), all intervals \(I^{\prime}\) such that \(I\subseteq I^{\prime}\) can be safely removed since they become redundant. 
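The following minimal sketch makes this effect visible for a single pair of vertices (an in-memory stand-in for one row block of \(M_{out}\) with synthetic insertions; it only illustrates the pruning, not the disk algorithm): an insertion walks toward earlier departure times only while it still improves a cell, so later insertions tend to stop almost immediately.

```
import random

def insert_interval(best, depart, arrive):
    # best[t] holds the earliest known arrival when departing at time t for this pair;
    # best is non-decreasing in t, so we may stop as soon as nothing improves.
    touched = 0
    t = depart
    while t >= 0 and arrive < best[t]:
        best[t] = arrive
        touched += 1
        t -= 1
    return touched

tau = 1000
best = [float("inf")] * tau
random.seed(0)
intervals = [(d, d + random.randint(1, 50)) for d in random.sample(range(tau - 60), 200)]
touched_per_insert = [insert_interval(best, d, a) for d, a in intervals]
print(sum(touched_per_insert[:20]), sum(touched_per_insert[-20:]))  # early vs late work
```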
Our data structure organizes these intervals in the arrays \(M_{out}\) and \(M_{in}\), which fix, respectively, the left and right endpoints, and our update algorithm discards redundant intervals by updating their cells accordingly, following Definition 6. Consider the hierarchy of intervals illustrated in Figure 4(a) for \(\tau=4\). Each interval with length \(l\) is linked to the intervals with length \(l-1\) that it totally encloses. For example, interval \([0,5]\), with length \(5\), links to intervals \([0,4]\) and \([1,5]\), with length \(4\), because \([0,4]\subseteq[0,5]\) and \([1,5]\subseteq[0,5]\). Initially, all intervals are available for insertion in our data structure. When a new interval \([1,2]\) is inserted, as shown in Figure 4(b), all intervals that contain it, including itself, are no longer available for insertion. Our update algorithm conceptually removes these intervals by drawing left and right frontiers separating available and non-available intervals starting from \([1,2]\). For instance, intervals \([1,2]\) and \([0,2]\), which belong to the left frontier, are updated in \(M_{in}\) since they share the same right endpoint, and intervals \([1,2]\), \([1,3]\), \([1,4]\) and \([1,5]\), which belong to the right frontier, are updated in \(M_{out}\) since they share the same left endpoint. In this process, up to \(\tau\) cells are updated in both \(M_{in}\) and \(M_{out}\). Next, when a new interval \([3,5]\) is inserted, as shown in Figure 4(c), our algorithm must, again, draw the left and right frontiers starting from \([3,5]\); however, it does not need to advance previously drawn frontiers. In this case, only intervals \([3,5]\) and \([2,5]\), which belong to the left frontier, are updated in \(M_{in}\). In Figure 4(d), interval \([2,3]\) is inserted and the same process repeats. We see that as new intervals are inserted, the number of available intervals rapidly decreases. Thus, even though our algorithm has complexity \(O\left(n^{2}\tau/B\right)\), it can run much faster when considering a sequence of contact insertions since the number of cells to be updated reduces over time. Moreover, it is guaranteed that, for each new contact \((u,v,t)\), our algorithm will make unavailable for insertion every interval that is still available inside and at the frontiers starting from \([t,t+\delta]\) in the lowest level of the hierarchy associated with \((u,v)\). 
### Experiments with Real-World Datasets
In this second experiment, we downloaded small and medium real-world datasets available on [https://networkrepository.com/dynamic.php](https://networkrepository.com/dynamic.php), and preprocessed them using our script available on [https://bitbucket.com/luizufu/temporalgraph-datasets-preprocessing](https://bitbucket.com/luizufu/temporalgraph-datasets-preprocessing). During the preprocessing, we relabelled the vertices and shifted the timestamps of each dataset so that vertex identifiers were between \([1,n]\) and timestamp values started from \(1\). Then, we inserted the shuffled contacts of each dataset in both data structures using the add_contact(u,v,t) operation. We assumed that all used datasets represent temporal digraphs, and we used \(\delta=1\), _i.e._, traversing any contact takes one time unit. 
Figure 4: Illustration of the process performed by our update algorithm considering a fixed pair of vertices \((u,v)\) from a temporal graph with \(\tau=4\). 
Available intervals for insertion are colored in black, and invalidated intervals, _i.e._, intervals that should not be considered anymore by our update algorithm, are colored in different colors. Links represent the direct container relation between intervals with length \(l\) and intervals with length \(l-1\). 
Table 2 shows the mean wall-clock time, averaged over 10 executions, to insert all shuffled contacts of each dataset into both data structures. We see that our novel data structure performs better on the majority of datasets. However, for the largest datasets, copresence-LH10 and copresence-LyonSchool, the tree-based data structure performed better. Both datasets have high values for \(\tau\) and low density. This means that, as the density is very low, each insertion of a contact \((u,v,t)\) may trigger an initial update over arrays \(M_{out}\) and \(M_{in}\) that will touch many cells on disk. As in Figure 4(b), for most insertions, our update algorithm will draw left and right frontiers on the almost empty hierarchy associated with the pair of vertices \((u,v)\) starting from interval \([t,t+\delta]\). Therefore, in this case, the linear factor on \(\tau\) from the cost \(O\left(n^{2}\tau/B\right)\) of our update algorithm will have a bigger impact on the run time, since the sequence of insertions is not sufficiently long for our algorithm to benefit from later insertions. \begin{table} \begin{tabular}{l r r r r r r} \hline \hline dataset & \(n\) & \(\tau\) & contacts & density & **Array-Based** & **Tree-Based** \\ \hline aves-sparrow & 52 & 2 & 516 & 0.1 & \(0.01\pm 0\) & \(0.07\pm 0\) \\ aves-weaver & 445 & 23 & 1423 & 0.003 & \(0.19\pm 0\) & \(1.16\pm 0.01\) \\ aves-wildbird & 202 & 6 & 11900 & 0.05 & \(0.97\pm 0.01\) & \(9.52\pm 0.15\) \\ ant-colony1 & 113 & 41 & 111578 & 0.46 & \(25.84\pm 0.19\) & \(161.3\pm 1.03\) \\ ant-colony2 & 131 & 41 & 139925 & 0.2 & \(43.96\pm 0.49\) & \(261.98\pm 1.96\) \\ ant-colony3 & 160 & 41 & 241280 & 0.23 & \(89.37\pm 0.73\) & \(524.79\pm 6.71\) \\ ant-colony4 & 102 & 41 & 81599 & 0.19 & \(16.49\pm 0.12\) & \(104.62\pm 1.27\) \\ ant-colony5 & 152 & 41 & 194317 & 0.21 & \(166.93\pm 3.63\) & \(526.03\pm 144.32\) \\ ant-colony6 & 164 & 39 & 247214 & 0.24 & \(88.99\pm 0.93\) & \(608.87\pm 167.97\) \\ copresence-LH10 & 73 & 259181 & 150126 & 0.0001 & - & \(61.5\pm 0.53\) \\ copresence-LyonSchool & 242 & 117721 & 6594492 & 0.001 & - & \(14887.45\pm 1576.49\) \\ kilifi-within-households & 54 & 59 & 32426 & 0.19 & \(0.09\pm 0\) & \(0.21\pm 0\) \\ mammalia-primate & 25 & 19 & 1340 & 0.12 & \(0.09\pm 0\) & \(0.33\pm 0.01\) \\ mammalia-raccoon & 24 & 52 & 1997 & 0.06 & \(0.21\pm 0\) & \(0.44\pm 0.01\) \\ mammalia-voles-bhp & 1686 & 63 & 5324 & 0.00003 & \(13.76\pm 0.82\) & \(19.22\pm 0.24\) \\ mammalia-voles-kcs & 1218 & 64 & 4258 & 0.00004 & \(7.92\pm 0.27\) & \(11.78\pm 0.09\) \\ mammalia-voles-plj & 1263 & 64 & 3863 & 0.00003 & \(6.18\pm 0.23\) & \(10.68\pm 0.04\) \\ mammalia-voles-rob & 1480 & 63 & 4569 & 0.00003 & \(10.28\pm 0.41\) & \(15.09\pm 0.12\) \\ tortoise-bsv & 136 & 4 & 554 & 0.008 & \(0.01\pm 0\) & \(0.14\pm 0.01\) \\ tortoise-cs & 73 & 10 & 258 & 0.005 & \(0.01\pm 0\) & \(0.05\pm 0\) \\ tortoise-fi & 787 & 9 & 1713 & 0.0003 & \(0.15\pm 0\) & \(2.71\pm 0.01\) \\ trophallaxis-colony1 & 41 & 8 & 308 & 0.02 & \(0.02\pm 0\) & \(0.06\pm 0\) \\ trophallaxis-colony2 & 39 & 8 & 330 & 0.03 & \(0.02\pm 0\) & \(0.05\pm 0\) \\ \hline \hline \end{tabular} \end{table} Table 2: Total wall-clock time in seconds to insert all shuffled 
contacts from real-world datasets into both data structures for reachability queries, together with the number of vertices \(n\), the number of timestamps \(\tau\), the number of contacts, and the density of the temporal graph represented by each dataset. Values were rounded to two decimal places. Array-based refers to our novel data structure and tree-based refers to our implementation of the approach introduced in [4] using B\({}^{+}\)-trees as a replacement for BSTs. Executions that reached the time limit of 5 hours are marked with the symbol "-". 
## 6 Concluding remarks
We presented in this paper an incremental disk-based data structure to solve the dynamic connectivity problem in temporal graphs. Our data structure prioritizes query time, answering reachability queries by accessing only one page. Based on the ability to quickly retrieve reachability information among vertices inside time intervals, it can: insert contacts in a non-chronological order accessing \(O\left(n^{2}\tau/B\right)\) pages, where \(B\) is the size of disk pages; check whether a temporal graph is connected within a time interval accessing \(O\left(n^{2}/B\right)\) pages; and reconstruct journeys accessing \(O\left(n/B\right)\) pages. Our algorithms exploit the special features of non-redundant (minimal) reachability information, which we represent explicitly through the concept of expanded r-tuples. As in [4], the core of our data structure is essentially a collection of non-redundant r-tuples, whose size (and that of the data structure itself) cannot exceed \(O\left(n^{2}\tau\right)\). However, in our approach, all this space must be preallocated on disk. The benefit of our data structure is that its algorithms explicitly manage data sequentially and, therefore, it is more suitable for secondary memories in which random accesses are expensive. Further investigations could be done in the direction of improving the complexity of our update algorithm. Can add_contact(u,v,t) access fewer than \(O\left(n^{2}\tau/B\right)\) pages? Another direction could be designing efficient disk-based data structures for the decremental and the fully-dynamic versions of this problem. With _unsorted_ contact insertion and deletion, it seems to represent both a significant challenge and a natural extension of the present work, one that would certainly develop further our common understanding of temporal reachability. Finally, it could be worthwhile to investigate compression algorithms to reduce the space of our data structure and the number of pages accessed by our update algorithm. Specifically, we think that compression algorithms based on differences and run-length coding [10] could achieve a very high compression rate since the arrays \(M_{out}\) and \(M_{in}\) store repeating ordered values. The compression scheme could also solve the preallocation and initialization problem since all cells of \(M_{out}\) and \(M_{in}\) initially hold the same value, which is highly compressible. 
## Appendix A Join and split operations for B\({}^{+}\)-trees
Let each leaf node of a B\({}^{+}\)-tree contain an array \(K\) of keys of size \(N\) and a pointer to its next sibling. Let each non-leaf node contain an array of keys \(K\) of size \(M\) and an additional array of pointers \(C\) to child nodes of size \(M+1\). Additionally, assume that every node stores the height of the sub-tree it belongs to, a pointer to its leftmost leaf child and a pointer to its rightmost leaf child. 
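As a small illustration of the node layout assumed here (the field names are ours, not from a specific implementation):

```
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Leaf:
    keys: List[int] = field(default_factory=list)        # up to N keys
    next_leaf: Optional["Leaf"] = None                    # sibling pointer for range scans
    height: int = 0

@dataclass
class Internal:
    keys: List[int] = field(default_factory=list)         # up to M routing keys
    children: List[object] = field(default_factory=list)  # M + 1 child pointers
    height: int = 1
    leftmost_leaf: Optional[Leaf] = None                  # extra bookkeeping used below
    rightmost_leaf: Optional[Leaf] = None
```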
We note that we use these additional per-node data to simplify our algorithms and discussions. In a real implementation, only the root node (the tree itself) must maintain them during the insertion and update operations. Information regarding the rest of the nodes can be computed during the execution of the next algorithms without increasing complexities. 
### Join operation on B\({}^{+}\)-trees
Algorithm 3 performs the operation join for B\({}^{+}\)-trees. Given two B\({}^{+}\)-trees \(T_{left}\) and \(T_{right}\), such that keys present in \(T_{left}\) are smaller than keys in \(T_{right}\), it must merge both trees in order to create a new valid B\({}^{+}\)-tree \(T\) containing all keys present in \(T_{left}\) and \(T_{right}\). As B\({}^{+}\)-trees place leaf nodes at the same level, it simply inserts or shares the data present in the root node of the smaller tree into the appropriate node at the same height in the bigger tree. Then, it maintains the B\({}^{+}\)-tree invariants up to the root node of the changed bigger tree whenever necessary. First, in line 11, the algorithm sets the next sibling of the rightmost leaf of \(T_{left}\) to be the leftmost leaf of \(T_{right}\). Then, if \(\texttt{height}(T_{left})\geq\texttt{height}(T_{right})\), in lines 13 and 14, it adds \(T_{right}\) to \(T_{left}\) by calling \(\texttt{joinRight}(T_{left},T_{right})\) and returns \(T_{left}\); otherwise, in lines 16 and 17, it adds \(T_{left}\) to \(T_{right}\) by calling \(\texttt{joinLeft}(T_{left},T_{right})\) and returns \(T_{right}\). From lines 1 to 11, we detail the \(\texttt{joinRight}\) routine; the \(\texttt{joinLeft}\) routine is implemented symmetrically. In line 1, the algorithm descends \(T_{left}\) until reaching the rightmost node \(n_{left}\) at the same height as the root node of \(T_{right}\). If a single node of size \(B\) can fit the content of both \(n_{left}\) and the root node of \(T_{right}\), in line 5, it simply merges both nodes by adding to \(n_{left}\) the data present in the root node of \(T_{right}\). Otherwise, in line 7, it equally shares the data of both nodes, and, in line 8, it inserts into the parent of \(n_{left}\) a new key together with a pointer to the root node of \(T_{right}\). If the parent node has no space left to accommodate the new data, a node splitting routine must be invoked and this process can continue up to the root node of \(T_{left}\). Finally, if the algorithm needs to split the current root node of \(T_{left}\), then it creates a new root node, and, in this case, in line 10, it increments the height of \(T_{left}\) by one. 
``` 1:Two trees of intervals \(T_{left}\) and \(T_{right}\) 2:procedurejoinRight(\(T_{left},T_{right}\)) 3:\(n_{left}\leftarrow\texttt{descend\_right}(T_{left},\texttt{height}(T_{left})-\texttt{height}(T_{right}))\) 4:\(n_{right}\leftarrow\texttt{root}(T_{right})\) 5:if\(\texttt{size}(n_{left})+\texttt{size}(n_{right})\leq B\)then 6:\(n_{left}\leftarrow\texttt{merge}(n_{left},n_{right})\) 7:else 8:\(\texttt{share}(n_{left},n_{right})\) 9:\(\texttt{insert\_rec}(\texttt{parent}(n_{left}),\texttt{min\_key}(n_{right}),n_{right})\) 10:if new root node was created then 11:\(\texttt{height}(T_{left})\leftarrow\texttt{height}(T_{left})+1\) 12:\(\texttt{next\_leaf}(\texttt{rightmost\_leaf}(T_{left}))\leftarrow\texttt{leftmost\_leaf}(T_{right})\) 13:if\(\texttt{height}(T_{left})\geq\texttt{height}(T_{right})\)then 14:\(\texttt{joinRight}(T_{left},T_{right})\) 15:return\(T_{left}\) 16:else 17:\(\texttt{joinLeft}(T_{left},T_{right})\) 18:return\(T_{right}\) ``` **Algorithm 3**join **Theorem 4**.: _Algorithm 3 accesses \(O(|\texttt{height}(T_{left})-\texttt{height}(T_{right})|)\) pages in the worst case._ Proof.: In line 11, the algorithm accesses one page to set the next child node of the rightmost leaf node of \(T_{left}\). Then, it calls \(\texttt{joinRight}\) or \(\texttt{joinLeft}\) depending on the heights of \(T_{left}\) and \(T_{right}\), both accessing the same number of pages. Without loss of generality, assume that \(\texttt{height}(T_{left})\geq\texttt{height}(T_{right})\) and thus it calls the \(\texttt{joinRight}\) procedure. Then, at line 2, the algorithm accesses \(\texttt{height}(T_{left})-\texttt{height}(T_{right})\) pages while descending to the rightmost node of \(T_{left}\) at height \(\texttt{height}(T_{right})\). Next, if there is enough room to fit the data of both nodes being merged in a single node, it accesses \(O(1)\) pages and the algorithm ends. Otherwise, it accesses \(O(1)\) pages to share the content present in the considered nodes. Then, it accesses, again, \(O(\texttt{height}(T_{left})-\texttt{height}(T_{right}))\) pages in order to insert new key and pointer pairs up to the root of \(T_{left}\) in the worst case. Finally, if a new node is created, it accesses one more page to increment the height of \(T_{left}\) and the algorithm ends. 
### Split operation on B\({}^{+}\)-trees
Algorithm 4 performs the operation split for B\({}^{+}\)-trees. Given an interval key \(L\), it must split a tree \(T\) into two trees \(T_{left}\) and \(T_{right}\) such that all keys in \(T_{left}\) are smaller than \(L\) and all keys in \(T_{right}\) are greater than or equal to \(L\). To accomplish this task, it recursively descends \(T\) from the root node to the leaf node containing the biggest key less than \(L\) while partitioning nodes appropriately and, during the backward phase of the recursion, progressively building \(T_{left}\) and \(T_{right}\). During each recursive step, in line 1, the algorithm first finds the position \(k\) in the current root node such that \(K[k]\geq L\), where \(C[k]\) is the pointer that branches to the next child node for non-leaf nodes. If the current root node is a leaf, in line 3, it partitions the current node into two sub-trees: \(T_{left}\), containing a node with \(K[1\ldots k-1]\); and \(T_{right}\), containing a node with \(K[k\ldots N]\). Then, in line 4, it sets the next sibling of \(T_{left}\)'s root to \(nil\); in line 5, it sets the next sibling of \(T_{right}\) to the next sibling of \(T\)'s root; and, in line 6, it returns \((T_{left},T_{right})\). 
Note that no leaf node besides the affected ones needs to update the pointer to its next sibling, since the resulting trees reuse the previous linkage. Next, if the current node is a non-leaf, in line 7, the algorithm partitions the current node into three sub-trees: \(T_{left}\), containing a root node with \(K[1\ldots k-2]\) and \(C[1\ldots k-1]\); \(T_{child}\), containing a root node with \(K[k-1\ldots k]\) and \(C[k]\); and \(T_{right}\), containing a root node with \(K[k+1\ldots M]\) and \(C[k+1\ldots M+1]\). Additionally, whenever a sub-tree has only one pointer in its root node, its respective root node becomes the child pointed to by it in order to maintain the correct B\({}^{+}\)-tree layout. Then, in line 8, it calls the split algorithm itself passing \(T_{child}\) as parameter and obtaining two sub-trees \(T^{\prime}_{left}\) and \(T^{\prime}_{right}\) as the intermediate result. Finally, in line 9, it combines the intermediate result with the sub-trees of the current level by joining them appropriately. ``` 1:A tree of intervals \(T\) and a key interval L 2:\(k\leftarrow\texttt{find\_key\_position}(\texttt{root}(T),L)\) 3:if\(\texttt{root}(T)\) is a leaf then 4:\((T_{left},T_{right})\leftarrow\texttt{split\_leaf}(root(T),k)\) 5:\(\texttt{next\_leaf}(\texttt{root}(T_{left}))\gets nil\) 6:\(\texttt{next\_leaf}(\texttt{root}(T_{right}))\leftarrow\texttt{next\_leaf}(\texttt{root}(T))\) 7:return\((T_{left},T_{right})\) 8:\((T_{left},T_{child},T_{right})\leftarrow\texttt{split\_non\_leaf}(root(T),k)\) 9:\((T^{\prime}_{left},T^{\prime}_{right})\leftarrow\texttt{split}(T_{child},L)\) 10:return\((\texttt{join}(T_{left},T^{\prime}_{left}),\texttt{join}(T^{\prime}_{right},T_{right}))\) ``` **Algorithm 4**split **Theorem 5**.: _Algorithm 4 accesses \(O(\log_{B}(\tau))\) pages in the worst case, where \(\tau\) is the maximum number of keys in the tree._ Proof.: During each recursive step, Algorithm 4 needs to: (1) partition the current node being considered into at least two sub-trees; (2) change pointers to next leaf siblings, if the current node is a leaf; and (3) join the current sub-trees with the sub-trees resulting from the next recursive step. For (1), the algorithm accesses \(O(1)\) pages. In the worst-case scenario, if the root node of a sub-tree has only a single child, the algorithm reads this child in order to make it the new root node. For (2), the algorithm also accesses \(O(1)\) pages since only two leaf nodes are updated. For (3), the algorithm calls the join algorithm twice. Without loss of generality, consider only calls maintaining \(T_{left}\). From the node above the leaf node containing the split key up to the root of \(T\), the algorithm joins the current left sub-tree \(T_{left}\) with the intermediate left sub-tree \(T^{\prime}_{left}\) resulting from the previous iteration. At each iteration there is a \(\texttt{join}(T_{left},T^{\prime}_{left})\) call in which \(T_{left}\) is either empty or non-empty. In case \(T_{left}\) is empty, the algorithm pays nothing and, at the next iteration, the difference in height between \(T_{left}\) and \(T^{\prime}_{left}\) increases by one. In case \(T_{left}\) is non-empty, the algorithm pays the difference in height accumulated so far and, at the next iteration, the difference in height resets to one. Therefore, as the summation of all payments is at most the height of the tree, the algorithm accesses \(O(\log_{B}(\tau))\) pages while processing all join calls. 
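For intuition on how these two primitives are used by the tree-based baseline, the sketch below shows the composition that discards a whole run of keys at once; a plain sorted list stands in for the B\({}^{+}\)-tree purely to keep the example runnable, while Algorithms 3 and 4 give the page-efficient tree versions.

```
import bisect

def split(tree, key):
    """Return (keys < key, keys >= key)."""
    i = bisect.bisect_left(tree, key)
    return tree[:i], tree[i:]

def join(left, right):
    """Concatenate two key-disjoint, ordered trees."""
    return left + right

def remove_range(tree, lo, hi):
    """Remove every key k with lo <= k < hi: split twice, keep the outer parts, join them back."""
    left, rest = split(tree, lo)
    _, right = split(rest, hi)
    return join(left, right)

print(remove_range([1, 3, 4, 6, 8, 9], 3, 8))  # [1, 8, 9]
```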
Acknowledgements. This study was financed in part by Fundação de Amparo à Pesquisa do Estado de Minas Gerais (FAPEMIG) and the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001 - under the "CAPES PrInt program" awarded to the Computer Science Post-graduate Program of the Federal University of Uberlândia.
2303.07144
Real-Time Adaptive Abstraction and Approximation Using Validity Frames -- an Experience Report
Designing a Cyber-Physical System (CPS), including modeling the control components and services, is a challenging task. Using models and simulations during run-time is crucial for successfully implementing advanced control and prediction components. The complexity of designing an effective CPS system increases due to real-time constraints. Generating accurate predictions and making decisions using detailed models in various contexts is computationally demanding and complex to manage within the available computational resources. Employing approximated models and switching to the most suited model adaptively at run-time is an effective technique. But an approximated model is most probably not valid in all the different contexts the system will be in. This experience report uses the Validity Frame concept to enable adaptation at run-time. In each environment, some influencing factors are outside the model's control, but these properties influence the model's behavior. By defining Validity Frames, based on specific contexts and related models, we present a possible perspective to address the issue of selecting the most appropriate model in various contexts. Furthermore, we discuss the insights and lessons obtained and determine future challenges.
Raheleh Biglari, Joachim Denil
2023-03-09T14:42:52Z
http://arxiv.org/abs/2303.07144v2
# Real-Time Adaptive Abstraction and Approximation Using Validity Frames - An Experience Report 
Raheleh Biglari Joachim Denil Department of Electronics and ICT Faculty of Applied Engineering Cosys-lab, University of Antwerp Flanders Make@Uantwerpen Groenenborgerlaan 171, Antwerp, Belgium {raheleh.biglari, joachim.denil}@uantwerpen.be ###### Abstract Designing a Cyber-Physical System (CPS), including modeling the control components and services, is a challenging task. Using models and simulations during run-time is crucial for successfully implementing advanced control and prediction components. The complexity of designing an effective CPS system increases due to real-time constraints. Generating accurate predictions and making decisions using detailed models in various contexts is computationally demanding and complex to manage within the available computational resources. Employing approximated models and switching to the most suited model adaptively at run-time is an effective technique. But an approximated model is most probably not valid in all the different contexts the system will be in. This experience report uses the Validity Frame concept to enable adaptation at run-time. In each environment, some influencing factors are outside the model's control, but these properties influence the model's behavior. By defining Validity Frames, based on specific contexts and related models, we present a possible perspective to address the issue of selecting the most appropriate model in various contexts. Furthermore, we discuss the insights and lessons obtained and determine future challenges. **Keywords:** Cyber-Physical Systems (CPS), Model-Based Systems Engineering (MBSE), Validity Frames, Adaptive Approximation, Real-time. 
## 1 Introduction
In the design and development of software-intensive systems, particularly in the avionics and automotive industries, engineers are faced with the challenge of dealing with highly complex systems composed of various interrelated and deeply integrated components (Lee, 2008). These systems, known as Cyber-Physical Systems (CPS), typically operate in harsh conditions with rapidly fluctuating environmental conditions. In these dynamic environments, the ability to predict the behavior and performance of the system is often not guaranteed (Palumbo et al., 2017). To address the challenges presented by a changing environment, adaptivity is integrated into the design of CPSs to allow the system to respond to environmental changes. This adaptation must typically be performed in a decentralized manner due to the autonomous nature of many real-time CPSs, which increases the overall complexity of designing these adaptive systems (Kit et al., 2015). Cyber-Physical Systems, which integrate embedded control, mechanics, and networking, are typically developed using a model-based approach. This approach involves utilizing one or more physics-based models for designing and simulating the system. Since a CPS is a complicated system, its model also tends to be complicated. Additionally, these models are employed for control of the system at run-time. Complicated models impose a heavy computational load on the embedded devices that typically implement the control of the system. As such, implementing these computationally demanding models is challenging, possibly resulting in missed deadlines and increased costs. 
One way to balance improved performance and cost reduction in CPS is to use adaptive abstraction and approximation techniques (Franceschini et al., 2019; Franceschini et al., 2019; Biglari et al., 2022). Adaptive approximation specifically switches between multiple prediction models to achieve improved computational efficiency while maintaining an acceptable level of control performance. The adaptive approximation technique enables the utilization of a simpler, less detailed model with a lower computational cost. In this experience report, we applied the idea of utilizing the Validity Frame to ease the development of real-time Cyber-Physical Systems while using the run-time adaptive abstraction and approximation technique. We use the properties of interest and environment properties, which are known as influencing factors within a Validity Frame. Moreover, we exploit the relations between different Validity Frames to choose the most appropriate model from the library of models. The structure of this paper is as follows: section 2 provides the related work. Next, section 3 introduces our case study and details the adaptive approximation method applied to the system under study. Afterward, section 4 discusses the challenges encountered, lessons gained, and insights of this research. Finally, section 5 concludes with a summary and future research based on the insights gained. 
## 2 Related Work
This section describes related work for both adaptive abstraction and approximation and validity frames. 
### Adaptive Abstraction and Approximation
Due to the complexity of CPSs, CPS models also possess similar complexity. Utilizing models with heavy computational requirements is challenging as it can cause missed deadlines. To avoid computationally expensive models, there is a possibility of eliminating the reasoning on a set of properties within a model. This is known as a more abstract model. As the model removes certain properties, it is typically also less computationally expensive to simulate. An alternative way to exclude factors from a model is the approximation of resolution, which means removing detail while retaining the set of properties one can reason on with the model. As such, a more approximate model uses a simpler model to approximate the results of a more complicated model, assuming the difference between the two is insignificant for the results of the simulation (Frantz and Ellor, 1996). It is possible to have multiple abstracted and approximated models. To switch between these models at run-time, we benefit from the idea of adaptation by (Mathieu et al., 2018; Franceschini et al., 2019; Franceschini et al., 2019) that dynamically switches between abstractions during simulation or execution. Therefore, the adaptive approximation technique allows for the use of a simpler, less detailed model with a lower computational cost. Allowing adaptivity at run-time is one technique for resolving the conflict between better performance and reduced cost in CPS. However, for this adaptation between the different models, we need to establish that these models are substitutable in a specific situation. Figure 1 represents the conceptual foundation from (Biglari et al., 2022) for reasoning about run-time adaptive approximation in a dynamic situation. This conceptual foundation is based on (Barroca et al., 2014) that combines the use of ontology and language engineering. A simulation of a model (\([[.]]\)) produces a trace. 
Figure 1: Conceptual Foundation for Model Approximation Technique from (Biglari et al., 2022). 
By analyzing this trace, we can use a function \((f())\) to calculate the system's prediction value related to a logical property. This prediction value is used in the decision-making process to make a decision. To reason about an allowed approximation of a model, we must consider the goal models, decision-making processes, and context. The sensitivities of the logical property in relation to decision-making in a specific context and for a specific goal must be mapped to enable this reasoning. The same reasoning process is applied when a different, more approximate decision-making model is used in a specific context. We explore the impact of approximating a model on cost reduction as a foundation for a framework that enables real-time system adaptation. This framework allows for swapping models with more approximate versions based on the available library of models. Utilizing the insights from the conceptual framework, Biglari et al. proposed an architecture for adaptive approximation in real-time systems. This architecture, shown in Figure 2, is based on the MAPE-K architecture. MAPE-K is a high-level control loop for self-adaptive systems from IBM (Kephart and Chess, 2003). An approximated model is most probably not valid in all the different contexts the system will be in. Consequently, the question arises: how can we perform the adaptive approximation and abstraction at run-time and find the most appropriate model for a specific context? We use the Validity Frame concept to enable the adaptive approximation at run-time. 
### Validity Frames
The concept of frames has been present for some time, dating back to the early 1980s when the original idea of "experimental frames" was introduced by (Zeigler, 1984). (Klikovits et al., 2017) addressed the structure of modeling frames, which depends on the activity performed, and this work describes why different activities require different frames. Extending the work of (Klikovits et al., 2017), (Van Acker et al., 2021) formalizes the Validity Frame concept to specify the range-of-validity of a model. A Validity Frame (\(VF\)) contains all necessary data and processes for determining the appropriate usage of the model. A \(VF\) is defined as a structure: \[VF=<S_{M},\pi_{M},\gamma_{M},map_{\pi}\to S_{M},map_{\gamma}\to S_{M}, SPEC_{exe},Val_{M},\alpha_{val}> \tag{1}\] Where: \(S_{M}\) is the model structure, which defines the set of model components and their relationships; \(\pi_{M}\) is the set of modeled properties; \(\gamma_{M}\) is the set of influencing factors captured within the \(VF\), which enables reasoning about the utilization of the model in terms of what properties the model can provide valid answers for and under which influencing factors. Properties of the environment related to the system under study that can potentially influence a model's behavior and/or the range of validity concerning a particular property, even though they are outside of the model's control, are referred to as "influencing factors" (Mierlo et al., 2020). \(map_{\pi}\to S_{M}\) and \(map_{\gamma}\to S_{M}\) explicitly map the properties and influencing factors onto the model's implementation. \(SPEC_{exe}\) is the specification of the simulation environment, the simulator, or the specification of the embedded platform (\(SPEC_{MBD}\)) on which M is executed. Model validity, \(Val_{M}\), is a crucial part of the \(VF\), allowing one to reason about how well model M reflects the real-world counterpart \(Sys\). 
Last, the set of validation activities \(\alpha_{val}\) includes a set of validation conditions \(v\), used to validate or invalidate the behavior of M in known or unknown contexts. A \(VF\) is used to explicitly identify the influence factors and range-of-validity for a model and to provide methods and processes to ensure the model accurately represents the source system. This includes methods for calibrating the model, experiment design for validation, validation metrics, etc. The purpose of validity frames is to clearly express the contexts in which a model generates valid results for a specific set of properties concerning a real-world system. To convey different contexts, specific properties of the environment must be considered within our system.

Figure 2: Real-Time Adaptive Abstraction and Approximation Architecture from (Biglari et al., 2022).

## 3 Adaptive Abstraction and Approximation Using Validity Frames

In this section, we present the process we followed to implement the adaptive approximation technique using \(VF\)s in our case study.

### Case study: Lane Changing

We use a highway lane change system as a case study to demonstrate our idea. A lane-changing system controls the movement of the ego (target) car in both the longitudinal (forward/backward) and lateral (left/right) directions. Depending on the environmental properties, the ego car changes or follows the lane. This research presents a lane changing scenario depicted in Figure 3. We refer to this illustration throughout the paper to demonstrate our concepts and findings. In this example, there are three cars, the ego car (Ego), the middle car (MC), and the front car (FC), which have the velocities \(v_{ego}\), \(v_{mc}\), and \(v_{fc}\) respectively. Moreover, \(v_{fc}<v_{mc}<v_{ego}\).

Figure 3: Scenario.

To predict the positions of the different cars, the simulation can use several models with different levels of abstraction and approximation. The first uses the kinematic equation \(x(t)=\frac{1}{2}at^{2}+vt+x_{0}\). In this formula, \(a\) is the vehicle's acceleration in \(m/s^{2}\), \(v\) the velocity of the vehicle in \(m/s\), and \(x\) the vehicle's position. This is the original model. However, it is possible to use a more simplified model, obtained by removing the acceleration term. We call this the approximated model, which is less detailed but has a lower computational cost (a minimal sketch of these two models is given below). Both models assume that the cars do not change lanes during the prediction of their positions. In (Biglari and Denil, 2022), adaptive approximation is applied to a similar lane changing case study using these two models: a controller needs to select the most appropriate model at run-time between the original and the approximated model. However, both models reason on the same set of properties and have the same environmental context, namely that the cars do not change lanes. Hence, selecting one valid model within the specific context is essential during model selection, and the concept of a Validity Frame is highly beneficial in this regard. The scenario in this paper adds a further feature, a turning or blinking light. A blinking light indicates the driver's intention to change lanes. In our running example, blinking is the influencing factor of interest: an environmental factor that is outside of the model's control but impacts model selection. We also added a much more computationally expensive model.
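The original and approximated single-car prediction models above can be written down directly. The sketch below is illustrative only; the `approximation_error` helper is a hypothetical addition showing why the cheaper model can only be trusted over a limited prediction horizon.

```python
def predict_original(x0: float, v: float, a: float, t: float) -> float:
    """Original model: constant-acceleration kinematics x(t) = 1/2*a*t^2 + v*t + x0."""
    return 0.5 * a * t ** 2 + v * t + x0

def predict_approximated(x0: float, v: float, t: float) -> float:
    """Approximated model: the acceleration term is removed (constant velocity)."""
    return v * t + x0

def approximation_error(a: float, t: float) -> float:
    """The error of the cheaper model grows quadratically with the prediction horizon."""
    return abs(0.5 * a * t ** 2)
```

The much more computationally expensive third model mentioned above is described next.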
This model simulates the lane changing behaviour of the different cars as well not only the ego car, and as such is valid both in the blinking and non-blinking light context. In this model, we simulate every car and considered each of them as an ego car, and run the simulation to perceive the behaviour of the system. ### Adaptation using a Validity Frame Our validity frame defines the different contexts of the system based on _influencing factors_ in which the original model and approximated models are valid. For applying the adaptation with frames, two challenges arise: **Challenge 1 - Properties of the environment and frames:**: Influencing factors are those properties of the environment that influence a model's behavior and are quantified. We must identify these properties as factors that exert a significant impact. Once the properties are properly identified, characterised, a validity frame needs to be created. We will not focus on the aspect of building a frame but assume that proper techniques are available for creating the frame. **Challenge 2 - Run-time selection of models:**: Having different models available, both abstracted, approximated and combinations, requires model management techniques. Organising these models is required. Furthermore, selecting the correct model at run-time requires proper data-structures and algorithms. ### Design-time Model Organisation As a starting point, we use the Validity Frame Graph (VFG) proposed by Van Acker in (Van Acker et al., 2021). The validity graph encodes the relations between different models based on their validity. It is therefore classified as a mega-model (Favre, 2006). Mega-modeling, as defined in (Favre, 2006), is the practice of creating a model that illustrates the global relationships between various modeling artifacts without focusing on their specific content. This is precisely what a Validity Frame Graph (VFG) does, capturing the abstract relationships between interrelated Validity Frames and their containing models without considering their specific modeling details. However, in our case, the data-structure is quite naive, in a sense that all models are represented as a Vertex within the graph. Edges between the vertices encode how properties are removed, added, or the range of properties is changed between the models. The organisation in this data-structure results in a fully connected graph. Figure 4 depicts an example of \(VFG\). This figure shows five interrelated Validity Frames and corresponding models, which create a Validity Frame Graph. It visualizes the relationships between various models using the sets of properties and factors of influence, \(\pi\), and \(\gamma\). In the \(VFG\), instances that depict the same \(\pi\) and \(\gamma\) are connected by an abstraction relation represented by a solid arrow. If an instance represents only a portion of the \(\pi\) and/or \(\gamma\), it is connected to the corresponding instance via a view decomposition relation represented by a dashed arrow (Van Acker et al., ). Using the \(VFG\) technique, we define and classify models within a Validity Frame. Figure 5 illustrates the \(VFG\) for this Highway Lane change scenario. In Figure 5, \(\pi\) is the set of properties and \(\gamma\) is the set of influencing factors. \(VF_{d}\) is the head of the \(VFG\), and the contained model, \(M_{d}\) which is the most detailed model. 
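To make the design-time organisation more concrete, the sketch below shows one possible in-memory encoding of validity frames and their graph. The class names (`ValidityFrame`, `ValidityFrameGraph`) and fields are hypothetical simplifications of the formal structure in Equation (1): only the properties \(\pi\), the influencing factors \(\gamma\), and the typed relations between frames are kept.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class ValidityFrame:
    name: str
    properties: Set[str]            # pi: properties the contained model can answer for
    influencing_factors: Set[str]   # gamma: environment properties the model accounts for
    model: object = None            # the contained model M

@dataclass
class ValidityFrameGraph:
    frames: Dict[str, ValidityFrame] = field(default_factory=dict)
    # edges as (source, target, relation), e.g. ("VF_d", "VF_a", "abstraction")
    edges: List[Tuple[str, str, str]] = field(default_factory=list)

    def add_frame(self, vf: ValidityFrame) -> None:
        self.frames[vf.name] = vf

    def relate(self, src: str, dst: str, relation: str) -> None:
        self.edges.append((src, dst, relation))

    def candidates(self, needed_props: Set[str], active_factors: Set[str]) -> List[ValidityFrame]:
        """Frames whose model covers the needed properties under the active influencing factors."""
        return [vf for vf in self.frames.values()
                if needed_props <= vf.properties and active_factors <= vf.influencing_factors]
```

With the frames organised this way, the abstraction and approximation relations of Figure 5 can be stated explicitly.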
The following relations hold between the contained models of the VFs:

* \(M_{a}=Abstract(M_{d})\)
* \(M_{a_{1}}=Approximate(M_{a})\)
* \(M_{b}=Abstract(M_{d})\)
* \(M_{b_{1}}=Approximate(M_{b})\)

\(M_{a}\) is the abstracted version of \(M_{d}\) and \(M_{a1}\) is the approximated model based on \(M_{a}\). Similarly, \(M_{b}\) is the abstracted model of \(M_{d}\), and \(M_{b1}\) is the approximated model based on \(M_{b}\).

Figure 4: Validity Frame Graph example from (Van Acker et al., ).

Figure 5: Validity Frame Graph for the highway lane change scenario.

### Run-time Model Organisation: Decision Tree

As hinted in the previous section, a fully connected graph is not ideal for selecting models at run-time. As such, another data-structure is needed. We utilize a decision tree to select the most suitable models for our system in a specific context, as depicted in Figure 6. The organisation in a decision tree also requires a significant amount of domain knowledge: the designer needs to select the order in which the properties and influence factors appear in the tree, or possibly combine them within a single split of the tree. Selecting a set of models with the decision tree has time complexity \(O(depth)\). Using a fully connected graph instead requires choosing between Breadth First Search (BFS) and Depth First Search (DFS), whose time complexity is \(O(|V|+|E|)\), where \(V\) is the number of nodes and \(E\) is the number of edges. Therefore, the decision tree is computationally less expensive than BFS or DFS (Cormen et al., 2022). We select the model at _run-time_ but build the decision tree at _design time_ to avoid increasing the run-time computational cost. In Figure 6, we traverse from the root to the leaf nodes according to the context. Afterward, we employ the response surface graph for the model selection method suggested by Biglari and Denil. The approximated model introduces uncertainty, and this uncertainty cannot be allowed to grow arbitrarily large; the controller may therefore only switch to the approximated model if the uncertainty stays within a bound. In (Biglari and Denil, 2022), the response surface graph shows how to find this bound and where swapping between models is allowed. Algorithm 1 shows the pseudocode for selecting from the decision tree's leaf nodes and deciding whether to use the original or the approximated model (an illustrative sketch of this selection logic is given below). The algorithm compares nextTrajectories, which correspond to different decisions; for example, the ego car may decide to change lanes or not, and if so, when and where. If the simulations lead to the same decision, we use the approximated model, which has a lower computational cost. Finally, to ensure that the model tree is optimized for real-time performance, we follow the MAPE-K control loop in Figure 2, and constraints are applied to eliminate infeasible solutions. As this is a real-time system, not all possible solutions are suitable; solutions that cannot meet deadlines are pruned.

Figure 6: Decision Tree of Highway Lane Change Scenario.

### Experiments and Results of the Adaptation

In our running example scenario in Figure 3, the velocities satisfy \(v_{fc}<v_{mc}<v_{ego}\). We show the simulation results for different contexts. Figure 7 shows the simulation result in the non-blinking context. Accordingly, we add the influencing factor of blinking; Figure 8 shows the simulation result in the blinking context.
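The pseudocode of Algorithm 1 is not reproduced in this extract, so the sketch below is a hedged reconstruction of the selection logic as it is described in the text: traverse the decision tree according to the current context, then keep the cheaper model only if it leads to the same decision as the original one. All identifiers (`select_frames`, `choose_model`, `decision_from`) are hypothetical.

```python
from typing import Dict

def select_frames(tree: dict, context: Dict[str, bool]) -> list:
    """Traverse the decision tree from the root to a leaf according to the current context."""
    node = tree
    while "factor" in node:
        node = node["yes"] if context.get(node["factor"], False) else node["no"]
    return node["frames"]  # frames (and their models) valid in this context

def choose_model(original, approximated, scenario, simulate, decision_from):
    """Algorithm-1-style check: only keep the cheaper model if it yields the same decision."""
    if decision_from(simulate(approximated, scenario)) == decision_from(simulate(original, scenario)):
        return approximated   # same conclusion, e.g. the same next trajectory / lane decision
    return original
```

Returning to the simulation results of Figures 7 and 8: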
The optimal trajectory that the ego car follows is depicted by the arrowed line, while the other lines represent alternative trajectories that are deemed inappropriate. The simulation results show that in the blinking context (Figure 8), where the ego car knows that MC intends to change lanes, the ego car keeps its lane. In the non-blinking context (Figure 7), the result is different: the ego car decides to change lanes. Additionally, our scenario involves both an original and an approximated model, and the simulation determines whether either model leads to a differing conclusion. Using Algorithm 1 and the response surface graph, if there is no differing conclusion and the next trajectories are the same, the approximated model is considered suitable, as it has a lower computational cost. Logically, using an approximated model instead of the original one reduces the computational cost, since approximate models have a simpler structure with fewer parameters compared to more detailed models.

Figure 7: Non-Blinking Context Simulation.

Figure 8: Blinking Context Simulation.

## 4 Discussion

In section 3, we described the experiments and simulations we performed to reduce the computational cost while using the run-time adaptive approximation technique. Consequently, we chose the most appropriate model from our library of models and interrelated Validity Frames for our system under study. In this section, we discuss some of these steps, presenting lessons learned and highlighting additional questions raised and identified challenges:

**Challenge A: A library of related models' data-structure.**: We use a library of related models. Accordingly, we need to find out what such a library of related models looks like. In our case, we used the validity frame graph at design time and the decision tree at run-time. Multiple models are created for a single system, _Sys_. By clearly defining the validity of each model within its Validity Frame (\(VF\)), a Validity Frame Graph (\(VFG\)) (Van Acker et al., ) can visualize the relationships between the models. To overcome challenge A, we also need to solve some sub-challenges.

**Encoding relations between properties.**: There are three types of properties in a system (Van Acker et al., ): system properties, properties of interest, and influencing factors (environment properties).

* System properties: related to the real-world entity. They are part of the possibly infinite set of properties required to grasp all possible contexts and facets of the considered real-world entity. In practice, one cannot fully specify this property set, as it is impossible to capture all of an object's facets in all its possible contexts.
* Properties of interest: the set of properties of interest gives rise to the abstraction relation between model M and system Sys: a model only provides (correct) answers with respect to these properties, while other properties are abstracted away. More specifically, the implemented set of properties of model M is a subset of the system properties.
* Influencing factors: the set of influence factors can also influence the abstraction relation between model M and system Sys: a model is only supposed to provide answers concerning the modeled properties under the modeled influence factors, while other influence factors are abstracted away. More specifically, the implemented set of influence factors of the model is a subset of the system properties.

Therefore, properties of interest and influencing factors are the properties that we consider.
After bringing out properties in our example are distance and blinking, the next step is finding the relation between these properties. How to encode this relation is still the question. **Order in the property set.**: There seems to be an intuitive concept on ordering in the set of models that exist. However, the ordering can only be partial as we cannot really compare different properties and their ranges to each other and make a conclusion if the model has a bigger validity. As such a partially ordered set or poset might be a better visualisation. In a poset, each element can be compared to each other element and results in a bigger, smaller, equal or incomparable result. However, when we create relations per property, we get a fully ordered relation. Although these options still need more thinking, handling multiple properties is still one of the unsolved challenges. **Challenge B: How to select a model at run-time to use in a specific situation?**: We can not use all our models at run-time in all different situations (context). Then two questions pop up. **Search Algorithm.**: We use Decision Tree so the time complexity is \(O(depth)\). The designer must decide on the arrangement of properties and impact factors in the tree or combine them into a single splitting. There are some splitting decision tree methods to choose from. **Translation beforehand to a data-structure**: CPS is a real-time system, so we can not do everything at run-time and tend to increase the computational cost at run-time. To avoid the overhead at run-time, we propose to do translation beforehand to a data-structure at design time that is more feasible. For instance, making a decision tree at design time. **Challenge C: Can we automatically convert a mega-model to a decision tree (or related data-structure)?**: Currently, converting a mega-model to a decision tree is performed manually as we fully understand our system, but this may not be the case in other environments and systems. As previously mentioned, the order of the decisions and organisation of properties and influence factors within the decision tree affects its run-time performance. The challenge is to automatically create a decision tree that is optimal to use at run-time. Figure 6 represents our first idea of implementing the decision tree. In this data structure, level 1 of the tree includes influencing factors. The nodes would be sorted down as the number of branches in level 1 is less than level 2 of the tree, which is depicted in Figure 9. For building the decision tree, we utilize properties of interest and influencing factors, and there are relationships between these properties. However, there is a need to optimise the organisation of the tree. For this, there is a need for domain knowledge about which contexts are accessed often. A _heat map_ is a graphical representation of data to visualize the relationship between variables. Heat maps might help discover the feature importance of each property in the tree. A heat map of feature importance provides a clear view of which properties have the greatest impact and which play a less significant role. Figure 10 represents the heatmap where the possibility of non-braking/non-blinking context is more than other ones. Furthermore, it is essential to show that the resulting tree can be used in all different contexts as the mega-model. However, this does not mean that all models in the mega-model should be present in the run-time data structure. This is closely related to the next challenge. 
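One way such a design-time translation could look is sketched below: order the splitting variables by how often each influencing factor is observed to be active (the heat-map idea above) and recursively build a decision tree over the frame library, reusing the hypothetical `ValidityFrame` objects sketched earlier. The resulting tree has the same shape as the one assumed by the earlier selection sketch. This is only an assumed construction, not the paper's implementation.

```python
from collections import Counter
from typing import Dict, List

def order_factors_by_usage(observed_contexts: List[Dict[str, bool]], factors: List[str]) -> List[str]:
    """Place the most frequently active influencing factors closer to the root."""
    usage = Counter({f: 0 for f in factors})
    for ctx in observed_contexts:
        for f in factors:
            usage[f] += int(ctx.get(f, False))
    return [f for f, _ in usage.most_common()]

def build_decision_tree(factors: List[str], frames, context: Dict[str, bool] = None) -> dict:
    """Split on one influencing factor per level; leaves keep the frames valid in that context."""
    context = dict(context or {})
    if not factors:
        active = {k for k, v in context.items() if v}
        return {"frames": [vf for vf in frames if active <= vf.influencing_factors]}
    factor, rest = factors[0], factors[1:]
    return {"factor": factor,
            "yes": build_decision_tree(rest, frames, {**context, factor: True}),
            "no":  build_decision_tree(rest, frames, {**context, factor: False})}
```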
Figure 9: Decision Tree Representation.

Figure 10: Heat Map.

**Challenge D: Which surrogate models to create?** It is possible to create a set of surrogate models starting from a single model. Typically, the smaller the context, the less computational effort is required to simulate the model. However, creating a surrogate model, either by hand or computationally, requires a tremendous amount of effort. Balancing the number of models and their operating range will be needed. As such, guidelines and heuristics will need to be created to assist designers in making appropriate decisions. Again, domain knowledge about often-used contexts is important to see which surrogates to create for the model at design time. Information from the operation phase of the system might be needed to make optimised decisions; digital twins are needed for such a setup. This experience report acknowledges that validation is a crucial step in modeling. However, for this study, we proceed with the assumption that the models have been thoroughly validated and are functioning as intended. In our running example, we used blinking as the influencing factor of interest to illustrate the validity frame concept. There are other influencing factors that we could add to our validity frame, for example, traffic, road, and weather conditions.

## 5 Conclusions

Real-time adaptive approximation in Cyber-Physical Systems is a method used to reduce the computational cost of running models. However, an approximated model is not valid in all the various contexts the system may operate in. In this experience report, we applied the adaptive approximation technique using the Validity Frame concept in our case study, and utilized Validity Frames to enable adaptation at run-time in different contexts. In this work, we used the properties of interest and the environment properties known as influencing factors within a validity frame, together with the relations between different validity frames, to choose the most appropriate model from our library of models. We also discussed the identified challenges, lessons learned, and additional questions raised.

## Acknowledgments

Raheleh Biglari is funded by the BOF fund at the University of Antwerp.
2305.15816
DDDM-VC: Decoupled Denoising Diffusion Models with Disentangled Representation and Prior Mixup for Verified Robust Voice Conversion
Diffusion-based generative models have exhibited powerful generative performance in recent years. However, as many attributes exist in the data distribution and owing to several limitations of sharing the model parameters across all levels of the generation process, it remains challenging to control specific styles for each attribute. To address the above problem, this paper presents decoupled denoising diffusion models (DDDMs) with disentangled representations, which can control the style for each attribute in generative models. We apply DDDMs to voice conversion (VC) tasks to address the challenges of disentangling and controlling each speech attribute (e.g., linguistic information, intonation, and timbre). First, we use a self-supervised representation to disentangle the speech representation. Subsequently, the DDDMs are applied to resynthesize the speech from the disentangled representations for denoising with respect to each attribute. Moreover, we also propose the prior mixup for robust voice style transfer, which uses the converted representation of the mixed style as a prior distribution for the diffusion models. The experimental results reveal that our method outperforms publicly available VC models. Furthermore, we show that our method provides robust generative performance regardless of the model size. Audio samples are available https://hayeong0.github.io/DDDM-VC-demo/.
Ha-Yeong Choi, Sang-Hoon Lee, Seong-Whan Lee
2023-05-25T07:59:03Z
http://arxiv.org/abs/2305.15816v1
DDDM-VC: Decoupled Denoising Diffusion Models with Disentangled Representation and Prior Mixup for Verified Robust Voice Conversion ###### Abstract Diffusion-based generative models have exhibited powerful generative performance in recent years. However, as many attributes exist in the data distribution and owing to several limitations of sharing the model parameters across all levels of the generation process, it remains challenging to control specific styles for each attribute. To address the above problem, this paper presents decoupled denoising diffusion models (DDDMs) with disentangled representations, which can control the style for each attribute in generative models. We apply DDDMs to voice conversion (VC) tasks to address the challenges of disentangling and controlling each speech attribute (e.g., linguistic information, intonation, and timbre). First, we use a self-supervised representation to disentangle the speech representation. Subsequently, the DDDMs are applied to resynthesize the speech from the disentangled representations for denoising with respect to each attribute. Moreover, we also propose the prior mixup for robust voice style transfer, which uses the converted representation of the mixed style as a prior distribution for the diffusion models. The experimental results reveal that our method outperforms publicly available VC models. Furthermore, we show that our method provides robust generative performance regardless of the model size. Audio samples are available 3. Footnote 3: [https://hayeong0.github.io/DDDM-VC-demo/](https://hayeong0.github.io/DDDM-VC-demo/) ## 1 Introduction Denoising diffusion models (Ho et al., 2020; Dhariwal and Nichol, 2021; Song et al., 2021) have achieved significant success in image generation tasks (Ramesh et al., 2022; Saharia et al., 2022). Diffusion models have also attracted increasing interest in the audio domain in recent years, owing to their ability to synthesize high-quality speech (e.g., Mel-spectrogram and audio). Various applications employ diffusion models, such as text-to-speech (TTS) (Popov et al., 2021; Kim et al., 2022, 2022), neural vocoder (Kong et al., 2021; Chen et al., 2021; Huang et al., 2022), speech enhancement (Han and Lee, 2022), and voice conversion (VC) (Liu et al., 2021; Popov et al., 2022). Although diffusion models have achieved success in most speech applications owing to their powerful generative performance, there remains room for improvement in conventional diffusion models. As data include many attributes, it is difficult to control specific styles for each attribute with a single denoiser that shares the model parameters across all levels of generation process. To reduce this burden in the image generation domain, eDiff-i (Balaji et al., 2022) subdivides the single denoiser into multiple specialized denoisers that originate from the single denoiser progressively according to specific iterative steps. However, a limitation still exists in controlling each attribute within entirely the same conditioning framework for every iteration, which results in a lack of controllability. To address the above issues, we first present decoupled denoising diffusion models (DDDMs) with disentangled representations. As illustrated in Figure 1, we disentangle the denoiser into specific attribute-conditioned denoisers to improve the model controllability for each attribute. Subsequently, each denoiser focuses on the noise from its own attribute at the same noise level and removes the noise at each intermediate time step. 
To demonstrate the effectiveness of DDDMs, we focus on the VC tasks that still face challenges in disentangling and controlling each speech attribute (Choi et al., 2021). VC is a task for transferring or controlling the voice style while maintaining the linguistic information. As speech consists of various attributes such as linguistic information, intonation, rhythm, and timbre, it remains challenging to transfer the voice style in zero/few-shot scenarios (Lee et al., 2022). Based on the DDDMs, we present DDDM-VC which can effectively transfer and control the voice style for each attribute. We first utilize the self-supervised representation to disentangle the speech representation based on the source-filter theory (Fant, 1970). Subsequently, we resynthesize the speech for each attribute from the disentangled representation using DDDMs. We also propose the prior mixup, a novel verified robust voice style transfer training scenario that uses the converted speech as a prior distribution for the diffusion model that is generated from the mixed speech representation, and restores the source speech. Thus, although DDDM-VC is trained by reconstructing the source speech, the prior mixup can reduce the train-inference mismatch problem for VC tasks. We demonstrate that DDDMs can effectively transfer the voice style even with lower model parameters compared to the state-of-the-art VC model (Popov et al., 2022). Furthermore, the experimental results reveal the effectiveness of speaker adaptation in the zero/one-shot scenarios. The main contributions of this study are as follows: * We propose decoupled denoising diffusion models (DDDMs), which can effectively control the style for each attribute in generative models by decoupling attributes and adopting the disentangled denoisers. * To demonstrate the effectiveness of DDDMs, We present DDDM-VC, which can disentangle and resynthesize speech for each attribute with self-supervised speech representation. Furthermore, we propose a prior mixup to improve voice style transfer performance. * Our model provides better performance in both many-to-many and zero-shot voice style transfer compared with the state-of-the-art VC model. We can also successfully adapt to novel voice with a single sample. ## 2 Related Works: Voice Conversion The aim of VC is to convert the source speaker voice into the target speaker voice while preserving the linguistic (content) information (Yi et al., 2020). For this purpose, it is critical to decompose the linguistic, timbre, and pitch information such as intonation. Many VC methods have been presented with the goal of disentangling the speech representation by decomposing the speech into various components: two methods: (1) information bottleneck and (2) information perturbation. An information bottleneck is used to constrict the information flow through a carefully designed bottleneck. AutoVC (Qian et al., 2019) presented a method for inducing the disentanglement of the content and timbre by restricting the dimension of the latent vector between the encoder and decoder. Figure 1: Speech synthesis in DDDMs and the standard diffusion model. Whereas a single denoiser is used for all denoising steps in standard diffusion models, we subdivide the denoiser into multiple denoisers for each attribute using a self-supervised representation. Each denoiser focuses on removing the single noise from its own attribute in each intermediate time step. 
Subsequently, F0-AutoVC (Qian et al., 2020a) takes the idea from the constraints of layer dimension (Qian et al., 2019), conditioning the decoder with normalized F0. (Qian et al., 2021; Lee et al., 2021) presents a similarity-based information bottleneck for content and style disentanglement. However, in these models, the heuristic determination of appropriate bottleneck size is inevitable, which is directly related to the model performance, and it may even differ for each dataset. Information perturbation approaches have been proposed to overcome the above limitation. The basic concept of information perturbation is to remove unnecessary information for each speech representation through signal processing before feeding it to the network. SpeechFlow (Qian et al., 2020b) and NANSY (Choi et al., 2021) adopt perturbation methods for the input waveforms and encourage the encoded feature to be correctly removed so that only content information remains. Recently, (Lee et al., 2022c; Popov et al., 2022) extracted speaker-irrelevant linguistic information using phoneme information. These models can simply execute content information disentanglement, and convert the speech with accurate pronunciation using explicit phoneme information; however, phoneme information must be extracted in advance and phoneme-level downsampling may induce the loss of content information. Recent studies have utilized a self-supervised speech representation as the linguistic content representation, (Polyak et al., 2021; Choi et al., 2021; Huang et al., 2021, 2022b) for the disentanglement of speech components. Despite the significant advances in VC, there are still certain limitations. The information loss that occurs when disentangling speech representations results in degradation of the synthesized speech quality in terms of both the audio and speaker adaptation quality. Therefore, in this study, we focus on synthesizing high-quality speech by disentangling speech representations appropriately and restoring lost information by using diffusion models. Although (Popov et al., 2022) employed diffusion models to VC tasks, their method still exhibited limitations in pronunciation and speaker adaptation quality of converted speech. To solve these problems, we present DDDMs and the prior mixup. The details of our methods are described in the following sections. ## 3 Decoupled Denoising Diffusion Models ### Background: Diffusion Models Denoising diffusion models have significantly improved various generative tasks such as image generation (Ramesh et al., 2022; Rombach et al., 2022), image inpainting (Saharia et al., 2022; Lugmayr et al., 2022), and audio generation (Chen et al., 2021; Kong et al., 2021; Huang et al., 2022a). These models typically consist of a forward process that gradually adds random noise, and a reverse process that progressively removes random noise and restores the original sample. Unlike the original diffusion model that uses a discrete-time diffusion process by Markov chains (Ho et al., 2020), the score-based generative model uses a stochastic differential equation (SDE)-based continuous-time diffusion process (Song et al., 2021). The stochastic forward process is defined as follows: \[d\mathbf{x}=f(\mathbf{x},t)dt+g(t)d\mathbf{w}, \tag{1}\] where \(f(.,t)\) is the drift coefficient of \(\mathbf{x}(t)\), \(g(\mathbf{t})\) is the diffusion coefficient, and \(\mathbf{w}\) denotes the Brownian motion. 
The reverse-time SDE can be expressed as: \[d\mathbf{x}=[f(\mathbf{x},t)-g^{2}(t)\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x })]dt+g(t)d\bar{\mathbf{w}}, \tag{2}\] where \(\bar{\mathbf{w}}\) is the Brownian motion for the time flowing in backward, and \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\) represents the score-function. To estimate \(\mathbf{s}_{\theta}(\mathbf{x},t)\simeq\nabla_{\mathbf{x}}\log p_{t}(\mathbf{ x})\), the score-based diffusion model is trained with score matching objective: \[\theta^{*}=\operatorname*{arg\,min}_{\theta}\mathds{E}_{t}\Big{\{}\lambda_{t} \mathds{E}_{\mathbf{x_{0}}}\mathds{E}_{\mathbf{x_{t}}|\mathbf{x_{0}}}\big{[} \|s_{\theta}(\mathbf{x}_{t},t)-\nabla_{\mathbf{x}_{t}}\log p_{t|0}(\mathbf{x}_ {t}\mid\mathbf{x_{0}})\|_{2}^{2}\big{]}\Big{\}}. \tag{3}\] ### Disentangled Denoiser To effectively control the style for each attribute in generative models, we propose decoupled denoising diffusion models (DDDMs) with multiple disentangled denoisers. Although an ensemble of diffusion models was presented in (Balaji et al., 2022), only a single expert is used at the specific denoising step in this method. In contrast, we investigate the decomposition of diffusion models in a the general diffusion process, which employs a single denoiser, we subdivide the denoiser into \(N\) denoisers with disentangled representations. Following the use of data-driven priors in (Popov et al., 2022), we use a disentangled representation of an attribute \(Z_{n}\) as the prior for each attribute denoiser. Therefore, the forward process can be expressed: \[dX_{n,t}=\frac{1}{2}\beta_{t}(Z_{n}-X_{n,t})dt+\sqrt{\beta_{t}}dW_{t}\;, \tag{4}\] where \(n\in[1,N]\), \(n\) denotes each attribute, \(N\) is the total number of attributes, \(\beta_{t}\) regulates the amount of stochastic noise and \(W_{t}\) is the forward Brownian motion. Reverse trajectories exist for the given forward SDE of each attribute (4). The reverse process of each disentangled denoiser can be defined as follows: \[d\hat{X}_{n,t}=\Bigg{(}\frac{1}{2}(Z_{n}-\hat{X}_{n,t})-\sum_{n=1}^{N}s_{\theta _{n}}(\hat{X}_{n,t},Z_{n},t)\Bigg{)}\beta_{t}dt+\sqrt{\beta_{t}}d\bar{W}_{t}, \tag{5}\] where \(t\in[0,1]\), \(s_{\theta_{n}}\) represents the score function of each attribute \(n\) parameterized by \(\theta_{n}\) and \(\bar{W}_{t}\) denotes the backward Brownian motion. The forward process (4) that generates a noisy sample \(X_{n,t}\) with each prior attribute \(n\) is as follows: \[p_{t|0}(X_{n,t}|X_{0})=\mathcal{N}\left(e^{-\frac{1}{2}\int_{0}^{t}\beta_{s}ds }X_{0}+\left(1-e^{-\frac{1}{2}\int_{0}^{t}\beta_{s}ds}\right)Z_{n},\left(1-e^{ -\int_{0}^{t}\beta_{s}ds}\right)\mathrm{I}\right), \tag{6}\] where \(\mathrm{I}\) is the identity matrix. The distribution (6) is Gaussian, thus we have the following equation: \[\nabla\log p_{t|0}(X_{n,t}|X_{0})=-\frac{X_{n,t}-X_{0}(e^{-\frac{1}{2}\int_{0} ^{t}\beta_{s}ds})-Z_{n}(1-e^{-\frac{1}{2}\int_{0}^{t}\beta_{s}ds})}{1-e^{- \int_{0}^{t}\beta_{s}ds}}. \tag{7}\] The reverse process (5) is trained by optimizing the parameter \(\theta_{n}\) using the following objective: \[\theta_{n}^{*}=\operatorname*{arg\,min}_{\theta_{n}}\int_{0}^{1}\lambda_{t} \mathds{E}_{X_{0},X_{n,t}}\bigg{\|}\sum_{n=1}^{N}s_{\theta_{n}}(X_{n,t},Z_{n},s,t)-\nabla\mathrm{log}\,p_{t|0}(X_{n,t}|X_{0})\bigg{\|}_{2}^{2}dt, \tag{8}\] where \(\theta=[\theta_{1},\cdots,\theta_{N}]\) and \(\lambda_{t}=1-e^{-\int_{0}^{t}\beta_{s}ds}\). 
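To make the forward kernel (6), the conditional score (7), and the objective (8) more concrete, the following PyTorch-style sketch shows what one training step for \(N\) decoupled denoisers could look like. It is an illustrative simplification under stated assumptions (a generic `beta_integral(t)` returning \(\int_0^t\beta_s\,ds\), broadcastable tensor shapes, and the score networks `score_nets` given as callables conditioned on a style vector `spk`), not the authors' implementation; the same Gaussian noise is shared across attributes, as described for DDDM-VC later in the paper.

```python
import torch

def dddm_loss(x0, priors, score_nets, spk, t, beta_integral):
    """Score-matching loss of Eq. (8) with one shared noise sample across the N attributes."""
    decay = torch.exp(-0.5 * beta_integral(t))     # e^{-1/2 int beta}
    var = 1.0 - torch.exp(-beta_integral(t))       # 1 - e^{-int beta}
    eps = torch.randn_like(x0)
    score_sum = 0.0
    for z_n, s_n in zip(priors, score_nets):       # one (prior Z_n, denoiser s_theta_n) per attribute
        mean = decay * x0 + (1.0 - decay) * z_n    # Eq. (6): mean interpolates x0 toward Z_n
        x_nt = mean + var.sqrt() * eps             # noisy sample for this attribute
        score_sum = score_sum + s_n(x_nt, z_n, spk, t)
    target = -eps / var.sqrt()                     # Eq. (7): analytic conditional score
    return (var * (score_sum - target).pow(2)).mean()   # weighting lambda_t = 1 - e^{-int beta}
```

Sampling would then run the reverse SDE of Equation (5), where the drift at every step uses the sum of the attribute-specific scores.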
Furthermore, we derive fast sampling using the ML-SDE solver (Popov et al., 2022), which maximizes the log-likelihood of forward diffusion with the reverse SDE solver. We extend DDDMs to DDDM-VC to control the voice style for each attribute in the following section. In addition, we show that DDDMs can be applied to audio mixing by leveraging multiple denoisers to blend the sound and speech with the desired balance in Appendix H.

## 4 DDDM-VC

DDDM-VC consists of a source-filter encoder and a source-filter decoder, as illustrated in Figure 2. We first disentangle the speech using self-supervised speech representations as in subsection 4.1. Thereafter, we use these disentangled speech representations to control each attribute and to generate high-quality speech with the proposed disentangled denoiser as explained in subsection 4.2. Furthermore, we propose the prior mixup for a robust voice conversion scenario in subsection 4.3.

Figure 2: Overall framework of DDDM-VC

### Speech Disentanglement

**Content Representation.** To extract the content representation relating to the phonetic information, we utilize self-supervised speech representations. Unlike (Polyak et al., 2021), which utilizes the discrete representation of audio from HuBERT, we use a continuous representation of audio from XLS-R, which is Wav2Vec 2.0 trained with a large-scale cross-lingual speech dataset. Furthermore, before being fed to the filter encoder, the audio is perturbed to remove the content-independent information following (Choi et al., 2021). As (Lee et al., 2022) demonstrated that the representation from the middle layer of XLS-R contains substantial linguistic information, we adopt this representation as the content representation.

**Pitch Representation.** Following (Polyak et al., 2021), we extract the fundamental frequency (F0) from the audio using the YAPPT algorithm (Kasi and Zaharian, 2002) to encode the intonation, such as the speaker-irrelevant speaking style. The F0 from each sample is normalized per speaker for speaker-independent pitch information, and a VQ-VAE is used to extract the vector-quantized pitch representation. For a fair comparison, we normalize the F0 for each sentence, not for each speaker, during inference.

**Speaker Representation.** VC transfers the voice style, and our goal is to achieve robust zero-shot voice style transfer from novel speakers. To this end, we use a style encoder (Min et al., 2021) that can extract the speaker representation from the Mel-spectrogram of the target speech. The extracted speaker representation is averaged per sentence for a global speaker representation, and fed to all encoders and decoders for speaker adaptation.

### Speech Resynthesis

**Source-filter Encoder.** In this work, we simply define the speech attributes according to the source-filter theory (Fant, 1970). The filter encoder takes the content and speaker representations, whereas the source encoder takes the pitch and speaker representations. Previously, (Lee et al., 2022) demonstrated that the data-driven prior in the diffusion process can simply guide the starting point of the reverse process. (Popov et al., 2022) adopted an average phoneme-level Mel encoder for voice conversion with a data-driven prior. However, this method requires a text transcript to extract the phoneme-level average Mel-spectrogram and a pre-trained average Mel-encoder, and the smoothed Mel representation results in mispronunciation.
To achieve a substantially more detailed prior, we use the entirely reconstructed source and filter Mel-spectrograms, \(Z_{src}\) and \(Z_{ftr}\) which are regularized by the target Mel-spectrogram \(X_{mel}\) as follows: \[\mathcal{L}_{rec}=\|X_{mel}-(Z_{src}+Z_{ftr})\|_{1}, \tag{9}\] where \[Z_{src}=E_{src}(pitch,s),\ Z_{ftr}=E_{ftr}(content,s). \tag{10}\] It is worth noting that the disentangled source and filter Mel-spectrograms from the disentangled representations are simply converted with different speaker representation \(s\). Thus, we utilize the converted source and filter Mel-spectrogram as each prior in each denoiser for VC. Source-filter DecoderWe utilize disentangled denoisers for the source and filter representations based on our DDDMs. The source decoder takes a source representation \(Z_{src}\) as a prior and the filter Figure 3: (a) Speech resynthesis from disentangled speech representations (training). (b) Voice conversion from converted speech representations (inference). (c) Prior mixup for better speaker adaptation quality. To reduce the train-inference mismatch problem, the decoder also learns to convert the randomly converted representations into input speech during training. decoder takes a filter representation \(Z_{ftr}\) as a prior. Subsequently, each denoiser is trained to generate a target Mel-spectrogram from each prior with the same noise, which is conditioned on a speaker representation. Each denoiser can focus on removing the single noise from its own attribute. The forward process is expressed as: \[dX_{src,t}=\frac{1}{2}\beta_{t}(Z_{src}-X_{src,t})dt+\sqrt{\beta_{t}}dW_{t}, \tag{11}\] \[dX_{ftr,t}=\frac{1}{2}\beta_{t}(Z_{ftr}-X_{ftr,t})dt+\sqrt{\beta_{t}}dW_{t}, \tag{12}\] where \(t\in[0,1]\), \(X_{src,t}\) and \(X_{ftr,t}\) are the generated noisy samples with each prior attribute (i.e., source-related and filter-related attribute respectively). For the given forward SDE of each attribute (11) and (12), there exist reverse trajectories. The reverse process is expressed as: \[d\hat{X}_{src,t}=\left(\frac{1}{2}(Z_{src}-\hat{X}_{src,t})-s_{\theta_{src}} (\hat{X}_{src,t},Z_{src},s,t)\right.-s_{\theta_{tr}}(\hat{X}_{ftr,t},Z_{ftr}, s,t)\left)\beta_{t}dt+\sqrt{\beta_{t}}d\bar{W}_{t}, \tag{13}\] \[d\hat{X}_{ftr,t}=\left(\frac{1}{2}(Z_{ftr}-\hat{X}_{ftr,t})-s_{\theta_{tr}}( \hat{X}_{ftr,t},Z_{ftr},s,t)\right.-s_{\theta_{src}}(\hat{X}_{src,t},Z_{src},s,t)\left)\beta_{t}dt+\sqrt{\beta_{t}}d\bar{W}_{t}, \tag{14}\] where \(s_{\theta_{src}}\) and \(s_{\theta_{ftr}}\) denote the score function parameterized by \(\theta_{src}\) and \(\theta_{ftr}\) respectively. ### Prior Mixup Although the speech can be disentangled into several attributes and resynthesized with high-quality using the self-supervised representation and diffusion processes, we still train the model by only reconstructing or using the input speech as the target speech in both the reconstruction and diffusion processes, which induces the train-inference mismatch problem. In non-parallel voice conversion scenario, the ground-truth of the converted speech does not exist; Thus, the model is trained only by reconstructing the source speech. However, as we convert the source speech with a different voice style for VC, we shift our focus from reconstruction to conversion even in the training scenario. 
To achieve this, we propose a prior mixup in the diffusion process, which uses the randomly converted representation instead of the reconstructed representation as a prior distribution as illustrated in Figure 3c. Specifically, because the source-filter encoder can also be trained to reconstruct a source and filter of speech from the disentangled representation, the converted source and filter can be obtained with the randomly selected speaker style \(s_{r}\) as follows: \[Z_{src,r}=E_{src}(pitch,s_{r}),\ Z_{ftr,r}=E_{ftr}(content,s_{r}). \tag{15}\] Subsequently, the randomly converted source and filter, \(Z_{src,r}\) and \(Z_{ftr,r}\) are used as the prior for each denoiser as below: \[dX_{src,t}=\frac{1}{2}\beta_{t}(Z_{src,r}-X_{src,t})dt+\sqrt{\beta_{t}}dW_{ t}\, \tag{16}\] \[dX_{ftr,t}=\frac{1}{2}\beta_{t}(Z_{ftr,r}-X_{ftr,t})dt+\sqrt{\beta_{t}}dW_{t} \tag{17}\] The reverse process for the given forward SDE of each attribute (16) and (17) is expressed as: \[d\hat{X}_{src,t}=\left(\frac{1}{2}(Z_{src,r}-\hat{X}_{src,t})-s_{\theta_{src} }(\hat{X}_{src,t},Z_{src,r},s,t)-s_{\theta_{tr}}(\hat{X}_{ftr,t},Z_{ftr,r},s, t)\right)\beta_{t}dt+\sqrt{\beta_{t}}d\bar{W}_{t}, \tag{18}\] \[d\hat{X}_{ftr,t}=\left(\frac{1}{2}(Z_{ftr,r}-\hat{X}_{ftr,t})-s_{\theta_{tr}}( \hat{X}_{ftr,t},Z_{ftr,r},s,t)-s_{\theta_{src}}(\hat{X}_{src,t},Z_{src,r},s, t)\right)\beta_{t}dt+\sqrt{\beta_{t}}d\bar{W}_{t}. \tag{19}\] Hence, the prior mixup can alleviate the train-inference mismatch problem as the model is trained to convert the converted speech into the source speech even when reconstructing the source speech. Moreover, the voice style can be adapted in the source-filter decoder when the source-filter encoder may not execute VC effectively during inference. The entire model, including the style encoder, source-filter encoder, and decoder without pre-trained XLS-R and F0 VQ-VAE, is jointly trained in an end-to-end manner with Equation (8) for each attribute and Equation (9). Training ObjectivesAs described in section 4.2, the reconstruction loss \(\mathcal{L}_{rec}\) (9) is used to regulate the encoder output for the data-driven prior of diffusion models. The reverse SDE of the source attribute (18) and filter attribute (19) is trained with the neural network \(\theta_{src}\) and \(\theta_{ftr}\) to approximate the gradient of the log-density of noisy data \(X_{t}\). Each attribute network is parameterized using the following objective: \[\theta^{*}=\operatorname*{arg\,min}_{\theta}\int_{0}^{1}\lambda_{t}\mathds{E }_{X_{0},X_{t}}\bigg{\|}\Big{(}s_{\theta_{src}}(X_{src,t},Z_{src,r},s,t)+s_ {\theta_{ftr}}(X_{ftr,t},Z_{ftr,r},s,t)\Big{)}-\nabla\!\log p_{t|0}(X_{t}|X_{0}) \bigg{\|}_{2}^{2}\!dt, \tag{20}\] where \(\theta=[\theta_{src},\theta_{ftr}]\). Hence, the diffusion loss can be expressed as the following: \[\mathcal{L}_{diff}=\mathds{E}_{X_{0},X_{t}}\lambda_{t}\left[\bigg{\|}\bigg{(} s_{\theta_{src}}(X_{src,t},Z_{src,r},s,t)+s_{\theta_{ftr}}(X_{ftr,t},Z_{ftr,r},s,t) \bigg{)}-\nabla\!\log p_{t|0}(X_{t}|X_{0})\bigg{\|}_{2}^{2}\right]. \tag{21}\] The final objectives for DDDM-VC can be defined as follows: \[\mathcal{L}_{total}=\mathcal{L}_{diff}+\lambda_{rec}\mathcal{L}_{rec}\, \tag{22}\] where we set \(\lambda_{rec}\) to 1. ## 5 Experiment and Result ### Experimental Setup DatasetsWe used the large-scale multi-speaker LibriTTS dataset to train the model. The _train-clean-360_ and _train-clean-100_ subsets of LibriTTS, which consist of 110 hours of audio samples for 1,151 speakers, were used for training. 
Thereafter, we evaluated VC performance on LibriTTS and VCTK dataset for many-to-many and zero-shot VC scenarios. For zero-shot cross-lingual voice conversion scenarios, we used the CSS10 dataset which includes 10 different languages. PreprocessingWe resampled the audio from the sampling rate of 24,000 Hz to 16,000 Hz using the Kaiser-best algorithm of torchaudio Python package. We use the downsampled audio waveform as the input for XLS-R (0.3B) (Babu et al., 2022) to extract the self-supervised speech representation. For the target speech and the input of speaker encoder, we used log-scale Mel-spectrogram with 80 bins. To map the time frames between the self-supervised representation and Mel-spectrogram without any interpolation, Mel-spectrogram was transformed with hop size of 320, window size of 1280, and 1280-point Fourier transform. TrainingFor reproducibility, we attached the source code of DDDM-VC in the Supplementary materials. We trained DDDM-VC using the AdamW optimizer (Loshchilov and Hutter, 2019) with \(\beta_{1}=0.8\), \(\beta_{2}=0.99\), and weight decay \(\lambda=0.01\), and applied the learning rate schedule with a decay of \(0.999^{1/8}\) at an initial learning rate of \(5\times 10^{-5}\). We train all models including ablation study with a batch size of 64 for 200 epochs. Architecture details are described in Appendix A. For prior mixup, we mixed the speaker representation using binary selection between the original and shuffled representations in the same batch. For zero-shot voice conversion, we did not fine-tune the model. For one-shot speaker adaptation, we fine-tuned the model with only one sentence of novel speakers for 500 steps with optimizer initialization and an initial learning rate of \(2\times 10^{-5}\). We used the pre-trained Vocoder to convert the Mel-spectrogram into waveform. For vocoder, we used HiFi-GAN V1 (Kong et al., 2020) as an generator, and we used multi-scale STFT-based discriminators (MS-STFTD) of EnCodec (Defossez et al., 2022) which use a complex-valued STFT with real and imaginary components. ### Evaluation Metrics Subjective MetricsWe measured the mean opinion score (MOS) for the speech naturalness and speaker similarity in VC tasks. At least 20 listeners rated each sample from the source and converted speech on a scale of 1 to 5 for the speech naturalness MOS (nMOS). At least 20 listeners rated the target and converted speech on a scale of 1 to 4 for the speaker similarity MOS (sMOS). Objective MetricsWe calculated the character error rate (CER) and word error rate (WER) using Whisper (Radford et al., 2022) which is public available automatic speech recognition (ASR) model4 with large-scale multi-lingual and multitask supervision for the content consistency measurement. We evaluated the equal error rate (EER) of automatic speaker verification (ASV) model (Kwon et al., 2021), which is trained with large-scale speech recognition dataset, VoxCeleb2 (Chung et al., 2018) for the speaker similarity measurement. Furthermore, we determined the speaker encoder cosine similarity (SECS) for the additional similarity measurement. As VCTK provided a paired utterance per speaker, we also evaluated the Mel-cepstral distortion (MCD). We produced all possible pairs from the converted and target speech (400\(\times\)20 = 8,000), and calculated all the evaluation metrics. Footnote 4: [https://github.com/openai/whisper](https://github.com/openai/whisper). 
We used a large model of Whisper with 1,550M parameters, and used a presented text normalizer before calculating the CER and WER. ### Many-to-Many Voice Conversion We performed the many-to-many VC task with seen speakers during the training, and compared our models with various VC models. As indicated in Table 1, DDDM-VC-Small also outperformed the other models in all subjective and objective metrics without ASR results. Although VoiceMixer had a lower CER and WER, it had a lower voice style transfer performance in terms of the EER and SECS. Furthermore, we compared the converted speech generated with 6 and 30 iterations to evaluate the performance with fast sampling. Although the objective results of the model with 6 iterations were better than those of the model with 30 iterations, the model with 30 iterations achieved better performance in both the nMOS and sMOS evaluations. Thus, the audio quality was perceptually improved and the generated samples had better diversity with the stochastic iterative processes. \begin{table} \begin{tabular}{l|c|c c|c c|c c|c} \hline \hline Method & iter. & nMOS (\(\uparrow\)) & sMOS (\(\uparrow\)) & CER (\(\downarrow\)) & WER (\(\downarrow\)) & EER (\(\downarrow\)) & SECS (\(\uparrow\)) & Params. (\(\downarrow\)) \\ \hline GT & - & 3.82\(\pm\)0.05 & 3.44\(\pm\)0.03 & 0.54 & 1.84 & - & - & - \\ GT (Mel + Vocoder) & - & 3.81\(\pm\)0.05 & 3.23\(\pm\)0.05 & 0.60 & 2.19 & - & 0.986 & 13M \\ \hline AutoVC (Qian et al., 2019) & - & 3.62\(\pm\)0.05 & 2.44\(\pm\)0.04 & 5.34 & 8.53 & 33.30 & 0.703 & 30M \\ VoiceMixer (Lee et al., 2021) & - & 3.75\(\pm\)0.05 & 2.74\(\pm\)0.05 & 2.39 & 4.20 & 16.00 & 0.779 & 52M \\ SR (Polyak et al., 2021) & - & 3.62\(\pm\)0.05 & 2.55\(\pm\)0.04 & 6.63 & 11.72 & 33.30 & 0.693 & 15M \\ \hline DiffVC (Popov et al., 2022) & 6 & 3.77\(\pm\)0.05 & 2.72\(\pm\)0.05 & 7.28 & 12.80 & 10.50 & 0.817 & 123M \\ DiffVC (Popov et al., 2022) & 30 & 3.77\(\pm\)0.05 & 2.77\(\pm\)0.05 & 7.99 & 13.92 & 11.00 & 0.817 & 123M \\ DDDM-VC-Small (Ours) & 6 & 3.75\(\pm\)0.05 & 2.75\(\pm\)0.05 & 3.25 & 5.80 & 6.25 & 0.826 & 21M \\ DDDM-VC-Small (Ours) & 30 & **3.79\(\pm\)0.05** & **2.81 \(\pm\)0.05** & 4.25 & 7.51 & 6.25 & 0.827 & 21M \\ DDDM-VC-Base (Ours) & 6 & 3.75\(\pm\)0.05 & 2.75\(\pm\)0.05 & **1.75** & **4.09** & **4.00** & 0.843 & 66M \\ DDDM-VC-Base (Ours) & 30 & **3.79\(\pm\)0.05** & 2.80\(\pm\)0.05 & 2.60 & 5.32 & 4.24 & **0.845** & 66M \\ \hline \hline \end{tabular} \end{table} Table 1: Many-to-many VC results on seen speakers from LibriTTS dataset \begin{table} \begin{tabular}{l|c|c c|c c|c c} \hline \hline Method & iter. 
& nMOS (\(\uparrow\)) & sMOS (\(\uparrow\)) & CER (\(\downarrow\)) & WER (\(\downarrow\)) & EER (\(\downarrow\)) & SECS (\(\uparrow\)) & MCD\({}_{13}\) (\(\downarrow\)) \\ \hline GT & - & 4.28\(\pm\)0.06 & 3.87\(\pm\)0.03 & 0.21 & 2.17 & - & - & - \\ GT (Mel + Vocoder) & - & 4.03\(\pm\)0.07 & 3.82\(\pm\)0.03 & 0.21 & 2.17 & - & 0.989 & 0.67 \\ \hline AutoVC (Qian et al., 2019) & - & 2.49\(\pm\)0.09 & 1.88\(\pm\)0.08 & 5.14 & 10.55 & 37.32 & 0.715 & 5.01 \\ VoiceMixer (Lee et al., 2021) & - & 3.43\(\pm\)0.08 & 2.63\(\pm\)0.08 & 10.8 & 3.31 & 20.75 & 0.797 & 4.49 \\ SR (Polyak et al., 2021) & - & 2.58\(\pm\)0.10 & 2.03\(\pm\)0.07 & 2.12 & 6.18 & 27.24 & 0.750 & 5.12 \\ \hline DiffVC (Popov et al., 2022) & 6 & 3.48\(\pm\)0.07 & 2.62\(\pm\)0.08 & 5.82 & 11.76 & 25.30 & 0.786 & 4.82 \\ DiffVC (Popov et al., 2022) & 30 & 3.62\(\pm\)0.07 & 2.50\(\pm\)0.07 & 6.92 & 13.19 & 24.01 & 0.785 & 5.00 \\ DDDM-VC-Small (Ours) & 6 & 3.76\(\pm\)0.07 & 2.99\(\pm\)0.07 & 1.27 & 3.77 & 6.51 & 0.852 & **4.39** \\ DDDM-VC-Small (Ours) & 30 & 3.84\(\pm\)0.06 & 2.96\(\pm\)0.07 & 1.95 & 4.70 & 6.89 & 0.851 & 4.55 \\ DDDM-VC (Ours) & 6 & 3.74\(\pm\)0.07 & 2.98\(\pm\)0.07 & **1.00** & **3.49** & **6.25** & 0.856 & 4.42 \\ DDDM-VC (Ours) & 30 & **3.88\(\pm\)0.06** & **3.05\(\pm\)0.07** & 1.77 & 4.35 & 6.49 & **0.858** & 4.54 \\ \hline DDDM-VC-Fine-tuning (Ours) & 6 & 3.74\(\pm\)0.07 & 3.07\(\pm\)0.07 & 1.26 & 3.80 & 0.81 & 0.910 & 4.27 \\ DDDM-VC-Fine-tuning (Ours) & 30 & 3.86\(\pm\)0.07 & 3.06\(\pm\)0.07 & 1.87 & 4.63 & 0.82 & 0.913 & 4.38 \\ \hline \hline \end{tabular} \end{table} Table 2: Zero-shot VC results on unseen speakers from VCTK dataset. We additionally report the one-shot speaker adaptation result of DDDM-VC model (DDDM-VC-Fine-tuning) which is fine-tuned with only single sample per speaker for 500 steps. ### Zero-shot Voice Conversion We also report the results of the zero-shot VC tasks. As indicated in Table 2, our models significantly outperformed the baseline models in terms of speaker similarity. In particular, only the DDDM-VC models could adapt the voice style with novel speakers in terms of EER and SECS. We found that increasing iteration steps improved the diversity of converted speech in that CER, WER, and EER were increased, but the nMOS was consistently improved. We analyzed the effectiveness of each proposed component in the ablation study. In addition, we can control each attribute by transferring different styles to each attribute respectively as indicated in Appendix E. ### One-shot Speaker Adaptation For better speaker adaptation, we additionally fine-tuned our model on the VCTK dataset. We only used one sample per speaker, which is under ten seconds per speaker. As indicated in Table 2, the speaker similarity in terms of EER and SECS is consistently improved but the CER increased after the model overfitted the small training samples. With a small iteration of training, our model trained with large-scale speech dataset could effectively adapt to novel speaker by only one sample. ### Zero-shot Cross-lingual Voice Conversion We performed the zero-shot cross-lingual VC to demonstrate the zero-shot generation performance, even with unseen languages. We first produced all possible pairs from two samples of each language (20\(\times\)20=400). Subsequently, we calculated the EER of all speakers of all languages (400\(\times\)20=8000), the results reveal an EER of 9.75\(\%\) which is similar to the zero-shot performance of seen language. 
Moreover, the CER results in Figure 4 demonstrate that our model could perform generalization for disentangling and converting speech even in zero-shot cross-lingual scenarios. ### Ablation Study Prior MixupWe trained the DDDM-VC model without the prior mixup to clarify the reduction in the train-inference mismatch. As indicated in Table 3, the prior mixup could improve the generalization performance with better speaker adaptation in that the EER of the model with the prior mixup decreased and the SECS increased. However, the naturalness was slightly decreased, which can occur in VC since it does not take into account the target rhythm on the fixed-length of input speech. The research on the rhythm conversion could address this issue and we leave it for the future work. Disentangled DenoiserWe observed that removing the disentangled denoiser (employing only a single denoiser) decreased the performance in all metrics. It indicates that the disentangled denoiser can improve the model performance by effectively adapting each representation to the target voice style, compared to a single denoiser. Normalized F0We determined that removing the normalized F0 conditioning decreases the VC performance. Without the pitch contour, the encoder may not disentangle the content information of the speech effectively, resulting in a degradation of the VC performance. As it is difficult to reconstruct the speech from the perturbed speech representation, the use of additional pitch information that can be extracted from the ground-truth speech may improve the stability of the model. Figure 4: CER results for zero-shot cross-lingual VC on unseen languages from CSS10 multi-lingual dataset. \begin{table} \begin{tabular}{l|c|c c|c c|c c|c} \hline \hline Method & iter. & nMOS (\(\uparrow\)) & sMOS (\(\uparrow\)) & CER (\(\downarrow\)) & WER (\(\downarrow\)) & EER (\(\downarrow\)) & SECS (\(\uparrow\)) & Params. (\(\downarrow\)) \\ \hline DDDM-VC (Ours) & 30 & 3.76\(\pm\)0.05 & 3.08\(\pm\)0.05 & 2.60 & 5.32 & 4.24 & 0.845 & 66M \\ w.o Prior Mixup & 30 & 3.79\(\pm\)0.05 & 3.03\(\pm\)0.05 & 3.28 & 5.66 & 7.99 & 0.821 & 66M \\ w.o Disentangled Denoiser & 30 & 3.76\(\pm\)0.05 & 3.00\(\pm\)0.05 & 3.20 & 5.57 & 9.75 & 0.815 & 36M \\ w.o Normalized F0 & 30 & 3.78\(\pm\)0.05 & 3.00\(\pm\)0.05 & 3.27 & 5.88 & 10.25 & 0.811 & 33M \\ w.o Data-driven Prior & 30 & 3.83\(\pm\)0.05 & 2.87\(\pm\)0.05 & 2.32 & 4.86 & 19.25 & 0.786 & 66M \\ \hline \hline \end{tabular} \end{table} Table 3: Results of ablation study on many-to-many VC tasks with seen speakers from LibriTTS. Data-driven PriorAs noted in (Lee et al., 2022), a data-driven prior can improve the performance of diffusion model. We minimize the L1 distance of Mel-spectrogram between the ground-truth Mel-spectrogram and output of the source-filter encoder as Equation (10) for the data-driven prior. Each output from the source and filter encoder was used for the prior of each diffusion model, which was disentangled by the source-filter theory. Although nMOS was reported slightly lower, the performance of speaker adaptation significantly increased with data-driven prior. In the VC tasks, using the converted Mel-spectrogram performs better than using the average Mel-spectrogram (Popov et al., 2022). Besides, we think that the enhanced prior through normalizing flow (Kim et al., 2020; Ren et al., 2021) may also improve the performance of models. ## 6 Conclusion We have presented DDDMs for the robust control of various data components in diffusion models. 
## 6 Conclusion We have presented DDDMs for the robust control of various data components in diffusion models. We successfully demonstrated that DDDMs can improve style transfer performance in VC tasks. DDDM-VC can convert the voice style even in zero-shot voice style transfer tasks by significantly improving the speaker adaptation quality. We have also proposed the prior mixup, which improves the robustness of style control by learning to restore the data from converted representations, yielding better generalization with reduced train-inference mismatch. Furthermore, we demonstrated that our model can robustly convert the voice with high quality regardless of the model size; the small model also achieved better performance than state-of-the-art VC models. ## 7 Broader Impact and Limitation **Practical Application.** We present DDDMs, which can control the style of each attribute in generative models. We verify the effectiveness of DDDMs with DDDM-VC, which converts the voice style by disentangling the speech and resynthesizing it from the disentangled representations. These VC systems could be utilized in various applications such as dubbing systems for the game and film industries. For a more practical application, we also extend our model to a text-to-speech system by utilizing the pre-trained DDDM-VC in Appendix I. In addition, we present an audio mixing system, DDDM-Mixer, in Appendix H. **Social Negative Impact.** Although TTS or VC systems can be utilized positively in various applications, they also pose threats of malicious use such as fake audio generation and voice spoofing. To alleviate these potential harms, audio fingerprinting and fake audio detection systems should be developed alongside speech technology. **Limitation.** Although our model improves the speaker adaptation quality significantly, there is room for improvement in speech naturalness for zero-shot cross-lingual VC and noisy speech scenarios, which results from inaccurate pitch modeling and style transfer under noise. Hence, in future work, we will first attempt to train the model with a cross-lingual dataset using language-independent speech decomposition to improve speech naturalness. In addition, we will separate the noise from the speech for noise-free voice conversion with noise disentanglement. ## Acknowledgements This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00079, Artificial Intelligence Graduate School Program (Korea University), No. 2019-0-01371, Development of Brain-inspired AI with Human-like Intelligence, No. 2021-0-02068, Artificial Intelligence Innovation Hub, and No. 2022-0-00984, Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation) and ESTsoft Corp., Seoul, Korea.
2304.13196
Simple closed curves, non-kernel homology and Magnus embedding
We consider the subspace of the homology of a covering space spanned by lifts of simple closed curves. Our main result is the existence of unbranched covers where this is a proper subspace. More generally, we exhibit covers whose homology is not generated by the non-kernel of any fixed solvable quotient of the fundamental group. We explain how the existing approach by Malestein and Putman for branched covers relates to Magnus algebra, which significantly simplifies their proof. We then generalise it by producing embeddings of surface groups into units of certain algebras, which may be of independent interest.
Adam Klukowski
2023-04-25T23:29:25Z
http://arxiv.org/abs/2304.13196v1
# Simple closed curves, non-kernel homology and Magnus embedding ###### Abstract We consider the subspace of the homology of a covering space spanned by lifts of simple closed curves. Our main result is the existence of unbranched covers where this is a proper subspace. More generally, we exhibit covers whose homology is not generated by the non-kernel of any fixed solvable quotient of the fundamental group. We explain how the existing approach by Malestein and Putman for branched covers relates to Magnus algebra, which significantly simplifies their proof. We then generalise it by producing embeddings of surface groups into units of certain algebras, which may be of independent interest. ## 1 Introduction Consider a finite-degree cover \(p:\widetilde{\Sigma}\to\Sigma_{g,n}\) of an orientable surface \(\Sigma_{g,n}\) of genus \(g\) with \(n\) punctures. The _simple closed curve homology_\(\mathrm{H}_{1}^{\mathrm{scc}|\Sigma_{g,n}}(\widetilde{\Sigma};M)\) is defined as the subspace of \(\mathrm{H}_{1}(\widetilde{\Sigma};M)\) spanned by elements of the form \[\left\{[\widetilde{\gamma}]\in\mathrm{H}_{1}(\widetilde{\Sigma};M)\ \Big{|}\ \widetilde{\gamma}\text{ is a component of }p^{-1}(\gamma)\text{ for a simple closed curve }\gamma\text{ on }\Sigma_{g,n}\right\} \tag{1}\] Our main theorem is: **Theorem 1.0.1:** For any \(g\geq 2\), \(n\geq 0\), and odd prime \(r\) there is a cover \(\widetilde{\Sigma}\to\Sigma_{g,n}\) whose degree is a power of \(r\) and such that \(\mathrm{H}_{1}^{\mathrm{scc}}(\widetilde{\Sigma};\mathbb{Q})\neq\mathrm{H}_{1 }(\widetilde{\Sigma};\mathbb{Q})\). The question whether \(\mathrm{H}_{1}^{\mathrm{scc}}=\mathrm{H}_{1}\) was an open problem asked by Marche [10] and independently by Looijenga [7]. An affirmative answer for abelian and 2-step nilpotent covers of graphs was given by Farb and Hensel [2]. Koberda and Santharoubane [6] used TQFT methods to find covers of surfaces with \(\mathrm{H}_{1}^{\mathrm{scc}}(\widetilde{\Sigma};\mathbb{Z})\neq\mathrm{H}_{1 }(\widetilde{\Sigma};\mathbb{Z})\), but their method could not rule out the possibility of \(\mathrm{H}_{1}^{\mathrm{scc}}(\widetilde{\Sigma};\mathbb{Z})\) being finite-index (equivalently it did not apply to rational instead of integer coefficients). An example of an unbranched cover \(\Sigma_{648}\to\Sigma_{2}\) with \(\mathrm{H}_{1}^{\mathrm{scc}|\Sigma_{2}}(\Sigma_{648};\mathbb{Q})\neq\mathrm{H}_{ 1}(\Sigma_{648};\mathbb{Q})\) was noted by Markovic and Tosic [12] based on the work of Markovic [11] and Bogomolov and Tschinkel [1]. This article builds on the work of Malestein and Putman [9] and earlier ideas of Farb and Hensel [2]. In [9] they constructed branched (equivalently required \(n\geq 1\)) covers where \(\mathrm{H}_{1}^{\mathrm{scc}}(\widetilde{\Sigma};\mathbb{Q})\neq\mathrm{H}_{ 1}(\widetilde{\Sigma};\mathbb{Q})\). Here we settle the general case and exhibit unbranched covers with \(\mathrm{H}_{1}^{\mathrm{scc}}(\widetilde{\Sigma};\mathbb{Q})\neq\mathrm{H}_{ 1}(\widetilde{\Sigma};\mathbb{Q})\) in all genera. We relate earlier approaches to the Magnus embedding [8]. We then produce similar embeddings of surface groups, which may be of independent interest. ### Homologies of covers and subgroups One can ask about the subspaces of homology of a covering space \(\widetilde{\Sigma}\) spanned by more general sets. Following [2], we define an _elevation_ of \(\gamma\in\pi_{1}\Sigma\) to be a minimal concatenation1 of lifts of \(\gamma\) which forms a closed curve in \(\widetilde{\Sigma}\). 
Given a set \(\mathcal{O}\subseteq\pi_{1}\Sigma\) of curves on \(\Sigma\) we define \(\mathrm{H}_{1}^{\mathcal{O}|\Sigma}(\widetilde{\Sigma};M)\) to be the subspace of \(\mathrm{H}_{1}(\widetilde{\Sigma};M)\) spanned by Footnote 1: I.e. \(\tilde{\gamma}\) is a lift of \(\gamma^{k}\), where \(k\) is the minimal positive integer for which \(\widetilde{\gamma}\) is a closed curve \[\left\{[\tilde{\gamma}]\ \big{|}\ \tilde{\gamma}\ \text{is an elevation of a curve}\ \gamma\in\mathcal{O}\right\} \tag{2}\] We present an insufficiency result for a large class of subsets in theorem 1.1.2. For a quotient \(\theta:\pi_{1}\Sigma\to Q\) write \(\mathrm{H}_{1}^{\theta\neq 1}(\widetilde{\Sigma};\mathbb{Q})=\mathrm{H}_{1}^{ \pi_{1}\Sigma\setminus\ker\theta|\Sigma}(\widetilde{\Sigma};\mathbb{Q})\) to be the subspace spanned by elevations of curves which are nontrivial in \(Q\). **Theorem 1.1.2:** Suppose \(g\geq 2\), \(n\geq 0\), and let \(\theta:\pi_{1}\Sigma_{g,n}\to Q\) be a nontrivial finite solvable quotient of odd order. There exists a normal cover \(\widetilde{\Sigma}\to\Sigma_{g,n}\) such that \(\mathrm{H}_{1}^{\theta\neq 1}(\widetilde{\Sigma};\mathbb{Q})\neq\mathrm{H}_{ 1}(\widetilde{\Sigma};\mathbb{Q})\). Furthermore, we can require that all prime factors of the degree of this cover divide \(|Q|\). Let us point out some consequences of 1.1.2, which appeared in various places in existing literature. **Corollary 1.1.3:** Let \(g\geq 2\), \(n\geq 0\). For each of the following sets \(\mathcal{O}\) of loops on \(\Sigma_{g,n}\) there exists a normal cover \(\widetilde{\Sigma}\to\Sigma_{g,n}\) such that \(\mathrm{H}_{1}^{\mathcal{O}|\Sigma_{g,n}}(\widetilde{\Sigma};\mathbb{Q})\) is not the whole of \(\mathrm{H}_{1}(\widetilde{\Sigma};\mathbb{Q})\). 1. Theorem 1.0.1: \(\mathcal{O}\) consisting of simple closed curves. 2. Primitive homology: \(\mathcal{O}\) consisting of the elements which belong to some minimal generating set of \(\pi_{1}\Sigma_{g,n}\). 3. More generally, any set \(\mathcal{O}\) which is contained in the union of finitely many \(\mathrm{Aut}(\pi_{1}\Sigma_{g,n})\)-orbits. 4. \(d\)-primitive homology \(\mathrm{H}_{1}^{d-\mathrm{prim}}\): \(\mathcal{O}\) consisting of all loops which map to nonzero vectors in \(\mathrm{H}_{1}(\Sigma_{g,n};\frac{\mathbb{Z}}{d})\), for an odd square-free \(d>1\). Primitive homology (2.) was introduced in [2]. Statements 3. and 4. are respectively generalisations of theorems C and D from [9]. Both aforementioned papers only considered free fundamental groups (\(n\geq 1\)), whereas we work with free and surface groups. **Remark 1.1.4:** In the statements 1., 2., 3., for any odd prime \(r\) we can additionally require the degree of the cover to be a power of \(r\). In 4. we can require the degree to divide a power of \(d\). **Remark 1.1.5:** It was asked by Kent [10] whether we can take \({\cal O}\) to be the set of all curves which do not fill \(\Sigma_{g,n}\). This question is not answered by theorem 1.1.2: if \(|Q|\) is a finite quotient of the fundamental group and \(\gamma\) a non-filling curve, then \(\gamma^{|Q|}\) is non-filling and trivial in \(Q\). **Remark 1.1.6:** When \(n\geq 1\) we can remove the restriction of odd order of \(G\) from theorem 1.1.2 - see remark 3.1.13. However, our proof still relies on solvability (this is why we emphasise it, even though every finite group of odd order is already solvable [4]). 
**Algebraic interpretation** Let \(R\) be a finite-index subgroup of \(\pi_{1}(\Sigma_{g,n})\) and \(\mathcal{O}\subseteq\pi_{1}\Sigma_{g,n}\). Algebraically the elevation of \(\gamma\) is the element2 \(\gamma^{m}\), where \(m\) is the minimal positive integer with \(\gamma^{m}\in R\). Then \(\mathrm{H}_{1}^{\mathcal{O}}(R;M)\) is the subspace of \(\mathrm{H}_{1}(R;M)\) spanned by the set \[\left\{[x^{m}]\in\mathrm{H}_{1}(R;M)\ \middle|\ x\in\pi_{1}(\Sigma_{g,n})\text{ and }m\in\mathbb{Z}\text{ such that }x^{m}\in R\right\} \tag{3}\] Footnote 2: The way we set up the definitions, the algebraic elevation is unique whereas the topological one is not. This is because in passing to the algebraic description we implicitly choose a preferred basepoint of \(\widetilde{\Sigma}\). When \(R=\pi_{1}\widetilde{\Sigma}\) is the subgroup corresponding to the cover, then naturally \(\mathrm{H}_{1}^{\mathcal{O}}(R;M)\cong\mathrm{H}_{1}^{\mathcal{O}}(\widetilde{\Sigma};M)\). **Remark 1.1.7:** For the purpose of the results in this paper, boundary components behave identically to punctures. **Other consequences for surface groups** A couple of interesting statements about free groups were noted by Malestein and Putman (theorems F and G in [9]) based on an argument of [6]. Our theorem 1.1.2 allows us to apply their argument to surface groups, giving the following corollary. **Theorem 1.1.FG:** Let \(g\geq 2\), and \(\theta:\pi_{1}\Sigma_{g}\to Q\) be a finite solvable quotient of odd order. There exists an integral linear representation \(\rho:\pi_{1}\Sigma_{g}\to\mathrm{GL}_{d}\mathbb{Z}\) with infinite image such that every element of \(\rho(\pi_{1}\Sigma_{g}\setminus\ker\theta)\) has finite order. The statement of 1.1.FG remains true if we replace the complement of a kernel with any of the special subsets from corollary 1.1.3. **Acknowledgements** I wish to thank Vlad Markovic, Misha Schmalian, Ognjen Tosic, Dawid Kielak and Ric Wade for helpful suggestions and discussions. This work was supported by the Simons Foundation. ### Outline and organisation Our approach consists of two steps. \[\text{thm 1.1.2, cor 1.1.3}\ \xleftarrow{\ \text{sec 2.2}\ }\ \text{cor 1.1.3.4}\ \xleftarrow{\ \text{sec 2.3}\ }\ \text{prop 2.3.9}\ \xleftarrow{\ \text{sec 3}\ }\ (G,C,\rho,\alpha,\Psi)\] The first step, described in section 2, is to construct covers whose homology can see the interesting curves. In section 2.2 we deduce theorem 1.0.1 on simple closed curve homology from theorem 1.1.2, and reduce theorem 1.1.2 to its special case - part 4 of corollary 1.1.3 on \(d\)-primitive homology. Section 2.3 serves as a bridge between the first and second step. Here we reduce corollary 1.1.3.4 on \(d\)-primitive homology to a group-theoretic statement, proposition 2.3.9. The idea is to view the homology of a cover as a representation of the deck group, and to find a representation of the deck group with certain special properties. The second step - the construction of a special quotient of the fundamental group and the proof of proposition 2.3.9 - is done in section 3. We start with the case of punctured surfaces in section 3.1, and translate Malestein's and Putman's proof into the language of the Magnus embedding, which leads to significant simplifications. In section 3.2 we carry out an analogous construction for surface groups.
## 2 Relationships between homologies of covers The aim of this section is to relate various flavours of cover homology, and eventually reduce the insufficiency of \(\mathrm{H}_{1}^{\mathrm{scc}}\), \(\mathrm{H}_{1}^{\mathcal{O}}\) and \(\mathrm{H}_{1}^{\theta\neq 1}\) to the insufficiency of \(\mathrm{H}_{1}^{d-\mathrm{prim}}\). Our approach is, given a set \(\mathcal{O}\) of curves on \(\Sigma\), to find a cover \(\widehat{\Sigma}\to\Sigma\) with a nicer set \(\widehat{\mathcal{O}}\) (of curves on \(\widehat{\Sigma}\)) such that all further covers \(\widetilde{\Sigma}\to\widehat{\Sigma}\) satisfy \[\mathrm{H}_{1}^{\mathcal{O}|\Sigma}\left(\widetilde{\Sigma};\mathbb{Q}\right) \ \subseteq\ \mathrm{H}_{1}^{\widehat{\mathcal{O}}|\widehat{\Sigma}}\left( \widetilde{\Sigma};\mathbb{Q}\right). \tag{4}\] Then insufficiency of \(\widehat{\mathcal{O}}\)-homology implies the insufficiency of \(\mathcal{O}\)-homology. ### Stability of curve characteristics under taking covers Let us note that each of the interesting properties is stable with respect to taking covers. These facts will be used in section 2.2 to reduce the general theorem 1.1.2 to the special case of \(d\)-primitive homology. **Lemma 2.1.8:** Let \(p:\widetilde{\Sigma}\to\Sigma\) be a finite-degree cover, \(\gamma\) a curve on \(\Sigma\), and \(\tilde{\gamma}\) its elevation to \(\widetilde{\Sigma}\). Then: 1. If \(\gamma\) is simple, then so is \(\tilde{\gamma}\). 2. If \(\gamma\) is simple and nonseparating, then so is \(\tilde{\gamma}\). 3. If \(\gamma\) does not fill \(\Sigma\) then \(\tilde{\gamma}\) does not fill \(\widetilde{\Sigma}\). 4. If \([\gamma]\neq 0\) in \(\mathrm{H}_{1}(\Sigma;\mathbb{Z})\) then also \([\tilde{\gamma}]\neq 0\) in \(\mathrm{H}_{1}(\widetilde{\Sigma};\mathbb{Z})\). 5. If the cover is normal and \([\gamma]\neq 0\) in \(\mathrm{H}_{1}(\Sigma;\mathbb{F}_{r})\), then also \([\tilde{\gamma}]\neq 0\) in \(\mathrm{H}_{1}(\widetilde{\Sigma};\mathbb{F}_{r})\). _Proof:_ 1. A transverse self-intersection of \(\tilde{\gamma}\) would map to a transverse self-intersection of \(\gamma\) under \(p\). 2. Take \(\delta\in\pi_{1}\Sigma\) that intersects \(\gamma\) once. Then a concatenation of some lifts of \(\delta\) is a path which joins the opposite sides of \(\tilde{\gamma}\). Therefore the complement of \(\tilde{\gamma}\) is connected. 3. Let \(X\) be a component of \(\Sigma\setminus\gamma\) which is not a topological disk. Its preimage \(p^{-1}(X)\) is a component of \(\widetilde{\Sigma}\setminus p^{-1}(\gamma)\) which is not homeomorphic to a disk, so \(p^{-1}(\gamma)\) does not fill \(\widetilde{\Sigma}\). Since \(\tilde{\gamma}\) is a subset of \(p^{-1}(\gamma)\), it does not fill \(\widetilde{\Sigma}\) either. 4. The image of \([\tilde{\gamma}]\) under \(p_{*}\) is a positive multiple of \([\gamma]\) and thus nonzero. 5. First, suppose \(\tilde{\gamma}\) maps onto \(\gamma\) with degree \(d\) coprime to \(r\). Then \(p_{*}([\tilde{\gamma}])=d[\gamma]\neq 0\) in \(\mathrm{H}_{1}(\Sigma;\mathbb{F}_{r})\), so \([\tilde{\gamma}]\neq 0\) in \(\mathrm{H}_{1}(\widetilde{\Sigma};\mathbb{F}_{r})\). Second, consider the case where the deck group \(\mathrm{Aut}(\tilde{\Sigma}/\!\Sigma)\cong\frac{\mathbb{Z}}{r}\) is cyclic of order \(r\) and generated by \(\gamma\). 
A part of the five-term exact sequence reads \[\mathrm{H}_{1}(\widetilde{\Sigma};\mathbb{Z})_{\mathrm{Aut}(\widetilde{\Sigma}/\!\Sigma)}\xrightarrow{\ p_{*}\ }\mathrm{H}_{1}(\Sigma;\mathbb{Z})\longrightarrow\mathrm{H}_{1}\Big{(}\tfrac{\mathbb{Z}}{r};\mathbb{Z}\Big{)}\longrightarrow 0 \tag{5}\] Therefore there exist bases in which \(p_{*}\) becomes the standard inclusion \[\mathrm{H}_{1}(\widetilde{\Sigma};\mathbb{Z})_{\mathrm{Aut}(\widetilde{\Sigma}/\!\Sigma)}\cong r\mathbb{Z}\oplus\mathbb{Z}^{m-1}\hookrightarrow\mathbb{Z}^{m}\cong\mathrm{H}_{1}(\Sigma;\mathbb{Z}) \tag{6}\] Let \(\pi:\mathrm{H}_{1}(\Sigma;\mathbb{Z})\to\mathbb{Z}\) be the projection onto the first factor. \(\gamma\) generates the deck group, so \(\pi(\gamma)\in\mathbb{Z}\setminus r\mathbb{Z}\), and \(\pi\circ p_{*}([\tilde{\gamma}])=\pi(\gamma^{r})\in r\mathbb{Z}\setminus r^{2}\mathbb{Z}\). Then \(r^{-1}\cdot\pi\circ p_{*}\) is a \(\mathbb{Z}\)-module homomorphism \(\mathrm{H}_{1}(\widetilde{\Sigma};\mathbb{Z})\to\mathbb{Z}\) which sends \([\tilde{\gamma}]\) into \(\mathbb{Z}\setminus r\mathbb{Z}\), so \([\tilde{\gamma}]\neq 0\) in \(\mathrm{H}_{1}(\widetilde{\Sigma};\mathbb{F}_{r})\). Third, let \(\widetilde{\Sigma}/\!\Sigma\) be a general cyclic cover. It can be realised as a tower of cyclic covers of prime degree. We have exhausted the possibilities for these, so by induction \([\gamma]\neq 0\Rightarrow[\tilde{\gamma}]\neq 0\). Finally, consider a general normal cover \(p:\widetilde{\Sigma}\to\Sigma\). Define the space \(\widehat{\Sigma}=\widetilde{\Sigma}/\!\left\langle\gamma\right\rangle_{\mathrm{Aut}(\widetilde{\Sigma}/\!\Sigma)}\). It fits in the sequence of covering maps \[\widetilde{\Sigma}\ \xrightarrow{\ \tilde{p}\ }\ \widehat{\Sigma}\ \xrightarrow{\ \hat{p}\ }\ \Sigma \tag{7}\] where \(\hat{\gamma}\) is the lift of \(\gamma\) to \(\widehat{\Sigma}\) and \(m\) is the order of \(\gamma\) in \(\mathrm{Aut}(\widetilde{\Sigma}/\!\Sigma)\). \(\hat{p}\) maps \(\hat{\gamma}\) onto \(\gamma\) with degree \(1\), so \([\hat{\gamma}]\neq 0\) in \(\mathrm{H}_{1}(\widehat{\Sigma};\mathbb{F}_{r})\). \(\tilde{p}:\widetilde{\Sigma}\to\widehat{\Sigma}\) is a cyclic cover and \(\tilde{\gamma}\) is an elevation of \(\hat{\gamma}\), so \([\tilde{\gamma}]\neq 0\) in \(\mathrm{H}_{1}(\widetilde{\Sigma};\mathbb{F}_{r})\). \(\square\) ### Reduction to \(d\)-primitive homology Here we reduce theorem 1.1.2 and corollaries 1.1.3 to statement 4 of corollary 1.1.3 - the insufficiency of \(d\)-primitive homology. _Proof of corollaries 1.1.3, assuming theorem 1.1.2_ 3. (orbits) \(\Rightarrow\) 1. (scc): \(\operatorname{Mcg}(\Sigma_{g,n})\) acts transitively on non-separating simple closed curves, and there are finitely many orbits of separating simple closed curves, parameterised by genus and partition of punctures (see chapter 1.3 in [3]). \(\operatorname{Mcg}(\Sigma_{g,n})\) acts on \(\pi_{1}\Sigma_{g,n}\) by automorphisms, so the set of simple closed curves is contained in a union of finitely many \(\operatorname{Aut}(\pi_{1}\Sigma_{g,n})\)-orbits. 3. (orbits) \(\Rightarrow\) 2. (primitive): \(\operatorname{Aut}(\pi_{1}\Sigma_{g,n})\) acts transitively on primitive elements. Thm 1.1.2 \(\Rightarrow\) 3. (orbits): Let \(r\) be a prime number. Both free and surface groups are residually \(r\)-groups, so there exists a finite \(r\)-group quotient \(\theta:\pi_{1}\Sigma_{g,n}\to Q\) whose kernel is disjoint from \(\mathcal{O}\). Then \(\operatorname{H}_{1}^{\mathcal{O}}(\widetilde{\Sigma};\mathbb{Q})\subseteq\operatorname{H}_{1}^{\theta\neq 1}(\widetilde{\Sigma};\mathbb{Q})\) for any cover \(\widetilde{\Sigma}\to\Sigma_{g,n}\). Also, \(Q\) is an \(r\)-group and hence solvable. \(\square\) Clearly, corollary 1.1.3.4 about \(d\)-primitive homology is just a special case of theorem 1.1.2, when the quotient \(Q\) is \(\operatorname{H}_{1}(\Sigma;\frac{\mathbb{Z}}{d})\).
Below we reduce the general to the special case. _Proof of theorem 1.1.2 assuming corollary 1.1.3.4_ Let \(\widehat{\Sigma}\) be the cover corresponding to \(\ker\theta\). Denote the derived series of \(Q\) as \[Q=Q^{(0)}\triangleright Q^{(1)}\triangleright\cdots\triangleright Q^{(s)}=\{1\} \tag{8}\] By refining if necessary, we can assume that \(\frac{Q^{(i)}}{Q^{(i+1)}}\) is abelian of square-free order. Let \(\Sigma^{(i)}\) be the cover corresponding to \(\theta^{-1}(Q^{(i)})\). This gives a tower of abelian covers \[\widehat{\Sigma}=\Sigma^{(s)}\to\Sigma^{(s-1)}\to\cdots\to\Sigma^{(0)}=\Sigma_{g,n} \tag{9}\] Consider an elevation \(\hat{\gamma}\) of \(\gamma\in\pi_{1}\Sigma_{g,n}\setminus\ker\theta\) to \(\widehat{\Sigma}\). If \(\hat{\gamma}\) maps onto \(\gamma\) with degree 1, then \(\pi_{1}\widehat{\Sigma}\) contains a conjugate of \(\gamma\), which is impossible. Let \(i\) be the largest index for which \(\hat{\gamma}\) maps onto its image \(\gamma^{(i)}\) in \(\Sigma^{(i)}\) with degree greater than 1. This means that \(\theta(\gamma^{(i)})\in Q^{(i)}\setminus Q^{(i+1)}\), so its image in \(\frac{Q^{(i)}}{Q^{(i+1)}}\) is non-trivial, so \([\gamma^{(i)}]\neq 0\) in \(\operatorname{H}_{1}(\Sigma^{(i)};\mathbb{F}_{p})\) for some \(p\) dividing \(|Q|\). But \(\hat{\gamma}\) is an elevation of \(\gamma^{(i)}\), so by "stability of \(r\)-nontriviality" - part 5 of lemma 2.1.8 - we know that \([\hat{\gamma}]\neq 0\) in \(\operatorname{H}_{1}(\widehat{\Sigma};\mathbb{F}_{p})\). We have proved that any elevation of any curve in \(\pi_{1}\Sigma_{g,n}\setminus\ker\theta\) to \(\widehat{\Sigma}\) is non-trivial in \(\operatorname{H}_{1}(\widehat{\Sigma};\frac{\mathbb{Z}}{d})\), where \(d\) is the radical of \(|Q|\). Therefore, for any further cover \(\widetilde{\Sigma}\to\widehat{\Sigma}\) we have \[\operatorname{H}_{1}^{\theta\neq 1|\Sigma_{g,n}}(\widetilde{\Sigma};\mathbb{Q})\subseteq\operatorname{H}_{1}^{d-\operatorname{prim}|\widehat{\Sigma}}(\widetilde{\Sigma};\mathbb{Q}) \tag{10}\] so a witness to insufficiency of \(d\)-primitive homology relative to \(\widehat{\Sigma}\) (part 4 of corollary 1.1.3) is also a witness to theorem 1.1.2 for \(\Sigma_{g,n}\). \(\square\) ### Insufficiency of \(d\)-primitive homology Here we prove part 4 of corollary 1.1.3. We assume the key proposition 2.3.9 below, whose proof is deferred to section 3. **Proposition 2.3.9:** Suppose \(F\) is a free or surface group, and let \(n\) be the rank of \(F\). For any odd prime \(r\) and large enough \(k\in\mathbb{N}\), there exists a finite \(r\)-group \(G\), a central subgroup \(C\leq G\), and group homomorphisms \(\rho:F\to G\), \(\alpha:G\to\mathbb{F}_{r}^{n}\), and \(\Psi:C\to\mathbb{F}_{r}\) such that \(\bullet\) The composition \(\alpha\circ\rho\) is precisely the mod-\(r\) abelianisation \(F\twoheadrightarrow\mathbb{F}_{r}^{n}=\mathrm{H}_{1}(F;\mathbb{F}_{r})\). \(\bullet\) If \(g\in G\setminus\ker\alpha\), then \(g^{r^{k}}\) lies in \(C\setminus\ker\Psi\). We will use the following two lemmas to relate the homology of a cover to the representation theory of the deck group. The first one is a classical result of Gaschutz [5]. **Lemma 2.3.10a:** Let \(R\triangleleft F_{n}\), \(G=\frac{F_{n}}{R}\), and \(k\) be a field of characteristic \(0\). Then \(\mathrm{H}_{1}(R;k)\cong k\oplus k[G]^{n-1}\) as a \(G\)-module. The situation with surface groups is not very different.
**Lemma 2.3.10b:** Let \(R\triangleleft\pi_{1}\Sigma_{g}\), \(G=\frac{\pi_{1}\Sigma_{g}}{R}\), and \(k\) be a field of characteristic \(0\). Then \(\mathrm{H}_{1}(R;k)\cong k^{2}\oplus(k[G])^{2g-2}\) as a \(G\)-module. _Proof:_ Let \(F=\pi_{1}(\Sigma_{g})\) be a surface group. Take the standard cell complex for \(\Sigma_{g}\) and let \(\Sigma_{R}\) be the corresponding cover. The chain complex \(C_{\bullet}(\Sigma_{R})\otimes k\) is \[\begin{CD}k[G]@>{\partial_{2}}>{}>k[G]^{2g}@>{\partial_{1}}>{}>k[G]\end{CD} \tag{11}\] Write \(I=\{x\in k[G]|\sum_{g\in G}x_{g}=0\}\) for the augmentation ideal. A 2-chain is sent to \(0\) precisely when any 2-cells have the same coefficients, so \(\ker\partial_{2}\cong k\) and \(\mathrm{im}\ \partial_{2}\cong I\). By semisimplicity \(C_{1}(\Sigma_{R})\otimes k=U\oplus k\oplus k[G]^{2g-1}\), where \(U\cong I\) and \(\mathrm{im}\ \partial_{2}=U\oplus 0\oplus 0\). We have \(\mathrm{im}\ \partial_{1}=I\). Again by semisimplicity \(C_{1}(\Sigma_{R})\otimes k=V\oplus k\oplus k[G]^{2g-1}\) for some \(V\cong I\) such in this decomposition \(\partial_{1}\) is the projection onto \(V\). Since \(\partial_{1}\circ\partial_{2}=0\), the subspaces \(U,V\) intersect trivially. Hence \(C_{1}(\Sigma_{R})\otimes k=U\oplus V\oplus k^{2}\oplus k[G]^{2g-2}\), so \(H_{1}(\Sigma_{R};k)=k^{2}\oplus k[G]^{2g-2}\). \(\Box\) We are now ready to prove the insufficiency of \(d\)-primitive homology. _Proof of corollary 1.1.3 part 4, conditional on proposition 2.3.9:_ First a technicality: we promote "prime \(r\)" to "square-free \(d\)" in proposition 2.3.9. To this end, pick large \(k\in\mathbb{N}\), factorise \(d=\prod_{i}r_{i}\), and for \(i\)-th prime factor \(r_{i}\) take \((G_{i},C_{i},\rho_{i},\alpha_{i},\Psi_{i})\) provided by proposition 2.3.9. Then define \(G=\prod_{i}G_{i}\), \(C=\prod_{i}C_{i}\leq Z(G)\), and \(\rho=\prod_{i}\rho_{i}:\pi_{1}\Sigma_{g,n}\to G\). By ChRT there exist \(q_{i}\equiv\delta_{ij}\pmod{r_{j}^{k+1}}\). Set \(\alpha=\sum_{i}q_{i}\cdot\alpha_{i}:G\to\mathrm{H}_{1}(\Sigma_{g,n};\frac{ \mathbb{Z}}{d})\) and \(\Psi=\sum_{i}q_{i}\cdot\Psi_{i}:C\to\frac{\mathbb{Z}}{d}\). It is straightforward to check these are well-defined and satisfy \(\bullet\) Composition \(\alpha\circ\rho\) is the standard map \(F\to\mathrm{H}_{1}(\Sigma_{g,n};\frac{\mathbb{Z}}{d})\) \(\bullet\) For any \(g\in G\setminus\ker\alpha\) we have \(g^{e}\in C\setminus\ker\Psi\), where \(e=\sum_{i}q_{i}r_{i}^{k}\). Having the structure \((G,C,\rho,\alpha,\Psi)\) with the properties above, we proceed to produce a nontrivial irreducible rational representation \(V\) of \(G\) such that elements of \(G\setminus\ker\alpha\) have no nonzero fixed vectors. Let \(\omega\) be a primitive \(d\)-th root of unity and let \(W=\mathbb{Q}[\omega]\) be the cyclotomic field. We make \(W\) into a representation of \(C\) by letting \(c\in C\) act by \(\omega^{\Psi(c)}\), and induce to \(V^{\prime}=\mathrm{Ind}_{C}^{G}W\). Suppose that \(g\in G\setminus\ker\alpha\) fixes some \(v\in V^{\prime}\). By the properties of \(G\), \(g\) has a power \(c\in C\setminus\ker\Psi\), which must also fix \(v\). Choosing a set of coset representatives \(\Lambda\) for \(C\) in \(G\) allows us to describe the induced representation explicitly as \(V^{\prime}=\bigoplus_{\lambda\in\Lambda}\lambda\cdot W\). Using this description we can write \(v=\sum_{\lambda\in\Lambda}\lambda\cdot v_{\lambda}\). 
Since \(c\) is central, we have \[v=c\cdot v=\sum_{\lambda\in\Lambda}\lambda\cdot(c\cdot v_{\lambda})=\sum_{\lambda\in\Lambda}\lambda\cdot\omega^{\Psi(c)}v_{\lambda}=\omega^{\Psi(c)}v \tag{12}\] But \(\Psi(c)\not\equiv 0\pmod{d}\), so \(v\) must be the zero vector. Thus we can take \(V\) to be any irreducible factor of \(V^{\prime}\). Now we have at our disposal a nontrivial irrep \(V\) of \(G\) such that any \(g\in G\setminus\ker\alpha\) fixes only \(0\). Set \(R=\ker\rho\). By lemmas 2.3.10a and 2.3.10b, \(\mathrm{H}_{1}(R;\mathbb{Q})\) contains \(\mathbb{Q}[G]\) as a direct summand, and hence it contains \(V\) too. Take any \(d\)-primitive \(x\), and observe that this means \(\rho(x)\in G\setminus\ker\alpha\). Suppose \(x^{m}\in R\). The vector \([x^{m}]\in\mathrm{H}_{1}(R;\mathbb{Q})\) is fixed by \(\rho(x)\), so its projection onto \(V\) must be \(0\). But this holds for any generator of \(d\)-primitive homology, so \(\mathrm{H}_{1}^{d-\mathrm{prim}}(R;\mathbb{Q})\cap V=\{0\}\), which implies that \(\mathrm{H}_{1}^{d-\mathrm{prim}}(R;\mathbb{Q})\neq\mathrm{H}_{1}(R;\mathbb{Q})\). \(\square\) ## 3 Proof of proposition 2.3.9 In this section we prove proposition 2.3.9. We split it into two cases: free fundamental group (\(n\geq 1\)) in proposition 3.1.9a, and surface groups (\(n=0\)) in proposition 3.2.9b. The latter case follows a similar basic idea, but requires some modifications to preserve the relation in surface groups. We will need a preliminary lemma. **Lemma 3.0.11:** For any \(n\geq 1\), prime \(r\), and \(k\) such that \(r^{k}>(n-1)(r-1)\), there exists a homogeneous polynomial \(P\in\mathbb{F}_{r}[a_{1},\ldots,a_{n}]\) of degree \(r^{k}\) which is nonzero on \(\mathbb{F}_{r}^{n}\setminus\{0\}\). _Proof:_ Let \[P_{1}(a_{1},a_{2})=a_{1}-a_{1}a_{2}^{r-1}+a_{2} \tag{13}\] The value of \(P_{1}(a_{1},a_{2})\) is equal to \(a_{2}\) when \(a_{2}\neq 0\) and \(a_{1}\) otherwise. In either case \(P_{1}\neq 0\) if any of its arguments is nonzero. We then inductively define \[P_{i}(a_{1},\ldots,a_{i+1})=P_{1}\left(P_{i-1}(a_{1},\ldots,a_{i}),a_{i+1}\right) \tag{14}\] The polynomial \(P_{n-1}\) is nonzero on \(\mathbb{F}_{r}^{n}\setminus\{0\}\), has total degree \((n-1)(r-1)+1\), and every one of its monomials has degree congruent to \(1\) modulo \(r-1\). Consider a monomial \(m\) of \(P_{n-1}\) of degree \(e\), and suppose the variable \(a_{i}\) appears with nonzero exponent. Then the monomial \(m^{\prime}=a_{i}^{r^{k}-e}m\) takes the same value as \(m\) everywhere on \(\mathbb{F}_{r}^{n}\), and is of degree \(r^{k}\). Replacing every monomial \(m\) of \(P_{n-1}\) with such \(m^{\prime}\) gives a polynomial which fulfils the premises of lemma 3.0.11. \(\Box\) ### Free groups Recall **Proposition 3.1.9a:** For \(n\in\mathbb{N}_{+}\), prime \(r\), and sufficiently large \(k\), there exists a finite \(r\)-group \(G\), central subgroup \(C\leq G\), group homomorphisms \(\rho:F_{n}\to G\), \(\alpha:G\rightarrow\mathbb{F}_{r}^{n}\), and \(\Psi:C\rightarrow\mathbb{F}_{r}\) such that \(\bullet\) The composition \(\alpha\circ\rho\) is the mod-\(r\) abelianisation \(F_{n}\twoheadrightarrow\mathbb{F}_{r}^{n}=\mathrm{H}_{1}(F_{n};\mathbb{F}_{r})\). \(\bullet\) If \(g\in G\setminus\ker\alpha\), then \(g^{r^{k}}\in C\setminus\ker\Psi\). _Proof:_ Let \[\mathcal{F}=\frac{\mathbb{F}_{r}\langle X_{1},\ldots,X_{n}\rangle}{\left(X_{1},\ldots,X_{n}\right)^{r^{k}+1}} \tag{15}\] be the free associative \(\mathbb{F}_{r}\)-algebra on \(\{X_{i}\}_{1\leq i\leq n}\) truncated at the degree \(r^{k}\).
That is, \(\mathcal{F}\) consists of polynomials with coefficients in \(\mathbb{F}_{r}\) in non-commuting variables \(X_{1},\ldots,X_{n}\) of degree at most \(r^{k}\). Inside \(\mathcal{F}\) lives the set of polynomials with constant term one \[G_{\mathcal{F}}=1+(X_{1},\ldots,X_{n})=\{P\in\mathcal{F}|P(0,\ldots,0)=1\}=\{1+(\mathrm{degree}\geq 1)\} \tag{16}\] This is a group under multiplication. The only non-trivial thing to check is the existence of inverses, which can be found by \[(1+A)^{-1}=1+\sum_{e=1}^{r^{k}}(-1)^{e}A^{e} \tag{17}\] We define \(\rho\) as sending the \(i\)-th generator of \(F_{n}\) to \(1+X_{i}\in G_{\mathcal{F}}\). The group \(G_{\mathcal{F}}\) surjects onto \(\mathbb{F}_{r}^{n}\) via forgetting degree two and higher \[\alpha:1+a_{1}X_{1}+\cdots+a_{n}X_{n}+(\deg\geq 2)\mapsto\begin{pmatrix}a_{1}\\ \vdots\\ a_{n}\end{pmatrix} \tag{18}\] These maps are compatible, in the sense that \(\alpha\circ\rho\) is the mod-\(r\) abelianisation \(F_{n}\to\mathrm{H}_{1}(F_{n};\mathbb{F}_{r})\). Inside \(\mathcal{F}\) we also have \[C_{\mathcal{F}}=1+\left(X_{1},\ldots,X_{n}\right)^{r^{k}}=\left\{1+\left(\text{terms of degree }r^{k}\right)\right\} \tag{19}\] All terms of degree at least \(r^{k}+1\) vanish in \(\mathcal{F}\), so the elements of \(C_{\mathcal{F}}\) commute with everything in \(\mathcal{F}\); in particular \(C_{\mathcal{F}}\) is a central subgroup of \(G_{\mathcal{F}}\). As a group it is isomorphic to the \(\mathbb{F}_{r}\)-vector space spanned by the monomials of degree \(r^{k}\), with addition as the group operation. Raising to the \(r^{k}\)-th power in \(\mathcal{F}\) gives \[\left(1+\sum_{i=1}^{n}a_{i}X_{i}+\left(\deg\geq 2\right)\right)^{r^{k}}=1+\sum_{i_{1},\ldots,i_{r^{k}}}\prod_{j=1}^{n}a_{j}^{\#\text{ of occurrences of }j\text{ among the }i\text{'s}}\cdot\prod_{l=1}^{r^{k}}X_{i_{l}} \tag{20}\] Since the terms \(\prod_{l=1}^{r^{k}}X_{i_{l}}\) are a basis for \(C_{\mathcal{F}}\), we can read off the coefficients in (20) using group homomorphisms \(C_{\mathcal{F}}\to\mathbb{F}_{r}\). Formally, the group \(G_{\mathcal{F}}\) has the property **Property 3.1.12a:** For any degree \(r^{k}\) homogeneous polynomial \(P\in\mathbb{F}_{r}[a_{1},\ldots,a_{n}]\) we can find a group homomorphism \(\Psi_{P}:C_{\mathcal{F}}\to\mathbb{F}_{r}\), such that for \(g\in G_{\mathcal{F}}\) we have \(g^{r^{k}}\in C_{\mathcal{F}}\) and \(\Psi_{P}\big{(}g^{r^{k}}\big{)}=P\big{(}\alpha(g)\big{)}\). Suppose \(k\) satisfies \(r^{k}>(n-1)(r-1)\). By lemma 3.0.11, there is a homogeneous polynomial \(P\in\mathbb{F}_{r}[a_{1},\ldots,a_{n}]\) of degree \(r^{k}\) which does not vanish on \(\mathbb{F}_{r}^{n}\setminus\{0\}\). Let \(\Psi_{P}\) be the homomorphism corresponding to \(P\) provided by property 3.1.12a. Then \((G_{\mathcal{F}},C_{\mathcal{F}},\rho,\alpha,\Psi_{P})\) fulfills the requirements of proposition 3.1.9a. \(\Box\) **Remark 3.1.13:** This proof still works when \(r=2\). **Remark 3.1.14:** We can make the group \(G_{\mathcal{F}}\) smaller by adding relations \(\{X_{i}X_{j}=0\}_{i>j}\) to \(\mathcal{F}\).
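The constructions of Lemma 3.0.11 and Property 3.1.12a are concrete enough to check by machine on a small example. The following is an illustrative sketch and not part of the argument above; the choice \(r=3\), \(k=2\), \(n=2\), the dictionary-of-words representation of \(\mathcal{F}\), and the extra quadratic term added to \(g\) are all arbitrary illustrations. It builds the homogeneous non-vanishing polynomial of Lemma 3.0.11 and verifies that \(g^{r^{k}}\in C_{\mathcal{F}}\) with \(\Psi_{P}(g^{r^{k}})=P(\alpha(g))\).

```python
from itertools import product
from collections import defaultdict

r, k, n = 3, 2, 2            # an odd prime r, with r^k > (n-1)(r-1) as in Lemma 3.0.11
D = r ** k                   # truncation degree r^k = 9

# Lemma 3.0.11 for n = 2: homogenise P_1 = a1 - a1*a2^(r-1) + a2 to degree D.
# Each monomial keeps its value on F_r^2 because D - deg is divisible by r - 1.
P = [(1, (D, 0)), ((-1) % r, (D - 2, 2)), (1, (0, D))]   # (coefficient, exponent vector)

def eval_P(a):
    return sum(c * pow(a[0], e0, r) * pow(a[1], e1, r) for c, (e0, e1) in P) % r

assert all(eval_P(a) != 0 for a in product(range(r), repeat=n) if any(a))

# The algebra F: non-commutative polynomials over F_r, truncated above degree D.
# An element is a dict {word (tuple of letters 0..n-1): coefficient mod r}.
def mult(f, g):
    h = defaultdict(int)
    for w1, c1 in f.items():
        for w2, c2 in g.items():
            if len(w1) + len(w2) <= D:
                h[w1 + w2] = (h[w1 + w2] + c1 * c2) % r
    return {w: c for w, c in h.items() if c}

def power(f, m):
    out = {(): 1}
    for _ in range(m):
        out = mult(out, f)
    return out

# Property 3.1.12a: for g in G_F, g^(r^k) lies in C_F and Psi_P(g^(r^k)) = P(alpha(g)).
for a in product(range(r), repeat=n):
    if not any(a):
        continue
    g = {(): 1, (0,): a[0], (1,): a[1], (0, 1): 2}   # 1 + a1*X1 + a2*X2 + 2*X1*X2
    gD = power(g, D)
    assert all(len(w) in (0, D) for w in gD)         # g^D lies in C_F = 1 + (degree-D terms)
    # Psi_P reads off, for each monomial of P, the coefficient of one representative word:
    psi = sum(c * gD.get((0,) * e0 + (1,) * e1, 0) for c, (e0, e1) in P) % r
    assert psi == eval_P(a)

print("Lemma 3.0.11 and Property 3.1.12a check out for r = 3, k = 2, n = 2.")
```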
### Surface groups Here we prove the surface case of proposition 2.3.9. **Proposition 3.2.9b:** For any \(g\geq 2\), odd prime \(r\) and sufficiently large \(k\), there exists a finite \(r\)-group \(G\), a central subgroup \(C\leq G\), group homomorphisms \(\rho:\pi_{1}(\Sigma_{g})\to G\), \(\alpha:G\to\mathbb{F}_{r}^{2g}\), and \(\Psi:C\to\mathbb{F}_{r}\) such that \(\bullet\) Composition \(\alpha\circ\rho\) is the mod-\(r\) abelianisation \(\pi_{1}(\Sigma_{g})\twoheadrightarrow\mathbb{F}_{r}^{2g}=\mathrm{H}_{1}(\Sigma_{g};\mathbb{F}_{r})\). \(\bullet\) If \(g\in G\setminus\ker\alpha\), then \(g^{r^{k}}\in C\setminus\ker\Psi\). Our proof strategy will be to mimic the construction from the proof of proposition 3.1.9a. In subsections 3.2.2 and 3.2.3 we construct two types of algebras, whose groups of units admit a homomorphism from a surface group "with appropriate linear terms". Both algebras come with data analogous to \((G,C,\rho,\alpha,\Psi)\) from the free case 3.1. We assemble these objects and finalise the proof in subsection 3.2.4. See (37) for a diagram of the whole construction. #### 3.2.1 Notation and preliminary observations Fix the presentation \(\pi_{1}\Sigma_{g}=\left\langle x_{1},y_{1},\ldots,x_{g},y_{g}\ \middle|\ \prod_{i=1}^{g}x_{i}y_{i}x_{i}^{-1}y_{i}^{-1}\right\rangle\). Then \(\mathrm{H}_{1}(\pi_{1}(\Sigma_{g});k)\) is isomorphic to \(k^{2g}\) with basis \(\{x_{i},y_{i}\}_{1\leq i\leq g}\). Let us begin with a more careful analysis of the polynomial in variables \(\{x_{i},y_{i}\}_{1\leq i\leq g}\) which does not vanish on \(\mathbb{F}_{r}^{2g}\setminus\{0\}\). **Observation 3.2.15:** The polynomial provided by lemma 3.0.11 has only three types of monomials: \(\bullet\) type I: powers of single variables \(a_{j}^{v}\) with \(v\equiv 1\mod(r-1)\) \(\bullet\) type II: products of powers of at least three different variables \(\bullet\) type III: monomials of the form \(a_{j}^{v}a_{j^{\prime}}^{v^{\prime}}\) with \(v\equiv 1,v^{\prime}\equiv 0\mod(r-1)\) _Proof:_ The initial polynomial \(P_{1}\) in equation (13) contains only monomials of type I and III. The inductive equation (14) can copy existing monomials, produce new monomials of type II or III from old monomials, and add a monomial of type I. The final homogenisation step preserves every exponent modulo \(r-1\). \(\Box\) Subdivide type III monomials into type IIIa if they involve \(x_{i},x_{j}\) or \(y_{i},y_{j}\) or \(x_{i},y_{j}\) with \(i\neq j\), and type IIIb if they are of the form \(x_{i}^{u}y_{i}^{v}\). #### 3.2.2 Algebra \(\mathcal{M}\) and type I, II and IIIa terms In this section we demonstrate the algebra \(\mathcal{M}\) which admits an interesting mapping \(\pi_{1}\Sigma_{g}\to G_{\mathcal{M}}\). It will build the upper row of the diagram (37), where it will provide type I, II and IIIa terms of a non-vanishing polynomial. Define \[\mathcal{M}=\frac{\mathbb{F}_{r}\langle X_{1},Y_{1},\ldots,X_{g},Y_{g}\rangle}{(X_{i},Y_{i})^{r^{k}+1}+(X_{i}Y_{i},Y_{i}X_{i})} \tag{21}\] that is, a free associative algebra on \(X_{i},Y_{i}\) where we ignore all terms of total degree greater than \(r^{k}\), and also those with adjacent \(X_{i},Y_{i}\). Note we still keep \(X_{i}Y_{j}\) for \(i\neq j\). We define \(G_{\mathcal{M}}\), \(\alpha_{\mathcal{M}}\), \(C_{\mathcal{M}}\) analogously to equations (16), (18) and (19). The symbols \(X_{i}\) and \(Y_{i}\) commute in the algebra \(\mathcal{M}\) (both products \(X_{i}Y_{i}\) and \(Y_{i}X_{i}\) are zero), so \(1+X_{i},1+Y_{i}\) commute in the group \(G_{\mathcal{M}}\). Therefore we have a well-defined group homomorphism \(\rho_{\mathcal{M}}:\pi_{1}\Sigma_{g}\to G_{\mathcal{M}}\) given by \[x_{i}\mapsto 1+X_{i}\qquad\qquad y_{i}\mapsto 1+Y_{i} \tag{22}\] Note this is compatible with abelianisation, i.e. \(\alpha_{\mathcal{M}}\circ\rho_{\mathcal{M}}\) is the canonical map \(\pi_{1}\Sigma_{g}\to\mathrm{H}_{1}(\Sigma_{g};\mathbb{F}_{r})\).
The group \(G_{\cal M}\) has the weaker property **Property 3.2.12b':** For any degree \(r^{k}\) polynomial \(P\) consisting of type I, II or IIIa terms, we can find \(\Psi_{P}:C_{\cal M}\rightarrow\mathbb{F}_{r}\), such that for all \(g\in G_{\cal M}\) we have \(g^{r^{k}}\in C_{\cal M}\) and \(\Psi_{P}\big{(}g^{r^{k}}\big{)}=P\big{(}\alpha_{\cal M}(g)\big{)}\). _Proof:_ The group \(C_{\cal M}\) is isomorphic to \(\mathbb{F}_{r}\) vector space, whose basis are monomials of degree \(r^{k}\) without adjacent \(X_{i},Y_{i}\). When \(P\) is a monomial of type I, II or IIIa, we will construct \(\Psi_{P}\) as a projection onto a suitable basis element (compare with equation 20). First, consider a type I monomial \(P=x_{i}^{r^{k}}\). Then \(X_{i}^{r^{k}}\) is a basis vector, and we can take \(\Psi_{P}\) to be linear the projection onto its span. We do the same for \(y_{i}^{r^{k}}\). Second, suppose \(P\) is a monomial of type II or IIIa. Then it must have two factors which do not come from the same pairs of generators; WLOG they are \(x_{i}^{u_{i}},y_{j}^{v_{j}}\) for \(i\neq j\). Consider the corresponding formal product \(Q=X_{i}^{u_{i}}Y_{j}^{v_{j}}\). Now, iterate over the remaining factors of \(P\). For each factor \(x_{l}^{u_{l}}\) (respectively \(y_{l}^{v_{l}}\)) multiply \(Q\) by the corresponding symbol \(X_{l}^{u_{l}}\) (respectively \(Y_{l}^{v_{l}}\)). At every step the leftmost and rightmost symbol of \(Q\) are different, so either left or right multiplication avoids producing adjacent \(X_{i}Y_{i}\). After exhausting the factors of \(P\) we have produced a formal product \(Q\) which does not vanish in \({\cal M}\) and whose exponents match the corresponding exponents of \(P\). Then \(Q\) is a basis vector for \(C_{\cal M}\) and a projection onto this element is the desired \(\Psi_{P}\). \(\Box\) #### 3.2.3 Algebra \({\cal H}\) and type IIIb terms Here we construct the algebra \({\cal H}\) and an interesting mapping \(\pi_{1}\Sigma_{2}\to G_{\cal H}\). We will use it in the bottom row of the diagram 37, where it will supply type IIIb terms to the non-vanishing polynomial. Consider the commutative ring \(\frac{\mathbb{F}_{r}[A,B]}{(A^{r^{k}+1},A^{r^{k}}B,B^{2})}\) and take its quaternion algebra \[{\cal H}=\left(\frac{\mathbb{F}_{r}[A,B]}{(A^{r^{k}+1},A^{r^{k}}B,B^{2})} \right)[i,j,k] \tag{23}\] The variables \(A,B\) commute with each other and with \(i,j,k\). Note that \({\cal H}\) can also be considered as a polynomial ring over the skew-commutative ring \(\mathbb{F}_{r}[i,j,k]\). As before, we define the multiplicative group \(G_{\cal H}=1+(A,B)\) and its central subgroup \(C_{\cal H}=1+(A,B)^{r^{k}}=1+(A^{r^{k}},A^{r^{k}-1}B)\). Explicitly, \[C_{\cal H}=\left\{1+\sum_{l\in\{1,i,j,k\}}a_{l}A^{r^{k}}l+b_{l}A^{r^{k}-1}Bl \ \Big{|}\ a_{l},b_{l}\in\mathbb{F}_{r}\right\}\cong\mathbb{F}_{r}^{8} \tag{24}\] Forgetting quadratic and higher terms is a homomorphism \(G_{\cal H}\rightarrow\mathbb{F}_{r}^{8}\) given by \[\alpha^{\prime}:1+\sum_{l\in\{1,i,j,k\}}a_{l}Al+b_{l}Bl+(\deg\geq 2)\ \mapsto\ (a_{l},b_{l})_{l\in\{1,i,j,k\}}\in\mathbb{F}_{r}^{8} \tag{25}\] We will need only a part of the information in \(\alpha^{\prime}\). Define \(x_{1},y_{1},x_{2},y_{2}:G_{\cal H}\to\mathbb{F}_{r}\) as the linear projections onto the coefficients of \(Ai,Bj,Aj,Bi\) respectively (i.e. extracting \(a_{i},b_{j},a_{j},b_{i}\)), and let \[\alpha_{\cal H}=(x_{1},y_{1},x_{2},y_{2}):G_{\cal H}\to\mathbb{F}_{r}^{4} \tag{26}\] We now establish two facts about these objects. 
First, there is a homomorphism from a surface group with appropriate linear terms (lemma 3.2.16). Second, \(r^{k}\)-th power produces a type IIIb term from the linearisation (property 3.2.12b"). **Lemma 3.2.16:** There is a homomorphism \(\tau:\pi_{1}\Sigma_{2}\to G_{\cal H}\) satisfying \[\tau(x_{1})\equiv 1+Ai,\quad\tau(y_{1})\equiv 1+Bj,\quad\tau(x_{2})\equiv 1+ Aj,\quad\tau(y_{2})\equiv 1+Bi\qquad\mbox{mod }(A,B)^{2} \tag{27}\] _Proof:_ It is enough to arrange the tail terms in equation (27) so as to kill the relation in \(\pi_{1}(\Sigma_{2})\). We make an ansatz \[\tau(x_{1})=1+Ai+Ek\quad\tau(y_{1})=1+Bj\quad\tau(x_{2})=1+ Aj-Ek\quad\tau(y_{2})=1+Bi \tag{28}\] for a polynomial \(E\in A^{2}\mathbb{F}_{r}[A]\) to be determined later. Then the commutator \([\tau(x_{1}),\tau(y_{1})]\) becomes \[\tau(x_{1}) \tau(y_{1})\tau(x_{1})^{-1}\tau(y_{1})^{-1}=\] \[=(1+Ai+Ek)(1+Bj)(1+Ai+Ek)^{-1}(1+Bj)^{-1}=\] \[=1+[Ai+Ek,Bj](1+A^{2}+E^{2})^{-1}(1-Ai-Ek)(1-Bj)=\] \[=1+2B(1+A^{2}+E^{2})^{-1}(Ak-Ei-A^{2}j-E^{2}j) \tag{29}\] Analogous computation for the second commutator \([\tau(x_{2}),\tau(y_{2})]\) gives \[\tau(x_{2})\tau(y_{2})\tau(x_{2})^{-1}\tau(y_{2})^{-1}=1+2B(1+A^{2}+E^{2})^{-1 }(-Ak-Ej-A^{2}i-E^{2}i) \tag{30}\] The relation would be \[[\tau(x_{1}),\tau(y_{1})][\tau(x_{2}),\tau(y_{2})]=1-2B(1+A^{2}+E^{2})^{-1}(A^ {2}+E+E^{2})(i+j) \tag{31}\] Therefore, it is enough to find a power series \(E\in\mathbb{F}_{r}[\![A]\!]\), with no constant or linear terms, such that \(A^{2}+E+E^{2}=0\). This is given by \[E=\tfrac{1}{2}\left(\left(1-4A^{2}\right)^{\frac{1}{2}}-1\right) \tag{32}\] which is well-defined in \(\mathbb{F}_{r}[\![A]\!]\), because the denominators in Taylor expansion of \(\sqrt{1+x}\) are powers of \(2\). \(\Box\) **Property 3.2.12b":** There exists a group homomorphism \(\Psi_{\mathcal{H}}:C_{\mathcal{H}}\to\mathbb{F}_{r}\) and a polynomial \(R\in\mathbb{F}_{r}[x_{1},y_{1},x_{2},y_{2}]\) consisting only of terms of type II, such that for any \(g\in\mathrm{im}\ \tau\leq G_{\mathcal{H}}\) we have \(\Psi_{\mathcal{H}}\big{(}g^{r^{k}}\big{)}=x_{1}(g)^{r^{k}-1}y_{1}(g)+R\big{(} \alpha_{\mathcal{H}}(g)\big{)}\), where \(x_{1},y_{1}\) are the coordinates in the abelianisation as defined in equation 26 _Proof:_ Take \(g\in G_{\mathcal{H}}\) of the form \(g=1+x_{1}Ai+y_{1}Bj+x_{2}Aj+y_{2}Bi+(\deg\geq 2)\). Then the image of \(g\) under \(\alpha_{\mathcal{H}}\) is the vector \((x_{1},y_{1},x_{2},y_{2})^{\top}\in\mathbb{F}_{r}^{4}\). 
Moreover, \[g^{r^{k}} =1+\big{(}(x_{1}A+y_{2}B)i+(x_{2}A+y_{1}B)j\big{)}^{r^{k}}= \tag{33}\] \[=\big{[}-(x_{1}A+y_{2}B)^{2}-(x_{2}A+y_{1}B)^{2}\big{]}^{\frac{r^ {k}-1}{2}}\cdot\big{[}(x_{1}A+y_{2}B)i+(x_{2}A+y_{1}B)j\big{]}\] The expression standing next to \(j\) is \((-1)^{\frac{r^{k}-1}{2}}\) times \[\big{[}(x_{1}^{2}+x_{2}^{2})A^{2}+2(x_{1}y_{2}+x_{2}y_{1})AB\big{]} ^{\frac{r^{k}-1}{2}}\cdot(x_{2}A+y_{1}B)=\] \[=\big{[}(x_{1}^{2}+x_{2}^{2})^{\frac{r^{k}-1}{2}}A^{r^{k}-1}-(x_{ 1}^{2}+x_{2}^{2})^{\frac{r^{k}-3}{2}}(x_{1}y_{2}+x_{2}y_{1})A^{r^{k}-2}B\big{]} \cdot(x_{2}A+y_{1}B)=\] \[=x_{2}(x_{1}^{2}+x_{2}^{2})^{\frac{r^{k}-1}{2}}A^{r^{k}}+(x_{1}^{ 2}+x_{2}^{2})^{\frac{r^{k}-3}{2}}\big{(}(x_{1}^{2}+x_{2}^{2})y_{1}-(x_{1}y_{2} +x_{2}y_{1})x_{2}\big{)}A^{r^{k}-1}B=\] \[=x_{2}(x_{1}^{2}+x_{2}^{2})^{\frac{r^{k}-1}{2}}A^{r^{k}}+(x_{1}^{ 2}+x_{2}^{2})^{\frac{r^{k}-3}{2}}(x_{1}^{2}y_{1}-x_{1}x_{2}y_{2})A^{r^{k}-1}B \tag{34}\] Recall equation 24: the group \(C_{\mathcal{H}}\) is isomorphic to \(\mathbb{F}_{r}^{8}\) with the basis \[\Big{\{}A^{r^{k}-1}\cdot A^{u}B^{1-u}\cdot l\big{|}u\in\{0,1\},l\in\{1,i,j,k\} \Big{\}} \tag{35}\] Let \(\Psi_{\mathcal{H}}:C_{\mathcal{H}}\to\mathbb{F}_{r}\) be the projection onto \(A^{r^{k}-1}Bj\). Then (possibly up to sign) \[\Psi_{\mathcal{H}}\big{(}g^{r^{k}}\big{)}=(x_{1}^{2}+x_{2}^{2})^{\frac{r^{k}- 3}{2}}(x_{1}^{2}y_{1}-x_{1}x_{2}y_{2}) \tag{36}\] This a sum of a single type IIIb monomial \(x_{1}^{r^{k}-1}y_{1}\) and \(r^{k}-4\) terms of type II, as required. \(\Box\) #### 3.2.4 Proof of proposition 3.2.9b Recall that to prove proposition 3.2.9b we need to find an \(r\)-group \(G\), central \(C\leq G\), homomorphisms \(\rho:\pi_{1}\Sigma_{g}\to G\), \(\alpha:G\to\mathbb{F}_{r}^{2g}\) and \(\Psi:C\to\mathbb{F}_{r}\) such that \(\bullet\) The composition \(\alpha\circ\rho:\pi_{1}\Sigma_{g}\to\mathbb{F}_{r}^{2g}=\mathrm{H}_{1}(\Sigma _{g};\mathbb{F}_{r})\) is the Hurewicz map \(\bullet\) If \(g\in G\setminus\ker\alpha\) then \(g^{r^{k}}\in C\setminus\ker\Psi\). As parts of the construction we will use \((G_{\mathcal{M}},C_{\mathcal{M}},\rho_{\mathcal{M}},\alpha_{\mathcal{M}})\) from section 3.2.2 and \((G_{\mathcal{H}},C_{\mathcal{H}},\tau,\alpha_{\mathcal{H}},\Psi_{\mathcal{H}})\) from section 3.2.3. We will assemble them according to the following diagram: (37) The top row will supply type I, II and IIIa terms of the non-vanishing polynomial, while the bottom row will one-by-one provide type IIIb terms. _Proof of proposition 3.2.9b_ Define \[G^{\prime}=G_{\mathcal{M}}\times G_{\mathcal{H}}^{2g} \tag{38}\] and let \(f_{\mathcal{M}},\{f_{\mathcal{H},i}\}_{1\leq i\leq 2g}\) be the projections onto each factor. We need to build a homomorphism \(\rho:\pi_{1}\Sigma_{g}\to G^{\prime}\). For \(1\leq i\leq g\) let \(h_{2i},h_{2i+1}\) be the projections \(\pi_{1}\Sigma_{g}\to\pi_{1}\Sigma_{2}\) forgetting all but two generator pairs, given by \[h_{2i}:\left\{\begin{array}{ccc}x_{i+a}&\mapsto&x_{a}&\text{ if }a\in\{0,1\}\\ y_{i+a}&\mapsto&y_{a}&\text{ if }a\in\{0,1\}\\ x_{j},y_{j}&\mapsto&1&\text{ if }j\notin\{i,i+1\}\end{array}\right.\qquad h_{2i+1}: \left\{\begin{array}{ccc}x_{i+a}&\mapsto&y_{a}&\text{ if }a\in\{0,1\}\\ y_{i+a}&\mapsto&x_{a}&\text{ if }a\in\{0,1\}\\ x_{j},y_{j}&\mapsto&1&\text{ if }j\notin\{i,i+1\}\end{array}\right. \tag{39}\] Alternatively \(h_{2i+1}=r\circ h_{2i}\), where \(r\) is the automorphism of \(\pi_{1}\Sigma_{2}\) which exchanges \(x_{1,2}\leftrightarrow y_{1,2}\). 
Geometrically these maps come from collapsing \(g-2\) handles of \(\Sigma_{g}\) to points. Define \(\rho=\rho_{\mathcal{M}}\times\prod_{i=1}^{2g}\tau\circ h_{i}\), where \(\tau\) is the homomorphism \(\pi_{1}\Sigma_{g}\to G_{\mathcal{H}}\) provided by lemma 3.2.16. Define \(G=\text{im }\rho\leq G^{\prime}\). We have a map \(\alpha=\alpha_{\mathcal{M}}\circ f_{\mathcal{M}}:G\to\mathbb{F}_{r}^{2g}\). Then \(\alpha\circ\rho\) equals \(\alpha_{\mathcal{M}}\circ\rho_{\mathcal{M}}\), which is the standard map \(\pi_{1}\Sigma_{g}\to H_{1}(\Sigma_{g};\mathbb{F}_{r})\). Note that on \(G\) this is compatible with the abelianisations coming from each factor \(G_{\mathcal{H}}\), in the sense that \(\alpha_{\mathcal{H}}\circ f_{\mathcal{H},i}\big{|}_{G}\) is equal to \(\alpha\) followed by some coordinate projection \(\mathbb{F}_{r}^{2g}\to\mathbb{F}_{r}^{4}\). The subgroup \(C^{\prime}=C_{\mathcal{M}}\times C_{\mathcal{H}}^{2g}\) is central in \(G^{\prime}\), and it contains all the \(r^{k}\)-th powers. We define \(C=G\cap C^{\prime}\). Suppose \(k\) satisfies \(r^{k}>(n-1)(r-1)\), and let \(P\) be the polynomial of degree \(r^{k}\) vanishing nowhere on \(\mathbb{F}_{r}^{2g}\) produced by lemma 3.0.11. By observation 3.2.15, it can be assumed to have the form \[P(x_{1},y_{1},\ldots,x_{g},y_{g})=Q(x_{1},y_{1},\ldots,x_{g},y_{g})+\sum_{i=1} ^{g}a_{i}x_{i}^{r^{k}-1}y_{i}+b_{i}x_{i}y_{i}^{r^{k}-1} \tag{40}\] where \(a_{i},b_{i}\in\mathbb{F}_{r}\), and \(Q\) contains only terms of type I, II or IIIa. By the property 3.2.12b", there exist group homomorphisms \(\Psi_{2i},\Psi_{2i+1}:C_{\mathcal{H}}\to\mathbb{F}_{r}\) such that for any \(g\in G\) with \(\alpha(g)=(x_{i},y_{i})_{i}^{\top}\) we have \[\Psi_{2i}\circ f_{\mathcal{H},i}\left(g^{r^{k}}\right)=x_{i}^{r^{k}-1}y_{i}+R_{ 2i}\big{(}\alpha(g)\big{)}\qquad\Psi_{2i+1}\circ f_{\mathcal{H},i}\left(g^{r^{k }}\right)=x_{i}y_{i}^{r^{k}-1}+R_{2i+1}\big{(}\alpha(g)\big{)} \tag{41}\] where again \(R_{j}\) contain only terms of type I, II or IIIa. By the property 3.2.12b', there is a group homomorphism \(\Psi_{0}:G_{\mathcal{M}}\to\mathbb{F}_{r}\) satisfying \[\Psi_{0}\circ f_{\mathcal{M}}\left(g^{r^{k}}\right)=\left(Q-\sum_{i=1}^{g}(a_ {i}R_{2i}+b_{i}R_{2i+1})\right)\big{(}\alpha(g)\big{)} \tag{42}\] Finally, we combine them as \[\Psi=\left(\Psi_{0}\circ f_{\mathcal{M}}+\sum_{i=1}^{g}a_{i}\Psi_{2i}\circ f _{\mathcal{H},2i}+b_{i}\Psi_{2i+1}\circ f_{\mathcal{H},2i+1}\right)\Bigg{|}_{C} \tag{43}\] By construction \(\Psi\big{(}g^{r^{k}}\big{)}=P\big{(}\alpha(g)\big{)}\). Then \((G,C,\rho,\alpha,\Psi)\) is a witness to proposition 3.2.9b. \(\square\) ## 4 Comments It is natural to ask whether the assumption of solvability in theorem 1.1.2 is necessary. We ask the following **Question 4.0.17:** Let \(g\geq 2\), \(n\geq 0\) and \(\theta:\pi_{1}\Sigma_{g,n}\to G\) be an arbitrary finite quotient. Does exists a cover \(\widetilde{\Sigma}\to\Sigma_{g,n}\) with \(\mathrm{H}_{1}^{\theta\neq 1}(\widetilde{\Sigma};\mathbb{Q})\neq\mathrm{H}_{1}( \widetilde{\Sigma};\mathbb{Q})\)? Our proof fundamentally relies on detecting loops using abelian quotients, so solvable \(G\) seems to be the most general we can hope for with the current approach. Let us note a simple obstacle to potential approches to conjecture 4.0.17: **Observation 4.0.18:** Let \(\widetilde{\Sigma}\to\Sigma\) be a finite normal cover with deck group \(G\), and \(\theta:\pi_{1}\Sigma\to Q\) be a finite nontrivial quotient. 
If the orders of \(G\) and \(Q\) are coprime, then \(\mathrm{H}_{1}^{\theta\neq 1}(\widetilde{\Sigma};\mathbb{Z})=\mathrm{H}_{1}( \widetilde{\Sigma};\mathbb{Z})\). _Proof:_ Consider \(\gamma\in\pi_{1}\widetilde{\Sigma}\). If \(\gamma\notin\ker\theta\) then \([\gamma]\in\mathrm{H}_{1}(\widetilde{\Sigma};\mathbb{Z})\). Otherwise take any \(\lambda\in\pi_{1}\Sigma\setminus\ker\theta\). Then \(\lambda^{|G|}\) is a loop in \(\widetilde{\Sigma}\) and \[\theta\left(\gamma\lambda^{|G|}\right)=\theta\left(\lambda^{|G|}\right)\neq 1 _{Q} \tag{44}\] Therefore \([\gamma\lambda^{|G|}],[\lambda^{|G|}]\in\mathrm{H}_{1}^{\theta\neq 1}(\widetilde{ \Sigma};\mathbb{Z})\). They differ by gamma, so \([\gamma]\in\mathrm{H}_{1}^{\theta\neq 1}(\widetilde{\Sigma};\mathbb{Z})\). Hence \([\gamma]\in\mathrm{H}_{1}^{\theta\neq 1}(\widetilde{\Sigma};\mathbb{Z})\) for any \(\gamma\in\pi_{1}\widetilde{\Sigma}\), so \(\mathrm{H}_{1}^{\theta\neq 1}(\widetilde{\Sigma};\mathbb{Z})=\mathrm{H}_{1}( \widetilde{\Sigma};\mathbb{Z})\). \(\square\)
2302.00502
A support theorem for parabolic stochastic PDEs with nondegenerate Hölder diffusion coefficients
In this paper we work with parabolic SPDEs of the form $$ \partial_t u(t,x)=\partial_x^2 u(t,x)+g(t,x,u)+\sigma(t,x,u)\dot{W}(t,x) $$ with Neumann boundary conditions, where $x\in[0,1]$, $\dot{W}(t,x)$ is the space-time white noise on $(t,x)\in[0,\infty)\times [0,1]$, $g$ is uniformly bounded, and the solution $u\in\mathbb{R}$ is real valued. The diffusion coefficient $\sigma$ is assumed to be uniformly elliptic but only H\"older continuous in $u$. Previously, support theorems for SPDEs have only been established assuming that $\sigma$ is Lipschitz continuous in $u$. We obtain new support theorems and small ball probabilities in this $\sigma$ H\"older continuous case via the recently established sharp two sided estimates of stochastic integrals.
Yi Han
2023-02-01T15:17:03Z
http://arxiv.org/abs/2302.00502v2
The full support property of parabolic stochastic PDEs with nondegenerate Holder diffusion coefficients ###### Abstract. In this paper we work with parabolic SPDEs of the form \[\partial_{t}u(t,x)=\partial_{x}^{2}u(t,x)+g(t,x,u)+\sigma(t,x,u)dW(t,x)\] where \(x\in[0,1]\), \(W\) is the space-time white noise on \([0,1]\) and \(g\) is uniformly bounded. The diffusion coefficient \(\sigma\) is assumed to be uniformly elliptic but only Holder continuous in \(u\). Previously support theorems have only been established when \(\sigma\) is Lipschitz continuous in \(u\). We obtain new support theorems and small ball probabilities in this \(\sigma\) Holder continuous case via the recently established sharp two sided estimates of stochastic integrals. Supported by EPSRC grant EP/W524141/1 ## 1. Introduction The support theorem for stochastic processes has a long history. One of its simplest forms can be phrased as follows: let \(B_{t}\) be a \(d\)-dimensional Brownian motion started from \(0\), then for any \(\epsilon,t>0\) we have \[\mathbb{P}(\sup_{s\leq t}|B_{s}|<\epsilon)>0.\] This follows from the reflection principle of Brownian motion. Via a Girsanov change of measure, we can deduce that for any continuous \(\psi:[0,t]\to\mathbb{R}^{d}\) with \(\psi(0)=0\), we have \[\mathbb{P}(\sup_{s\leq t}|B_{s}-\psi(s)|<\epsilon)>0.\] See for example [3], page 59-60, (6.5) and (6.6). Both claims rely on Gaussian structure of the process \(B\). There is another notion, usually named Stroock-Varadhan support theorem for stochastic processes, that has a different flavour. Consider the parabolic SPDE \[\partial_{t}u(t,x)=\partial_{x}^{2}u(t,x)+g(t,x,u)+\sigma(t,x,u)dW(t,x),\quad x \in[0,1]. \tag{1.1}\] Denote by \(\mathcal{H}:=\{h:[0,T]\times[0,1]\to\mathbb{R},h\text{ absolutely continuous },\dot{h}\in L^{2}([0,T]\times[0,1])\}\) and let \(S(h)\) denote the solution of (1.1) when we take \(\dot{h}\in\mathcal{H}\) in place of the white noise \(dW\). If we assume \(\sigma\) is Lipschitz, \(u(0,\cdot)\) is Holder continuous and \(g\) has sufficiently many derivatives, it is proved in [2] that the support of \(u\) on the Wiener space is the closure of \(S(h),h\in\mathcal{H}\). Several extensions of this support theorem have appeared, see for example [4], [5] and [7], with the same conclusion that the topological support of the solution \(u\) is the closure of \(S(h)\), the solution of the SPDE driven by the control \(h\) in place of the noise. The support theorem to singular SPDEs has been obtained in [10], with the feature that the coset structure in the renormalization group plays a key role in characterizing topological supports of the solution. In all these works, the coefficients \(g\) and \(\sigma\) are nice enough so that the control problem \(S(h)\) can be properly solved, and the support of (1.1) can be characterized in terms of solutions to the control problem \(S(h)\). In this paper we consider support theorems of SPDEs in the lens of the regularization by noise phenomenon. In the setting of (1.1), this means that the coefficients \(g\) or \(\sigma\) (or both) are not necessarily locally Lipschitz continuous. One usually requires \(\sigma\) to be uniformly elliptic, so that roughness of the driving noise can restore well-posedness of the equation. We generally do not have a Stroock-Varadhan support theorem since the ODE or PDE for the control process \(S(h)\) is in general not well-posed. 
However it is still possible to prove the solution has full support or to obtain small ball probability estimates. For additive noise, i.e., \(\sigma=I_{d}\), and assuming the drift \(g\) is not too singular, we can remove the drift via a Girsanov transform and show the solution has full support because the white noise does. A related example for finite dimensional SDEs with singular drifts can be found in [12]. The story is the same when \(\sigma\) is Lipschitz continuous and uniformly elliptic. We are particularly interested in the _robustness_ of the support theorems, in the remaining (hardest) case that \(\sigma\) is only \(\alpha\)-Holder continuous in \(u,\,\alpha\in(0,1]\). More precisely, we assume that for some \(\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{D}>0\) we have \[|\sigma(t,x,u)-\sigma(t,x,v)|\leq\mathcal{D}|u-v|^{\alpha},\quad\mathcal{C}_{1}\leq|\sigma(t,x,u)|\leq\mathcal{C}_{2}\] for any \(t>0\), \(x\in[0,1]\) and \(u,v\in\mathbb{R}\). In this case neither the Girsanov transform nor the control process \(S(h)\) tells us the answer in the same way as before. Before stating our support theorems, it is crucial to discuss the well-posedness issues of (1.1). When \(\sigma\) is \(\alpha\)-Holder continuous in \(u\) with \(\alpha>\frac{3}{4}\), strong existence and uniqueness have been established in [14] (see also [11] for a different perspective, also with \(\alpha>\frac{3}{4}\)). In general we may consider probabilistic weak solutions to (1.1) without knowledge of uniqueness. So long as \(\alpha>0\), the lower bound in our main theorem still holds, and in particular every weak solution to (1.1) has full support. Our strategy to characterize the support of (1.1) goes as follows. Instead of viewing (1.1) merely as an evolution equation, we regard it as a random field and probe into fine geometric properties of its fluctuations. Since \(\sigma\) is non-degenerate, one expects that (1.1) lies in the same universality class as the linear stochastic heat equation \[\partial_{t}u(t,x)=\partial_{x}^{2}u(t,x)+dW(t,x), \tag{1.2}\] also known as the Edwards-Wilkinson universality class. In this universality class we observe nontrivial limiting behavior under the 1:2:4 scaling relation \(u(t,x)\mapsto\epsilon^{-1}u(\epsilon^{-4}t,\epsilon^{-2}x)\). From this scaling relation we can expect fairly sharp two-sided probability estimates for the solution of (1.1) on small scales, even in the case that \(\sigma\) is not constant in \(u\). Such computations have been carried out in [1], obtaining matching small ball probability estimates of solutions to (1.1) when \(\sigma\) is Lipschitz continuous in \(u\), with a sufficiently small Lipschitz constant. When \(\sigma\) is merely Holder continuous, we expect that it induces a highly nonlinear stretching of space and time. To overcome this, we adjust the 1:2 space-time parabolic scaling through a reduction of the temporal length scale while keeping the spatial scale fixed. Consequently, we can still obtain nontrivial (upper and lower) probability estimates of fine scale properties of the solution, and this is already sufficient for us to prove the support theorem. The upper and lower bounds of small ball probabilities in this Holder continuous case however do not have matching exponents in \(\epsilon\), reflecting the fact that the irregularity of \(\sigma\) induces high order stretching in space and time.
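To build intuition for the sup-norm events studied below, one can discretise the stochastic heat equation by a standard explicit finite-difference scheme and estimate the small ball probability by Monte Carlo. The sketch below is purely illustrative and is not part of the argument in this paper; it uses the \(\partial_{x}^{2}\) normalisation of (1.1)-(1.2), a toy Holder-continuous, uniformly elliptic coefficient of our own choosing, zero drift and zero initial data, and it is only informative for moderate values of \(\epsilon\), since the probabilities of interest decay exponentially.

```python
import numpy as np

def sup_norm_small_ball(eps=0.5, T=0.1, J=64, n_paths=200, seed=0):
    """Explicit Euler scheme for du = u_xx dt + sigma(u) dW on [0,1], Neumann b.c.,
    u(0,.) = 0; returns the empirical probability that sup_{t<=T, x} |u| <= eps."""
    rng = np.random.default_rng(seed)
    dx = 1.0 / J
    dt = 0.4 * dx ** 2                                   # CFL-stable time step
    n_steps = int(T / dt)
    sigma = lambda u: 1.0 + 0.5 * np.sqrt(np.abs(u))     # toy 1/2-Hoelder, elliptic coefficient
    count = 0
    for _ in range(n_paths):
        u = np.zeros(J + 1)
        ok = True
        for _ in range(n_steps):
            lap = np.empty_like(u)
            lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
            lap[0] = 2 * (u[1] - u[0]) / dx ** 2         # Neumann boundary (reflected ghost point)
            lap[-1] = 2 * (u[-2] - u[-1]) / dx ** 2
            noise = rng.normal(scale=np.sqrt(dt / dx), size=J + 1)  # discretised space-time white noise
            u = u + dt * lap + sigma(u) * noise
            if np.max(np.abs(u)) > eps:
                ok = False
                break
        count += ok
    return count / n_paths

print(sup_norm_small_ball())
```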
We note that for the linear stochastic heat equation (1.2), we can obtain small ball probabilities where the upper and lower bounds have matching exponents in \(\epsilon\), see [6], page 171, Theorem 5.1. We now state the main theorem. Denote by \(\mathcal{P}\) the \(\sigma\)-field of the noise \(W(t,x)\), generated by functions of the form \(f(x,t,\omega)=X(\omega)\cdot 1_{A}(x)\cdot 1_{(a,b]}(t)\), with \(A\subset[0,1]\) and \(X\) some \(\mathcal{F}_{a}\)-measurable random variable. We say \(h\in\mathcal{PC}_{b}^{2}\) if almost surely, \(h,\partial_{t}h,\partial_{x}^{2}h\) are bounded by a fixed constant \(C\). **Theorem 1.1**.: _Consider solutions \(u(t,x)\) to the stochastic heat equation_ \[\partial_{t}u(t,x)=\frac{1}{2}\partial_{x}^{2}u(t,x)+g(t,x,u(t,x))+\sigma(t,x,u(t,x))dW(t,x),\quad u(0,x)=u_{0}(x). \tag{1.3}\] _Assume \(u_{0},h\in\mathcal{PC}_{b}^{2}\) with_ \[\sup_{x\in[0,1]}|u_{0}(x)-h(0,x)|<\epsilon/2,\] _and that for some constants \(\mathcal{D},\mathcal{C}_{1},\mathcal{C}_{2}>0\), \(\alpha\in(\frac{3}{4},1],\) we have_ \[|\sigma(t,x,u)-\sigma(t,x,v)|\leq\mathcal{D}|u-v|^{\alpha},\] \[\mathcal{C}_{1}\leq|\sigma(t,x,u)|\leq\mathcal{C}_{2}\] _for all \(x\in[0,1]\), \(u,v\in\mathbb{R}\) and \(t\geq 0\)._ _Then for any \(\beta>2-\alpha\) we may find positive constants \(C_{0},C_{1},C_{2},C_{3}\) and \(\epsilon_{0}\) depending on \(\beta,\mathcal{C}_{1},\mathcal{C}_{2}\) and \(\sup_{t,x,u}|g(t,x,u)|\), such that for any \(0<\epsilon<\epsilon_{0}\), we have_ \[C_{0}\exp(-\frac{C_{1}T}{\epsilon^{2+4\beta}})\leq P(\sup_{0\leq t\leq T,x\in[0,1]}|u(t,x)-h(t,x)|\leq\epsilon)\leq C_{2}\exp(-\frac{C_{3}T}{(1+\mathcal{D}^{2})\epsilon^{4+2\alpha}}). \tag{1.4}\] _If we only assume \(\alpha>0\), then the lower bound in (1.4) holds, that is,_ \[C_{0}\exp(-\frac{C_{1}T}{\epsilon^{2+4\beta}})\leq P(\sup_{0\leq t\leq T,x\in[0,1]}|u(t,x)-h(t,x)|\leq\epsilon).\] Since \(h\) is arbitrary, this in particular implies that the solution \(u\) has full support on Wiener space with respect to the supremum norm. _Remark 1.2_.: The technical assumption \(\alpha\in(\frac{3}{4},1]\) is only used to match with the well-posedness results in [14] (see also [11]). Via a compactness argument it is easy to show existence of a (probabilistic) weak solution to (1.3) for any \(\alpha>0\) (see for example [9]). A natural question is whether every weak solution to (1.3) has the full support property. For the upper bound to be proved in Section 2.3, we are not sure whether the proof carries over to arbitrary \(\alpha>0\), because we have to solve another SPDE (2.3) 1 on the same probability space. This procedure requires strong well-posedness for every diffusion coefficient with the given Holder continuity. Footnote 1: with the diffusion coefficient \(\sigma\) only \(\alpha\)-Holder continuous in the solution argument For the lower bound, which is more relevant to the full support property, a careful check of the proof in Section 2.4 shows that the lower bound in (1.4) holds for any weak solution of (1.3) whenever \(\alpha>0\). Therefore, every weak solution to (1.3) must have full support. _Remark 1.3_.: We have for simplicity worked on the unit interval \([0,1]\), but everything carries over to finite intervals \([0,J]\) for any \(J>0\). In this paper we assume the solution \(u\) is real-valued, but everything carries over to the vector valued case \(\mathbf{u}(t,x)\in\mathbb{R}^{d}\), \(d\in\mathbb{N}_{+}\) without change. These extensions can be found in [1]. 
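To make the scaling in (1.4) concrete, the following is a minimal numerical sketch (not part of the proof): it discretizes (1.3) with \(g=0\) by an explicit finite-difference Euler-Maruyama scheme and Monte-Carlo estimates the small ball probability for \(h=0\). The grid sizes, the number of sample paths, the particular Holder-continuous \(\sigma\), and the periodic boundary condition are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

# Explicit finite-difference / Euler-Maruyama sketch for
#   du = 0.5 * u_xx dt + sigma(t, x, u) dW(t, x)  on [0, 1],  u(0, .) = 0,
# used to estimate the small ball probability P( sup_{t<=T, x} |u(t,x)| <= eps ).

def sigma(t, x, u, alpha=0.9, D=0.5, C1=1.0):
    # uniformly elliptic and alpha-Holder in u: C1 <= sigma <= C1 + D
    return C1 + D * np.abs(np.sin(u)) ** alpha

def small_ball_probability(eps, T=0.1, nx=64, n_paths=200, seed=0):
    rng = np.random.default_rng(seed)
    dx = 1.0 / nx
    dt = 0.25 * dx ** 2                  # explicit-scheme stability for 0.5 * u_xx
    nt = int(T / dt)
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    hits = 0
    for _ in range(n_paths):
        u = np.zeros(nx)
        inside = True
        for n in range(nt):
            lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx ** 2   # periodic Laplacian
            noise = rng.standard_normal(nx) * np.sqrt(dt / dx)           # discretized space-time white noise
            u = u + 0.5 * lap * dt + sigma(n * dt, x, u) * noise
            if np.max(np.abs(u)) > eps:
                inside = False
                break
        hits += inside
    return hits / n_paths

if __name__ == "__main__":
    for eps in (0.5, 0.25, 0.1):
        print(eps, small_ball_probability(eps))
```

One would expect the estimated probabilities to decay rapidly as \(\epsilon\) shrinks, in qualitative agreement with the exponential bounds in (1.4).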
There are a few remaining questions. One of them is to obtain support theorems in Holder semi-norm rather than the supremum norm. Fairly sharp results have been obtained when \(\sigma\) is Lipschitz continuous in \(u\) (see [8] for recent progress), but adapting estimates in the existing literature to our Holder continuous \(\sigma\) seems a bit out of reach. Another, more fundamental, question is the following: if we assume \(\sigma\) is uniformly elliptic, it is not clear whether the assumption that the Holder index \(\alpha>\frac{3}{4}\) is necessary for (strong or weak) well-posedness of (1.3). We believe that \(\alpha>0\) is enough for weak well-posedness but could not give a proof. Note that when \(\sigma\) is not assumed to be uniformly elliptic, then \(\alpha>\frac{3}{4}\) is a sharp condition, see [13] for the case \(\alpha<\frac{3}{4}.\) ## 2. Proof of main theorem ### Reduction to simple cases We first show that after some simple reductions we can assume \(g=0\) and \(h=0\). These reductions follow from the Girsanov theorem and the non-degeneracy of \(\sigma\). The solution to (1.3) can be reformulated as \[\partial_{t}u(t,x)=\frac{1}{2}\partial_{x}^{2}u(t,x)+\sigma(t,x,u(t,x))[dW(t,x)+\sigma^{-1}g(t,x,u(t,x))].\] We consider the probability law \(Q_{t}\) defined as \[\frac{dQ_{t}}{dP_{t}}=\exp\left(\int_{0}^{t}\int_{0}^{1}\sigma^{-1}g(s,x,u(s,x))dW(s,x)-\frac{1}{2}\int_{0}^{t}\int_{0}^{1}(\sigma^{-1}g(s,x,u(s,x)))^{2}dxds\right). \tag{2.1}\] By the Girsanov theorem, \(dW(t,x)+\sigma^{-1}g(t,x,u(t,x))\) is a space-time white noise with respect to \(Q_{t}\). Denote by \(A\) the event \[A=\{\sup_{s\in[0,T],y\in[0,1]}|u(s,y)-h(s,y)|<\epsilon\},\] then by the Cauchy-Schwarz inequality \[Q_{T}(A)\leq\sqrt{P_{T}(A)}\sqrt{E(\frac{dQ_{T}}{dP_{T}})^{2}}\leq\sqrt{P_{T}(A)}M, \tag{2.2}\] where \(M\) depends only on \(T,\mathcal{C}_{1}\) and \(\sup_{t,x,y}|g(t,x,y)|.\) This implies that the lower bound in (1.4) for general \(g\) can be deduced from the lower bound in the case \(g=0.\) For the upper bound, a similar argument holds: one only needs to swap \(P\) and \(Q\) in (2.2) and replace \(g\) by \(-g\) in (2.1). Now we show why we can take \(h=0\). This is outlined on page 6 of [1], but we reproduce it here for completeness. Let \(H:=\partial_{t}-\frac{1}{2}\partial_{x}^{2}\) and consider the process \[w(t,x)=u(t,x)-u_{0}(x)-h(t,x)+h_{0}(x),\] so that \(w(0,x)=0.\) If we set \(\sigma_{1}(t,x,w)=\sigma(t,x,u)\) and \[g_{1}(t,x,w)=g(t,x,u)-Hu_{0}(x)-Hh(t,x)+Hh_{0}(x),\] we have \[\partial_{t}w(t,x)=\frac{1}{2}\partial_{x}^{2}w(t,x)+g_{1}(t,x,w)+\sigma_{1}(t,x,w)dW(t,x).\] Since \(u_{0},h\in\mathcal{PC}_{b}^{2}\), we have \(\sup_{t,x,\omega}|g_{1}(t,x,\omega)|<\infty\), so we are reduced to the case \(h=0\) and \(u(0,x)\equiv 0,x\in[0,1].\) ### Sharp two-sided estimates Recall the heat kernel on \([0,1]\) is given by \[G(t,x)=\sum_{n\in\mathbb{Z}}(2\pi t)^{-1/2}\exp(-\frac{(x+n)^{2}}{2t}).\] Consider the noise term \(\mathbf{N}\) defined as \[\mathbf{N}(t,x):=\int_{0}^{t}\int_{0}^{1}G(t-s,x-y)\sigma(s,y,u(s,y))W(dyds).\] We quote the following large deviations estimate from [1], Proposition 3.4 and Remark 3.1, which is a very precise formulation of the 1:2:4 scaling of Gaussian processes: **Proposition 2.1**.: _Assume that \(\sup_{s,y}|\sigma(s,y,u(s,y))|\leq\mathcal{C}<\infty\). 
Then we can find universal constants \(K_{1}\) and \(K_{2}\) such that, for any \(\alpha,\lambda,\epsilon>0\),_ \[\mathbb{P}\left(\sup_{0\leq t\leq\alpha\epsilon^{4},x\in[0,\epsilon^{2}]}| \mathbf{N}(t,x)|>\lambda\epsilon\right)\leq\frac{K_{1}}{1\wedge\sqrt{\alpha}} \exp(-K_{2}\frac{\lambda^{2}}{\mathcal{C}^{2}\sqrt{\alpha}}).\] We fix a sufficiently small \(c_{0}>0\) such that \(0<c_{0}<\max\{(\frac{K_{2}}{36\log K_{1}\mathcal{C}_{2}^{2}})^{2},1\}\), and define the discretized mesh of time as: 2 Footnote 2: Later we will introduce a different scheme to divide time intervals, which doesn’t follow the 1:2 parabolic scaling. \[t_{n}=nc_{0}\epsilon^{4},\quad n\geq 0,\] and denote by \(I_{n}:=[t_{n},t_{n+1}]\) the time interval with numbering \(n\). Choose some \(\theta=\theta(\mathcal{C}_{1},\mathcal{C}_{2})>0\) sufficiently large (with the precise condition given in [1], (2.11)) and fix \(c_{1}^{2}=\theta c_{0}\), the spatial mesh points are chosen as \[x_{n}=nc_{1}\epsilon^{2},n\geq 0.\] This time-space mesh respects the parabolic 1:2 scaling. Fix a terminal time \(T>0\) and define the terminal index \[n_{1}:=\min\{n\geq 1:t_{n}>T\},\quad n_{2}:=\min\{n\geq 1:x_{n}>1\}.\] Write \(p_{i,j}=(t_{i},x_{j})\), and consider the following two series of events \[A_{n}=\{|u(t_{n+1},x)|\leq\frac{\epsilon}{3},\quad x\in[0,1],\text{ and }|u(t,x)|\leq\epsilon,t\in I_{n},x\in[0,1]\},\] and \[F_{n}=\{|u(p_{nj})|<\epsilon\text{ for all }j\leq n_{2}-2\}.\] The strategy of proof is first fix the \(u\) component of \(\sigma\) and obtain an estimate in the Gaussian case, then deduce the general case via an interpolation argument. For the Gaussian case (when \(\sigma\) does not depend on \(u\)), we quote the following result from [1], Proposition 2.1: **Proposition 2.2**.: _Under the assumptions of Theorem 1.1, assume further that \(g=0\), \(u_{0}(x)\equiv 0\) and \(\sigma(t,x,u)\) does not depend on \(u\)._ _Then there exists constants \(\epsilon_{0},C_{4},C_{5}>0\) which depend only on \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) such that for any \(0<\epsilon<\epsilon_{0},\)_ \[P(F_{n}\mid\cap_{k=0}^{n-1}F_{k})\leq C_{4}\exp(-C_{5}\epsilon^{-2}),\] _and we can find constants \(C_{6},C_{7}>0\) which depend only on \(\mathcal{C}_{1},\mathcal{C}_{2}\) such that for any \(0<\epsilon<\epsilon_{0},\)_ \[P(A_{n}\mid\cap_{k=0}^{n-1}A_{k})\geq C_{6}\exp(-C_{7}\epsilon^{-2}).\] Now we prove the general case (i.e., \(\sigma\) depends on \(u\)). ### Upper bound, general case Define a function \[f_{\epsilon}(z)=\begin{cases}z,\quad|z|<\epsilon,\\ \frac{\epsilon}{|z|}z,\quad|z|>\epsilon,\end{cases}\] so that \(|f_{\epsilon}(z)|\leq\epsilon\) and \(f_{\epsilon}\) is Lipschitz continuous. We solve the following SPDE \[\partial_{t}v(t,x)=\frac{1}{2}\partial_{x}^{2}v(t,x)+\sigma(t,x,f_{\epsilon}( v(t,x)))\cdot dW(t,x) \tag{2.3}\] with \(v(0,x)=u_{0}(x)\), which is well posed because \(\sigma(t,x,f_{\epsilon}(u))\) is \(\alpha\)-Holder continuous in \(u\), for \(\alpha>\frac{3}{4}\). As long as \(|u(t,x)|\leq\epsilon\) for all \(x\in[0,1]\) and \(t\in[0,t_{1}]\), we have \(v(t,x)=u(t,x)\), so we proceed with the proof for \(v\). The point is to compare \(v\) with an auxiliary process \(v_{g}\) defined by \[\partial_{t}v_{g}(t,x)=\frac{1}{2}\partial_{x}^{2}v_{g}(t,x)+\sigma(t,x,f_{ \epsilon}(u_{0}(x)))\cdot dW(t,x),\] with \(v_{g}(0,x)=u_{0}(x)\), where the diffusion coefficient is independent of \(v_{g}\). 
The difference process \(D(t,x):=v(t,x)-v_{g}(t,x)\) is a stochastic integral satisfying \[D(t,x)=\int_{0}^{t}\int_{0}^{1}G(t-s,x-y)[\sigma(s,y,f_{\epsilon}(v(s,y)))-\sigma(s,y,f_{\epsilon}(u_{0}(y)))]\cdot W(dyds).\] Define \[H_{j}=\{|v(p_{1j})|\leq\epsilon\}\] and consider the events \[A_{1,j}=\{|v_{g}(p_{1j})|\leq 2\epsilon\},\] \[A_{2,j}=\{|D(p_{1j})|\geq\epsilon\}.\] It is clear that \(H_{j}\subset A_{1,j}\cup A_{2,j}\). Define another sequence of events \[B_{n}=\{|u(t,x)|\leq\epsilon,\,t\in I_{n-1},x\in[0,1]\},\quad n\geq 1.\] Then clearly \(B_{n}\subset F_{n}\) and on \(B_{n}\), we have \(u(t_{n},x)=v(t_{n},x)\). Therefore \[P(B_{1})\leq P(\cap_{j=1}^{n_{2}-2}H_{j})\leq P(\cap_{j=1}^{n_{2}-2}(A_{1,j}\cup A_{2,j})).\] An elementary set-inclusion argument implies \[P(B_{1})\leq P(\cap_{j=1}^{n_{2}-2}A_{1,j})+\sum_{j=1}^{n_{2}-2}P(A_{2,j}).\] We now apply Proposition 2.2 to the process \(v_{g}\) to deduce \[P(\cap_{j=1}^{n_{2}-2}A_{1,j})=P(|v_{g}(p_{1,j})|\leq 2\epsilon,\quad j=1,\cdots,n_{2}-2)\leq C_{2}\exp(-C_{3}\epsilon^{-2}).\] By Holder continuity of \(\sigma\) in \(u\), we deduce that \[|\sigma(s,y,f_{\epsilon}(v(s,y)))-\sigma(s,y,f_{\epsilon}(u_{0}(y)))|\leq\mathcal{D}(2\epsilon)^{\alpha},\] so that by Proposition 2.1, we have for \(j=1,\cdots,n_{2}-2\), \[P(A_{2,j})\leq K_{1}\exp(-\frac{K_{2}}{4\epsilon^{2\alpha}\mathcal{D}^{2}\sqrt{c_{0}}}).\] Therefore \[P(B_{1}) \leq C_{2}\exp(-C_{3}\epsilon^{-2})+\sum_{j=1}^{n_{2}-2}K_{1}\exp(-\frac{K_{2}}{4\epsilon^{2\alpha}\mathcal{D}^{2}\sqrt{c_{0}}})\] \[\leq C_{2}\exp(-C_{3}\epsilon^{-2})+\frac{1}{c_{1}\epsilon^{2}}K_{1}\exp(-\frac{K_{2}}{4\epsilon^{2\alpha}\mathcal{D}^{2}\sqrt{c_{0}}})\] \[\leq C_{4}\exp\left(-\frac{C_{5}}{8(1+\mathcal{C}_{2})^{2}(1+\mathcal{D}^{2})\epsilon^{2\alpha}\sqrt{c_{0}}}\right),\] whenever \(\epsilon\) is small enough: the \(\epsilon^{-2\alpha}\) term in the exponent wins over the \(\epsilon^{-2}\) term, so we keep the former. 3 Footnote 3: We have used another approximation which follows from the elementary inequality \[\sup_{x\geq 0}x^{n}e^{-Cx}<\infty\] for any \(C>0\) and \(n\geq 0\). \(C_{4}\) and \(C_{5}\) are universal constants that depend only on \(\mathcal{C}_{2}\). The expression shows that when \(\sigma\) is merely Holder continuous, i.e. \(\alpha<1\), the \(\epsilon^{2\alpha}\) term dominates in the upper bound. The upper and lower bounds we obtain will not have matching exponents of \(\epsilon\) (they do if \(\alpha=1\)), but both bounds are nontrivial and in particular they lead to the desired support theorem. By the Markov property, for each \(n\leq n_{1}\), \[P(B_{n}\mid\cap_{j=1}^{n-1}B_{j})\leq\exp(-\frac{C_{7}}{(1+\mathcal{D}^{2})\epsilon^{2\alpha}})\] where \(C_{7}\) depends only on \(\mathcal{C}_{1},\mathcal{C}_{2}\) and \(\mathcal{D}\). Therefore \[P(|u(t,x)|\leq\epsilon,t\in[0,T],x\in[0,1]) =P(\cap_{j=1}^{n_{1}-1}B_{j})\] \[\leq\exp(-\frac{C_{7}}{(1+\mathcal{D}^{2})\epsilon^{2\alpha}})^{\frac{T}{\epsilon^{4}}}\] \[\leq\exp(-\frac{C_{7}T}{(1+\mathcal{D}^{2})\epsilon^{2\alpha+4}}).\] This establishes the upper bound in (1.4). ### Lower bound, general case We now proceed to prove the corresponding lower bound. The argument roughly follows that in [1], while the last key estimates are different. Fix some \(\beta>1\) to be determined later and consider a new time mesh as follows: \[\hat{t}_{n}:=nc_{0}\epsilon^{4\beta},n\geq 0,\] and the corresponding time intervals \(\hat{I}_{n}:=[\hat{t}_{n},\hat{t}_{n+1}]\). 
This introduces a finer grid of time when \(\epsilon\) is sufficiently small. We analogously define the events \[\hat{A}_{n}:=\{|u(\hat{t}_{n+1},x)|\leq\frac{\epsilon}{3},\quad x\in[0,1],\text { and }|u(t,x)|\leq\epsilon,t\in\hat{I}_{n},x\in[0,1]\},\] Assuming that \(|u_{0}(x)|\leq\frac{\epsilon}{3},\)\(x\in[0,1]\). Define the stopping time \[\tau=\inf\{t\geq 0:\sup_{x\in[0,1]}|u(t,x)-u_{0}(x)|>2\epsilon\},\] such that on the event \(\hat{A}_{0}\) we have \(\tau\geq\hat{t}_{1}.\) Consider the process \[\widetilde{D}(t,x)=\int_{0}^{t}\int_{0}^{1}G(t-s,x-y)[\sigma(s,y,u(s\wedge\tau, y))-\sigma(s,y,u_{0}(y))]W(dyds),\] and the auxiliary comparison process \(u_{g}\) solving \[\partial_{t}u_{g}=\frac{1}{2}\partial_{x}^{2}u_{g}+\sigma(t,x,u_{0}(x))dW(t,x ),\quad u_{g}(0,x)=u_{0}(x),\] and we write \(u(t,x)=u_{g}(t,x)+D(t,x)\) as before, so that \[D(t,x)=\int_{0}^{t}\int_{0}^{1}G(t-s,x-y)[\sigma(s,y,u(s,y))-\sigma(s,y,u_{0}( y))]W(dyds).\] It is clear that \(D(t,x)=\widetilde{D}(t,x)\) whenever \(\tau\geq\hat{t}_{1}\). Consider the event \[\widetilde{B}_{0}:=\{|u_{g}(\hat{t}_{1},x)|\leq\frac{\epsilon}{6},x\in[0,1], \quad|u_{g}(t,x)|\leq\frac{2\epsilon}{3}\forall t\in\hat{I}_{0},x\in[0,1]\}.\] Then we have the following sequence of set inclusions \[P(\hat{A}_{0}) \geq P\left(\widetilde{B}_{0}\cap\{\sup_{0\leq t\leq\hat{t}_{1},x \in[0,1]}|D(t,x)|\leq\frac{\epsilon}{6}\}\right) \tag{2.4}\] \[=P\left(\widetilde{B}_{0}\cap\{\sup_{0\leq t\leq\hat{t}_{1},x\in [0,1]}|\widetilde{D}(t,x)|\leq\frac{\epsilon}{6}\}\right)\] \[\geq P(\widetilde{B}_{0})-P(\sup_{0\leq t\leq\hat{t}_{1},x\in[0,1 ]}|\widetilde{D}(t,x)|>\frac{\epsilon}{6}).\] The equality in the second line needs some explanation. If \(\tau\geq\hat{t}_{1}\), then \(D=\widetilde{D}\) on \([0,\hat{t}_{1}]\). On \(\widetilde{B}_{0}\cap\{\tau<\hat{t}_{1}\}\), one must have \(\sup_{x}|u_{g}(\tau,x)-u_{0}(x)|>\epsilon\), so that \(\sup_{x}|\widetilde{D}(\tau,x)|>\epsilon\). Since \(\beta>1\) and \(\epsilon<1\), one must have \(\hat{t}_{1}<t_{1}\), so that by Proposition 2.2, \[P(\widetilde{B}_{0})\geq P(\bar{B}_{0})\geq C_{1}\exp(-\frac{C_{2}}{\epsilon^ {2}}),\] where \(C_{1},C_{2}\) depend only on \(\mathcal{C}_{1},\mathcal{C}_{2}\), and where we define \[\bar{B}_{0}:=\{|u_{g}(t_{1},x)|\leq\frac{\epsilon}{6},x\in[0,1],\quad|u_{g}(t, x)|\leq\frac{2\epsilon}{3}\forall t\in I_{0},x\in[0,1]\}.\] It remains to estimate the probability of the last event in (2.4). \[P(\sup_{0\leq t\leq\hat{t}_{1},x\in[0,1]}|\widetilde{D}(t,x)|> \frac{\epsilon}{6}) \leq\frac{1}{\sqrt{c_{0}}\epsilon^{2}}P\left(\sup_{0\leq t\leq \hat{t}_{1},x\in[0,\sqrt{c_{0}}\epsilon^{2}]}|\widetilde{D}(t,x)|>\frac{ \epsilon}{6}\right)\] \[\leq\frac{1}{\sqrt{c_{0}}\epsilon^{2}}\frac{1}{\epsilon^{2\beta -2}}K_{1}\exp(-\frac{K_{2}}{\epsilon^{2\alpha}\mathcal{D}^{2}\mathcal{C}_{2}^ {2}\sqrt{c_{0}}\epsilon^{2\beta-2}}).\] To be a bit more precise about the exponent of \(\epsilon\), we take \(\alpha=\epsilon^{4\beta-4}\) and the constant \(\mathcal{C}:=\mathcal{D}(2\epsilon)^{\alpha}\) in the setting of Proposition 2.1. 
Comparing the exponents, we see that as long as we take \(\alpha+\beta>2\), we can find some \(C_{8},C_{9}\) depending only on \(\mathcal{C}_{1},\mathcal{C}_{2}\) and \(\mathcal{D}\) such that \[P(\sup_{0\leq t\leq\hat{t}_{1},x\in[0,1]}|\widetilde{D}(t,x)|>\frac{\epsilon}{ 6})\leq C_{8}\exp(-\frac{C_{9}}{\epsilon^{2(\alpha+\beta-1)}})\] and finally \[\mathbb{P}(\hat{A}_{0})\geq C_{1}\exp(-\frac{C_{2}}{\epsilon^{2}})-C_{8}\exp(- \frac{C_{9}}{\epsilon^{2(\alpha+\beta-1)}}),\] with the first term dominating. So we conclude that when \(\epsilon>0\) is sufficiently small, we can find \(C_{3},C_{4}\) depending only on \(\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{D}\) such that \[\mathbb{P}(\hat{A}_{0})\geq C_{3}\exp(-\frac{C_{4}}{\epsilon^{2}}).\] By the Markov property, for each \(n\leq\hat{n}_{1}:=\lfloor\frac{T}{\epsilon^{4\beta}}\rfloor\) we have \[\mathbb{P}(\hat{A}_{n}\mid\cap_{j=0}^{n-1}\hat{A}_{j})\geq C_{3}\exp(-\frac{C _{4}}{\epsilon^{2}}).\] Thus for some constant \(C_{1}>0\) depending on \(\mathcal{C}_{1},\mathcal{C}_{2}\), \[P\left(|u(t,x)|\leq\epsilon,t\in[0,T],x\in[0,1]\right)\geq\exp(-\frac{C_{1}}{ \epsilon^{2}}\frac{T}{\epsilon^{4\beta}})\geq\exp(-\frac{C_{1}T}{\epsilon^{4 \beta+2}}).\] The various constants \(C_{1},C_{2},\cdots,C_{8},C_{9}\) depend only on \(\mathcal{C}_{1},\mathcal{C}_{2}\) and may change from line to line. This establishes the lower bound in (1.4).
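As a quick numerical illustration of the mismatch between the two exponents in (1.4), the short sketch below tabulates the lower-bound exponent \(2+4\beta\), with \(\beta\) taken just above \(2-\alpha\), against the upper-bound exponent \(4+2\alpha\); they coincide only in the Lipschitz case \(\alpha=1\). The margin `delta` is an arbitrary illustrative choice, not a quantity from the proof.

```python
# Compare the epsilon-exponents of the two sides of (1.4):
# lower bound ~ exp(-C T / eps**(2 + 4*beta)) with beta > 2 - alpha,
# upper bound ~ exp(-C T / eps**(4 + 2*alpha)).

def exponents(alpha, delta=0.01):
    beta = 2.0 - alpha + delta          # an admissible choice of beta
    return 2.0 + 4.0 * beta, 4.0 + 2.0 * alpha

for alpha in (0.25, 0.5, 0.75, 0.9, 1.0):
    lower, upper = exponents(alpha)
    print(f"alpha={alpha:4.2f}  lower exponent={lower:5.2f}  upper exponent={upper:5.2f}")
```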
2303.01521
Probing long-lived axions at the KOTO experiment
While the main goal of the J-PARC KOTO experiment is to measure the rare decay $K_L \to \pi^0 \nu \bar \nu$, the unique setup of KOTO raises the possibility to search for physics beyond the Standard Model, in an attempt to probe parts of the parameter space which are not covered by other experiments. In this paper, we test the possibility of using KOTO to search for heavy QCD axions, or axion-like particles, a well-motivated extension of the Standard Model emerging in a variety of models. In particular, we estimate the sensitivity of the current KOTO setup as well as the KOTO Step-2 for various benchmark scenarios of axion coupling to the Standard Model. We find that KOTO Step-2 can probe new regions in the parameter space, while KOTO with its current form can only reaffirm the existing bounds. The obtained axion datasets are available as an update of the public code of the ALPINIST framework, including implementation of KOTO setups in the simulation, allowing for interpretation of various analyses as searches for axions in custom models.
Yoav Afik, Babette Döbrich, Jan Jerhot, Yotam Soreq, Kohsaku Tobioka
2023-03-02T19:00:01Z
http://arxiv.org/abs/2303.01521v3
# Probing Long-lived Axions at the KOTO Experiment ###### Abstract While the main goal of the J-PARC KOTO experiment is to measure the rare decay \(K_{L}\to\pi^{0}\nu\bar{\nu}\), the unique setup of KOTO raises the possibility to search for physics beyond the Standard Model, in an attempt to probe parts of the parameter space which are not covered by other experiments. In this paper, we test the possibility of using KOTO to search for heavy QCD axions, or axion-like particles, a well-motivated extension of the Standard Model emerging in a variety of models. In particular, we estimate the sensitivity of the current KOTO setup as well as the KOTO Step-2 for various benchmark scenarios of axion coupling to the Standard Model. We find that KOTO Step-2 can probe new regions in the parameter space, while KOTO with its current form can only reaffirm the existing bounds. The obtained axion datasets are available as an update of the public code of the Alpinist framework, including implementation of KOTO setups in the simulation, allowing for interpretation of various analyses as searches for axions in custom models. + Footnote †: preprint: IRMP-CP3-23-10, MPP-2023-40, KEK-TH-2499 ## I Introduction Rare Kaon decays are a well-known test of the Standard Model (SM) and serve as a very sensitive probe of New Physics (NP). Two golden channels are \(K^{+}\to\pi^{+}\nu\bar{\nu}\) and \(K_{L}\to\pi^{0}\nu\bar{\nu}\), which are very rare decays with branching ratio (BR) at \(\sim 10^{-11}\) level. The NA62 [1] and KOTO [2] experiments aim at measuring these BRs for the first time, with the latest results given in [3; 4]. Besides testing the SM, including the Grossman-Nir bound [5], these measurements probe feebly interacting particles (FIPs) which contribute to the \(K\to\pi+\) invisible decay, see _e.g._ the recent review [6]. Both KOTO and NA62 are based on proton-fixed targets, with 30 GeV and 400 GeV beams, respectively, and far detection systems. They can effectively serve as beam-dump experiments probing NP without relying on Kaon decays since NP particles can be produced already in the target. This was pointed out in Ref. [7] in the context of NA62 by using a special running mode of the experiment, see also [8]. This beam-dump potential was pointed out as a possible explanation for the three candidate events in the KOTO 2019 data [9]. In this work, we study the potential of the KOTO experiment to serve as a proton beam-dump for sub-GeV NP searches. Unlike NA62, KOTO can probe long-lived new particles in a di-gamma final state during its Kaon physics running without needing a dedicated trigger or run, in a completely parasitic scheme, see Fig. 1. This limitation is caused by the NA62 trigger requiring the presence of a Kaon in the standard data-taking. Due to the differences in the beam energy and other geometrical factors, we expect that KOTO will explore a different region of the parameter space than NA62 and past proton beam-dump experiments. In particular, we show that the future run of KOTO can search for NP particles, which are associated with solutions to the Strong CP problem. Our primary benchmark model is the axion, \(a\), which is a compelling addition to the SM because it potentially solves the Strong CP problem [10; 11; 12; 13] and it can be by itself a dark matter (DM) candidate [14; 15; 16]. A similarly motivated case is the axion-like-particle (ALP) as a pseudo-Nambu-Goldstone boson of spontaneously broken global symmetry at a high scale \(f_{a}\). 
It can also provide a portal to the dark sector [17; 18; 19; 20]. In both cases, we focus on the SM gauge field interactions, which are given by \[\mathcal{L} \supset c_{GG}\frac{\alpha_{s}a}{8\pi f_{a}}G^{a}_{\mu\nu}\tilde{G}^{a \mu\nu}+c_{BB}\frac{\alpha_{Y}a}{8\pi f_{a}}B_{\mu\nu}\tilde{B}^{\mu\nu}\] \[+c_{WW}\frac{\alpha_{2}a}{8\pi f_{a}}W_{\mu\nu}\tilde{W}^{\mu\nu}\,, \tag{1}\] where \(c_{GG,BB,WW}\) are dimensionless parameters. The SM gauge field strength is given by \(G^{a}_{\mu\nu}\), \(B_{\mu\nu}\) and \(W_{\mu\nu}\) for the strong, hypercharge, and weak interactions, respectively, and \(\alpha_{s}=g_{s}^{2}/(4\pi)\) is the strong gauge coupling and similarly for \(\alpha_{Y}\) and \(\alpha_{2}\). We assume the axion/ALP mass is sub-GeV, heavier than the QCD contribution in the light of heavy QCD axion models [21; 22; 23; 24; 25; 26; 27; 28], which revives the low decay constant from the long-standing bounds [29; 30; 31; 32; 33; 34]. Furthermore, \(f_{a}\lesssim 10\) TeV is favored by the axion quality problem [23]. Since this scenario is potentially discovered in laboratories, experimental data has been reinterpreted, leading to additional constraints. promising probes based on the future experiments were also proposed [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43]. Hereafter, we collectively refer to the heavy QCD axion and ALPs as axions, where the mass and the couplings are independent parameters. In the following, we discuss the KOTO experimental setup and data-taking modes in Sec. II. In Sec. III, we describe the axion production and decay. The quantitative impact of this analysis is shown in Sec. IV, where we derive the bounds from current data and estimate the projection for future data-taking. We conclude in Sec. V. ## II KOTO setup and data-taking In this work, we exploit several past and future setups of the KOTO experiment while accounting for the available information on the experimental conditions. These setups fall into two independent categories: one regarding the experiment layout and one regarding the data-taking mode. We consider two experimental layouts as follows: * Step-1: the present 2022 layout, which was also used in the 2015 data-taking [44]. * Step-2: the proposed setup for the future as described in Ref. [45]. A schematic view of the setups is found in Fig. 1. For both steps we consider two data-taking modes: * Kaon mode: the standard mode with a \(K_{L}\) beam. * Beam-dump mode: a special run in a beam-dump mode, which includes a shield that blocks the beam (beam plug) and different selection cuts. ### The experimental setups In both setups, the experiment uses a primary \(30\,\mathrm{GeV}\) proton beam from the J-PARC Main Ring. The proton beam impinges on a golden target T1 and generates a secondary hadronic beam which, besides other particles, consists of \(K_{L}\). In the present setup, Step-1 [44], the experiment axis is under \(16^{\circ}\) angle with respect to the primary proton beam, and its front-end is located \(21\,\mathrm{m}\) from the T1 target with a set of collimators, sweeping magnets in between for forming the neutral \(K_{L}\) beam, and veto detectors for upstream background suppression. The CsI calorimeter (ECAL), located \(27\,\mathrm{m}\) from the T1 target, has a \(2\,\mathrm{m}\) diameter with \(15\,\mathrm{cm}\times 15\,\mathrm{cm}\) central hole for the beam. The decay volume is \(2.9\,\mathrm{m}\) long and precedes the ECAL. 
The proposed KOTO Step-2 setup [45] with a higher intensity beam assumes a \(5^{\circ}\) angle between the detector and the primary beam. We assume the beginning of the decay volume to be \(45.75\,\mathrm{m}\) away from the target. The calorimeter size is increased to a \(3\,\mathrm{m}\) diameter with a \(20\,\mathrm{cm}\times 20\,\mathrm{cm}\) central hole for the beam and it is located \(64\,\mathrm{m}\) far from the target. In a special beam-dump mode during the operation with the Step-1 setup [46], a beam plug was placed to close the \(K_{L}\) beamline, however, the sweeping magnet was not functional. The dataset of this run is smaller than in the Kaon mode. We do not know if the backgrounds stated in [46] could be further reduced at the analysis level, but without a functioning sweeping magnet, a \(0\)-background setting with the acquired data seems unlikely to us. With a functional sweeping system or a dedicated run with optimized magnet sweeping, one may be optimistic that a small background can be achieved. The above-described experimental layouts and modes are implemented in the Alphist framework [47] together with specific selection conditions, such that simplified simulations of axions production and decays can be performed for the interpretation of KOTO sensitivity for axion detection. The updated code and datasets are publicly available at [https://github.com/jjerhot/ALPINIST](https://github.com/jjerhot/ALPINIST). Details of the datasets and selection conditions are given in the following Sec. II.2. ### KOTO data-taking modes and their interpretation While operation in the beam-dump mode can potentially allow a direct search for particles beyond the SM in a background-clean environment, the majority of the data is collected in the Kaon mode, with the main aim to measure the extremely rare \(K_{L}\to\pi^{0}\nu\bar{\nu}\) decay. Figure 1: A schematic illustration of the KOTO layout for Step-1 (Step-2). See text for details. Kaon mode In the Kaon mode, searching for particles of different origin than in the \(K_{L}\) decay may prove to be challenging due to the backgrounds originating from both the beam and from upstream. Nevertheless, since the \(K_{L}\to\pi^{0}\nu\bar{\nu}\) with a consequent \(\pi^{0}\to\gamma\gamma\) decay has the same signature as the axion \(a\to\gamma\gamma\) decay, this provides an opportunity to re-interpret the (non-)observation of the \(\pi^{0}\nu\bar{\nu}\) signal to constrain also the axion parameter space, see _e.g._[9]. Unlike SM particles, axions can propagate through the beamline elements and upstream detectors and they can decay off-axis from the neutral \(K_{L}\) beam. Obviously, the different kinematics of these two options would render a dedicated analysis for the case of the axion possibly more sensitive than what we state below. For the re-interpretation of the \(K_{L}\to\pi^{0}\nu\bar{\nu}\) search as an axion search, we re-analyze the \(a\to\gamma\gamma\) decays simulated using the toy Monte Carlo implemented in the Alpinist framework, assuming the \(K_{L}\to\pi^{0}\nu\bar{\nu}\) selection conditions for Step-1 analysis of 2015 data [44], for the future Step-1 run [48] and for Step-2 [45]. These are summarized in Tab. 1. In particular, we are implementing cuts on the following kinematic variables. The photon cluster coordinates in the plane perpendicular to the \(K_{L}\) beam axis, \(x_{\gamma_{1,2}}\) and \(y_{\gamma_{1,2}}\), and \(r\) which is the separation distance between the photon clusters. 
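For orientation, the layout numbers of Sec. II.1 can be collected into a small configuration structure of the kind a simplified simulation might consume. This is only an illustrative summary of the quantities quoted above (lengths in metres), not the configuration format actually used by the Alpinist framework.

```python
# Geometry summary of the two KOTO layouts as quoted in the text
# (lengths in metres, angles in degrees).  Illustrative data structure only.

KOTO_LAYOUTS = {
    "Step-1": {
        "beam_energy_GeV": 30.0,
        "angle_to_beam_deg": 16.0,
        "beamline_front_end_m": 21.0,   # distance of the beamline front-end from T1
        "ecal_distance_m": 27.0,        # CsI calorimeter distance from T1
        "ecal_diameter_m": 2.0,
        "ecal_hole_m": (0.15, 0.15),
        "decay_volume_length_m": 2.9,   # precedes the ECAL
    },
    "Step-2": {
        "beam_energy_GeV": 30.0,
        "angle_to_beam_deg": 5.0,
        "decay_volume_start_m": 45.75,  # assumed start of the decay volume
        "ecal_distance_m": 64.0,
        "ecal_diameter_m": 3.0,
        "ecal_hole_m": (0.20, 0.20),
    },
}
```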
\(R_{\rm COE}\) is the center of energy deposited distance from the beam, based on the photon position at the calorimeter (\(x_{\gamma_{1,2}}\), \(y_{\gamma_{1,2}}\)) and the final photon energies, \(E_{\gamma_{1,2}}\). The photon separation angle projection on the calorimeter plane and the angle between the beam axis and the \(\pi\nu\nu\)-hypothesis-reconstructed photon momenta are denoted as \(\theta_{\gamma,\rm calo}\) and \(\theta_{\gamma,\rm beam}\), respectively. The \(z_{\rm vtx}\) position is calculated assuming that we are reconstructing an on-axis \(\pi^{0}\to\gamma\gamma\) decay, where \(z_{\rm vtx}=0\) corresponds \(z=21\,\rm m\) from the T1 target for Step-1 and \(44\,\rm m\) for Step-2. Finally, we quote \(\mathcal{A}_{\rm add}\), the selection efficiency of additional shape-related cuts (cluster shape, pulse shape and shower depth). Since we do not have more detailed information about \(\mathcal{A}_{\rm add}\), we assume a uniform distribution over the whole signal region, using the numbers quoted in [44; 45]. As we can see from Tab. 1, the main difference between the current Step-1 data and the future planned run is the collected statistics in terms of the number of protons on target \(N_{\rm PoT}\) and the selection efficiency of the shape-related cut algorithms which has improved considerably while keeping a good rejection power for hadronic backgrounds. In the search for \(a\to\gamma\gamma\) signal events, we are not limited to the \(K_{L}\to\pi^{0}\nu\bar{\nu}\) signal region, since the \(a\to\gamma\gamma\) events have different kinematics, for details see Sec. III. Therefore, we need to estimate the expected \(N_{\rm exp}\) and observed \(N_{\rm obs}\) number of SM events in the whole \(z_{\rm vtx}\)-\(p_{T,\pi^{0}}\) plane for the various KOTO datasets. For Step-1 2015 dataset, we make estimations in all regions in the \(z_{\rm vtx}\)-\(p_{T,\pi^{0}}\) plane, using the \(N_{\rm exp}\) and \(N_{\rm obs}\) shown in Fig. 3 of [44]. In addition, we estimate the sensitivity of the region of \(p_{T,\pi^{0}}>0.5\,\rm GeV\) that is out of the range of the referential figure. Since the pion transverse momentum is expected to be smaller than \(0.5\,\rm GeV\) for most of the known physics processes [49], we assume that there is no SM background in this region, i.e. \(N_{\rm exp}=0\). In order to project the KOTO sensitivity for the future Step-1 dataset, we use the \(N_{\rm exp}\) backgrounds in the various \(z_{\rm vtx}\)-\(p_{T,\pi^{0}}\) regions which were presented in [4] for the \(N_{\rm PoT}=3.05\times 10^{19}\) statistics and rescale these numbers to the expected statistics \(N_{\rm PoT}=14\times 10^{19}\)[48]. If we exclude the \(\pi\nu\bar{\nu}\) and the surrounding region (\(z_{\rm vtx,\pi\nu\bar{\nu}}<5.1\,\rm m\) and \(p_{T,\pi^{0}}<0.26\,\rm GeV\)), where the number of background events is large compared to the expected number of \(a\to\gamma\gamma\) events,1 we get \(N_{\rm exp}=3.35\) events with \(N_{\rm PoT}=14\times 10^{19}\). Footnote 1: The SM \(K_{L}\to\pi^{0}\nu\bar{\nu}\) is considered to be a background for the search for \(a\to\gamma\gamma\) decay. For KOTO Step-2, we use Ref. [49] to estimate the background rate in the \(1.75\,\,\,{\rm m}<z_{\rm vtx}<15\,\rm m\) and \(p_{T,\pi^{0}}>0.4\,\rm GeV\) regions. We find \(N_{\rm exp}\approx 1.38\) events with \(N_{\rm PoT}=6\times 10^{20}\) statistics (assuming again that there is no background in the \(p_{T,\pi^{0}}>0.5\,\rm GeV\) region for \(z_{\rm vtx}<15\,\rm m\)). 
#### iii.1.2 Beam-dump mode In the beam-dump mode, so far KOTO collected data corresponding to \(N_{\rm PoT}=2.2\times 10^{17}\)[46], which is about two orders of magnitude less than in the Kaon mode. For our projection of the beam-dump mode, we assume that ten times more data will be collected in this mode, i.e. \(N_{\rm PoT}=2.2\times 10^{18}\), while keeping the background under control, i.e. we consider a background-free search. We \begin{table} \begin{tabular}{c c c} \hline \hline & Step-1 & Step-2 \\ \hline \hline \(\sqrt{x_{\gamma_{1,2}}^{2}}+y_{\gamma_{1,2}}^{2}\) & \(<0.85\,\rm m\) & \(<1.35\,\rm m\) \\ \(\min(|x_{\gamma_{1,2}}|,|y_{\gamma_{1,2}}|)\) & \(>0.15\,\rm m\) & \(>0.175\,\rm m\) \\ \(R_{\rm COE}\) & \(>0.2\,\rm m\) & \(-\) \\ \(r\) & \(>0.3\,\rm m\) & \(>0.3\,\rm m\) \\ \(\theta_{\gamma,\rm calo}\) & \(<150^{\circ}\) & \(<150^{\circ}\) \\ \(E_{\gamma_{1}}+E_{\gamma_{2}}\) & \(>0.65\,\rm GeV\) & \(>0.5\,\rm GeV\) \\ \(E_{\gamma_{1,2}}\) & \(\in[0.1,2.0]\,\rm GeV\) & \(>0.1\,\rm GeV\) \\ \(E_{\gamma_{1}}/E_{\gamma_{2}}\) & \(>0.2\) & \(-\) \\ \(z_{\rm vtx}\) & \(\in[2.9,6.0]\,\rm m\) & \(\in[1.75,15]\,\rm m\) \\ \(\theta_{\gamma,\rm beam}\) & \(>2.5^{\circ}\,\rm GeV\) & \(-\) \\ \(\mathcal{A}_{\rm add}\) & \(0.52\,(0.9)\) & \(0.73\) \\ \hline \(N_{\rm PoT}\times 10^{19}\) & \(2.2\,(14)\) & \(60\) \\ \hline \hline \end{tabular} \end{table} Table 1: The selection conditions for \(K_{L}\to\pi^{0}\nu\bar{\nu}\) for Step-1 [44] and Step-2 [45]. The \(N_{\rm PoT}\) and shape-related cut efficiency (in parenthesis) are for the future Step-1 run [48]. explore this case for both KOTO Step-1 and KOTO Step-2 layouts assuming \(\mathcal{A}_{\rm add}=1\). The selection conditions are simply both photons being in the calorimeter acceptance with cluster distance \(>0.3\,\mathrm{m}\) (as used in the \(\pi^{0}\nu\bar{\nu}\) analysis). For KOTO Step-1 dump-mode we require at least \(50\,\mathrm{MeV}\) deposited on the calorimeter per photon and for KOTO Step-2 at least \(100\,\mathrm{MeV}\) per photon and at least \(500\,\mathrm{MeV}\) in total deposited on the calorimeter. ## III Axion production and detection The conventional axion production at the \(K_{L}\) experiments is from \(K_{L}\to\pi^{0}a\) decay, which is CP violating and thereby suppressed. Here, the relevant production to probe the long-lived axion occurs at the fixed target T1 where \(K_{L}\) is produced, i.e. in the proton-gold collisions. We consider two production mechanisms: Primakoff production and axion-meson mixing, both are implemented in the Alpinist framework, which uses Pythia 8[50] to generate meson distributions. In the following, we show the axion production yields for these mechanisms for the \(30\,\mathrm{GeV}\) proton beam. The validation of obtained yields with the detector under the \(16^{\circ}\) angle is given in App. A. As shown in Ref. [8], the yield and the momentum spectrum is well described by Pythia 8 also for angles smaller than \(16^{\circ}\) and higher beam energy. ### Axion production in the target The gluon coupling \(c_{GG}\) induces the axion mixing to the neutral mesons of the same quantum numbers, \(P\in\left\{\pi^{0},\eta,\eta^{\prime}\right\}\). 
The axion yield from mixing production is then approximately given by \[N_{a}^{\rm mix}\approx N_{\pi^{0}}\cdot|\theta_{a\pi}|^{2}+N_{\eta}\cdot|\theta_{a\eta}|^{2}+N_{\eta^{\prime}}\cdot|\theta_{a\eta^{\prime}}|^{2} \tag{2}\] where \(N_{\pi^{0}}\) is the production yield of neutral pion, \(\theta_{a\pi}\) is the pion-axion mixing angle, and similar notations are applied to \(\eta\) and \(\eta^{\prime}\). To leading order in \(f_{\pi}/f_{a}\), the mixing angles \(\theta_{aP}\) are \[\theta_{aP}\approx\frac{f_{\pi}}{f_{a}}\frac{K_{aP}m_{a}^{2}+m_{aP}^{2}}{m_{a}^{2}-m_{P}^{2}}, \tag{3}\] where \(f_{\pi}\approx 93\,\mathrm{MeV}\) and we use the same notation for the kinetic- and mass-mixing (\(K_{aP}\) and \(m_{aP}\)) as in [47; 38]. The \(\eta\)-\(\eta^{\prime}\) mixing angle used is \(\sin\theta_{\eta\eta^{\prime}}=-1/3\). The kinematics for the axion mixing production is treated according to App. D of [47]. The Primakoff process is governed by the axion-photon interaction \[c_{\gamma\gamma}\frac{\alpha_{\rm EM}a}{8\pi f_{a}}F_{\mu\nu}\tilde{F}^{\mu\nu}, \tag{4}\] where \(\alpha_{\rm EM}\) is the fine-structure constant and \(c_{\gamma\gamma}\) is the effective axion-photon coupling. The axion-photon coupling is generated by \(c_{BB}\) and \(c_{WW}\) couplings and for \(m_{a}\lesssim m_{\rho}\) also by the low-energy contribution of \(c_{GG}\)[51; 52; 37]: \[c_{\gamma\gamma}= c_{BB}+c_{WW}-c_{GG}\left(1.92+2\sum_{P}\frac{f_{a}}{f_{P}}\theta_{aP}\right), \tag{5}\] where \(f_{\eta^{\prime}}\approx 73\,\mathrm{MeV}\). There are two sources of photons in the target that can produce axions in the interaction with the target nuclei via the Primakoff process: off-shell photons from the proton of the primary beam [7] and photons from decays of secondary neutral pseudoscalars produced in the target. The latter has been shown to be dominating [8]. The axion yield from these processes is given by their sum and is denoted as \(N_{a}^{\rm Prim}\). Figure 2: Expected axion production yield \(N_{a}\) at the target as a function of axion energy, \(E_{a}\), and its angle to the incident proton beam for two cases of axion masses and couplings. The respective angles at which the detector is placed in Step-1 and Step-2 are shown by solid lines with dashed lines indicating the part of the distribution of interest given the ECAL edges. Additional production could be from flavor-changing Kaon decays near the fixed target. With the gluon coupling \(c_{GG}\), the \(K^{\pm}(K_{S})\to\pi^{\pm}(\pi^{0})a\) is not suppressed by a loop or CP violation [6; 43; 53]. The \(c_{WW}\) coupling induces the same processes at one loop [54]. These production rates could be sizable because the total width of Kaons is small, i.e., the BR is enhanced. However, most Kaons are removed by the collimators or deflected by the magnetic field. Including these effects, we estimate that \(K^{+}\to\pi^{+}a\) is subdominant. Still, the production from \(K_{S}\) is potentially interesting due to the shorter lifetime, but it requires a simulation of \(K_{S}\) transport, which is beyond the scope of this paper. Therefore, we neglect axion production from \(K^{+}\) and \(K_{S}\) decays. To summarize, the total axion yield at the T1 target is approximated to be \[N_{a}\approx N_{a}^{\rm mix}+N_{a}^{\rm Prim}\,, \tag{6}\] where the exact relative contributions of the two mechanisms depend on the specific model. 
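To make Eqs. (2)-(5) concrete, the following is a minimal sketch of how the mixing angles and the effective photon coupling could be evaluated. The kinetic- and mass-mixing inputs \(K_{aP}\) and \(m_{aP}\) are model-dependent and are taken from Refs. [47; 38] in the actual analysis; here they are left as user-supplied placeholders, and the \(\eta\) decay constant below is an illustrative assumption rather than a value quoted in the paper.

```python
# Sketch of Eqs. (2)-(5): axion-meson mixing angles and the effective
# photon coupling.  K_aP and m_aP^2 must be supplied by the user (model
# inputs, cf. Refs. [47; 38]); masses and decay constants in GeV.

F_PI = 0.093                      # f_pi ~ 93 MeV
MESONS = {                        # P : (m_P, f_P); f_eta is a placeholder value
    "pi0":      (0.1350, 0.093),
    "eta":      (0.5479, 0.093),
    "etaprime": (0.9578, 0.073),  # f_eta' ~ 73 MeV as quoted in the text
}

def theta_aP(m_a, f_a, K_aP, m_aP_sq, m_P):
    """Eq. (3): leading-order axion-meson mixing angle."""
    return (F_PI / f_a) * (K_aP * m_a**2 + m_aP_sq) / (m_a**2 - m_P**2)

def c_gamma_gamma(f_a, c_GG, c_BB, c_WW, mixings):
    """Eq. (5); `mixings` maps meson name -> theta_aP already evaluated."""
    chiral_sum = sum((f_a / MESONS[P][1]) * theta for P, theta in mixings.items())
    return c_BB + c_WW - c_GG * (1.92 + 2.0 * chiral_sum)

def n_mix(meson_yields, mixings):
    """Eq. (2): mixing-production yield from the meson yields N_P."""
    return sum(meson_yields[P] * abs(mixings[P])**2 for P in mixings)
```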
We show the differential distributions with respect to the axion energy (\(E_{a}\)) and its production angle to the proton beam (\(\theta_{a}\), see Fig. 1) in Fig. 2 for two benchmarks with axion mass of \(40\,\text{MeV}\) or \(400\,\text{MeV}\) and the coupling being dominated by \(c_{GG}\). In this coupling benchmark for the \(m_{a}=400\,\text{MeV}\) case, the overall production yield from the mixing with \(\pi^{0}\) and \(\eta\) exceeds significantly the Primakoff production while for smaller axion masses, the Primakoff production becomes more relevant, similarly to what has been observed in [47] for experiments operating with higher beam energies. ### Axion detection mechanism Given the beam energy and the distance between the axion production location and the detector, the KOTO experiment can search for long-lived axions. Below, we see that for the relevant masses, the dominant decay channel is \(a\to\gamma\gamma\), which is given by \[\Gamma_{\gamma\gamma}=\frac{\alpha_{\rm EM}^{2}m_{a}^{3}}{256\pi^{3}}\frac{c_ {\gamma\gamma}^{2}}{f_{a}^{2}}\,, \tag{7}\] where the effective photon coupling \(c_{\gamma\gamma}\) is defined in Eq. (5). For an axion heavier than \(1\,\text{GeV}\) with non-zero \(c_{GG}\) coupling, the width of hadronic decay modes, such as \(a\to\pi\pi\eta\), estimated in [38], dominate the total width. However, the relevant final state for search at KOTO is di-photon, which typically becomes a sub-dominant mode for \(m_{a}\gtrsim 0.5\,\text{GeV}\) (with large model dependence), resulting in reduced sensitivity. Details on KOTO sensitivity for hadronic axion decays can be found in App. B. Long-lived axions produced at the fixed target can reach the distant decay volume, and a di-photon decay leaves a characteristic signal, see a schematic picture of the axion decay event in Fig. 3. When axions enter the decay volume, they are almost parallel to the \(K_{L}\) beam axis, because the distance to the ECAL is larger than the ECAL size, but away from the axis with the distance \(\rho_{a}\). The two photons from a \(a\to\gamma\gamma\) decay in the decay volume can then hit the ECAL, mimicking the signal of \(K_{L}\to\pi^{0}\nu\bar{\nu}\). If the standard reconstruction algorithm for \(K_{L}\to\pi^{0}\nu\bar{\nu}\) is applied, the reconstructed position (\(dz\)) will be different from the true distance between the axion decay point and the ECAL (\(dz_{a}\)), but the event is not discarded. In this sense, this signal is similar to the halo \(K_{L}\to\gamma\gamma\) background that the KOTO collaboration found in the earlier data, but the reconstructed distributions are typically different. Contrary to the Kaon mode, the detection mechanism in the beam-dump mode is relatively straightforward as it is a dedicated run to search for long-lived particles decaying to photons. ### Signal from axion decay in Kaon mode The long-lived axion \(a\to\gamma\gamma\) decay passes the event selections described in Sec. II.2.1 when \(\rho_{a}>0\) which introduces factitious transverse momentum of the di-photon system. Some selection criteria are universal, often limited by the experimental resolution, but the remaining cuts assume the topology of \(K_{L}\to\pi^{0}(\to\gamma\gamma)\nu\bar{\nu}\). The KOTO experiment could implement a dedicated analysis for the long-lived particles, but here we simply adopt the standard \(K_{L}\to\pi^{0}\nu\bar{\nu}\) analysis because the backgrounds are well-investigated. 
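As an aside on the magnitude of the lifetimes involved, a minimal sketch of Eq. (7) and of the resulting laboratory decay length is given below. The geometry numbers in the example call roughly follow the Step-1 description (decay volume ending at the ECAL, 27 m from T1), the parameter point is purely illustrative, and the assumption that the di-photon channel saturates the total width holds only for light axions, as discussed above.

```python
import math

ALPHA_EM = 1.0 / 137.036       # fine-structure constant
HBAR_C = 1.9733e-16            # hbar * c in GeV * m

def gamma_diphoton(m_a, f_a, c_gamgam):
    """Eq. (7): a -> gamma gamma partial width in GeV (m_a, f_a in GeV)."""
    return ALPHA_EM**2 * m_a**3 * c_gamgam**2 / (256.0 * math.pi**3 * f_a**2)

def decay_probability(m_a, f_a, c_gamgam, p_a, z_near, z_far):
    """Probability that an axion of momentum p_a decays between z_near and
    z_far (metres downstream of the target), assuming Gamma_tot ~ Gamma_gg."""
    ctau = HBAR_C / gamma_diphoton(m_a, f_a, c_gamgam)   # proper decay length [m]
    lam = ctau * p_a / m_a                               # boosted decay length, E ~ p
    return math.exp(-z_near / lam) - math.exp(-z_far / lam)

if __name__ == "__main__":
    # illustrative parameter point and a Step-1-like fiducial region
    print(decay_probability(m_a=0.1, f_a=1.0e3, c_gamgam=1.0,
                            p_a=2.0, z_near=24.1, z_far=27.0))
```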
Assuming \(K_{L}\to\pi^{0}\nu\bar{\nu}\) topology for the long-lived axion decays leads to several non-trivial characteristics of the axion signal event distribution \(N_{\rm sig}\) in the \(z_{\rm vtx}\)-\(p_{T,\pi^{0}}\) plane. Example distributions for parameters for which we expect interesting sensitivities of Step-2 are shown in Fig. 4. In the following, we give analytic understandings of the characteristics based on several simplifications. Figure 3: A schematic picture of the axion event in the Kaon mode. The disk represents the ECAL, and the axis is the \(K_{L}\) beamline. The distance from the ECAL to the upstream is \(dz\). A typical axion trajectory is almost parallel to the beam axis in the distance \(\rho_{a}\) (see Fig. 1). If the final state photons from the axion decay at \(dz_{a}\) leave the energy of \(E_{1,2}\) with a separation of \(D_{12}\) on the ECAL, the vertex position \(dz\) is reconstructed on the beam axis. We assume that the axion enters the decay volume in parallel to the beamline with distance \(\rho_{a}\). The axion invariant mass is given by the photon energies, \(E_{1,2}\), and the opening angle, \(\theta_{12}\), \[m_{a}^{2}=2E_{1}E_{2}(1-\cos\theta_{12})\simeq E_{1}E_{2}\theta_{12}^{2}\,. \tag{8}\] Using this, the separation of the two photons on the ECAL, \(D_{12}\), is approximately \[D_{12}\simeq dz_{a}\theta_{12}\simeq dz_{a}\frac{m_{a}}{\sqrt{E_{1}E_{2}}}\,. \tag{9}\] Then, the _reconstructed_ vertex position, assuming the event topology of \(K_{L}\rightarrow\pi^{0}\nu\bar{\nu}\), is given by \[dz\equiv D_{12}\frac{\sqrt{E_{1}E_{2}}}{m_{\pi^{0}}}\simeq dz_{a}\frac{m_{a}} {m_{\pi^{0}}}\,. \tag{10}\] The distance from the ECAL (\(dz\)) is translated to the standard coordinate system by \(z_{\rm vtx}=L-dz\) where \(L=\)6 (20) m in Step-1 (Step-2). Because only the axion decays in the decay volume are accepted, there exists a limitation of \(dz_{a}<dz_{a}^{\rm max}=2.9\,(18.25)\) m in Step-1 (Step-2). This limitation together with Eq. (10) gives us a condition on the maximum spread of the distribution over the reconstructed vertex position as \(z_{\rm vtx}>L-dz_{a}^{\rm max}(m_{a}/m_{\pi^{0}})\). The boundary \(z_{\rm vtx}^{\rm min}\) is shown as a blue vertical line of Fig. 4 left. Another feature can be seen as a correlation with both \(p_{T}^{\pi^{0}}\) and \(z_{\rm vtx}\). In the case of \(K_{L}\rightarrow\pi^{0}\nu\bar{\nu}\), the transverse kick is from \(K_{L}\) decay, and hence, \(p_{T}^{\pi^{0}}<m_{K_{L}}\). However, the transverse asymmetry of the ECAL hits is merely from the transverse position of the incident axion. Suppose the distance from the axion to the beamline is \(\rho_{a}\), the reconstructed \(p_{T}^{\pi^{0}}\) is roughly \[p_{T}^{\pi^{0}}\simeq\frac{E_{a}\rho_{a}}{\sqrt{\rho_{a}^{2}+dz^{2}}}\simeq \frac{E_{a}}{\sqrt{1+(L-z_{\rm vtx})^{2}/\rho_{a}^{2}}}. \tag{11}\] where \(E_{a}=E_{1}+E_{2}\) is the axion energy. The correlation between \(p_{T}^{\pi^{0}}\) and \(z_{\rm vtx}\) is explained by the above formula. Note that \(p_{T}^{\pi^{0}}\) can easily exceed \(m_{K_{L}}\) because \(E_{a}\sim\mathcal{O}(1)\) GeV. The last feature that can be observed is the vanishing of the distribution at large \(p_{T}^{\pi^{0}}\) and \(z_{\rm vtx}\), which is a consequence of the two-photon separation cut, \(D_{12}\leq 0.3\) m. This can be understood by combining Eqs. (9), (10) and (11) with a simplification of \(E_{1,2}\approx E_{a}/2\) as \[p_{T}^{\pi^{0}}\lesssim\frac{2m_{\pi^{0}}}{0.3\;\mathrm{m}\sqrt{(L-z_{\rm vtx} )^{-2}+\rho_{a}^{-2}}}\,. 
\tag{12}\] Therefore, the distribution between the two dashed lines from Eq. (11) vanishes at large \(p_{T}^{\pi^{0}}\) and \(z_{\rm vtx}\) at an approximate bound corresponding to Eq. (12) with \(\rho_{a}=1.35\) m. Since \(p_{T}^{\pi^{0}}\) of the long-lived axion events can be significantly larger compared to the \(K_{L}\) events or background, we use high \(p_{T}^{\pi^{0}}\) region. The detail of the signal region and the expected background yield is discussed in Sec. II.2.1. Figure 4: Axion signal event distributions \(N_{\rm sig}\) as could be found in the \(\pi\nu\bar{\nu}\) analysis at KOTO Step-2 for specific axion models. For the analytic understanding of the distributions, we show the lines corresponding to the Eq. (11) with \(\rho_{a}=0.175,1.35\) m and a specific \(E_{a}\) as dashed magenta lines. ## IV Bounds and projections for axions We derive the bounds for different axion models using the current KOTO Step-1 results and project the sensitivities of the future runs. For this purpose, we simulate the signal, including all production processes, i.e. Primakoff and meson mixing mechanisms. The various experimental setups are implemented in the Alpinist framework where we have also generated the datasets2 with the number of events expected \(N_{\rm sig}\) for each setup and production mode. These tables can be further used for showing the \(N_{\rm sig}\) distributions for any setup of model-dependent parameters using the re-scaling module of the framework. In Fig. 5, we show the current bounds and projections for the expected sensitivity with future data corresponding to the following three benchmark models: Footnote 2: For increasing the precision of the estimated number of observed events, each mass and decay width bin is evaluated with 2 million axion decay events. 1. Heavy QCD axion, \(c_{GG}\neq 0\) with \(c_{BB}=c_{WW}=0\); 2. Hypercharge dominant, \(c_{BB}\neq 0\) with \(c_{WW}=c_{GG}=0\); 3. Codominance, \(c_{BB}=c_{WW}=c_{GG}\); and in Fig. 6, we show the current and projected results for several fixed masses for variable \(c_{GG}\) vs \(c_{BB}\) couplings, which is expected to be similar to the case of \(c_{GG}\) vs \(c_{WW}\), up to the FCNC production. We compare our KOTO bounds and projections to the existing bounds from different experiments. We consider electron beam-dumps E137 [35] and E141 [56], where we use the Alpinist framework for the interpretation of the data provided in [57], implementation that has been already done for [58], and we perform a dedicated analysis to derive the proton beam-dump bounds based on CHARM [33] and NuCal [36], as done in [47] (bounds in gray shade). The axion can be produced by flavor-changing meson decays, especially in the presence of \(c_{WW}\) and \(c_{GG}\) (the corresponding bounds are shown in the blue shade). We adopt the results of \(K^{+}\to\pi^{+}a\) from Ref. [43] and apply the bounds of E949 [59; 34] and NA62 [60; 3]. The scheme of the recast is found in [61]. For \(B\to Ka(\to\gamma\gamma)\), the BaBar bound can be used [62; 42]. The two-loop production calculation with \(c_{GG}\) is found in [41]. Figure 5: 90% CL exclusion bounds (filled contours) and projected limits (empty contours) for scenarios (i)-(iii) for all KOTO setups considered compared to the exclusions from beam-dump experiments, exclusion from \(B\to Ka\) and \(K\to\pi a\) decays and the bound from the supernova SN1987A (shown as a dashed line as it is affected by significant uncertainties, see _e.g._[55; 51]). 
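As a cross-check of the reconstruction relations of Sec. III.3, the following minimal sketch chains Eqs. (8)-(12) to map a true decay configuration onto the reconstructed \((z_{\rm vtx},p_{T}^{\pi^{0}})\) plane shown in Fig. 4. The example numbers in the call are purely illustrative and are not taken from the simulation.

```python
import math

M_PI0 = 0.1350  # GeV

def reconstruct(m_a, E1, E2, dz_a, rho_a, L):
    """Approximate relations (8)-(12): from the true decay point (distance dz_a
    upstream of the ECAL, transverse offset rho_a) and the photon energies,
    compute the quantities reconstructed under the on-axis pi0 -> gamma gamma
    hypothesis.  L is 6 m for Step-1 or 20 m for Step-2."""
    theta12 = m_a / math.sqrt(E1 * E2)                     # Eq. (8), small-angle limit
    D12 = dz_a * theta12                                   # Eq. (9): photon separation on the ECAL
    dz = D12 * math.sqrt(E1 * E2) / M_PI0                  # Eq. (10): reconstructed distance to the ECAL
    z_vtx = L - dz
    pT = (E1 + E2) * rho_a / math.sqrt(rho_a**2 + dz**2)   # Eq. (11)
    passes_separation = D12 > 0.3                          # minimum cluster separation r > 0.3 m (Table 1)
    return {"z_vtx": z_vtx, "pT_pi0": pT, "D12": D12, "passes_separation": passes_separation}

if __name__ == "__main__":
    print(reconstruct(m_a=0.4, E1=2.0, E2=2.0, dz_a=2.0, rho_a=1.0, L=20.0))
```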
The contribution from \(c_{WW}\) at one-loop is calculated in [54], but it is numerically subdominant for the benchmark (iii), so for simplicity, we approximate \(B\to Ka\) by the \(c_{GG}\) contribution. The total width of \(K^{+}\) would be modified significantly for low \(f_{a}/c_{GG}\), which leads to a relevant bound at \(m_{a}\sim m_{\pi^{0}}\). Requiring \(\text{BR}(K^{+}\to\pi^{+}a)<3\times 10^{-3}\) based on Sec. 2.2.2 of [6] results in the bound \(f_{a}/c_{GG}\lesssim 5\) GeV. The \(c_{BB}\)-only scenario is not significantly constrained by the meson decays since the production originates from electroweak two-loop diagrams. Therefore, in Fig. 6 the meson decay bounds are omitted because the corresponding bounds in the limit of the benchmark (ii) are unknown. Finally, the shown SN1987A bounds are those derived in [51] but are plotted with a dashed line as their robustness is under debate [55]. Figure 6: 90% CL exclusion bounds and projected limits for fixed axion mass and variable \(c_{BB}\) and \(c_{GG}\) couplings. We find that at Step-1, KOTO cannot probe new regions in the parameter space and it is sensitive only in regions already covered by other proton beam-dump experiments for all coupling scenarios. The _non_-observation of additional signal on top of the expected backgrounds in the past KOTO \(K_{L}\to\pi^{0}\nu\bar{\nu}\) analyses only confirms the results of these past experiments. While KOTO Step-2 also cannot compete with the past electron beam-dump experiments E137 and E141 for scenarios dominated by the photon coupling, it can probe new regions of parameter space for larger masses (\(m_{a}\gtrsim m_{\pi^{0}}\)) for scenarios with gluonic coupling, thanks to enhanced axion production through mixing with other neutral pseudoscalars. ## V Conclusions In this paper, we show that the KOTO experiment, beyond its conventional purposes, can perform long-lived particle searches in its two different data-taking modes, the Kaon and the beam-dump. In both modes, NP particles are produced at the proton target interaction point and can decay in the whole decay volume of the detector (see Figs. 1 and 3). We show that the future KOTO runs will explore uncharted parameter space of sub-GeV axions which may address the Strong CP problem. Firstly, we show that the Kaon mode, where the majority of the KOTO data is taken, is sensitive not only to \(K_{L}\to\pi^{0}\nu\bar{\nu}\) and \(K_{L}\to\pi^{0}a\) but also to axions originating from the interaction in the proton target which mimic the rare Kaon decay signal. The main difference between the two signals is that the axion events extend the \(p_{T}^{\pi^{0}}\) distribution beyond \(m_{K_{L}}\), as shown in Fig. 4. This region is unphysical for di-photon events from Kaon decays; thus, we assume no SM background there. Even though our derived constraints based on the KOTO 2015 dataset are currently not competitive, they reaffirm the constraints obtained with experiments of very different topology and proton impact energy. We have also shown in Fig. 5 that KOTO in Step-2 can indeed explore new parameter space for axions with \(c_{GG}\) coupling and \(m_{a}\gtrsim 100\,\mathrm{MeV}\), without changes of the main analysis steps. Secondly, we have evaluated projections of KOTO running in the beam-dump mode as recently presented by the collaboration [46]. 
Here we have shown that KOTO, due to its low proton beam energy and large angle between the beam and the detector can especially well explore parameter space at very low couplings, complementary to such searches at higher energies, _e.g._ NA62 [7], FASER [63] or DarkQuest [64]. Although the analysis in beam-dump mode is suitable for searches for long-lived particles, the sensitivity to the axions is weaker than in the Kaon mode because the expected statistics is significantly lower considering the KOTO physics goals. In this study, we have focused on the axions to demonstrate the proof of concept, whereas similar analyses could be performed to update the bounds of other long-lived particles from past proton beam experiments. Furthermore, a dedicated analysis for long-lived particles rather than reinterpretation of the \(K_{L}\to\pi^{0}\nu\bar{\nu}\) analysis could further improve the sensitivity although it requires additional background studies. This work also updates the Alphinst framework [65] to include the KOTO geometry and the kinematics of the various processes. Thereby, other scenarios, including axions with different parameter combinations, can be easily studied. ###### Acknowledgements. BD acknowledges funding through the European Research Council under grant ERC-2018-StG-802836 (AxScale project) and the Lise Meitner program of the Max Planck society. JJ acknowledges funding by the F.R.S.-FNRS, Belgium, through grant FRIA/FC-36305. YS is supported by grants from NSF-BSF (No. 2021800), ISF (No. 482/20), BSF (No. 2020300) and the Azrielii foundation. KT acknowledges funding through the US Department of Energy grant DE-SC0010102 and Japan Society for the Promotion of Science KAKENHI No. 21H01086. ## Appendix A Validation of the Simulation For the estimation of the axion flux from both the Primakoff and the mixing production, we first need to estimate the flux of the \(\pi^{0}\), \(\eta\) and \(\eta^{\prime}\) mesons. For this purpose, we use Pythia 8[50] for the generation of the \(pp\) interactions. Since we cannot directly validate the meson flux simulation, we use the measured \(K_{L}\) flux at KOTO [66] to normalize the meson yields. The measured number of \(K_{L}\) collimated into \(8\times 8\,\mathrm{cm}^{2}\)[44] at the end of a \(20\,\mathrm{m}\) long beamline is \((4.2\pm 0.02_{\mathrm{stat}}\pm 0.06_{\mathrm{sys}})\times 10^{7}\)\(K_{L}\) per \(2\times 10^{14}\) protons on target, which corresponds to measured number of \(K_{L}\) per proton on target \(N_{K_{L}}^{\mathrm{KOTO}}\sim 2.1\times 10^{-7}\). Therefore, we normalize the number of mesons simulated by Pythia 8 to the measured number as: \[N_{P}=\frac{N_{P}^{\mathrm{sim}}}{N_{K_{L}}^{\mathrm{sim}}}\times N_{K_{L}}^{ \mathrm{KOTO}}, \tag{1}\] where \(N_{P}^{\mathrm{sim}}\) is the number of (\(\pi^{0}\), \(\eta\), \(\eta^{\prime}\)) mesons produced per \(pp\) interaction in the simulation. For \(10^{8}\) simulated \(pp\) interactions, after accounting for \(K_{L}\) decays assuming the peak \(K_{L}\) momentum \(1.4\,\mathrm{GeV}\)[44] and \(60\%\) loss of \(K_{L}\) due to absorption in the beamline material [66], we obtain the simulated number of \(K_{L}\) per \(pp\) interaction \(N_{K_{L}}^{\mathrm{sim}}\sim 2.0\times 10^{-7}\). Therefore \(N_{K_{L}}^{\mathrm{KOTO}}/N_{K_{L}}^{\mathrm{sim}}\sim 1.05\), showing a good agreement between the total number of \(K_{L}\) measured and simulated using Pythia 8. 
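A minimal arithmetic sketch of the normalisation defined above is given below; the meson-to-\(K_{L}\) multiplicity ratios used are those quoted just below in this appendix, and the remaining numbers are taken directly from the text.

```python
# Sketch of the meson-yield normalisation: simulated yields per pp interaction
# are rescaled by the measured K_L yield per proton on target.  All numbers
# are those quoted in the text (multiplicity ratios from the Pythia 8
# simulation, given just below).

N_KL_MEASURED = 2.1e-7      # measured K_L per proton on target at KOTO
N_KL_SIMULATED = 2.0e-7     # simulated K_L per pp interaction (Pythia 8)

RATIO_TO_KL_SIM = {"pi0": 21.0, "eta": 2.2, "etaprime": 0.17}

def normalised_yield(meson):
    """N_P = (N_P^sim / N_KL^sim) * N_KL^KOTO, per proton on target."""
    return RATIO_TO_KL_SIM[meson] * N_KL_MEASURED

if __name__ == "__main__":
    print("data/simulation K_L ratio:", round(N_KL_MEASURED / N_KL_SIMULATED, 2))  # ~1.05
    for meson in RATIO_TO_KL_SIM:
        print(meson, f"{normalised_yield(meson):.2e}")
```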
For the multiplicities of (\(\pi^{0}\), \(\eta\), \(\eta^{\prime}\)) mesons, based on the simulation, we obtain: \[\frac{N_{\pi^{0}}^{\mathrm{sim}}}{N_{K_{L}}^{\mathrm{sim}}}\sim 21\,,\quad \frac{N_{\eta}^{\mathrm{sim}}}{N_{K_{L}}^{\mathrm{sim}}}\sim 2.2\,,\quad\frac{N_{ \eta^{\prime}}^{\mathrm{sim}}}{N_{K_{L}}^{\mathrm{sim}}}\sim 0.17\,. \tag{2}\] Furthermore, since axions interact rarely with ordinary matter, the surface of the axion flux potentially entering the detector \(S_{\mathrm{axion}}^{\mathrm{sim}}\) is much larger, occupying the whole decay volume plane. When compared to the \(K_{L}\) flux, which is collimated to the \(S_{K_{L}}^{\rm KOTO}=8\times 8\,\rm cm^{2}\) profile at the end of the beamline, the ratio of the two surfaces is about \(S_{\rm axion}^{\rm sim}/S_{K_{L}}^{\rm KOTO}\sim 490\). Finally, in order to validate the kinematic distributions, we compare the distributions of \(K_{L}\) obtained with Pythia 8 with the distributions measured by KOTO [66]. As shown in Fig. 7, a good agreement is observed between the shapes of the distribution of the \(K_{L}\) total momentum from the simulation used in this paper and the data measured by KOTO. Figure 7: Comparison of the \(K_{L}\) total momentum distribution from the simulation used in this work (Pythia 8) and the data derived by KOTO [66] (KOTO Data). The red curve presents a fit of the data done by KOTO (KOTO Fit). This validation for \(K_{L}\) gives some credibility also to the simulated (\(\pi^{0},\eta,\eta^{\prime}\)) distributions at \(30\,\rm GeV\) and therefore to the validity of the expected axion distributions which are used in this work. The obtained axion distributions are publicly available at [65]. ## Appendix B Sensitivity for hadronic axion decays KOTO could in principle be sensitive to decays \(a\to\pi^{0}\pi^{0}\eta\) or \(a\to 3\pi^{0}\) with a subsequent \(\pi^{0}(\eta)\to\gamma\gamma\) decay resulting in a 6-cluster event. While studying hadronic decays in the Kaon mode would require a dedicated analysis to address the various backgrounds, in the case of the beam-dump mode, it is stated in [46] that no 6-cluster events have been found in the collected sample, indicating that the \(K_{L}\) background is kept under control in this case. After running a simulation with Alpinist for \(a\to 3\pi^{0}\) and \(a\to\pi^{0}\pi^{0}\eta\) decays with simple selection criteria on minimal cluster energy and cluster separation, as for the \(a\to\gamma\gamma\) decay in the beam-dump mode, we did not find the KOTO sensitivity with hadronic decays to surpass the sensitivity obtained using \(a\to\gamma\gamma\), even for larger axion masses. As can be seen in Fig. 8 for KOTO Step-2, the number of observable signal events \(N_{\rm sig}\) is several orders of magnitude smaller for the combined search for \(a\to 3\pi^{0}\) and \(a\to\pi^{0}\pi^{0}\eta\) decays than for \(a\to\gamma\gamma\). Nevertheless, for convenience, we also provide the resulting datasets for these hadronic decays.
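A small follow-up calculation, combining the multiplicities of Eq. (2) with the measured \(K_{L}\) yield to obtain per-proton-on-target meson yields and restating the surface ratio quoted above; this is toy arithmetic based only on values given in the appendix, and the variable names are illustrative.

```python
# Per-proton-on-target meson yields implied by Eq. (2) and the measured K_L flux,
# plus the surface ratio quoted above; purely illustrative arithmetic.

N_KL_KOTO = 2.1e-7                                        # measured K_L per proton on target
multiplicity = {"pi0": 21.0, "eta": 2.2, "eta'": 0.17}    # N_P^sim / N_KL^sim from Eq. (2)

for meson, ratio in multiplicity.items():
    print(f"{meson:4s}: {ratio * N_KL_KOTO:.2e} per POT")

S_KL = 8.0 * 8.0                  # K_L beam profile at the end of the beamline, cm^2
S_axion = 490 * S_KL              # from S_axion^sim / S_KL^KOTO ~ 490
print(f"axion flux surface ~ {S_axion:.0f} cm^2")
```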
2308.14538
Production of the heavy-flavour decay lepton in high-energy nuclear collisions
This paper presents a theoretical study on the production of the heavy-flavour decay lepton (HFL) in high-energy nuclear collisions at the LHC. The pp-baseline is calculated by the FONLL program, which matches the next-to-leading order pQCD calculation with the next-to-leading-log large-$p_T$ resummation. The in-medium propagation of heavy quarks is driven by the modified Langevin equations, which consider both the elastic and inelastic partonic interactions. We propose a method to separate the respective influence of the five factors, such as pp-spectra, the cold nuclear matter (CNM) effects, in-medium energy loss (E-loss), fragmentation functions (FFs), and decay channels, which may contribute to the larger $R_{AA}$ of HFL $\leftarrow b$ compared to that of HFL $\leftarrow c$ in nucleus-nucleus collisions. Based on quantitative analysis, we demonstrate that different decay channels of charm- and bottom-hadrons play an important role at $p_T<$5 GeV, while the mass-dependent E-loss dominates the higher $p_T$ region. It is also found that the influences of the CNM effects and FFs are insignificant, while different initial pp-spectra of charm and bottom quarks have a considerable impact at $p_T>$ 3 GeV. Furthermore, we explore the path-length dependence of jet quenching by comparing the HFL $R_{AA}$ in two different collision systems. Our investigations show smaller HFL $R_{AA}$ in Pb+Pb than that in Xe+Xe within the same centrality bin, which is consistent with the ALICE data. The longer propagation time and more effective energy loss of heavy quarks in Pb+Pb collisions play critical roles in the stronger yield suppression of the HFL compared to that in Xe+Xe. In addition, we observe a scaling behaviour of the HFL $R_{AA}$ in Xe+Xe and Pb+Pb collisions.
Sa Wang, Yao Li, Shuwan Shen, Ben-Wei Zhang, Enke Wang
2023-08-28T12:49:24Z
http://arxiv.org/abs/2308.14538v1
# Production of the heavy-flavour decay lepton in high-energy nuclear collisions ###### Abstract This paper presents a theoretical study on the production of the heavy-flavour decay lepton (HFL) in high-energy nuclear collisions at the LHC. The pp-baseline is calculated by the FONLL program, which matches the next-to-leading order pQCD calculation with the next-to-leading-log large-\(p_{T}\) resummation. The in-medium propagation of heavy quarks is driven by the modified Langevin equations, which consider both the elastic and inelastic partonic interactions. We propose a method to separate the respective influence of the five factors, such as pp-spectra, the cold nuclear matter (CNM) effects, in-medium energy loss (E-loss), fragmentation functions (FFs), and decay channels, which may contribute to the larger \(R_{AA}\) of HFL \(\gets b\) compared to that of HFL \(\gets c\) in nucleus-nucleus collisions. Based on quantitative analysis, we demonstrate that different decay channels of charm- and bottom-hadrons play an important role at \(p_{T}<\)5 GeV, while the mass-dependent E-loss dominates the higher \(p_{T}\) region. It is also found that the influences of the CNM effects and FFs are insignificant, while different initial pp-spectra of charm and bottom quarks have a considerable impact at \(p_{T}>\) 3 GeV. Furthermore, we explore the path-length dependence of jet quenching by comparing the HFL \(R_{AA}\) in two different collision systems. Our investigations show smaller HFL \(R_{AA}\) in Pb+Pb than that in Xe+Xe within the same centrality bin, which is consistent with the ALICE data. The longer propagation time and more effective energy loss of heavy quarks in Pb+Pb collisions play critical roles in the stronger yield suppression of the HFL compared to that in Xe+Xe. In addition, we observe a scaling behaviour of the HFL \(R_{AA}\) in Xe+Xe and Pb+Pb collisions. pacs: 13.87.-a; 12.38.Mh; 25.75.-q ## I Introduction Over the past few decades, the main goal of the high-energy nuclear collision programs at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) is revealing the mystery of the de-confined nuclear matter, quark-gluon plasma (QGP), at extremely hot and dense condition [1; 2]. Investigations on the properties of the QGP are fundamental and essential to test the basic theory of Quantum Chromodynamics (QCD) and understand the phase transition of nuclear matter at high temperature and density [3; 4; 5; 6]. The strong interactions between the initial-produced high-\(p_{T}\) jet/parton in hard QCD scattering and the thermal particle in the QGP medium, known as the "jet quenching" phenomenon, provides a new arena to explore the properties of the QGP [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Heavy quarks (charm and bottom) have attracted a lot of attention in the community of heavy-ion collision physics owing to their unique advantages [21; 22; 23; 24; 25]. Firstly, their initial yields in hard QCD processes are perturbatively calculable results from their large mass (\(m_{Q}\gg\Lambda_{QCD}\)) [26], while their thermal production is negligible in the collision energy of current heavy-ion programs [27]. Secondly, heavy quarks contain their flavour identities as they interact with the thermal quasi-particle in the QGP. 
In the past decade, plentiful experimental measurements on heavy-flavour hadrons, such as the nuclear modification factor \(R_{AA}\)[28; 29; 30; 31; 32; 33; 34; 35], collective flow (direct flow \(v_{1}\)[36; 37], elliptical flow \(v_{2}\)[38; 39; 40; 41]) and baryon-to-meson ratio [42; 43], have been extensively made in the high-energy nuclear collisions both at the RHIC and LHC. These efforts greatly facilitate our understanding of the in-medium interaction mechanisms, dynamical correlations, and hadronization patterns of heavy quarks in the strongly-coupled QCD matter [44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56]. Among them, the mass effect of jet quenching is especially a fascinating topic [57; 58; 59; 60], which has been deeply explored by comparing the yield suppression of charm- and bottom-hadron in nucleus-nucleus collision relative to p+p in both experiment [61; 62; 63; 64; 65; 66; 67] and theory [68; 69; 70; 71; 72]. However, it is still challenging to reconstruct bottom-hadrons in the experiment due to their small cross section and many decay channels [73], which results in large uncertainties in the current measurements. The lepton from heavy-flavour hadron decays (denoted as HFL, including muon and electron), is viewed as a complementary tool to investigate the mass effect of jet quenching in the QGP. Recent measurements in Au+Au [74; 75] and Pb+Pb [76] collisions show that the yield suppression of the HFL from charm-hadron may be stronger than that from bottom-hadron. Nevertheless, the final-state yield of the HFL in A+A collisions is a complicated interplay of several factors, the initial \(p_{T}\) spectra, initial-state nuclear matter effect, in-medium energy loss of heavy quarks, fragmentation functions, and decay channels. Investigating how each factor plays a role in the obtained HFL \(R_{AA}\) in A+A collisions is crucial, which is essential to capture the mass effect of jet quenching from the current measurements. On the other hand, tremendous efforts have been made to address the system size dependence of jet quenching in different collision systems [77; 78; 79; 80; 81]. In this context, the yield suppression of the HFL in Xe+Xe and Pb+Pb collisions will also provide new opportunity to address the path-length dependence of partonic energy loss. In addition, it is of particular interest to test the "scaling behaviour" of the HFL \(R_{AA}\) in different collision systems, while that of heavy-flavour hadron has been systematically discussed in Ref. [82]. This work will investigate the production of the heavy-flavour decay lepton in high-energy nuclear collisions at the LHC. The pp-baseline is provided by the FONLL program, which matches the next-to-leading order pQCD calculation with the next-to-leading-log large-\(p_{T}\) resummation [83; 84]. The in-medium evolution of heavy quarks is driven by the Langevin transport approach [85; 86; 87; 88; 89; 90] which takes into account the collisional and radiative energy loss [91; 68; 92]. We will focus on the mass effect and system size dependence of jet quenching by estimating the yield suppression of the HFL in nucleus-nucleus collisions at the LHC. For the former target, we will present a strategy to separate the contributions of several factors that may lead to the different \(R_{AA}\) of HFL \(\gets c\) and HFL \(\gets b\), such as pp-spectra, cold nuclear matter (CNM) effect, in-medium energy loss (E-loss), fragmentation functions (FFs) and decay channels. 
For the latter target, we will estimate the averaged energy loss and propagation time of heavy quarks in the two different collision systems, Xe+Xe and Pb+Pb, which may help find the key factor that leads to the stronger suppression of the HFL in Pb+Pb compared to that in Xe+Xe. Besides, we will also discuss the scaling behaviour of the HFL \(R_{AA}\) in the two different collision systems of Xe+Xe and Pb+Pb. The remainder of this paper is organized as follows. In Sec. II, the theoretical frameworks used to study the HFL production in nucleus-nucleus collisions will be introduced. In Sec. III, we will present our main results and conclusions about the mass and path-length dependence of the HFL yield suppression. At last, we will summarize this work in Sec. IV. ## II Theoretical Framework The differential cross section (\(d\sigma/dp_{T}\)) of heavy quarks in p+p collisions is usually utilized as the baseline to study the yield suppression of the HFL in high-energy nuclear collisions. The higher order corrections and logarithmic-terms resummation play important roles in the heavy quark production in hadron collisions at the LHC [93; 94; 95]. In this work, the initial \(p_{T}\) spectra of charm and bottom quarks are provided by the FONLL scheme [83], which matches the Fixed Order (FO) next-to-leading order hard QCD processes with the Next-to-Leading-Log (NLL) large-\(p_{T}\) resummation [84]. The FONLL program has been successfully employed to describe the measurement of heavy flavour production at the LHC [96; 97]. As shown in Fig. 1, the differential cross section of the muon from heavy quarks (\(\mu\gets c,b\)) calculated by FONLL in p+p collisions at \(\sqrt{s}=5.02\) TeV is compared to the ALICE data [98]. It is found that the FONLL calculations agree well with the experimental measurements. The contributions from \(\mu\gets c\) and \(\mu\gets b\) are also estimated in Fig. 1. We observe that the muon production is dominated by \(\mu\gets c\) at \(p_{T}<\)4 GeV but by \(\mu\gets b\) at high \(p_{T}\). Figure 1: Differential cross sections of \(\mu\gets c\) (blue dash-dot line), \(\mu\gets b\) (green dash-dot line) and \(\mu\gets c,b\) (black solid line) versus transverse momentum \(p_{T}\) in p+p collisions at \(\sqrt{s}\)=5.02 TeV calculated by FONLL, compared with ALICE data (red points) [98]. In addition, it should be noted that we take into account the initial cold nuclear matter (CNM) effect in A+A collisions by using the EPPS16 parameterization [99] to modify the parton distribution function (PDF) of a free proton, and then obtain the \(p_{T}\) spectra of heavy quarks from FONLL as the input of the calculation of the HFL production in A+A collisions. In nucleus-nucleus collisions, the evolution of heavy quarks in the strongly-coupled nuclear matter can be described by the modified Langevin transport equations [85; 86; 87; 88; 89; 90; 91; 92], \[\Delta\vec{x}(t)=\frac{\vec{p}(t)}{E}\Delta t \tag{1}\] \[\Delta\vec{p}(t)=-\eta_{D}\vec{p}\Delta t+\vec{\xi}(t)\Delta t-\vec{p}_{\rm g}(t) \tag{2}\] These two equations correspond to the heavy quarks' position and momentum updates during an evolution time \(\Delta t\) in the QGP. The three terms on the right-hand side of Eq. (2) represent the drag term, the thermal stochastic term, and the momentum recoil term, respectively. Among them, the drag term denotes the energy dissipation as heavy quarks traverse the hot/dense nuclear matter, whose strength is controlled by the drag coefficient \(\eta_{D}\). 
The second stochastic term denotes the random kicks from quasi-particles in the QGP, and the random force \(\vec{\xi}(t)\) is modelled by a Gaussian distribution with mean value 0 and variance \(\kappa/\Delta t\), where \(\kappa\) is the diffusion coefficient in momentum space. These two terms describe the elastic interactions between the heavy quarks and thermal partons in the QGP. Note that \(\eta_{D}\) and \(\kappa\) can be correlated by the fluctuation-dissipation theorem \(\kappa=2\eta_{D}ET\)[100]. Since the medium-induced gluon radiation is significant to the energy loss of heavy quarks at high-\(p_{T}\) region (\(p_{T}^{Q}>5m_{Q}\)), a correction term \(-\vec{p}_{\rm g}\) is effectively considered to describe the momentum decrease caused by such radiation processes [68]. In our framework, the implementation of in-medium radiation processes of heavy quarks is based on the gluon spectrum calculated with the higher-twist approach [58; 101; 102; 103], \[\frac{dN_{g}}{dxdk_{\perp}^{2}dt}=\frac{2\alpha_{s}P(x)\hat{q}}{\pi k_{\perp} ^{4}}\sin^{2}(\frac{t-t_{i}}{2\tau_{f}})(\frac{k_{\perp}^{2}}{k_{\perp}^{2}+ x^{2}M^{2}})^{4}, \tag{3}\] where \(x\) and \(k_{\perp}\) are the radiated gluon's energy fraction and transverse momentum. \(P(x)\) is the quark splitting function [104], \(\tau_{f}=2Ex(1-x)/(k_{\perp}^{2}+x^{2}M^{2})\) the formation time of the daughter gluon. \(\hat{q}=q_{0}(T/T_{0})^{3}p_{\mu}u^{\mu}/E\) denotes the general jet transport parameter in the QGP [105], where \(T_{0}\) is the highest temperature in the most central A+A collisions, and \(u^{\mu}\) presents flow of the expanding QCD medium. It is reasonable to assume that the gluon radiation is a Poisson process [69], then the probability of radiation during a timestep \(\Delta t\) can be calculated as, \[P(n,t,\Delta t)=\frac{\lambda^{n}}{n!}e^{-\lambda} \tag{4}\] where n(=0, 1, 2...) denotes the number of radiation at the timestep \((t,t+\Delta t)\). \(\lambda\) is the average radiation number which can be determined by integrating the spectrum in Eq. 3. \[\lambda=\Delta t\int dxdk_{\perp}^{2}\frac{dN_{g}}{dxdk_{\perp}^{2}dt} \tag{5}\] We have two parameters \(\hat{q}\) and \(\kappa\) to be determined in the Langevin equations in Eq. 2. The former has been extracted by fitting the identified hadron production in A+A collisions [106], which gives the best value \(q_{0}=1.2\) GeV\({}^{2}\)/fm at the LHC. As the \(\hat{q}\) is fixed, we obtain \(\kappa\sim\pi T^{3}\) by a \(\chi^{2}\) fitting to the D meson \(R_{AA}\) data [30; 35], which is consistent with the results of Lattice QCD \(\kappa=(1.8\sim 3.4)T^{3}\)[107]. Note that some further and detailed studies have been made to explore the temperature and energy dependence of \(\hat{q}\)[108; 109; 110] and \(\kappa\)[111; 112; 113]. The results indicate the enhanced coupling strength of the hot and dense nuclear matter near the critical temperature, which plays a key role in resolving the \(R_{AA}\) and \(v_{2}\) puzzle [114; 115; 116]. In this study, the time-space evolution of the hot QCD medium is described by the CLVisc hydrodynamic model [117; 118; 119], which provides information on the temperature and velocity of the medium cells. 
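As an illustration of how Eqs. (1)-(5) fit together, the Python sketch below implements one Langevin update with the emission count drawn from the Poisson distribution of Eq. (4) and the mean of Eq. (5) obtained by numerical integration of Eq. (3). It is schematic: natural-unit conversions between GeV and fm are left implicit, the splitting function is written only up to colour factors, and the radiated-gluon kinematics are replaced by a crude placeholder rather than sampled from Eq. (3).

```python
import numpy as np
from scipy import integrate

rng = np.random.default_rng(1)

def gluon_spectrum(x, kT2, t, t_i, E, M, qhat, alpha_s=0.3):
    """Higher-twist spectrum of Eq. (3); splitting function written up to colour factors."""
    tau_f = 2.0 * E * x * (1.0 - x) / (kT2 + x**2 * M**2)     # formation time
    P_x = (1.0 + (1.0 - x)**2) / x                            # q -> qg splitting (schematic)
    return (2.0 * alpha_s * P_x * qhat / (np.pi * kT2**2)
            * np.sin((t - t_i) / (2.0 * tau_f))**2
            * (kT2 / (kT2 + x**2 * M**2))**4)

def mean_radiation_number(t, t_i, E, M, qhat, dt, x_min=0.05, kT2_min=1e-2):
    """Eq. (5): average number of gluons emitted during a time step dt."""
    val, _ = integrate.dblquad(
        lambda kT2, x: gluon_spectrum(x, kT2, t, t_i, E, M, qhat),
        x_min, 1.0 - x_min,                     # outer integral over x
        lambda x: kT2_min, lambda x: E**2)      # inner integral over kT^2
    return dt * val

def langevin_step(p, E, T, kappa, t, t_i, M, qhat, dt):
    """One update of Eqs. (1)-(2): drag, thermal kicks and radiative recoil."""
    eta_D = kappa / (2.0 * E * T)                   # fluctuation-dissipation relation
    xi = rng.normal(0.0, np.sqrt(kappa / dt), 3)    # Gaussian noise, variance kappa/dt
    dx = p / E * dt                                 # Eq. (1): position update
    dp = -eta_D * p * dt + xi * dt                  # Eq. (2): first two terms
    n_g = rng.poisson(mean_radiation_number(t, t_i, E, M, qhat, dt))   # Eq. (4)
    for _ in range(n_g):
        dp -= 0.1 * p   # placeholder recoil; a full model samples (x, kT) from Eq. (3)
    return dx, p + dp
```

A production implementation would of course sample \((x,k_{\perp})\) from Eq. (3) for each emission and enforce energy conservation, rather than using the placeholder recoil above.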
The initial entropy density distribution \(s(\tau_{0},x,y)\) is provided by the Trento program [120] based on the Glauber model [121], in which the nuclear densities of Pb and Xe obey the Woods-Saxon distribution [122; 123], \[\rho(r,\theta)=\frac{\rho_{0}}{1+\exp[\frac{r-R(\theta)}{a}]} \tag{6}\] where \(\rho_{0}\) is the nuclear saturation density and \(a\) the nuclear skin thickness. \(R(\theta)=R_{0}(1+\beta_{2}Y_{20}(\theta))\) is the nuclear radius, where \(R_{0}\sim A^{1/3}\) is the average radius, and the spherical harmonic function \(Y_{20}(\theta)\) describes the deformation of an axially symmetric nucleus. The parameter setups of Xe and Pb are listed in Tab. 1. Heavy quarks propagate and lose energy in the QGP according to Eq. (2) until the local medium reaches the critical temperature (\(T_{c}=160\) MeV). After the in-medium evolution, the hadronization of heavy quarks into heavy-flavour mesons (\(c\to D\) and \(b\to B\)) is implemented with the nonperturbative Peterson fragmentation functions [124], \[D(z)=\frac{N}{z\left(1-\frac{1}{z}-\frac{\epsilon_{Q}}{1-z}\right)^{2}} \tag{7}\] where \(z\) denotes the momentum fraction carried by the fragmented heavy-flavour meson from the parent heavy quark. Note that \(\epsilon_{c}=0.05\) for \(c\to D\) and \(\epsilon_{b}=0.005\) for \(b\to B\) are the default setups in PYTHIA8 [125], and \(N\) is the normalization factor. At last, the semileptonic decay of heavy-flavour hadrons into leptons (\(B,D\rightarrow\)HFL) is implemented by the ParticleDecays module of PYTHIA8. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Nucleus & a [fm] & \(R_{0}\) [fm] & \(\beta_{2}\) \\ \hline \({}^{129}\)Xe & 0.590 & 5.40 & 0.18 \\ \hline \({}^{208}\)Pb & 0.546 & 6.62 & 0 \\ \hline \end{tabular} \end{table} Table 1: Parameter setup of Woods-Saxon distributions of Xe and Pb in the Glauber calculations [122; 123]. ## III Results and Discussions ### Mass dependence of the HFL yield suppression in Pb+Pb collisions In this section, we present the calculations and discussions on the HFL production in high-energy nuclear collisions at the LHC to study the mass dependence of jet quenching. First, we compare the calculated \(R_{AA}\) of HFL \(\gets c\) and HFL \(\gets b\) in Pb+Pb collisions with the available experimental data. Furthermore, the final obtained HFL \(R_{AA}\) depends not only on the in-medium energy loss but also on other factors, such as the pp-spectra, the CNM effect, fragmentation functions, and decay channels, which may also contribute to the difference of \(R_{AA}\) between HFL \(\gets c\) and HFL \(\gets b\). To extract the net mass effect of E-loss reflected in the HFL \(R_{AA}\), we propose a test-particle strategy to estimate the sensitivity of the HFL \(R_{AA}\) to these five factors. In the upper panel of Fig. 2, we present the calculated HFL \(R_{AA}\) in central \(0-10\%\) Pb+Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV compared to the available experimental data [76; 73; 126], where the nuclear modification factor \(R_{AA}\) quantifies the yield suppression of the HFL in A+A relative to p+p, and is usually defined as follows, \[R_{AA}(p_{T})=\frac{1}{\langle N_{\rm coll}\rangle}\frac{dN^{\rm AA}/dp_{\rm T }}{dN^{\rm pp}/dp_{\rm T}} \tag{8}\] where \(\langle N_{\rm coll}\rangle\) represents the average number of binary nucleon-nucleon collisions per A+A event, and \(dN^{\rm AA}/dp_{\rm T}\) and \(dN^{\rm pp}/dp_{\rm T}\) are the \(p_{T}\) spectra of the HFL in A+A and p+p collisions, respectively. 
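For reference, a minimal sketch evaluating the two parameterizations just introduced, the deformed Woods-Saxon profile of Eq. (6) with the Tab. 1 parameters and the Peterson fragmentation function of Eq. (7). The saturation density \(\rho_{0}\) is set to a typical value because it is not quoted in the text, and the numerical normalization stands in for the factor \(N\).

```python
import numpy as np

def woods_saxon(r, theta, R0, a, beta2=0.0, rho0=0.17):
    """Deformed Woods-Saxon density of Eq. (6); rho0 (fm^-3) is a typical value."""
    Y20 = np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * np.cos(theta)**2 - 1.0)
    R = R0 * (1.0 + beta2 * Y20)
    return rho0 / (1.0 + np.exp((r - R) / a))

# Tab. 1 parameter sets
xe = dict(R0=5.40, a=0.590, beta2=0.18)   # 129Xe (deformed)
pb = dict(R0=6.62, a=0.546, beta2=0.0)    # 208Pb (spherical)
print(woods_saxon(6.0, 0.0, **xe), woods_saxon(6.0, 0.0, **pb))

def peterson_ff(z, eps):
    """Peterson fragmentation function of Eq. (7), up to the normalization N."""
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z))**2)

z = np.linspace(0.02, 0.98, 400)
D_c = peterson_ff(z, eps=0.05)          # c -> D
D_b = peterson_ff(z, eps=0.005)         # b -> B, markedly harder
D_c /= D_c.sum() * (z[1] - z[0])        # fix N numerically (simple Riemann sum)
D_b /= D_b.sum() * (z[1] - z[0])
```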
Firstly, we compare the theoretical results for the \(e\gets c,b\)\(R_{AA}\) (solid blue line) to the ALICE data [126] (blue square points), which show a good agreement. In addition, to address the mass dependence of the HFL yield suppression, we also estimate the \(R_{AA}\) of \(e\gets c\), \(e\gets b\) and \(\mu\gets b\), and compare them to the ATLAS [76] and ALICE data [73], respectively. For HFL \(\gets b\), our \(R_{AA}\) calculations generally agree with the experimental data. We observe no difference in the yield suppression between the electron and the muon, which is consistent with the ATLAS measurements. For \(e\gets c\), our theoretical results appear to underestimate the \(R_{AA}\) at \(p_{T}<\)10 GeV. We find that the \(R_{AA}\) of HFL \(\gets c,b\) is close to that of HFL \(\gets c\) at low \(p_{T}\) but to that of HFL \(\gets b\) at high \(p_{T}\). This is because the HFL production is dominated by HFL \(\gets c\) at low \(p_{T}\) and by HFL \(\gets b\) at high \(p_{T}\), as shown in Fig. 1. In the most central (\(0-10\%\)) collisions, the experimental measurements of \(R_{AA}\) do not show much difference between \(\mu\gets c\) and \(\mu\gets b\) within uncertainties, as our model predicted. However, as shown in the lower plot of Fig. 2, we also notice that in the \(10-20\%\) centrality bin, the ATLAS data exhibit a visible distinction between \(\mu\gets c\) and \(\mu\gets b\) at \(p_{T}<\)10 GeV, which is well captured by our calculations. Therefore, further experimental efforts should clarify the potential differences of the HFL \(R_{AA}\) from charm- and bottom-hadron decays at the LHC. These discussions may be essential to understand the mass-dependent energy loss. The "dead-cone" effect [127; 57] suppresses the probability of the medium-induced gluon radiation of a massive quark within a small cone (\(\theta<\frac{m}{E}\)). Since the "dead-cone" effect is closely related to the quark mass, a heavier quark is expected to lose less energy than a relatively lighter one in the QGP, which leads to the mass hierarchy of partonic energy loss \(\Delta E_{q}>\Delta E_{c}>\Delta E_{b}\). The higher \(R_{AA}\) of \(e\gets b\) relative to \(e\gets c\) measured by the ATLAS Collaboration [76], as shown in the lower panel of Fig. 2, is consistent with our expectation, which may hint at \(\Delta E_{c}>\Delta E_{b}\). Figure 2: The model calculations of the HFL \(R_{AA}\) as a function of \(p_{T}\) in \(0-10\%\) (upper panel) and \(10-20\%\) (lower panel) Pb+Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV compared to the ALICE [73; 126] and ATLAS [76] data. However, besides the different mass effect of in-medium E-loss between charm and bottom quarks, one should keep in mind that the finally obtained lepton \(R_{AA}\) of \(e\gets b\) and \(e\gets c\) also depend on other factors, e.g., the pp-spectra, the cold nuclear matter effect, fragmentation functions, and semileptonic decay channels. In other words, even without the mass dependence of the in-medium E-loss, the \(R_{AA}\) of \(e\gets b\) and \(e\gets c\) should not be the same. Therefore, it is crucial to understand how each of these factors influences the HFL \(R_{AA}\) in nucleus-nucleus collisions. In this work, we present a strategy to separate the respective contributions of these five factors (denoted as pp-spectra, CNM, E-loss, FFs, and Decay). To quantify their respective contributions, we will use the test particle (TP) method to estimate the influence of each factor by comparing it with the calculation of realistic charm quarks. 
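Before detailing the test-particle strategy, a quick numerical illustration of the dead-cone scale \(\theta\lesssim m/E\) mentioned above; the quark masses below are typical values assumed here for the estimate, not parameters quoted in the text.

```python
# Quick illustration of the dead-cone angle theta ~ m/E discussed above,
# assuming typical quark masses for the estimate.
m_c, m_b = 1.5, 4.75                 # GeV
for E in (10.0, 20.0, 50.0):         # quark energy in GeV
    print(f"E = {E:4.0f} GeV: theta_c ~ {m_c/E:.2f} rad, theta_b ~ {m_b/E:.2f} rad")
# The wider bottom-quark dead cone suppresses small-angle radiation more strongly,
# consistent with the hierarchy Delta E_q > Delta E_c > Delta E_b.
```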
Specifically, different from the \(R_{AA}\) calculations of HFL \(\gets c\), we implement the calculations of the test particles for five cases, in which the treatment of one of the five factors is replaced by that of bottom quarks and the rest four are kept the same as that of charm. The details of this strategy are introduced as follows. * **Case-1** : To estimate the influence of the **pp-spectra**, the initial transverse momentum of the test particles is sampled with the pp-spectra of bottom quarks calculated by FONLL. The rest of the treatments are kept the same as that of HFL \(\gets c\). * **Case-2** : Since the CNM effects in A+A collisions are different for the charm and bottom quarks, to estimate the difference caused by the initial **CNM** effect, the shadowing effects of bottom quarks are considered for the test particles. The rest of the treatments are kept the same as that of HFL \(\gets c\). * **Case-3** : To estimate the influence of the mass-dependent **E-loss** of heavy quarks, the test particles are assigned with the bottom quark mass provisionally when they propagate in the QGP medium. The rest of the treatments are kept the same as that of HFL \(\gets c\). * **Case-4** : Since the hadronization of bottom quarks is usually with harder **FFs** with respect to that of charm, to estimate the difference caused by the different FFs, the test particles will fragment into hadron by using parameter setup of the bottom quarks (\(\epsilon=0.005\)) in Peterson FFs. The rest of the treatments are kept the same as that of HFL \(\gets c\). * **Case-5** : The different **Decay** channels of \(D\rightarrow\)HFL and \(B\rightarrow\)HFL may also lead to the difference in the HFL \(R_{AA}\), the D mesons fragmented from the test particles will mimic as B meson (by changing their particle ID provisionally) to decay into the HFL within PYTHIA8. The rest of the treatments are kept the same as that of HFL \(\gets c\). By comparing the \(R_{AA}\) of the HFL decay from the test particles (HFL \(\leftarrow\) TP) for these five cases with that of HFL \(\gets c\), one can obtain the difference caused by the variation of each factor. In Fig. 3, we show the calculated muon \(R_{AA}\) decay from the test particles (TP) compared to the realistic case of charm quarks in central \(0-10\%\) Pb+Pb collisions at \(\sqrt{s_{NN}}\)=5.02 TeV. Note that the differences between the two \(R_{AA}\) curves of HFL \(\leftarrow\) TP and HFL \(\leftarrow\)\(c\) quantify the influence of the respective factor. The diagram **(a)** represents the influence from the choice of initial pp-spectra, in which we can observe visible larger \(R_{AA}\) of \(\mu\leftarrow\) TP at 3\(<p_{T}<\)10 GeV compared to that of \(\mu\gets c\). It indicates that the different initial spectra of bottom and charm quarks can result in larger \(R_{AA}\) of \(\mu\gets b\) relative to \(\mu\gets c\). The diagram **(b)** represents the difference caused by the CNM effect suffered in the initial production of charm and bottom quarks in nucleus-nucleus collisions. The muon \(R_{AA}\) has almost no difference, only slightly larger values of \(\mu\leftarrow\) TP compared to \(\mu\gets c\) at \(p_{T}<\)2 GeV. The diagram **(c)** shows the \(R_{AA}\) difference only caused by the mass-dependent energy loss of heavy quarks. It is found that the mass effect of in-medium energy loss leads to larger \(R_{AA}\) of Figure 3: The comparisons of the calculated \(R_{AA}\) between \(\mu\leftarrow\) TP and \(\mu\gets c\) for the five cases (a-e). 
The comparison of \(R_{AA}\) between \(\mu\gets b\) and \(\mu\gets c\) is also shown in the lower right panel (f). \(\mu\leftarrow\) TP compared to \(\mu\gets c\) at a vast \(p_{T}\) region of 2 to 30 GeV. The diagram **(d)** represents the \(R_{AA}\) difference caused by the FFs choices. Besides the moderately larger \(R_{AA}\) of \(\mu\leftarrow\) TP compared to \(\mu\gets c\) at \(p_{T}<\)5 GeV, we observe that using FFs of bottom quarks can lead to slightly smaller \(R_{AA}\) at higher \(p_{T}\). The diagram **(e)** shows the \(R_{AA}\) difference caused by different decay channels of \(\mu\gets c\) and \(\mu\gets b\). We observe that using the bottom-hadron's decay channel, the HFL from test particles has a considerably larger \(R_{AA}\) compared to that from charm quarks. It suggests that the different decay channels of the bottom- and charm-hadron may be critical to the larger \(R_{AA}\) of \(\mu\gets b\) compared to \(\mu\gets c\) at lower \(p_{T}\). The diagram **(f)** compares the \(R_{AA}\) of \(\mu\gets c\) with that of \(\mu\gets b\), corresponding to the total effect caused by the five factors' variations. As shown in Fig. 4, we plot the ratios of \(R_{AA}\) of \(\mu\gets\) TP to that of \(\mu\gets c\) for the five cases as a function of \(p_{T}\) in central \(0-10\%\) Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV. The ratio of \(R_{AA}^{b}/R_{AA}^{c}\) is also shown denoting the total effect from the variation of these five factors. As we can see, at \(p_{T}<\)4 GeV, the different decay channel is the dominant factor that leads to the larger \(R_{AA}\) of \(\mu\leftarrow\) TP compared to \(\mu\gets c\). We find that the mass effect of E-loss of heavy quarks dominates the higher \(p_{T}\) region. It is noted that the pp-spectra also has considerable contribution at \(p_{T}>\)5 GeV, which remains nonzero even at high \(p_{T}\). In addition, the FFs have considerable influence only at \(p_{T}<\) 5 GeV, while the influence of CNM effects is minimal at \(p_{T}>\) 2 GeV. Based on these investigations, we can now conclude that at lower \(p_{T}\) the decay channel of bottom quarks is the dominant factor that leads to larger \(R_{AA}\) of \(\mu\gets b\) compared to that of \(\mu\gets c\). At the same time, mass-dependent E-loss becomes the key factor at higher \(p_{T}\) (\(>\)5 GeV). ### Path-length dependence of the HFL yield suppression in nucleus-nucleus collisions Besides exploring the mass effect of in-medium energy loss, the HFL production in high-energy nuclear collisions can also help probe the path-length dependence of the jet quenching effect in different collision systems [77; 78; 79; 80; 81; 82]. The colliding nucleus with a larger radius (\(R_{0}\)) is expected to form a medium with a larger size, in which the average path length of the jet propagation should be longer. As shown in Fig. 5, we present the calculated \(R_{AA}\) of HFL \(\gets c,b\) in central \(0-10\%\) Xe+Xe collisions at \(\sqrt{s_{NN}}\) = 5.44 TeV and Pb+Pb at \(\sqrt{s_{NN}}\) = 5.02 TeV collisions compared to the ALICE data [126; 128]. Within the same centrality bin, our theoretical results show stronger yield suppression of HFL \(\gets c,b\) in Pb+Pb collisions relative to that in Xe+Xe, which are generally consistent with the trend observed in the current ALICE measurements at \(p_{T}<\) 8 GeV. Our calculations also predict that such distinction can be more significant at higher \(p_{T}\), which will be interestingly tested with more precise measurements at the LHC. 
Figure 4: Ratios of \(R_{AA}\) of \(\mu\leftarrow\) TP to \(\mu\gets c\) versus \(p_{T}\) in \(0-10\%\) Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV for the five cases, case-1 (circle line), case-2 (up-triangle line), case-3 (down-triangle line), case-4 (rhombus line), case-5 (left-triangle line). The ratio of \(R_{AA}\) of \(\mu\gets b\) to \(\mu\gets c\) is also shown (square line). Figure 5: The calculated \(R_{AA}\) of HFL versus \(p_{T}\) in central \(0-10\%\) Xe+Xe collisions at \(\sqrt{s_{NN}}\) = 5.44 TeV and Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV compared to the ALICE data [126; 128]. It is indisputable that the propagation of heavy quarks in the QGP depends on the colliding nucleus's size and is closely related to their energy loss in A+A collisions. As shown in Fig. 6, we estimate the mean propagation time \(\langle t_{p}\rangle\) of charm and bottom quarks versus their initial \(p_{T}\) in central \(0-10\%\) Xe+Xe collisions at \(\sqrt{s_{NN}}\)=5.44 TeV and Pb+Pb collisions at \(\sqrt{s_{NN}}\)=5.02 TeV. Note that \(t_{p}\) represents the propagation time of heavy quarks within the QGP phase (\(T>T_{c}\)) during the in-medium evolution in nucleus-nucleus collisions. It is found that, generally, heavy quarks with lower \(p_{T}\) experience a longer propagation time in the QGP due to their lower velocity. Similarly, for the same collision system, we also observe a larger \(\langle t_{p}\rangle\) of bottom compared to charm quarks due to the larger mass. From the comparison of the two collision systems, the \(\langle t_{p}\rangle\) in Pb+Pb collisions is larger than that in Xe+Xe collisions at the same centrality, consistent with our expectation that Pb+Pb collisions create a medium with a larger size compared to Xe+Xe. The remarkable thing is that the ratio of \(\langle t_{p}\rangle\) in Pb+Pb and Xe+Xe, about 1.2 as shown in the lower panel, is closely related to the radius ratio of the colliding nuclei (\(R_{0}^{Pb}/R_{0}^{Xe}\sim 1.225\)) but independent of the flavour and initial \(p_{T}\) of the heavy quarks. The longer \(\langle t_{p}\rangle\) of heavy quarks in Pb+Pb may play a key role in the stronger yield suppression of \({\rm HFL}\gets c,b\) compared to Xe+Xe. In Fig. 7, we estimate the mean energy loss of heavy quarks (charm and bottom) with initial energy \(E_{0}=25\) GeV as a function of the propagation time \(t_{p}\) in central \(0-10\%\) Xe+Xe collisions at \(\sqrt{s_{NN}}\)=5.44 TeV and Pb+Pb collisions at \(\sqrt{s_{NN}}\)=5.02 TeV. It is clear that \(\langle\Delta E\rangle\) increases with \(t_{p}\), and at any \(t_{p}\) charm quarks lose more energy than bottom quarks due to the "dead-cone" effect, both in Xe+Xe and in Pb+Pb collisions. At the same time, we observe that \(\langle\Delta E\rangle\) in Pb+Pb collisions is slightly larger than that in Xe+Xe, by about 10% at \(t_{p}<4\) fm, within the same centrality bin. This is because, within the same centrality bin, the QGP medium created in Pb+Pb collisions has a generally higher temperature than that in Xe+Xe. For instance, the temperature at the fireball's centre is about 500 MeV in Pb+Pb while 480 MeV in Xe+Xe with \(0-10\%\) centrality at \(\tau_{0}=0.6\) fm [118; 119]. Since the strength of partonic interactions is closely related to the medium temperature (\(\kappa\propto T^{3}\), \(\hat{q}\propto T^{3}\)) [100; 105], the higher temperature of the QGP medium formed in Pb+Pb leads to more effective energy loss of heavy quarks than that in Xe+Xe. 
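A back-of-the-envelope consistency check of the two system-size effects discussed above, using only numbers quoted in the text and in Tab. 1; this is an order-of-magnitude illustration, not the model calculation itself.

```python
# Back-of-the-envelope check of the two numbers quoted above, using only values
# given in the text; illustrative arithmetic, not the model calculation.

R0_Pb, R0_Xe = 6.62, 5.40            # Woods-Saxon radii from Tab. 1 [fm]
print(f"R0_Pb / R0_Xe   ~ {R0_Pb / R0_Xe:.3f}")     # ~1.23, cf. the <t_p> ratio ~1.2

T_Pb, T_Xe = 0.500, 0.480            # central temperatures at tau_0 = 0.6 fm [GeV]
print(f"(T_Pb / T_Xe)^3 ~ {(T_Pb / T_Xe)**3:.2f}")  # ~1.13, since kappa, qhat ~ T^3
# A ~13% larger coupling strength at early times is consistent in magnitude with
# the ~10% larger <Delta E> found in Pb+Pb at t_p < 4 fm.
```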
Figure 6: Mean propagation time \(\langle t_{p}\rangle\) of heavy quarks (charm and bottom) as a function of their initial \(p_{T}\) in central \(0-10\%\) Xe+Xe collisions at \(\sqrt{s_{NN}}\)=5.44 TeV and Pb+Pb collisions at \(\sqrt{s_{NN}}\)=5.02 TeV, as well as the ratios of PbPb/XeXe in the lower panel. Figure 7: Mean energy loss \(\langle\Delta E\rangle\) of heavy quarks versus propagation time \(t_{p}\) in central \(0-10\%\) Xe+Xe collisions at \(\sqrt{s_{NN}}\)=5.44 TeV and Pb+Pb collisions at \(\sqrt{s_{NN}}\)=5.02 TeV. From the above discussion, we conclude that the longer propagation time and more effective energy loss in Pb+Pb collisions play critical roles in the stronger yield suppression of the \({\rm HFL}\) compared to that in Xe+Xe. As shown in the upper panel of Fig. 8, we also calculate the integrated \(R_{AA}\) of \({\rm HFL}\gets c,b\) within \(3<p_{T}^{\mu}<8\) GeV as a function of centrality both in Xe+Xe and Pb+Pb collisions. As we can see, in the same centrality bin, the \(R_{AA}\) of \(\mu\gets c,b\) in Pb+Pb collisions is lower than that in Xe+Xe, as in the most central collisions. However, as shown in the lower panel of Fig. 8, it is interesting to note that if one scales the \(R_{AA}\) by the number of participants \(\langle\)N\({}_{\rm part}\rangle\) instead of centrality, the differences of the HFL \(R_{AA}\) in Xe+Xe and Pb+Pb almost disappear in the overlapping \(\langle\)N\({}_{\rm part}\rangle\) region. Our findings are consistent with the previous study in Ref. [82], in which the scaling behaviour of heavy-flavour hadrons across different collision systems has been thoroughly discussed. The scaling behaviour of the HFL \(R_{AA}\) indicates that the heavy quarks lose similar energy in the medium formed with the same \(\langle\)N\({}_{\rm part}\rangle\) even for different collision systems. ## IV Summary This paper presents a theoretical study of the lepton production from heavy flavour decay (\({\rm HFL}\gets c,b\)) in high-energy nuclear collisions at the LHC. The initial \(p_{T}\) spectra of heavy quarks are provided by the FONLL program, which matches the next-to-leading order hard QCD processes with the next-to-leading-log large-\(p_{T}\) resummation. The in-medium propagation of heavy quarks is driven by the modified Langevin equations, which consider both the elastic and inelastic energy loss. The 2+1D CLVisc hydrodynamic model describes the space-time evolution of the fireball. The hadronization of heavy quarks and the semileptonic decay of heavy-flavour hadrons are implemented with PYTHIA8. We present the calculated suppression factor \(R_{AA}\) of \({\rm HFL}\gets c,b\) in Pb+Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV compared to the ALICE and ATLAS data, which suggests a larger \(R_{AA}\) of \({\rm HFL}\gets b\) than that of \({\rm HFL}\gets c\). Furthermore, since the HFL \(R_{AA}\) results from a complicated interplay of several factors, such as the pp-spectra, CNM effects, in-medium E-loss, fragmentation functions, and decay channels, we propose a strategy to separate the individual influence of these factors and extract the net effect of the mass-dependent E-loss. Based on quantitative analysis, we demonstrate that the different decay channels of charm- and bottom-hadrons play an essential role at \(p_{T}<\)5 GeV, while the mass-dependent E-loss dominates the higher \(p_{T}\) region. 
It is also found that the influences of CNM effects and FFs are insignificant, while different initial pp-spectra of charm and bottom quarks have a considerable impact at high \(p_{T}\). In addition, we explore the path-length dependence of jet quenching by comparing the HFL \(R_{AA}\) in two different collision systems: Xe+Xe at \(\sqrt{s_{NN}}=5.44\) TeV and Pb+Pb at \(\sqrt{s_{NN}}=5.02\) TeV. We obtain a smaller HFL \(R_{AA}\) in Pb+Pb than that in Xe+Xe within the same centrality bin, which is consistent with the ALICE data. We find that both the longer propagation time and more effective energy loss of heavy quarks in Pb+Pb collisions play critical roles in the stronger yield suppression of the HFL compared to that in Xe+Xe. At last, we observe a scaling behaviour of the HFL \(R_{AA}\) versus \(\langle\)N\({}_{\rm part}\rangle\) in Pb+Pb and Xe+Xe collisions, which indicates that heavy quarks lose similar energy in the medium formed with the same \(\langle\)N\({}_{\rm part}\rangle\) even for different collision systems. _Acknowledgments:_ This research is supported by the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008, and the National Natural Science Foundation of China with Project Nos. 11935007, 12035007 and 12247127. S. Wang is supported by China Postdoctoral Science Foundation under project No. 2021M701279.
2304.14835
Regret Optimal Control for Uncertain Stochastic Systems
We consider control of uncertain linear time-varying stochastic systems from the perspective of regret minimization. Specifically, we focus on the problem of designing a feedback controller that minimizes the loss relative to a clairvoyant optimal policy that has foreknowledge of both the system dynamics and the exogenous disturbances. In this competitive framework, establishing robustness guarantees proves challenging as, differently from the case where the model is known, the clairvoyant optimal policy is not only inapplicable, but also impossible to compute without knowledge of the system parameters. To address this challenge, we embrace a scenario optimization approach, and we propose minimizing regret robustly over a finite set of randomly sampled system parameters. We prove that this policy optimization problem can be solved through semidefinite programming, and that the corresponding solution retains strong probabilistic out-of-sample regret guarantees in face of the uncertain dynamics. Our method naturally extends to include satisfaction of safety constraints with high probability. We validate our theoretical results and showcase the potential of our approach by means of numerical simulations.
Andrea Martin, Luca Furieri, Florian Dörfler, John Lygeros, Giancarlo Ferrari-Trecate
2023-04-28T13:21:46Z
http://arxiv.org/abs/2304.14835v3
# Regret Optimal Control for Uncertain Stochastic Systems ###### Abstract We consider control of uncertain linear time-varying stochastic systems from the perspective of regret minimization. Specifically, we focus on the problem of designing a feedback controller that minimizes the loss relative to a clairvoyant optimal policy that has foreknowledge of the system dynamics and the exogenous disturbances. In this competitive framework, establishing robustness guarantees proves challenging as, differently from the case where the model is known, the benchmark policy is not only inapplicable, but also impossible to compute without knowledge of the system parameters. To overcome this issue, we embrace a scenario optimization approach, and we propose minimizing regret robustly over a finite set of randomly sampled system parameters. We prove that this policy optimization problem can be efficiently solved through semidefinite programming, and that the corresponding solution retains strong probabilistic out-of-sample regret guarantees in face of the uncertain dynamics. Our method naturally extends to include satisfaction of safety constraints with high probability. We validate our theoretical results and showcase the potential of our approach by means of numerical simulations. ## I Introduction Inspired by online optimization and learning methods, control of dynamical system has recently been studied through the lens of regret minimization [1]. This emerging paradigm is competitive and nonstochastic, and aims at designing efficient control laws that minimize the worst-case loss relative to an optimal policy in hindsight. Algorithms with provable regret certificates hence offer attractive performance guarantees that - in contrast with the distributional and worst-case assumptions typical of \(\mathcal{H}_{2}\) and \(\mathcal{H}_{\infty}\) controllers [2] - hold independently of how disturbances are generated. Most prior work in this area employs gradient methods to deal with adversarially chosen cost functions and perturbations, and shows that the resulting control law achieves sublinear regret against meaningful policy classes [1, 3, 4]. A parallel line of research, initiated by [5] and [6], studies the problem of competing against the optimal control actions selected by a clairvoyant (noncausal) policy, independent of the policy class generating these decisions. While restricted to the case of known cost functions, the formulation of [5] and [6] has received increasing interest thanks to: optimality of the clairvoyant benchmark policy, possibility of computing the regret-minimizing controller, and remarkable performance reported in several applications, including longitudinal motion control of a helicopter and control of a wind energy conversion system [7]. In particular, among recent contributions, [8] and [9] proposed an efficient optimization-based synthesis framework to incorporate safety constraints, [10] and [11] considered partially-observed systems, [12] and [7] investigated the closely related metric of competitive ratio, [5] and [13] considered state estimation problems, and [14] studied connections with behavioral cloning approaches in imitation learning. Despite these advances, an important open challenge is how to track the performance of the clairvoyant optimal policy without knowledge of the underlying dynamics. In fact, as the systems under control become increasingly complex, assuming availability of precise mathematical models appears more and more impractical. 
Nevertheless, to the best of our knowledge, only [15] approached this problem, showing that several iterative control algorithms that combine system identification with gradient descent methods, e.g., [3, 4], also achieve, asymptotically, near-optimal competitive ratio relative to the clairvoyant optimal policy. However, this result only holds for systems with time-invariant dynamics and does not allow synthesizing control policies that, given a set of admissible plants, guarantee that the regret relative to the clairvoyant optimal policy is minimized robustly. Towards addressing these issues, in this paper we present a solution based on the scenario optimization approach [16, 17, 18], which is applicable to uncertain stochastic linear time-varying systems affected by a priori unknown but measurable disturbance processes.1 A key challenge lies in handling the different impact that parametric uncertainty has on the closed-loop behavior achieved by causal and clairvoyant control policies. In fact, simultaneously accounting for these effects considerably complicates the application of the analysis methods used in [20, 21, 22] to derive suboptimality and sample complexity bounds for classical linear quadratic control problems. Footnote 1: These include but are not limited to the class of linear parameter-varying systems – a middle ground between linear and nonlinear dynamics [19]. For a wide range of control applications, including robotics, building energy management, and power grids [23], designing a single state feedback policy that attains robust performance across all admissible system dynamics can prove overly conservative. Instead, it is beneficial to optimize for a unique closed-loop behavior - while allowing the state feedback law that achieves it to vary - leveraging a posteriori measurements of exogenous perturbations such as external forces, solar radiation, and electricity demands for control implementation. Motivated as above, we show how convex optimization techniques can be used to synthesize a disturbance feedback robust control policy with provable regret guarantees in spite of the uncertain dynamics. In particular, building upon [16, 17, 18], we propose constructing a scenario problem by appropriately sampling over the space of uncertain parameters. We prove that the policy that minimizes regret robustly over the considered scenarios can be computed efficiently via semidefinite programming, and that this optimal solution exhibits generalization capabilities - in the sense that the resulting regret bound holds true for all but a small fraction of uncertainty realizations whose probability is no larger than a prespecified tolerance level. Our approach naturally extends to include satisfaction of safety constraints with high probability. The advantages of our design method are twofold. First, contrary to worst-case solutions, which are known to be computationally hard to evaluate [24], and coherently with the theory of scenario optimization, our approach uses a finite number of randomly sampled uncertainty realizations only, and is thus tractable. Second, as opposed to probabilistic design based on classical \(\mathcal{H}_{\infty}\) specifications, our method leverages the cost of the optimal policy in hindsight to yield performance guarantees that are tailored to the specific uncertainty and disturbance realizations. 
As we discuss and validate by means of numerical simulations, this allows us to reduce conservatism by establishing tighter upper bounds on the realized cost - which in turn translate into improved closed-loop performance across all uncertain system dynamics for several disturbance profiles of practical relevance. ## II Problem Statement and Preliminaries ### _Dynamics, control objective, and constraints_ We consider an uncertain discrete-time linear time-varying dynamical system described by the state-space equation \[x_{t+1}=A_{t}(\theta_{t})x_{t}+B_{t}(\theta_{t})u_{t}+E_{t}(\theta_{t})w_{t}\,, \tag{1}\] where \(x_{t}\in\mathbb{R}^{n}\), \(u_{t}\in\mathbb{R}^{m}\), \(\theta_{t}\in\mathbb{R}^{d}\) and \(w_{t}\in\mathbb{R}^{p}\) are the system state, the control input, a vector of uncertain parameters that characterize the family of admissible plants, and a measurable disturbance process, respectively. As a first milestone towards designing a receding horizon control policy, we focus on optimizing the closed-loop behavior of this uncertain system over a finite-time planning horizon of length \(T\in\mathbb{N}\), and let \[\mathbf{x} =(x_{0},x_{1},\ldots,x_{T-1})\,,\ \ \mathbf{u}=(u_{0},u_{1},\ldots,u_{T-1})\,,\] \[\mathbf{w} =(x_{0},w_{0},\ldots,w_{T-2})\,,\ \ \mathbf{\theta}=(\theta_{0},\theta_{1},\ldots,\theta_{T-1})\,,\] for compactness. On the one hand, we do not make any assumptions about the statistical properties of the exogenous disturbance process \(\mathbf{w}\), that can also be adversarially selected. On the other hand, we assume that \(\mathbf{\theta}\) is drawn according to a probability distribution \(\mathbb{P}_{\mathbf{\theta}}\) with a possibly unknown and unbounded support set \(\mathbf{\Theta}\). This probability measure may reflect a priori knowledge about the actual likelihood of each realization of the system parameters, or may simply encode the relative importance that we attribute to each uncertainty instance. In particular, we do not require \(\mathbb{P}_{\mathbf{\theta}}\) to be known explicitly, but rely on a set \(\mathcal{D}=\{\mathbf{\theta}^{1},\ldots,\mathbf{\theta}^{N}\}\) of \(N\in\mathbb{N}\) independent samples only.2 Finally, we assume that the matrices \(E_{t}(\theta_{t})\) are full column rank for all \(t\in\mathbb{I}_{T}=\{0,\ldots,T-1\}\) and for all \(\theta_{t}\) such that \(\mathbf{\theta}\in\mathbf{\Theta}\). Footnote 2: Note that the individual parameter realizations \(\theta_{0}^{k},\ldots,\theta_{T-1}^{k}\) inside a training sample \(\mathbf{\theta}^{k}\in\mathcal{D}\) need not be independent and identically distributed. **Remark 1**: _Often times, the probability distribution \(\mathbb{P}_{\mathbf{\theta}}\) is unknown, yet uncertainty samples are directly made available to the policy designer as observations. For instance, this is the case when the realizations \(\mathbf{\theta}^{k}\in\mathcal{D}\) correspond to a series of system identification experiments, see, e.g., [16]._ **Remark 2**: _As previously discussed, measurable disturbance processes arise in several control applications and include, e.g., turbulence and wind gusts in aircraft control, and heat and humidity loads in heating, ventilating and air conditioning systems. Besides, our formulation encompasses the broad class of linear-parameter-varying systems. 
In fact, if \(\theta_{t}\) denotes an a priori uncertain but measurable scheduling parameter, then past disturbance realizations can be reconstructed by \(w_{t}=E_{t}(\theta_{t})^{\dagger}(x_{t+1}-A_{t}(\theta_{t})x_{t}-B_{t}( \theta_{t})u_{t})\), where \(E_{t}(\theta_{t})^{\dagger}\) is the Moore-Penrose inverse of \(E_{t}(\theta_{t})\). Differently from most literature on linear-parameter-varying systems, however, we allow generic nonlinear dependence with respect to \(\theta_{t}\) of the system matrices \(A_{t}(\theta_{t})\), \(B_{t}(\theta_{t})\), and \(E_{t}(\theta_{t})\)._ Motivated by the framework of nonstochastic control [1], we consider the problem of designing a causal decision policy \(\mathbf{\pi}=(\pi_{0},\ldots,\pi_{T-1})\), with \(u_{t}=\pi_{t}(x_{0},\ldots,x_{t},w_{0},\ldots,w_{t-1})\), that closely tracks the performance of an ideal clairvoyant policy \(\mathbf{\psi}=(\psi_{0},\ldots,\psi_{T-1})\). The noncausal benchmark policy \(\mathbf{\psi}\) selects the control actions with foreknowledge of \(\mathbf{w}\) and \(\mathbf{\theta}\), i.e., \(u_{t}=\psi_{t}(x_{0},w_{0},\ldots,w_{T-2},\theta_{0},\ldots,\theta_{T-1})\). More specifically, for any fixed \(\mathbf{w}\) and \(\mathbf{\theta}\), let \[J(\mathbf{\pi},\mathbf{w},\mathbf{\theta})=\mathbf{x}^{\top}\mathbf{Q}\mathbf{x}+\mathbf{u}^{\top}\mathbf{R} \mathbf{u}\,, \tag{2}\] with state and input weighting matrices \(\mathbf{Q}\succeq 0\) and \(\mathbf{R}\succ 0\), denote the quadratic control cost incurred by playing the policy \(\mathbf{\pi}\), and define the per-instance regret of \(\mathbf{\pi}\) relative to \(\mathbf{\psi}\) as: \[\mathtt{R}(\mathbf{\pi},\mathbf{\psi},\mathbf{w},\mathbf{\theta})=J(\mathbf{\pi},\mathbf{w},\mathbf{ \theta})-J(\mathbf{\psi},\mathbf{w},\mathbf{\theta})\,. \tag{3}\] Building upon ideas proposed in [5, 6] for the case where the system dynamics (1) are perfectly known, we then formulate the robust regret minimization problem as follows: \[\mathtt{R}^{*}(\mathbf{\psi})=\min_{\mathbf{\pi}}\ \max_{\mathbf{\theta}\in\mathbf{\Theta}}\ \max_{\|\mathbf{w}\|_{2}\leq 1}\ \mathtt{R}(\mathbf{\pi},\mathbf{\psi},\mathbf{w},\mathbf{\theta})\,. \tag{4}\] Remarkably, an optimal solution \(\mathbf{\pi}^{*}\) to (4) guarantees that its cost is always at most \(\mathtt{R}^{*}(\mathbf{\psi})\) higher than that of the ideal, yet inapplicable, benchmark policy \(\mathbf{\psi}\) - no matter how the disturbances are generated and which system dynamics realize. As modern engineering systems often feature safety-critical components, we include in the synthesis problem a robust constraint satisfaction requirement. In particular, to account for safety limitations on the physical variables of the system, we define a polytopic safe set in the space of state and input trajectories as follows: \[\mathcal{S}(\mathbf{\theta})=\{(\mathbf{x},\mathbf{u}):\mathbf{H}_{x}(\mathbf{\theta})\mathbf{x}+\mathbf{H}_{ u}(\mathbf{\theta})\mathbf{u}\leq\mathbf{h}(\mathbf{\theta})\}\,. \tag{5}\] Then, we consider the objective of minimizing the worst-case regret while ensuring that \((\mathbf{x},\mathbf{u})\in\mathcal{S}(\mathbf{\theta})\) robustly for all \(\mathbf{\theta}\in\mathbf{\Theta}\) and all \(\mathbf{w}\) belonging to the compact polytope \(\mathcal{W}(\mathbf{\theta})\) defined by \[\mathcal{W}(\mathbf{\theta})=\{\mathbf{w}:\mathbf{H}_{w}(\mathbf{\theta})\mathbf{w}\leq\mathbf{h}_{w}( \mathbf{\theta})\}\,. 
\tag{6}\] ### _Affine disturbance feedback policy_ In general, it is well-known that optimizing over the function space of feedback policies is computationally intractable. Therefore, as common in the control literature [25, 26], throughout this paper we restrict our attention to affine disturbance feedback policies of the form \(\mathbf{u}=\mathbf{\Phi}_{u}\mathbf{w}\), with \(\mathbf{\Phi}_{u}\) lower block-triangular to enforce causality. We note that affine policies enable attaining minimum regret against the optimal sequence of control actions in hindsight if the true system dynamics are known and the safety constraints are not active [5, 6]. Moreover, as we will show in the next section, this choice allows us to reformulate the intractable minimization of (4) subject to the dynamics (1) and the constraints (5) as a convex optimization problem. Let us define through diagonal concatenation of matrices the operators \(\mathbf{A}(\mathbf{\theta})=\mathrm{blkdiag}(A_{0}(\theta_{0}),\ldots,A_{T-1}(\theta_{T -1}))\), \(\mathbf{B}(\mathbf{\theta})=\mathrm{blkdiag}(B_{0}(\theta_{0}),\ldots,B_{T-1}(\theta_{ T-1}))\), and \(\mathbf{E}(\mathbf{\theta})=\mathrm{blkdiag}(I_{n},E_{0}(\theta_{0}),\ldots,E_{T-2}( \theta_{T-2}))\). With this notation in place, we observe that the closed-loop state trajectory under the feedback law \(\mathbf{u}=\mathbf{\Phi}_{u}\mathbf{w}\) can be expressed as a linear function of \(\mathbf{w}\) as per: \[\mathbf{x} =\mathbf{Z}\mathbf{A}(\mathbf{\theta})\mathbf{x}+\mathbf{Z}\mathbf{B}(\mathbf{\theta})\mathbf{u} +\mathbf{E}(\mathbf{\theta})\mathbf{w}\,, \tag{7}\] \[=(\mathbf{I}-\mathbf{Z}\mathbf{A}(\mathbf{\theta}))^{-1}(\mathbf{Z}\mathbf{B}(\mathbf{\theta })\mathbf{\Phi}_{u}+\mathbf{E}(\mathbf{\theta}))\mathbf{w}:=\mathbf{\Phi}_{x}(\mathbf{\theta})\mathbf{w}\,,\] where \(\mathbf{Z}\) is the block-downshift operator, namely, a matrix with identity matrices along its first block sub-diagonal and zeros elsewhere. ### _On the choice of the clairvoyant benchmark policy_ We conclude our problem formulation by commenting on the choice of the clairvoyant benchmark policy \(\mathbf{\psi}\). A first meaningful objective is that of competing against the best sequence of control actions in hindsight, without imposing any structure on \(\mathbf{\psi}\). In this case, it can be shown by adapting the derivations of [2, 8] that the optimal policy performs noncausal linear combinations of the disturbances, through weights that depend on past, present and future realizations of the uncertain system parameters. Specifically, it holds that: \[\mathbf{\psi}(\mathbf{w},\mathbf{\theta})=-(\mathbf{R}+\mathbf{F}(\mathbf{\theta})^{\top}\mathbf{Q}\mathbf{F}( \mathbf{\theta}))^{-1}\mathbf{F}(\mathbf{\theta})^{\top}\mathbf{Q}\mathbf{G}(\mathbf{\theta})\mathbf{w}\,, \tag{8}\] where \(\mathbf{F}(\mathbf{\theta})=(\mathbf{I}-\mathbf{Z}\mathbf{A}(\mathbf{\theta}))^{-1}\mathbf{Z}\mathbf{B}(\mathbf{ \theta})\) and \(\mathbf{G}(\mathbf{\theta})=(\mathbf{I}-\mathbf{Z}\mathbf{A}(\mathbf{\theta}))^{-1}\mathbf{E}(\mathbf{\theta})\) are the causal response operators that encode the uncertain dynamics (1) as \(\mathbf{x}=\mathbf{F}(\mathbf{\theta})\mathbf{u}+\mathbf{G}(\mathbf{\theta})\mathbf{w}\). Alternatively, leveraging the foreknowledge of all elements in \(\mathbf{\theta}\) and using results in Corollary 4 of [8], one may define more complex linear control benchmarks that, e.g., comply with prescribed safety constraints. 
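To make Eq. (8) and the operators \(\mathbf{F}(\mathbf{\theta})\), \(\mathbf{G}(\mathbf{\theta})\) concrete, the following Python sketch assembles them for one realized sequence of system matrices and returns the unconstrained clairvoyant policy. The helper name `clairvoyant_policy`, the toy dimensions, and the use of numpy/scipy are our own illustrative choices under the notation above, not part of the paper.

```python
import numpy as np
from scipy.linalg import block_diag

def clairvoyant_policy(A_list, B_list, E_list, Q, R):
    """Assemble F(theta), G(theta) and the unconstrained clairvoyant policy of Eq. (8)
    for one realized sequence of system matrices A_t, B_t, E_t over the horizon."""
    T = len(A_list)
    n, m = B_list[0].shape
    A = block_diag(*A_list)                                # bold A(theta)
    B = block_diag(*B_list)                                # bold B(theta)
    E = block_diag(np.eye(n), *E_list[:T - 1])             # bold E(theta) = blkdiag(I, E_0, ..., E_{T-2})
    Z = np.kron(np.diag(np.ones(T - 1), k=-1), np.eye(n))  # block-downshift operator
    F = np.linalg.solve(np.eye(n * T) - Z @ A, Z @ B)      # (I - ZA)^{-1} ZB
    G = np.linalg.solve(np.eye(n * T) - Z @ A, E)          # (I - ZA)^{-1} E
    Psi_u = -np.linalg.solve(R + F.T @ Q @ F, F.T @ Q @ G) # Eq. (8)
    return Psi_u, F, G

# Toy usage: horizon T = 3, scalar state/input, one sampled parameter realization.
rng = np.random.default_rng(0)
A_list = [np.array([[0.9 + 0.1 * rng.standard_normal()]]) for _ in range(3)]
B_list = [np.eye(1)] * 3
E_list = [np.eye(1)] * 3
Psi_u, F, G = clairvoyant_policy(A_list, B_list, E_list, Q=np.eye(3), R=np.eye(3))
```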
In both cases, differently from the model-based setting considered [5, 6], the (nonlinear) dependence of \(\mathbf{\psi}\) on the system dynamics makes it impossible to compute the actual benchmark policy - and hence also the policy that minimizes regret against it - before \(\mathbf{\theta}\) gets revealed. To get around this problem without sacrificing the instance-wise optimality of \(\mathbf{\psi}\) - as would result, for instance, by artificially constructing a benchmark policy that achieves robust performance across all admissible system dynamics having foreknowledge of \(\mathbf{w}\) only - in the next section we present a randomized approach based on the scenario optimization framework [16, 17, 18]. ## III Main Results In this section, we show how a causal control policy with probabilistic certificates of regret and safety can be efficiently computed in spite of the uncertain dynamics. To do so, we first construct a scenario approximation of the robust regret minimization problem in (4) by restricting our focus to a finite number of uncertainty instances only. Then, inspired by [8], we prove that the policy that safely minimizes regret over the considered scenarios can be expressed as the solution of a semidefinite optimization problem. Finally, having shed light on the convexity properties of the problem at hand and leveraging results from the theory of uncertain convex programs [16, 17, 18], we derive strong guarantees on the probability of violating the out-of-sample regret bound or the safety constraints. For ease of presentation, we defer all proofs to the appendix. In what follows, we let \(\mathbf{\Psi}_{u}(\mathbf{\theta})\) and \(\mathbf{\Psi}_{x}(\mathbf{\theta})\) denote the closed-loop system responses that map \(\mathbf{w}\) to the control actions selected by the clairvoyant policy \(\mathbf{\psi}\) and to the corresponding state trajectory, respectively. For instance, if \(\mathbf{\psi}\) were chosen as the unconstrained optimal policy in hindsight, then we would have that \(\mathbf{\Psi}_{u}(\mathbf{\theta})=-(\mathbf{R}+\mathbf{F}(\mathbf{\theta})^{\top}\mathbf{Q}\mathbf{F}(\mathbf{\theta}))^{-1}\mathbf{F}(\mathbf{\theta})^{\top}\mathbf{Q}\mathbf{G}(\mathbf{\theta})\) by inspection of (8). Further, with slight abuse of notation, we will often use \(\mathbf{\Phi}_{u}\) and \(\mathbf{\Psi}_{u}\) instead of \(\mathbf{\pi}\) and \(\mathbf{\psi}\), respectively. We start by introducing the following epigraphic form of the robustly safe regret minimization problem: \[\min_{\mathbf{\Phi}_{u},\gamma}\ \gamma \tag{9a}\] \[\mathrm{subject\ to}\ \mathbf{\Phi}_{u}\ \mathrm{with\ causal\ sparsities}\,,\] (9b) \[\mathbf{\Phi}_{x}(\mathbf{\theta})=(\mathbf{I}-\mathbf{Z}\mathbf{A}(\mathbf{\theta}))^{-1}(\mathbf{Z}\mathbf{B}(\mathbf{\theta})\mathbf{\Phi}_{u}+\mathbf{E}(\mathbf{\theta}))\,,\] (9c) \[\max_{\mathbf{w}\in\mathcal{W}(\mathbf{\theta})}\ \mathbf{H}_{x}(\mathbf{\theta})\mathbf{\Phi}_{x}(\mathbf{\theta})\mathbf{w}+\mathbf{H}_{u}(\mathbf{\theta})\mathbf{\Phi}_{u}\mathbf{w}\leq\mathbf{h}(\mathbf{\theta})\,,\] (9d) \[\max_{\|\mathbf{w}\|_{2}\leq 1}\ \mathds{R}(\mathbf{\Phi}_{u},\mathbf{\Psi}_{u}(\mathbf{\theta}),\mathbf{w},\mathbf{\theta})\leq\gamma\,,\ \forall\mathbf{\theta}\in\mathbf{\Theta}\,; \tag{9e}\] we denote the optimal value of (9) by \(\bar{\mathds{R}}^{\star}(\mathbf{\Psi}_{u}(\mathbf{\theta}))\). Even though we have restricted attention to affine disturbance feedback policies, this optimization problem remains intractable.
In fact, if the uncertainty support set \(\mathbf{\Theta}\) has infinite cardinality, then (9) features an infinite number of constraints, each associated with a different instance of \(\mathbf{\theta}\). Besides, strong duality results do not apply in a straightforward way as \(\mathbf{\Theta}\) is not assumed to be connected, let alone convex. Motivated by the scenario optimization framework [16, 17, 18], we therefore propose replacing the maximization over \(\mathbf{\Theta}\) with a maximization over the finite set of sampled uncertainty realizations \(\mathcal{D}=\{\mathbf{\theta}^{1},\ldots,\mathbf{\theta}^{N}\}\) only. Thanks to strong generalization properties under convexity assumptions, this randomized approach has found important applications, e.g., in stochastic model predictive control [27, 28]. Here, we instead embrace a scenario optimization approach to establish regret guarantees relative to a clairvoyant policy that is impossible to compute without knowledge of the realized system dynamics. Proceeding as just outlined, we first approximate (9) by constructing its scenario counterpart as follows: \[\min_{\mathbf{\Phi}_{u},\gamma}\ \gamma \tag{10a}\] \[\operatorname*{subject\ to}\ \text{(9b), (9c), (9d) for all}\ \mathbf{\theta}^{k}\in\mathcal{D}\,,\] (10b) \[\max_{\|\mathbf{w}\|_{2}\leq 1}\ \operatorname{R}(\mathbf{\Phi}_{u},\mathbf{\Psi}_{u}(\mathbf{\theta}^{k}),\mathbf{w},\mathbf{\theta}^{k})\leq\gamma\,,\ \forall\mathbf{\theta}^{k}\in\mathcal{D}\,, \tag{10c}\] with \(\mathbf{H}_{x}^{k}=\mathbf{H}_{x}(\mathbf{\theta}^{k})\), \(\mathbf{H}_{u}^{k}=\mathbf{H}_{u}(\mathbf{\theta}^{k})\), and \(\mathbf{h}^{k}=\mathbf{h}(\mathbf{\theta}^{k})\) for brevity. Then, building upon the reformulations proposed in [8, 9] for the case of known system dynamics, we show that the optimal closed-loop map \(\mathbf{\Phi}_{u}^{*}(\mathbf{\Psi}_{u}(\mathbf{\theta}),\mathcal{D})\) that best tracks the performance of \(\mathbf{\Psi}_{u}(\mathbf{\theta})\) while ensuring safety across all sampled uncertainty instances \(\mathbf{\theta}^{k}\in\mathcal{D}\) can be computed by means of standard convex optimization techniques.
**Proposition 1**: _The scenario optimization problem (10) constructed using the uncertainty samples \(\mathcal{D}=\{\mathbf{\theta}^{1},\ldots,\mathbf{\theta}^{N}\}\) is equivalent to the following semidefinite optimization problem:_ \[\min_{\mathbf{\Phi}_{u},\mathbf{Y},\gamma}\ \gamma \tag{11a}\] \[\operatorname*{subject\ to}\ \text{(9b)}\,,\ \mathbf{Y}_{ij}\geq 0\,,\ \forall\mathbf{\theta}^{k}\in\mathcal{D}\,,\] \[\mathbf{Y}^{\top}\mathbf{h}_{w}^{k}\leq\mathbf{h}^{k}\,,\ \mathbf{H}_{x}^{k}\mathbf{\Phi}_{x}(\mathbf{\theta}^{k})+\mathbf{H}_{u}^{k}\mathbf{\Phi}_{u}=\mathbf{Y}^{\top}\mathbf{H}_{w}^{k}\,,\] (11b) \[\begin{bmatrix}\mathbf{I}&\begin{bmatrix}\mathbf{Q}^{\frac{1}{2}}\mathbf{\Phi}_{x}(\mathbf{\theta}^{k})\\ \mathbf{R}^{\frac{1}{2}}\mathbf{\Phi}_{u}\end{bmatrix}\\ \star&\gamma\mathbf{I}+\begin{bmatrix}\mathbf{Q}^{\frac{1}{2}}\mathbf{\Psi}_{x}(\mathbf{\theta}^{k})\\ \mathbf{R}^{\frac{1}{2}}\mathbf{\Psi}_{u}(\mathbf{\theta}^{k})\end{bmatrix}^{\top}\begin{bmatrix}\mathbf{Q}^{\frac{1}{2}}\mathbf{\Psi}_{x}(\mathbf{\theta}^{k})\\ \mathbf{R}^{\frac{1}{2}}\mathbf{\Psi}_{u}(\mathbf{\theta}^{k})\end{bmatrix}\end{bmatrix}\succeq 0\,, \tag{11c}\] _where \(\mathbf{H}_{w}^{k}=\mathbf{H}_{w}(\mathbf{\theta}^{k})\), \(\mathbf{h}_{w}^{k}=\mathbf{h}_{w}(\mathbf{\theta}^{k})\), and \(\star\) denotes entries that can be inferred from symmetry._ We remark that, for each \(\mathbf{\theta}^{k}\in\mathcal{D}\), the operators \(\mathbf{\Psi}_{x}(\mathbf{\theta}^{k})\) and \(\mathbf{\Psi}_{u}(\mathbf{\theta}^{k})\) in (11c) are the noncausal system responses associated with a benchmark policy that is optimal for the specific realization \(\mathbf{\theta}^{k}\) of the uncertain system parameters. For each sampled scenario, enforcing the linear matrix inequality constraint (11c) hence requires first evaluating the corresponding optimal closed-loop behavior in hindsight by proceeding as in Corollary 4 of [8]. **Remark 3**: _As the number of uncertainty samples in \(\mathcal{D}\) increases, solving (11) through semidefinite programming may represent a major computational bottleneck. We refer the interested reader to [29, 30] and the references therein for state-of-the-art techniques that leverage diagonal dominance and chordal sparsity to improve scalability._ Let \(\bar{\mathbb{R}}_{N}^{*}(\mathbf{\Psi}_{u}(\mathbf{\theta}),\mathcal{D})\) denote the optimal value of (10).4 Since only a finite subset of the constraints of (9) are considered in (10), we have that \(\bar{\mathbb{R}}_{N}^{*}\leq\bar{\mathbb{R}}^{\star}\), that is, \(\bar{\mathbb{R}}_{N}^{*}\) is an optimistic lower bound on the true minimax regret \(\bar{\mathbb{R}}^{\star}\). Conversely, thanks to the convexity properties established in Proposition 1 and exploiting key results in scenario optimization, we now show that the solution of (10) is approximately feasible for (9) - in the sense that the measure of the set of original constraints that it violates rapidly approaches zero as \(N\) increases. Before formalizing this generalization property in the theorem below, we observe that multiple optimal policies for (11) may exist, since the function \(\lambda_{\max}(\cdot)\) is not strongly convex. In this case, uniqueness of \(\mathbf{\Phi}_{u}^{*}(\mathbf{\Psi}_{u}(\mathbf{\theta}),\mathcal{D})\) can be enforced by designing a convex tie-break rule, e.g., a lexicographic criterion.
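To make the structure of Proposition 1 concrete, the following CVXPY sketch encodes only the causal-sparsity constraint and the regret LMI (11c) over a handful of sampled scenarios; the polytopic safety constraints in (11b) are omitted for brevity, and all system data (dimensions, cost weights, dependence on \(\theta\)) are illustrative assumptions rather than the setup used in the paper.

```python
# A simplified CVXPY sketch of the scenario program (11): causal sparsity + regret LMI (11c).
# The safety constraints of (11b) are omitted; system data are illustrative assumptions.
import numpy as np
import cvxpy as cp

T, n, m = 4, 2, 1
Q, R = np.eye(T * n), np.eye(T * m)
Qh, Rh = np.sqrt(Q), np.sqrt(R)                  # Q^(1/2), R^(1/2) (diagonal here)

def responses(th):
    A = np.array([[1.0, 0.1], [0.0, 1.0 + 0.1 * th]])   # assumed theta-dependence
    B = np.array([[0.0], [0.1]])
    ZA = np.kron(np.eye(T, k=-1), A)                     # Z * blkdiag(A, ..., A)
    ZB = np.kron(np.eye(T, k=-1), B)
    E = np.eye(T * n)                                    # E(theta) = I (assumption)
    F = np.linalg.solve(np.eye(T * n) - ZA, ZB)
    G = np.linalg.solve(np.eye(T * n) - ZA, E)
    Psi_u = -np.linalg.solve(R + F.T @ Q @ F, F.T @ Q @ G)   # clairvoyant policy, Eq. (8)
    return F, G, Psi_u, F @ Psi_u + G

gamma = cp.Variable(nonneg=True)
Phi_u = cp.Variable((T * m, T * n))
constraints = [Phi_u[t * m:(t + 1) * m, (t + 1) * n:] == 0 for t in range(T - 1)]  # causality

for th in np.random.default_rng(0).uniform(-1, 1, size=20):     # sampled scenarios D
    F, G, Psi_u, Psi_x = responses(th)
    Phi_x = F @ Phi_u + G                                        # Eq. (9c), affine in Phi_u
    M = cp.vstack([Qh @ Phi_x, Rh @ Phi_u])
    Npsi = np.vstack([Qh @ Psi_x, Rh @ Psi_u])
    lmi = cp.bmat([[np.eye(T * (n + m)), M],
                   [M.T, gamma * np.eye(T * n) + Npsi.T @ Npsi]])
    constraints.append(lmi >> 0)                                 # regret LMI (11c)

cp.Problem(cp.Minimize(gamma), constraints).solve(solver=cp.SCS)
print("scenario minimax regret bound:", gamma.value)
```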
Conversely, if the safety constraints (10b) are overly restrictive, the scenario problem (11) may become unfeasible; if this were the case, however, the original problem (9) would also certainly be unfeasible, and one would need to consider broader classes of policies, or to relax the safety requirements, e.g., by introducing slack variables in (5). Footnote 4: In the interest of readability, in the following, we omit function arguments when clear from the context. **Theorem 1**: _Fix any violation and confidence levels, say \(\epsilon\) and \(\beta\), in the open interval \((0,1)\), and let \(\delta\) denote the number of optimization variables in (10). If (10) is feasible and \(N>\delta\) satisfies \(\sum_{j=0}^{\delta-1}\binom{N}{j}\epsilon^{j}(1-\epsilon)^{N-j}\leq\beta\), then, with probability of at least \(1-\beta\) given a dataset \(\mathcal{D}\sim\mathbb{P}_{\mathbf{\theta}}^{N}\), it holds that:_ \[\mathbb{P}_{\mathbf{\theta}}\Big{(}\max_{\|\mathbf{w}\|_{2}\leq 1}\ \mathbb{R}(\mathbf{\Phi}_{u}^{*},\mathbf{\Psi}_{u}(\mathbf{\theta}),\mathbf{w},\mathbf{\theta})\leq\bar{\mathbb{R}}_{N}^{*}\ \text{ and }\ (\mathbf{x},\mathbf{u})\in\mathcal{S}(\mathbf{\theta})\,,\ \forall\mathbf{w}\in\mathcal{W}(\mathbf{\theta})\Big{)}\geq 1-\epsilon\,, \tag{12}\] _that is, given a parameter realization \(\mathbf{\theta}\sim\mathbb{P}_{\mathbf{\theta}}\), the probability that the optimal policy \(\mathbf{\Phi}_{u}^{*}(\mathbf{\Psi}_{u}(\mathbf{\theta}),\mathcal{D})\) computed solving (11) both incurs regret of at most \(\bar{\mathbb{R}}_{N}^{*}(\mathbf{\Psi}_{u}(\mathbf{\theta}),\mathcal{D})\) and complies with the safety constraints is lower bounded by \(1-\epsilon\)._ Theorem 1 presents an explicit sample complexity bound that, given a priori specified confidence and violation levels, ensures that our safety and regret guarantees extend to all but at most a fraction \(\epsilon\) of unseen dynamics \(\mathbf{\theta}\in\mathbf{\Theta}\) with arbitrarily high probability \(1-\beta\). As well-known in the literature on scenario optimization, the minimum number of scenarios \(N(\epsilon,\beta)\) required to fulfill the conditions of Theorem 1 grows at most logarithmically with \(\beta^{-1}\). Hence, even if a very small \(\beta\) is selected - so that (12) holds with practical certainty - the number of scenarios to be sampled remains manageable, see also [27]. Further, we note that the condition on the number \(N\) of uncertainty samples given in Theorem 1 is tight for fully-supported problems [17]; a simpler, albeit not tight, sufficient condition on \(N\) is given by [16]: \[N\geq 2\epsilon^{-1}(\delta+\log(\beta^{-1}))\,. \tag{13}\] We remark that following a scenario approach allows us to explicitly compute the clairvoyant optimal policy \(\mathbf{\psi}(\mathbf{w},\mathbf{\theta})\) by replacing the uncertain system dynamics with their sampled counterparts. Regret bounds relative to the instance-wise optimal benchmark \(\mathbf{\psi}(\mathbf{w},\mathbf{\theta})\) are attractive, as they yield upper bounds on the closed-loop cost that adapt to the realized dynamics \(\mathbf{\theta}\) and perturbation \(\mathbf{w}\).
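The sample-size condition of Theorem 1 and the simpler bound (13) are easy to evaluate numerically. The helper below is a minimal sketch assuming nothing beyond those two formulas; the example values of \(\epsilon\), \(\beta\), and \(\delta\) are arbitrary.

```python
# Smallest number of scenarios satisfying the exact binomial condition of Theorem 1,
# compared with the looser sufficient bound of Eq. (13).
import math

def min_scenarios_exact(eps: float, beta: float, delta: int) -> int:
    """Smallest N > delta with sum_{j=0}^{delta-1} C(N,j) eps^j (1-eps)^(N-j) <= beta."""
    N = delta + 1
    while True:
        tail = sum(math.comb(N, j) * eps**j * (1 - eps)**(N - j) for j in range(delta))
        if tail <= beta:
            return N
        N += 1

def min_scenarios_simple(eps: float, beta: float, delta: int) -> int:
    """Sufficient (not tight) condition (13): N >= 2/eps * (delta + log(1/beta))."""
    return math.ceil(2.0 / eps * (delta + math.log(1.0 / beta)))

print(min_scenarios_exact(0.05, 1e-6, 10))   # exact condition of Theorem 1
print(min_scenarios_simple(0.05, 1e-6, 10))  # looser bound (13)
```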
To illustrate this point more thoroughly, let us consider an alternative design based on a classical worst-case \(\mathcal{H}_{\infty}\) objective: \[\{\mathbf{\Phi}_{u,\mathbb{H}}^{\star},\ \bar{\mathbb{H}}_{N}^{\star}\}=\operatorname*{arg\,min}_{\mathbf{\Phi}_{u},\gamma}\ \gamma\quad\operatorname*{subject\ to}\ \text{(10b)}\,,\ \max_{\|\mathbf{w}\|_{2}\leq 1}\ J(\mathbf{\Phi}_{u},\mathbf{w},\mathbf{\theta}^{k})\leq\gamma\,,\ \forall\mathbf{\theta}^{k}\in\mathcal{D}\,. \tag{14}\] Motivated by this consideration and with the aim of reducing the computational complexity of
our scheme, we plan to study the possible application of wait-and-judge [31] and constraint removal [32] approaches in future work. Next, to illustrate the potential of our synthesis method, we compare the performance of our regret optimal control policy, denoted here by \(\boldsymbol{\pi}_{\mathtt{R}}\) for brevity, with that achieved by a robust policy \(\boldsymbol{\pi}_{\mathtt{H}}\) synthesized using a worst-case objective. We first compute \(\boldsymbol{\pi}_{\mathtt{R}}\) and \(\boldsymbol{\pi}_{\mathtt{H}}\) by solving the scenario optimization problems (11) and (14), respectively, using \(N=5000\) random samples of \(\delta_{k}\) and \(\delta_{c}\). Then, for several classes of disturbances \(\boldsymbol{w}\) often encountered in practice, we evaluate the closed-loop control costs \(J(\boldsymbol{\pi}_{\mathtt{R}},\boldsymbol{w},\boldsymbol{\theta})\), \(J(\boldsymbol{\pi}_{\mathtt{H}},\boldsymbol{w},\boldsymbol{\theta})\) and \(J(\boldsymbol{\psi},\boldsymbol{w},\boldsymbol{\theta})\) that these control laws and the clairvoyant optimal policy incur for different values of the uncertain parameters \(\boldsymbol{\theta}\). In Figure 3a, we plot \(J(\boldsymbol{\Psi}_{u}(\boldsymbol{\theta}),\boldsymbol{w},\boldsymbol{\theta})\) and compare it with \(\bar{\mathbb{H}}_{N}^{\star}-\bar{\mathbb{R}}_{N}^{*}\) to verify, according to (17), when (15) yields tighter upper bounds than (16) on the realized performance. Then, in Figure 3b, we display the percentage increase in the realized cost due to using \(\boldsymbol{\pi}_{\mathtt{H}}\) instead of \(\boldsymbol{\pi}_{\mathtt{R}}\), that is,6 Footnote 6: For stochastic disturbances, results are averaged over 10000 realizations. \[\Delta\bar{J}(\boldsymbol{w},\boldsymbol{\theta})=\frac{J(\boldsymbol{\pi}_{\mathtt{H}},\boldsymbol{w},\boldsymbol{\theta})-J(\boldsymbol{\pi}_{\mathtt{R}},\boldsymbol{w},\boldsymbol{\theta})}{J(\boldsymbol{\pi}_{\mathtt{R}},\boldsymbol{w},\boldsymbol{\theta})}=:\Delta\bar{J}\,.\] As already observed in previous work for perfectly known systems [5, 7, 8], Figure 3 shows that regret minimization constitutes a viable control design strategy for improving the closed-loop performance when the disturbances do not match classical design assumptions - in terms of both tighter upper bounds (Figure 3a) and lower realized costs (Figure 3b). Most importantly, our results show that regret optimal policies continue to offer these performance advantages consistently in the face of the uncertain dynamics. Interestingly, we further observe that the policy \(\boldsymbol{\pi}_{\mathtt{R}}\) often outperforms \(\boldsymbol{\pi}_{\mathtt{H}}\) even for the worst-case disturbance \(\boldsymbol{w}\). While this may seem counterintuitive, we note that \(\boldsymbol{\pi}_{\mathtt{H}}\) ensures minimum cost on a single pair of worst-case disturbances and parameters \((\boldsymbol{w}_{\mathrm{worst}},\boldsymbol{\theta}_{\mathrm{worst}})\) only. Conversely, for randomly sampled instances of the uncertain parameters \(\boldsymbol{\theta}\neq\boldsymbol{\theta}_{\mathrm{worst}}\), the policy \(\boldsymbol{\pi}_{\mathtt{H}}\) retains no optimality guarantee on the cost that it incurs under the most adverse perturbation \(\boldsymbol{w}\) for that \(\boldsymbol{\theta}\). ## V Conclusion We have presented a novel method for convex synthesis of robust control policies with provable regret and safety guarantees in the face of the uncertain stochastic dynamics.
As the clairvoyant optimal policy we compete against is unknown in this setting, we have proposed sampling the space of parameters that characterize the system dynamics. Leveraging results from the theory of scenario optimization, we have shown that the policy that minimizes regret robustly over these randomly drawn uncertainty instances retains strong probabilistic out-of-sample guarantees. Finally, we have presented numerical experiments to corroborate our theoretical results, and to highlight the potential of regret minimization in adapting to heterogeneous dynamics and disturbance sequences. Interesting directions for future research encompass studying infinite-horizon control problems, addressing computational complexity challenges for real-time implementation, and extending the theory of this emerging nonstochastic framework to systems with nonlinear dynamics.
2308.13257
Alternating Shrinking Higher-order Interactions for Sparse Neural Population Activity
Neurons in living things work cooperatively and efficiently to process incoming sensory information, often exhibiting sparse and widespread population activity involving structured higher-order interactions. While there are statistical models based on continuous probability distributions for neurons' sparse firing rates, how the spiking activities of a large number of interacting neurons result in the sparse and widespread population activity remains unknown. Here, for homogeneous (0,1) binary neurons, we provide sufficient conditions under which their spike-count population distribution converges to a sparse widespread distribution of the population spike rate in an infinitely large population of neurons. Following the conditions, we propose new models belonging to an exponential family distribution in which the sign and magnitude of neurons' higher-order interactions alternate and shrink as the order increases. The distributions exhibit parameter-dependent sparsity on a bounded support for the population firing rate. The theory serves as a building block for developing prior distributions and neurons' non-linearity for spike-based sparse coding.
Ulises Rodríguez-Domínguez, Hideaki Shimazaki
2023-08-25T09:12:26Z
http://arxiv.org/abs/2308.13257v2
# Alternating Shrinking Higher-order Interactions for Sparse Neural Population Activity ###### Abstract Neurons in living things work cooperatively and efficiently to process incoming sensory information, often exhibiting sparse and widespread population activity involving structured higher-order interactions. While there are statistical models based on continuous probability distributions for neurons' sparse firing rates, how the spiking activities of a large number of interacting neurons result in the sparse and widespread population activity remains unknown. Here, for homogeneous (0,1) binary neurons, we provide sufficient conditions under which their spike-count population distribution converges to a sparse widespread distribution of the population spike rate in an infinitely large population of neurons. Following the conditions, we propose new models belonging to an exponential family distribution in which the sign and magnitude of neurons' higher-order interactions alternate and shrink as the order increases. The distributions exhibit parameter-dependent sparsity on a bounded support for the population firing rate. The theory serves as a building block for developing prior distributions and neurons' non-linearity for spike-based sparse coding. keywords: Sparse distribution, widespread distribution, binary patterns, higher-order interactions, exponential family distribution, neural population activity. ## 1 Introduction The fundamental constraint placed on neural systems operating in natural environments is efficiency. Neurons therefore exhibit sparsity in various aspects of their activity patterns [1; 2] such as in the distribution of individual neuron responses to multiple stimuli (lifetime sparseness) [3] and the response distribution of a population of neurons (population sparseness) [3; 4; 5]. These sparse distributions require non-trivial higher-order statistical structure. For continuous distributions, sparsity is characterized by the higher-order moments such as kurtosis, which measures the tailedness of the distributions. Many parametric sparse distributions have been proposed, often within the context of the Bayesian prior for sparse coding [6]. Nevertheless, understanding how such distributions arise from the spiking activities of interacting neurons remains elusive. One approach to understand cooperative spiking activities of neurons involves analyzing near-simultaneous activities by binarizing the spiking activity within short time windows. When expressed by the exponential family distributions with interactions of multiple orders, this analysis can reveal interactions among subsets of neurons in the population. Interactions among more than two neurons are often termed higher-order interactions (HOIs). The model that lacks HOIs is obtained by constructing a distribution that maximizes entropy while constraining activity rates of individual neurons and joint activity rates of neuron pairs. This model, in which all HOIs are fixed at zero, is called a pair-wise maximum entropy (MaxEnt) model (a.k.a the spin-glass or Ising model in statistical physics and the Boltzmann machine in machine learning). The pair-wise MaxEnt model highlights the role of HOIs. The joint activity of more than two neurons produced by this model appears as chance coincidences expected from the activity rates of individual neurons and neuron pairs. Consequently, if nonzero HOIs exist, they indicate deviations of the joint activities of more than two neurons from these chance coincidences. 
There is considerable evidence suggesting that HOIs are necessary for characterizing the activity of neural populations. Early in vitro studies reported that the pairwise MaxEnt model accounted for approximately 90% of activity patterns of small populations [7; 8], implying that HOIs made only marginal contributions. However, HOIs may become more prominent as the population size increases [9; 10; 11; 12]. In fact, significant HOIs were later found ubiquitously both in vitro and in vivo neurons [13; 14; 11; 15; 16]. Analyzing HOIs enables researchers to uncover the underlying circuitry [17] and provides insights into their stimulus coding [18; 19; 14; 15; 20]. One of the most striking features related to higher-order interactions (HOIs) is the sparse yet widespread distribution of neural activity. Spike-count histograms for the number of simultaneously active neurons often exhibit widespread distributions, with notably longer tails for probabilities in highly synchronous states compared to independent or pairwise MaxEnt models [11; 15]. This underscores the importance of HOIs. Furthermore, the presence of highly variable probabilities leads to increased heat capacity (i.e., the variance of log probabilities). This indicates that the HOIs facilitate neural systems transitioning to highly fluctuating regimes, which may manifest as a critical state of the systems [21]. At the same time, spike-count distributions of neurons in various brain regions exhibit sparsity. Evidence for this can be appreciated in the spike-count histogram of individual neurons, such as retinal ganglion cells [22], V1 neurons [23], and primary auditory cortex neurons [24]. Population-level histograms of neural activity also display sparse profiles. Neurons are only sparsely active over time, with the duration of a state in which all neurons are silent being significantly longer than the prediction made by the pairwise MaxEnt model in both in vitro [10; 11; 16] and in vivo [15; 14; 25] studies. The study in [16] showed that the simultaneous silence is a dominant factor representing the HOIs, resulting in the alternating structure with positive pairwise, negative triple-wise, positive quadruple-wise interactions and so on when the activity is represented by \((0,1)\). While it is evident that HOIs are involved in sparse, widespread distributions, their sufficient conditions and parametric models derived from them have not been proposed yet. In this work, we establish conditions for the sparse and widespread distributions for spiking activities of a homogeneous neural population and provide new parametric models belonging to the exponential family distribution based on the theory. The necessity of non-zero higher-order interactions in constructing the widespread distributions was also pointed out by Amari et al. [26]. As opposed to the previous study, we show that the base measure function in the exponential family distribution is an important factor in cancelling the entropy term of the combinatorial patterns that may otherwise dominate in the probability mass function. Further, since our theory makes it possible to construct the sparse widespread distributions belonging to the exponential family directly, we provide explicit models with structured higher-order interactions exhibiting parameter-dependent sparsity. The paper is organized as follows. 
In the following section (Section 2), we describe a probability mass function (PMF) of \((0,1)\) binary patterns using the exponential family distribution, assuming homogeneity over the neurons, and construct a population-count histogram, a distribution of the total activity in the population. We provide sufficient conditions that make a distribution widespread with its peak at a population spike count of zero in the limit of an infinite number of neurons. Section 3 introduces our alternating shrinking higher-order interaction models, whose corresponding probability density functions (PDFs) become widespread and remain sparse in the limit of a large number of neurons. Then in Section 4, we present the scenario when entropy in the PMF dominates in a large number of neurons, which hinders the widespread property. We conclude with a discussion in Section 5. ## 2 Homogeneous sparse population of neurons The activity of \(N\) neurons is represented by a set of binary random variables, using a column vector, \(\mathbf{X}=\left[X_{1},X_{2},\ldots,X_{N}\right]^{\mathsf{T}}\) where \(X_{i}\in\left\{0,1\right\}\) and for which we assume stationarity. The \(i\)th neuron activity \(X_{i}\) is \(1\) if the neuron is active and \(0\) otherwise. The probabilities of generating binary activity patterns, specified by \(\mathbf{x}=\left[x_{1},x_{2},\ldots,x_{N}\right]^{\mathsf{T}}\), where \(x_{i}\in\left\{0,1\right\}\) are given as \(\mathcal{P}\left(\mathbf{X=x}\right)\). This PMF can be written in the form of an exponential family distribution given by \[\mathcal{P}\left(\mathbf{X=x}\right)=\frac{h\left(\mathbf{x}\right)}{Z}\exp \left[\sum_{i=1}^{N}\theta_{i}x_{i}+\sum_{i_{1}<i_{2}}\theta_{i_{1}i_{2}}x_{i _{1}}x_{i_{2}}+\sum_{i_{1}<i_{2}<i_{3}}\theta_{i_{1}i_{2}i_{3}}x_{i_{1}}x_{i _{2}}x_{i_{3}}\right.\] \[+\ldots+\theta_{12\ldots N}x_{i_{1}}\ldots x_{i_{N}}\Biggr{]}, \tag{1}\] where \(Z\) is a normalization term, and the parameters \(\{\theta_{i}\}_{i=1}^{N}\), \(\{\theta_{i_{1}i_{2}}\}_{i_{1}<i_{2}}\), \(\ldots\), \(\theta_{1\ldots N}\) are called natural parameters. They characterize interactions among subset neurons indicated by the subscripts [27, 28]. The exponential family distribution allows the base measure function \(h\left(\mathbf{x}\right)\) to be a general nonnegative function of the vector pattern \(\mathbf{x}\). Here we will assume that \(h\left(\mathbf{x}\right)\) is a function of the total activity, \(\sum_{i=1}^{N}x_{i}\). Although Eq. 1 can realize arbitrary probabilities for all possible patterns even if \(h\left(\mathbf{x}\right)=1\), as we will show, the introduction of an appropriate base measure function simplifies the conditions for the sparse widespread distributions and for modelling the neural interactions. For simplicity, we use \(\mathcal{P}\left(\mathbf{x}\right)\) to represent \(\mathcal{P}\left(\mathbf{X}=\mathbf{x}\right)\). We study the activity of a population of homogeneous neurons. Homogeneity on its own is an important assumption for which specific preference over some neural activity patterns is ignored. Nonetheless, homogeneity allows us to change the analysis focus from a local to a global view on the sparse neural population activity in a region, which in turn facilitates the identification of theoretical properties. The binary activity of the homogeneous population is described by using single parameters \(\theta_{k}\) (\(k=1,2,\ldots,N\)) for all the combinatorial \(k\)-th order interactions in Eq. 
(1) \[\mathcal{P}\left(\mathbf{x}|\boldsymbol{\theta}_{N}\right)= \frac{h\left(\sum_{i=1}^{N}x_{i}\right)}{Z}\exp\left[\theta_{1} \sum_{i=1}^{N}x_{i}+\theta_{2}\sum_{i_{1}<i_{2}}x_{i_{1}}x_{i_{2}}+\ldots+ \theta_{N}x_{i_{1}}x_{i_{2}}\ldots x_{i_{N}}\right], \tag{2}\] where \(\boldsymbol{\theta}_{N}=\left(\theta_{1},\theta_{2},\ldots,\theta_{N}\right)\). This model extends the theoretical work by Amari et al. [26], where \(h\left(\sum_{i=1}^{N}x_{i}\right)=1\). The population activity of the homogeneous neurons is characterized by the distribution of the number of active neurons in the population. For homogeneous neurons any individual binary pattern where \(n\) neurons are active has the same probability. Therefore, the probability of having \(n\) number of active neurons in the population is given by \[\mathcal{P}\left(\sum_{i=1}^{N}X_{i}=n|\boldsymbol{\theta}_{N}\right)= \left(\begin{array}{c}N\\ n\end{array}\right)\mathcal{P}\left(x_{1}=1,\ldots,x_{n}=1,x_{n+1}=0,\ldots,x _{N}=0\mid\boldsymbol{\theta}_{N}\right)\] \[= \left(\begin{array}{c}N\\ n\end{array}\right)\frac{h\left(n\right)}{Z}\exp\left[\left(\begin{array}{c }n\\ 1\end{array}\right)\theta_{1}+\left(\begin{array}{c}n\\ 2\end{array}\right)\theta_{2}+\ldots+\left(\begin{array}{c}n\\ n\end{array}\right)\theta_{n}\right]\] \[= \left(\begin{array}{c}N\\ n\end{array}\right)\frac{h\left(n\right)}{Z}\exp\left[\sum_{k=1}^{n}\left( \begin{array}{c}n\\ k\end{array}\right)\theta_{k}\right]. \tag{3}\] Let the fraction of active neurons (or population rate) be \(R_{N}=\frac{1}{N}\sum_{i=1}^{N}X_{i}\). Using Eq. (3), the PMF of the random variable \(R_{N}\), \(\mathcal{P}\left(R_{N}=r_{N}|\boldsymbol{\theta}_{N}\right)\), where \(r_{N}\in S_{r}\) with \(S_{r}\equiv\left\{0,\frac{1}{N},\frac{2}{N},\ldots,1\right\}\), is \[\mathcal{P}\left(R_{N}=r_{N}|\boldsymbol{\theta}_{N}\right)= \mathcal{P}\left(\left.\sum_{i}X_{i}=Nr_{N}\right|\boldsymbol{ \theta}_{N}\right)\] \[= \left(\begin{array}{c}N\\ Nr_{N}\end{array}\right)\frac{h\left(Nr_{N}\right)}{Z}\exp\left[\sum_{k=1}^{Nr _{N}}\left(\begin{array}{c}Nr_{N}\\ k\end{array}\right)\theta_{k}\right]. \tag{4}\] We call this a PMF of the discrete population rate and we rewrite it as \[\mathcal{P}\left(R_{N}=r_{N}|\boldsymbol{\theta}_{N}\right)= \frac{1}{Z}\exp\left[NG_{N}(r_{N};\boldsymbol{\theta}_{N})\right], \tag{5}\] where \[G_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)=\frac{1}{N}\log\left( \begin{array}{c}N\\ Nr_{N}\end{array}\right)+\frac{1}{N}\log h\left(Nr_{N}\right)+\frac{1}{N}Q_{N }\left(r_{N};\boldsymbol{\theta}_{N}\right), \tag{6}\] and with the polynomial term defined as \[Q_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)=\sum_{k=1}^{Nr_{N}}\left( \begin{array}{c}Nr_{N}\\ k\end{array}\right)\theta_{k}. \tag{7}\] We note that the new underlying base measure function for such population rate distribution (Eq. (4)) consists of the binomial term multiplied by the \(h\left(\cdot\right)\) function, i.e., \(\left(\begin{array}{c}N\\ Nr_{N}\end{array}\right)h\left(Nr_{N}\right)\). As we stated before, such base measure function could alternatively be represented in a different way inside the (possibly non-polynomial) function \(Q_{N}\left(\cdot\right)\) as a function of the active neurons given the canonical parameters. Nonetheless, the representation we chose facilitates analysis in the limit of an infinitely large population of neurons as we will see. In the following, we use \(\mathcal{P}\left(r_{N}|\boldsymbol{\theta}_{N}\right)\) to represent the PMF above. 
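As a sanity check on Eqs. (3)-(7), the following sketch evaluates the population-count PMF of a homogeneous population for a user-supplied base measure \(h(n)\) and natural parameters \(\theta_k\), working in log-space for numerical stability; the first-order parametrization \(\theta_1=-f/N\) and the choice \(h(n)=1/\binom{N}{n}\) (anticipating Eq. (9) below) are illustrative assumptions.

```python
# A log-space sketch of Eqs. (3)-(5): the population-count PMF of N homogeneous binary
# neurons for given natural parameters theta_k and base measure h(n).
import numpy as np
from scipy.special import gammaln

def log_binom(n, k):
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def population_count_pmf(N, theta, log_h):
    """P(sum_i X_i = n), n = 0..N, with theta[k-1] = theta_k, following Eq. (3)."""
    logp = np.empty(N + 1)
    for n in range(N + 1):
        Q = sum(np.exp(log_binom(n, k)) * theta[k - 1] for k in range(1, n + 1))  # Eq. (7)
        logp[n] = log_binom(N, n) + log_h(n) + Q
    logp -= logp.max()                      # normalizing afterwards plays the role of 1/Z
    p = np.exp(logp)
    return p / p.sum()

N, f = 50, 5.0
theta = np.zeros(N); theta[0] = -f / N      # independent homogeneous neurons (illustrative)
pmf = population_count_pmf(N, theta, log_h=lambda n: -log_binom(N, n))
print(pmf[:5])                              # decays roughly like exp(-f * n / N)
```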
We are interested in the behaviour of the PMF (Eq. (5)) in the limit of a large number of neurons (\(N\rightarrow\infty\)): Namely, the probability density function (PDF) given through the relation \(p\left(r|\boldsymbol{\lambda}\right)dr=\lim_{N\rightarrow\infty}\mathcal{P}\left(r_{N}|\boldsymbol{\theta}_{N}\right)\), where \(r\) is the continuous population rate defined in the support \([0,1]\) and \(\boldsymbol{\lambda}\) is the set of parameters for the PDF. We wish to know the conditions under which this PDF is sufficiently concentrated near its peak at \(0\). Such a PDF would be relevant to model experimentally observed sparse population activity across different cortical populations, where arbitrarily low firing rates were exhibited by most neurons [29]. Following Amari et al.'s framework to construct widespread distributions [26], we provide a new theorem below with the sufficient conditions for having a sparse and widespread distribution. **Theorem 1**.: _Let \(G_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)\) be a non-positive strictly decreasing function with finite values for \(r_{N}\in S_{r}\). If \(NG_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)\) has the following order of magnitude in terms of \(N\),_ \[\mathcal{O}\left(NG_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)\right)=\mathcal{O}\left(1\right), \tag{8}\] _then the corresponding probability density function given through \(p\left(r|\boldsymbol{\lambda}\right)dr=\lim_{N\rightarrow\infty}\ \mathcal{P}\left(r_{N}|\boldsymbol{\theta}_{N}\right)\) is widespread in \(\left(0,1\right]\) with a single non-concentrated maximum at \(0\)._ For a proof, the reader can refer to A.1. Specifically, if we have the following form for the function \(h(\mathbf{x})\): \[h\left(\sum_{i=1}^{N}x_{i}\right)=1\left/\left(\begin{array}{c}N\\ \sum_{i=1}^{N}x_{i}\end{array}\right)\right. \tag{9}\] then the first two terms in Eq. (6) cancel out, resulting in \[G_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)=\frac{1}{N}Q_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right). \tag{10}\] Thus we obtain the following corollary. **Corollary 1**.: _Let \(h(\mathbf{x})\) be given by Eq. (9). If the polynomial term \(Q_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)\) satisfies_ \[\mathcal{O}\left(Q_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)\right)=\mathcal{O}\left(1\right), \tag{11}\] _and if \(q\left(r;\boldsymbol{\lambda}\right)=\lim_{N\rightarrow\infty}Q_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)\) is a non-positive strictly decreasing function, then the probability density function \(p\left(r|\boldsymbol{\lambda}\right)\) is widespread in \(\left(0,1\right]\) with a single non-concentrated maximum at \(0\)._ Here we introduce the simplest homogeneous model able to produce sparse population activity, i.e., a homogeneous population of independent binary neurons with only the first-order parameters (\(\theta_{2}=\theta_{3}=\ldots=\theta_{N}=0\)). Using \(\theta_{1}=-f/N\), the binary population PMF of the independent homogeneous neurons is given as \[\mathcal{P}\left(\mathbf{x}|f\right)=\frac{h\left(\sum_{i=1}^{N}x_{i}\right)}{Z}\exp\left[-f\frac{\sum_{i=1}^{N}x_{i}}{N}\right], \tag{12}\] where we assume that \(f>0\) and the function \(h\left(\cdot\right)\) is given by Eq. (9). With this \(h\left(\cdot\right)\) function that cancels out with the binomial term in Eq. (6), the corresponding population rate PMF (Eq. (5)) is given as \[\mathcal{P}\left(r_{N}|\theta_{1}\right)=\frac{1}{Z}e^{-fr_{N}}.
\tag{13}\] The corresponding continuous PDF is obtained through \[p\left(r|f\right)dr=\lim_{N\rightarrow\infty}\mathcal{P}\left(r_{N}|\theta_{1}\right)=\frac{1}{Z}e^{-fr}dr, \tag{14}\] where the normalization constant is obtained as \[Z=\int_{0}^{1}e^{-fr}dr=\frac{1-e^{-f}}{f}. \tag{15}\] Since \(\mathcal{O}\left(-fr_{N}\right)=\mathcal{O}\left(1\right)\) and \(q\left(r;f\right)=-fr\) is a strictly decreasing function, the PDF in Eq. (14) is widespread in \(\left(0,1\right]\) with a single non-concentrated maximum at \(0\) (Corollary 1). The sparsity in such PDF is controlled by the \(f\) parameter. See the A.2 for the mean and the variance of the distribution. This density corresponds to the PDF of an exponential distribution with parameter \(f>0\) but with a compact support in \(\left[0,1\right]\), instead of the support in \(\left[0,\infty\right)\). This independent model serves as a baseline in investigating the effect of the pairwise and higher-order interactions in shaping the sparse distribution. In the next section (Section 3), we present our proposed model whose population rate PMF is a particular case of Eq. (5). The distribution satisfies the conditions in Theorem 1 and converges to a widespread continuous distribution with parameter-dependent sparsity. In the subsequent section (Section 4), we present a case to which Theorem 1 does not apply, resulting in a concentrated distribution. ## 3 The model with alternating and shrinking higher-order interactions Neuronal populations exhibit significant excess rate of simultaneous silence [14; 10; 11; 16], where all neurons become inactive, compared to the chance level predicted by the pairwise MaxEnt models. When expressed in \(\left(0,1\right)\) patterns, the probability of silence of all neurons is captured by the feature given by \(\prod_{i=1}^{N}(1-x_{i})\) using the exponential family distribution. The expansion of this feature leads to the HOIs with alternating signs for each order, which was proposed as a model of the simultaneous silence [16]. However, the simultaneous silence model is limited in that it captures only a state of total silence, whose measure becomes negligibly small for a large number of neurons. Furthermore, as we show later, it is important to consider the shrinking strength in interactions as the order increases. The shrinking strength of the interactions allows interactions of all orders to persist in the limit of a large number of neurons despite the alternating structure. To construct a sparse model applicable in the large-\(N\) limit, we consider the following distribution for the activity patterns of the homogeneous population of binary neurons: \[\mathcal{P}(\mathbf{x}|\boldsymbol{\omega})=\frac{h\left(\sum_{i=1}^{N}x_{i}\right)}{Z}\exp\left[-f\sum_{j=1}^{N}\left(-1\right)^{j+1}C_{j}\left(\frac{\sum_{i=1}^{N}x_{i}}{N}\right)^{j}\right], \tag{16}\] where \(\boldsymbol{\omega}=\left\{f,C_{1},C_{2},\ldots,C_{N}\right\}\) is the set of parameters and \(Z\) is its partition function. We assume that \(f>0\) and \(C_{j}\) are positive (\(C_{j}>0\)) and decreasing with respect to \(j\), \(C_{j}<C_{j-1}\;\;\forall j=2,...,N\). In combination with \(\left(-1\right)^{j+1}\), such coefficients impose an alternating structure whose magnitude shrinks as the order of interaction increases. We will provide specific choices of \(C_{j}\) that make the alternating terms in the exponent converge to a decreasing function with respect to \(r_{N}\). In this model, we use Eq. (9) for the base measure function \(h(\mathbf{x})\).
Using such function is one of the sufficient conditions required for the distribution to become widespread in the limit of a large number of neurons (Theorem 1). For a counter-example please see Section 4. Therefore, the population rate PMF becomes (see Eq. (10)) \[\mathcal{P}\left(r_{N}|\boldsymbol{\theta}_{N}\right)= \frac{1}{Z}\exp\left[Q_{N}\left(r_{N};\boldsymbol{\theta}_{N} \right)\right], \tag{17}\] where \(Q_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)\) is a polynomial given by Eq. (7), which will be calculated as follows. The canonical form of the homogeneous population activity is given by Eq. (2). From Eq. (16), the canonical parameters (\(\theta_{k}\), with interaction of the order \(k\)) of the alternating and shrinking interaction model are computed as \[\theta_{1}= \sum_{l=1}^{N}\left(-1\right)^{l}\frac{fC_{l}}{N^{l}},\] \[\theta_{2}= \sum_{l=2}^{N}\left(-1\right)^{l}\frac{fC_{l}}{N^{l}}\sum_{ \begin{subarray}{c}k_{1}+k_{2}=l\\ k_{1}>0,k_{2}>0\end{subarray}}\left(\begin{array}{c}l\\ k_{1},k_{2}\end{array}\right),\] \[\theta_{3}= \sum_{l=3}^{N}\left(-1\right)^{l}\frac{fC_{l}}{N^{l}}\sum_{ \begin{subarray}{c}k_{1}+k_{2}+k_{3}=l\\ k_{1}>0,k_{2}>0,k_{3}>0\end{subarray}}\left(\begin{array}{c}l\\ k_{1},k_{2},k_{3}\end{array}\right),\] \[\vdots\] \[\theta_{N}= \left(-1\right)^{N}\frac{fC_{N}}{N^{N}}N!. \tag{18}\] See the B.1 for the detailed derivation. The PMF of the discrete population rate (Eq. (17)), will be specified by using these canonical parameters, \(\boldsymbol{\theta}_{N}=\left(\theta_{1},\ldots,\theta_{N}\right)\), where these parameters appear in the polynomial term, \(Q_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)\) (Eq. (7)). Consequently, the polynomial term is computed as \[Q_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)=-f\sum_{j=1}^{Nr_{N}}\left(-1 \right)^{j+1}C_{j}\left(r_{N}\right)^{j}+\mathcal{O}\left(\frac{1}{N}\right). \tag{19}\] See B.2 for the derivation. In the limit of \(N\to\infty\), our population rate PMF becomes the continuous density given by (see B.3) \[\lim_{N\to\infty}\mathcal{P}\left(r_{N}|\boldsymbol{\theta}_{N}\right) =p\left(r|\boldsymbol{\lambda}\right)dr\] \[=\frac{1}{Z}\exp\left[-f\sum_{j=1}^{\infty}\left(-1\right)^{j+1} C_{j}r^{j}\right]dr, \tag{20}\] where \(\mathbf{\lambda}=\left\{f,\left\{C_{j}\right\}_{j\in\mathbb{N}^{+}}\right\}\). Depending on the choice of each \(C_{j}\), we obtain different types of densities. Here we provide two examples where the polynomial term \(Q_{N}\left(r_{N};\mathbf{\theta}_{N}\right)\) converges to a non-positive decreasing function with respect to \(r\), and therefore the corresponding densities result in widespread distributions with a non-concentrated maximum at \(0\) (Corollary 1). **Polylogarithmic exponential distribution** If we define \(C_{j}=\frac{1}{j^{m}}\ \ \forall j\) then the probability density function in Eq. (20) is \[p\left(r|f,m\right)=\frac{1}{Z}\exp\left[f\text{Li}_{m}\left[-r\right]\right], \tag{21}\] where \(\text{Li}_{m}[\cdot]\) is the polylogarithm function of order \(m=1,2,3,\ldots\) (See Appendix B.3). We call the density in Eq. (21) the polylogarithmic exponential density, where the function \(f\text{Li}_{m}\left[-r\right]\) is non-positive (see C.1) and strictly decreasing (see C.2) for \(r\in[0,1]\) with a maximum at \(r=0\). See Fig. 1 for the density functions for different \(f\) and \(m\). Note that \(m=1\) we obtain the natural logarithm, i.e., \[\text{Li}_{1}\left[-r\right]=-\log\left[1+r\right]. 
\tag{22}\] The distribution function of the polylogarithmic exponential density corresponding to the PDF in Eq. (21) is as follows \[F\left(u|f,m\right)=\frac{1}{Z}\int_{0}^{u}\exp\left(f\text{Li}_{m}\left[-r\right]\right)dr, \tag{23}\] where \(u\in[0,1]\). For \(m=1\), we obtain the distribution function \[F\left(u|f,m=1\right) =\frac{1}{Z}\int_{0}^{u}\exp\left(-f\log\left(1+r\right)\right)dr \tag{24}\] \[=\left\{\begin{array}{cc}\frac{1-\left(1+u\right)^{-f+1}}{1-2^{-f+1}}&\text{ for }\,f\neq 1\\ \\ \frac{\log\left(1+u\right)}{\log 2}&\text{ for }\,f=1.\end{array}\right.\] Figure 1: Left: polylogarithmic exponential PDF as \(f\) varies (\(m=1\)). Right: polylogarithmic exponential PDF as \(m\) varies (\(f=3\)). For \(m=2,3,4,\ldots\) a numerical integration method may be used to approximate equation (23). The mean value of this distribution for \(m=1\) is given by \[\mu_{R}=\left\{\begin{array}{ll}\frac{1}{Z}\int_{0}^{1}\left(1+r\right)^{-f}rdr&=\frac{1}{1-2^{-f+1}}\left[\frac{1-2^{-f+2}}{f-2}-2^{-f+1}\right]&\mbox{for}\,f\neq 1\\ \\ \frac{1}{Z}\int_{0}^{1}\left(1+r\right)^{-1}rdr&=\frac{1-\log 2}{\log 2}&\mbox{ for}\,f=1,\end{array}\right. \tag{25}\] and the variance is \[\sigma_{R}^{2} =\frac{1}{Z}\int_{0}^{1}\left(1+r\right)^{-f}r^{2}dr-\mu_{R}^{2}\] \[=\left\{\begin{array}{ll}\frac{1}{1-2^{-f+1}}\left[\frac{2\left(1-2^{-f+3}\right)}{\left(f-3\right)\left(f-2\right)}-2^{-f+1}\left(1+\frac{2^{2}}{f-2}\right)\right]-\mu_{R}^{2}&\mbox{for}\,f\neq 1\\ \\ \frac{\log 2-\frac{1}{2}}{\log 2}-\left[\frac{1-\log 2}{\log 2}\right]^{2}&\mbox{for}\,f=1.\end{array}\right. \tag{26}\] **Shifted-geometric exponential distribution** If we instead define \(C_{j}=\tau^{j}\), with \(0<\tau<1\) for all \(j\), so that \(\tau r<1\), then the probability density function in Eq. (20) is \[p\left(r|f,\tau\right)= \frac{1}{Z}\exp\left[-\frac{f}{1+\frac{1}{\tau r}}\right]\] \[= \frac{1}{Z}\exp\left[f\left(\frac{1}{1+\tau r}-1\right)\right], \tag{27}\] where the last exponential argument corresponds to a shifted-geometric series. See B.3 for the details. Therefore, we call the density in Eq. (27) the shifted-geometric exponential density. See Fig. 2 for the density functions for different \(f\) and \(\tau\). In addition, the function \(f\left(\frac{1}{1+\tau r}-1\right)\) in Eq. (27) is non-positive (see C.1) and strictly decreasing (see C.2) for \(r\in[0,1]\) with a maximum at \(r=0\). See Fig. 1 for the PDF for different values of \(f\) and \(m\). The distribution function corresponding to the shifted-geometric exponential density in Eq. (27) is calculated as \[F\left(u|f,\tau\right) =\frac{1}{Z}\int_{0}^{u}\exp\left[f\left(\frac{1}{1+\tau r}-1\right)\right]dr\] \[=\frac{\left(1+\tau u\right)\exp\left[f\left(\frac{1}{1+\tau u}-1\right)\right]-1+fe^{-f}\left\{\operatorname{Ei}\left(f\right)-\operatorname{Ei}\left(\frac{f}{1+\tau u}\right)\right\}}{\left(1+\tau\right)\exp\left[f\left(\frac{1}{1+\tau}-1\right)\right]-1+fe^{-f}\left\{\operatorname{Ei}\left(f\right)-\operatorname{Ei}\left(\frac{f}{1+\tau}\right)\right\}}. \tag{28}\] Here, the special exponential integral function \(\mathrm{Ei}\left(x\right)\) is defined as follows for \(x\in\mathbb{R}\)[30] \[\mathrm{Ei}\left(x\right)=\gamma+\log\left(x\right)+\sum_{k=1}^{\infty}\frac{x^{k}}{k\ k!}, \tag{29}\] where \(\gamma\) is the Euler-Mascheroni constant (\(\gamma\approx 0.5772156649\)). See the Appendix C.3 for verification of Eq. (28).
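The closed forms above can be exercised numerically. The sketch below evaluates both densities of Eqs. (21) and (27) on a grid with numerical normalization, and draws samples from the polylogarithmic exponential distribution with \(m=1\) by inverting the closed-form distribution function (24); the parameter values are illustrative and are not meant to reproduce the figures.

```python
# A small sketch of Eqs. (21), (24) and (27): grid evaluation of both proposed densities
# and inverse-transform sampling of the polylogarithmic exponential distribution (m = 1).
import numpy as np

def polylog_exp_pdf(r, f):                 # m = 1: exp(f * Li_1(-r)) = (1 + r)^(-f)
    u = (1.0 + r) ** (-f)
    return u / np.trapz(u, r)

def shifted_geom_exp_pdf(r, f, tau):       # Eq. (27)
    u = np.exp(f * (1.0 / (1.0 + tau * r) - 1.0))
    return u / np.trapz(u, r)

def sample_polylog_exp(size, f, rng=np.random.default_rng(0)):
    """Inverse-transform sampling using the closed form of F(u | f, m=1) in Eq. (24)."""
    v = rng.uniform(size=size)
    if np.isclose(f, 1.0):
        return 2.0 ** v - 1.0                                   # invert F(u) = log(1+u)/log 2
    return (1.0 - v * (1.0 - 2.0 ** (1.0 - f))) ** (1.0 / (1.0 - f)) - 1.0

r = np.linspace(0.0, 1.0, 1001)
print(polylog_exp_pdf(r, f=5.0)[:3])        # sparse: density decays away from r = 0
print(shifted_geom_exp_pdf(r, f=5.0, tau=0.8)[:3])
print(sample_polylog_exp(5, f=5.0))         # samples concentrate near 0 for large f
```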
The mean value and the variance of the shifted-geometric exponential distribution are given by \[\mu_{R}=\frac{1}{Z}\int_{0}^{1}\exp\left[f\left(\frac{1}{1+\tau r}-1\right)\right]rdr, \tag{30}\] and \[\sigma_{R}^{2}=\frac{1}{Z}\int_{0}^{1}\exp\left[f\left(\frac{1}{1+\tau r}-1\right)\right]r^{2}dr-\mu_{R}^{2}, \tag{31}\] respectively, where the normalization constant \(Z\) is given in Eq. (C.17). Please see Eqs. (C.20) and (C.22) in the C.3 for the explicit expression of the integrals in Eqs. (30) and (31) respectively. #### Properties of the distributions **Sparsity** It can be appreciated in Figs. 1 and 2 that the sparsity for both the polylogarithmic exponential and the shifted-geometric exponential densities is controlled in a non-linear way by the \(f\) parameter. The densities with small values of \(f\) approach a uniform distribution while the densities with large \(f\) values become very sparse. Figure 2: Left: shifted-geometric exponential PDF as \(f\) varies (\(\tau=0.8\)). Right: shifted-geometric exponential PDF as \(\tau\) varies (\(f=5\)). In fact, we can formally see this in the following limits for both distributions: \[\lim_{f\to 0}p\left(r|f,m\right)=\lim_{f\to 0}p\left(r|f,\tau\right)=1\,, \tag{32}\] and \[\lim_{f\to\infty}p\left(r|f,m\right)=\lim_{f\to\infty}p\left(r|f,\tau\right)=\delta\left(r\right)\,. \tag{33}\] From Eq. (32), both distributions become uniform as \(f\to 0\). On the other extreme, as \(f\to\infty\) both distributions tend to a Dirac delta distribution centered at \(0\) (Eq. (33)), which can be interpreted as a super-sparse distribution concentrated at \(0\). Compared to the polylogarithmic exponential distribution, the shifted-geometric exponential distribution exhibits fatter tails due to a slower decay in probability for increasing values of the population rate. This can be appreciated for \(f\in\{10,15\}\) in Figs. 1 and 2. The \(m\) parameter also modulates sparsity for the polylogarithmic exponential distribution (Fig. 1) but to a much lesser extent than the \(f\) parameter, i.e., the distribution is less sensitive to changes in the \(m\) parameter. Because of this, choosing \(m=1\), a case for which we provide the complete analytical PDF and distribution function, is the most natural polylogarithmic exponential distribution choice. Similarly, for the shifted-geometric exponential distribution, the \(\tau\) parameter is less relevant for inducing sparsity (Fig. 2 Right) compared to the \(f\) parameter, but more when compared to the \(m\) polylogarithmic parameter. These parameters not only modulate the sparsity of a population but also allow more complex power-law type tails to be captured (such as in the shifted-geometric exponential PDF). #### Heat capacity and entropy Let \(f=\frac{1}{T}\) and \(df=-\frac{1}{T^{2}}dT\), where \(T\) denotes a temperature parameter. Then the heat capacity of both the polylogarithmic exponential distribution (with \(m=1\)) and the shifted-geometric distribution is computed as \[C\left(f\right) =\frac{\partial}{\partial T}\left(-\frac{1}{Z}\frac{dZ}{df}\right)\] \[=-f^{2}\frac{d}{df}\left(-\frac{1}{Z}\frac{dZ}{df}\right)\] \[=f^{2}\frac{d^{2}Z}{df^{2}}\frac{1}{Z}-\frac{f^{2}}{Z^{2}}\left(\frac{dZ}{df}\right)^{2}.
\tag{34}\] See the C.4 for the specific values of the normalization constant and its derivatives for the heat capacity of the polylogarithmic exponential and the shifted-geometric exponential distributions, as well as some limits with respect to \(f\). The entropy of both distributions is computed as \[\mathbb{E}_{R}\left[-\log\left(p\left(r|\mathbf{\lambda}\right)\right)\right]=-\int_{0}^{1}p\left(r|\mathbf{\lambda}\right)\log\left(p\left(r|\mathbf{\lambda}\right)\right)dr. \tag{35}\] Figure 3: Logarithm of the PDFs used as a baseline (red diagonal line) versus the logarithm of their corresponding PDFs with the \(k\)-th order approximation for their exponential argument functions for \(r\) in \([0,1]\). Left: polylogarithmic exponential PDF approximations with \(m=1\). Right: shifted-geometric exponential PDF approximations with \(\tau=0.8\). We fixed \(f=5\). For the explicit entropy of the polylogarithmic exponential and the shifted-geometric exponential distributions see the C.5. The entropy of the polylogarithmic exponential distribution (\(m=1\)) is non-positive and a decreasing function of \(f\), as can be seen in the left panel of Fig. 4. Such negative entropy is compatible with a neural system that promotes a high level of organization. On the other hand, the heat capacity increases with \(f\) until a numerically found maximum at \(f\approx 11.96\), after which it decreases until \(\lim_{f\rightarrow\infty}C\left(f\right)=1\) (see C.4). Such limit can be intuitively observed in the right panel of Figure 4. At \(f=1\) the heat capacity is undetermined (represented by an open circle in the right panel of Figure 4 for \(m=1\)). The entropy for the shifted-geometric exponential distribution is also non-positive and a decreasing function of \(f\), compatible with a high level of organization, as can be seen in the left panel of Figure 4 for \(\tau=0.7\). The heat capacity (for \(\tau=0.7\)) has a numerical maximum at \(f\approx 18.44\), as can be appreciated in the right panel of Figure 4, after which it decreases until \(C\left(f\right)\approx 1\). However, unlike the polylogarithmic case (\(m=1\)), we obtain that \(\lim_{f\rightarrow\infty}C\left(f\right)\) is undetermined. #### Sampling For the case of \(m=1\) for the polylogarithmic exponential distribution, sampling can be carried out by the inverse transform method using an analytic form of an inverse of the distribution function (see Top Fig. 5). For other parameters or for the case of the shifted-geometric exponential distribution, samples are obtained by the generalized inverse transform method, using numerical integration of the distribution function (Bottom Fig. 5). Figure 4: Left: entropy of both distributions as \(f\) varies (\(m=1\), \(\tau=0.7\)). Right: heat capacity of both distributions as \(f\) varies (\(m=1\), \(\tau=0.7\)). Notice the empty point at \(f=1\) for the heat capacity of the log-modulated model, where it is undetermined. Figure 5: Histogram of 300,000 samples drawn from the polylogarithmic exponential distribution (Top) and from the shifted-geometric exponential distribution (Bottom). ## 4 Entropy-dominated homogeneous population In the previous section, we introduced the widespread distribution using the base measure function in Eq. (1) \(h\left(\mathbf{x}\right)\) given by Eq. (9). For the homogeneous population, this function cancels out with the binomial term, which includes the entropy term.
In this section, we show that the condition \(h\left(\mathbf{x}\right)=1\) (used in the standard homogeneous pairwise MaxEnt model) fails to cancel the entropy term, which results in a concentrated distribution. We now analyze the behaviour of the homogeneous PMF (Eq. (5)) with \(h\left(Nr_{N}\right)=1\) as the number of neurons \(N\) grows to infinity, while keeping the order of the polynomial part \(Q_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)\) constant in \(N\), i.e., \[\mathcal{O}\left(Q_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)\right)=\mathcal{O}\left(1\right). \tag{36}\] Combined with Stirling's formula for factorials in order notation, i.e., \[N!=\sqrt{2\pi N}\left(\frac{N}{e}\right)^{N}\left(1+\mathcal{O}\left(\frac{1}{N}\right)\right), \tag{37}\] the function \(G_{N}\left(\cdot;\boldsymbol{\theta}_{N}\right)\) from the PMF (Eq. (5)) for \(r_{N}\neq 0\) and \(r_{N}\neq 1\) becomes (see the D for the details) \[G_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)=-\frac{1}{N}\log\sqrt{2\pi Nr_{N}\left(1-r_{N}\right)}+H\left(r_{N}\right)+\frac{1}{N}Q_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)+\frac{1}{N}\mathcal{O}\left(\frac{1}{N}\right) \tag{38}\] where \(H\left(r_{N}\right)\) is the entropy term, defined as \[H\left(r_{N}\right)=-r_{N}\log\left(r_{N}\right)-\left(1-r_{N}\right)\log\left(1-r_{N}\right). \tag{39}\] Because the entropy order \(\mathcal{O}\left(H\left(r_{N}\right)\right)=\mathcal{O}\left(1\right)\) is constant and considering Eq. (36), the order of the function \(NG_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)\) is \[\mathcal{O}\left(-\log\sqrt{2\pi Nr_{N}\left(1-r_{N}\right)}+NH\left(r_{N}\right)+Q_{N}\left(r_{N};\boldsymbol{\theta}_{N}\right)+\mathcal{O}\left(\frac{1}{N}\right)\right)\] \[=\mathcal{O}\left(-\log\sqrt{N}+N+1+\frac{1}{N}\right)\] \[=\mathcal{O}\left(N\right). \tag{40}\] Eq. (40) shows a linear order in \(N\) because the entropy term dominates over the other terms for large \(N\). The dominance of the entropy as \(N\rightarrow\infty\) leads to the following delta PDF \[\lim_{N\rightarrow\infty}\mathcal{P}\left(r_{N}|\boldsymbol{\theta}_{N}\right) =p\left(r|r^{*}\right)dr\] \[=\delta\left(r-r^{*}\right)dr, \tag{41}\] whose peak is concentrated at its maximum \(r^{*}\) in the region dominated by the entropy. The corresponding distribution function is \[\lim_{N\rightarrow\infty}F\left(r_{m}|r^{*}\right)= \int_{0}^{r_{m}}\delta\left(r-r^{*}\right)dr\] \[= u\left(r_{m}-r^{*}\right), \tag{42}\] where \(u\left(\cdot\right)\) denotes the Heaviside step function. For a proof of equations (41) and (42) see the D. The result above indicates that with \(h\left(\mathbf{x}\right)=1\) the distribution concentrates without canceling the entropy, unlike \(h\left(\mathbf{x}\right)\) given by Eq. (9). Different base measure functions in the exponential family for the binary patterns correspond to different base measure functions of the homogeneous population models (either discrete or continuous). We summarize these correspondences in Table 1. Note that the base measure function of the homogeneous continuous population rate model approaches the delta function if we use \(h\left(\mathbf{x}\right)=1\). In this case, the limiting PDF is written by this base measure function alone (41). We also note that most existing models, e.g., the K-pairwise maximum entropy model by Tkacik et al. [11; 21] and the DG model [26], do not explicitly define a base measure function for the binary patterns.
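The contrast between the two base measure choices can also be observed numerically. The sketch below evaluates the discrete population-rate PMF of Eq. (5) for increasing \(N\) under the independent first-order parametrization \(\theta_1=-f/N\) (an illustrative choice): with \(h=1\) the distribution concentrates as \(N\) grows, whereas with \(h\) given by Eq. (9) it remains widespread.

```python
# Numerical illustration of entropy domination (h = 1) versus entropy cancellation (Eq. (9))
# in the population-rate PMF of Eq. (5), using the independent model theta_1 = -f/N.
import numpy as np
from scipy.special import gammaln

def rate_pmf(N, f, cancel_entropy):
    n = np.arange(N + 1)
    log_binom = gammaln(N + 1) - gammaln(n + 1) - gammaln(N - n + 1)
    logp = -f * n / N                       # Q_N for the independent first-order model
    if not cancel_entropy:                  # h(n) = 1 keeps the binomial (entropy) term
        logp = logp + log_binom
    logp -= logp.max()
    p = np.exp(logp)
    return n / N, p / p.sum()

for N in (20, 200, 2000):
    r, p_h1 = rate_pmf(N, f=5.0, cancel_entropy=False)
    _, p_eq9 = rate_pmf(N, f=5.0, cancel_entropy=True)
    print(f"N={N:5d}  h=1: peak r={r[p_h1.argmax()]:.3f}, "
          f"std={np.sqrt(p_h1 @ (r - p_h1 @ r)**2):.4f}   |   "
          f"Eq.(9): peak r={r[p_eq9.argmax()]:.3f}, std={np.sqrt(p_eq9 @ (r - p_eq9 @ r)**2):.4f}")
```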
According to our theory, when the discrete homogeneous distributions exhibit the widespread property (in the continuous limit), their corresponding function, \(h\left(Nr_{N}\right)\), must contain an equivalent component that cancels with the entropy. Alternatively, we may consider that the models with widespread distributions are realized at a tuned parameter if we introduce a parameter that weights such a function. The authors in [31] analyzed the homogeneous DG model by introducing a non-canonical parameter \(\beta\) that scales the pattern distribution as \(\mathcal{P}_{\beta}\left(\mathbf{x}|\boldsymbol{\theta}_{N}\right)=\mathcal{P}\left(\mathbf{x}|\boldsymbol{\theta}_{N}\right)^{\beta}/Z_{\beta}\), which sets an imbalance between the entropy term \(H\left(r_{N}\right)\) and the one that comes from \(h\left(Nr_{N}\right)^{\beta}\). The widespread distribution is only possible at \(\beta=1\) in the limit of large \(N\), and they reported it as a phase transition along the parameter.
\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline The base measure function \(h\left(\mathbf{x}\right)\) of binary patterns distr. (Eq. 1) & Eq. (9) & \(h\left(\mathbf{x}\right)=1\) \\ \hline \hline The base measure function of homogeneous discrete pop. rate (\(r_{N}\), Eq. (3)) & 1 & \(\sqrt{2\pi r_{N}\left(1-r_{N}\right)}\exp\left[NH\left(r_{N}\right)\right]\) \\ \hline The base measure function of homogeneous continuous pop. rate (\(r\)) & 1 & \(\delta\left(r-r^{*}\right)\) \\ \hline \end{tabular} \end{table} Table 1: The base measure functions of the population rate models for two choices of the \(h\left(\cdot\right)\) function.
## 5 Discussion We proposed parametric models of distributions for the sparse collective activity of homogeneous binary neurons. Our models exhibit HOIs at all orders in a way that agrees with the structured alternating HOIs observed experimentally. The distribution remains widespread with parameter-dependent sparsity in the limit of an infinite number of neurons. We derived these models using a theoretical framework giving sufficient conditions under which the PMF of our binary population model, or any other homogeneous exponential family population rate model, converges to a widespread continuous distribution. Note that we obtained an exponential distribution with bounded support in the limit of a large number of homogeneous independent neurons. While the independence resulted in the simplest sparse homogeneous model, observed interactions among the neurons further shape their population activity beyond the exponential distribution. The proposed models explain how a sparse, widespread distribution arises from specific HOIs with an alternating shrinking structure. Such models are expected to explain the sparse profile in spike-count population histograms of experimentally observed neurons [15; 10; 25; 21]. The models are also consistent with a previous theoretical prediction by [26], which states that all orders of interactions are required to produce a widespread activity distribution in a large population of (correlated) neurons. Here, we extended the theoretical framework in [26], showing that the base measure function, independently of the order of the (canonical) interactions, must be chosen carefully to avoid the dominance of the entropy term in the homogeneous distribution. Although sparse widespread distributions are ubiquitously observed in neural systems, the underlying mechanisms remain open to exploration.
One of the simplest yet insightful models that can reproduce these key features is the dichotomized Gaussian (DG) model [26; 13] and its extensions [32; 33; 34; 35], which consist of threshold neurons that become active if inputs sampled from a correlated multivariate Gaussian distribution exceed a threshold. The outputs of the DG neurons, represented as \((0,1)\) patterns, exhibit sparse population activity [31; 16] with characteristic HOIs. Specifically, they display alternating signs in the interactions at successive orders, such as negative triple-wise and positive quadruple-wise interactions, and so on [16; 35]. The structured HOIs contribute to the sparse activity and create the widespread distribution [26]. Supporting this theoretical prediction, the specific alternating structure of HOIs was found in neural population activity [16]. Furthermore, by using more biologically plausible model neurons, it was discovered in [17] that the positive pairwise and negative triple-wise interactions are explained by excitatory shared (and hence correlated) inputs to pairs of neurons. These results suggest that the nonlinearity of neurons underlies the structured HOIs, and investigating the HOIs of neuronal activity provides a key to understanding its underlying mechanisms. The nonlinear functions to which the alternating shrinking series converge are key features of our models because they underlie the HOIs and modulate the sparse population profile. We exemplified this with alternating series of the population rate that converge to a logarithmic function and to a function based on the shifted-geometric series, both of which are strictly decreasing functions of the population rate. The nonlinearity responsible for the HOIs of neurons comes not only from the spiking nonlinearity at the soma but also from nonlinear operations at the dendrites. Examples of dendritic computation include directional selectivity, coincidence detection in auditory neurons, temporal integration, image denoising, forward masking [36] and nonlinear integration of spatial cortical feedback in V1 neurons [37]. In support of the specific logarithmic operation, a modeling study of a collision-sensitive locust neuron, which considers experimentally determined presynaptic activation patterns, suggests that a single neuron's dendritic tree implements a logarithmic transform [38]. Specifically, the fan-like dendritic structures in such neurons [39] have been suggested to support such nonlinear computations. It is thus a future challenge to construct, within our framework, a unifying model of the sparse and widespread distribution in which the HOIs of neurons follow from a more detailed mechanistic account of how sparse neural population activity is generated. ## 6 Acknowledgements We thank Miguel Aguilera for his valuable comments on this manuscript. This work was supported by JSPS KAKENHI Grant Numbers JP 20K11709 and 21H05246.
2310.10514
Diagonalization in a quantum kicked rotor model with non-analytic potential
In this paper we study the lattice quasi-periodic operators with power-law long-range hopping and meromorphic monotone potentials, and diagonalize the operators via a Nash-Moser iteration scheme. As applications, we obtain uniform power-law localization, uniform dynamical localization and Lipschitz continuity of the integrated density of states (IDS) for such operators. Our main motivation comes from investigating quantum suppression of chaos in a quantum kicked rotor model with non-analytical potential.
Yunfeng Shi, Li Wen
2023-10-16T15:33:04Z
http://arxiv.org/abs/2310.10514v1
# Diagonalization in a quantum kicked rotor model with non-analytic potential ###### Abstract. In this paper we study the lattice quasi-periodic operators with power-law long-range hopping and meromorphic monotone potentials, and diagonalize the operators via a Nash-Moser iteration scheme. As applications, we obtain uniform power-law localization, uniform dynamical localization and Lipschitz continuity of the integrated density of states (IDS) for such operators. Our main motivation comes from investigating quantum suppression of chaos in a quantum kicked rotor model with non-analytical potential. Key words and phrases: Nash-Moser iteration, Quasi-periodic operators, Localization, Power-law hopping, Lipschitz continuity of the IDS ## 1. Introduction Quantum chaos aims to investigate the quantum mechanics of classically chaotic systems. One of the basic models in quantum chaos is the so-called quantum kicked rotor, which was first introduced by [1] as a quantum analog of the standard mapping. This model is defined by \[\sqrt{-1}\frac{\partial\Psi(\theta,t)}{\partial t}=\left(\mathcal{H}_{0}+\check{\phi}(\theta)\sum_{n\in\mathbb{Z}}\delta(t-2n\omega)\right)\Psi(\theta,t), \tag{1.1}\] where \[(\theta,t)\in\mathbb{R}\times\mathbb{R},\ \omega>0,\ \mathcal{H}_{0}=\frac{\partial^{2}}{\partial\theta^{2}}\ \text{or}\ \sqrt{-1}\frac{\partial}{\partial\theta},\] and \(\check{\phi}:\ \mathbb{T}=\mathbb{R}/\mathbb{Z}\to\mathbb{R}\) is a potential1. Footnote 1: The original model in [1] corresponds to (1.1) with \(\mathcal{H}_{0}=\frac{\partial^{2}}{\partial\theta^{2}}\) and \(\check{\phi}(\theta)=\cos(2\pi\theta)\). Typically, the motion in the quantum kicked rotor model is almost-periodic while the classical one is chaotic. This quantum suppression of chaos phenomenon was first discovered in [1], and was well understood after the remarkable work of Fishman, Grempel and Prange (cf. [1, 2]), in which they mapped the quantum kicked rotor model onto lattice ergodic operators \(H_{x}\) given by \[(H_{x}u)(n)=\sum_{m\neq n}\phi(n-m)u(m)+d_{n}(x)u(n),\ x\in\mathbb{R},\ n\in\mathbb{Z},\] where \[\phi(n)=-\int_{0}^{1}\tan\pi\left(\frac{\check{\phi}(\theta)}{2}\right)e^{-2\pi\sqrt{-1}n\theta}d\theta,\quad d_{n}(x)=\tan\pi\left(x-n^{2}\omega\right)\ \text{or}\ \tan\pi\left(x-n\omega\right)\ \text{depending on}\ \mathcal{H}_{0}.\] The parameter \(x\in\mathbb{R}\) represents the quasi-energy of the Floquet operator associated to (1.1) (cf. [10]). It turns out that the almost-periodicity of the motion in (1.1) follows from the Anderson localization (i.e., pure point spectrum with exponentially decaying eigenfunctions) of \(H_{x}\) for a.e. \(x\in\mathbb{R}\) (cf. [10, 11]). For an analytic potential \(\check{\phi}\), the operator \(H_{x}\) admits an exponential hopping (i.e., \(|\phi(n)|\leq e^{-c|n|}\) for some \(c>0\)). However, a non-analytic (even singular) potential \(\check{\phi}\), which could yield a power-law decay hopping (i.e., \(|\phi(n)|\leq|n|^{-C}\) for some \(C>0\)), also appears naturally in some important physical models, e.g., the quantum Fermi accelerator (cf. [11]). The work [10] has provided numerical evidence for the localization transition in quantum chaos with the singular potential \(\check{\phi}(\theta)=(|\theta|^{\alpha}\mod 1)\), which induces a hopping of \(|\phi(n)|\sim|n|^{-1-\alpha}\). Note that the analytic derivation of the localization in [10] relies on a physical perspective of localization for random operators with power-law hopping (cf. [12]).
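As a rough numerical illustration of the hopping kernel \(\phi(n)\) defined above (not taken from the paper), the sketch below approximates the Fourier integral for a sample non-analytic potential. To keep \(\tan(\pi\check{\phi}/2)\) finite we assume a bounded potential with \(|\check{\phi}|<1\), rather than the singular potential of [10]; the resulting coefficients then decay only polynomially in \(n\), in line with the power-law hopping discussed above.

```python
import numpy as np

# Illustrative non-analytic potential on T = R/Z with values in [0, 0.8]; the bound
# |phi_check| < 1 keeps tan(pi * phi_check / 2) finite.  This specific choice is an
# assumption made only for this sketch.
def potential(theta):
    return 0.8 * np.abs(np.sin(np.pi * theta))

M = 2 ** 14                                   # quadrature points on [0, 1)
theta = np.arange(M) / M
f = -np.tan(np.pi * potential(theta) / 2.0)   # integrand of phi(n)

# phi(n) = -int_0^1 tan(pi*phi_check(theta)/2) e^{-2 pi i n theta} d theta  ~  DFT / M
phi = np.fft.fft(f) / M

for n in (1, 2, 4, 8, 16, 32, 64, 128):
    print(f"n={n:4d}  |phi(n)| = {abs(phi[n]):.3e}   |phi(n)|*n^2 = {abs(phi[n]) * n**2:.3f}")
# The corner of |sin| at theta = 0 makes |phi(n)| decay only like n^{-2} (a power law),
# in contrast with the exponential decay obtained for analytic potentials.
```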
However, the on-site energy sequence \(\{d_{n}(x)\}_{n\in\mathbb{Z}}\) is only pseudo-random and the localization properties of \(H_{x}\) should depend on arithmetic properties of \(\omega\). Thus, a mathematically rigorous treatment of the localization for quasi-periodic operators with (realistic) power-law hopping becomes important. We would also like to mention that certain quantum kicked rotor models in higher dimensions (cf. [10]) or with a quasi-periodic potential (cf. [11]) will give rise to higher dimensional lattice quasi-periodic operators with tangent potential. The main purpose of this paper is to diagonalize certain lattice quasi-periodic operators with power-law decay hopping and meromorphic monotone potentials via the Nash-Moser iteration scheme. As applications, we prove uniform power-law localization, uniform dynamical localization and Lipschitz continuity of the IDS. More precisely, consider on \(\mathbb{Z}^{d}\) the operators \[H_{z}=\varepsilon T_{\phi}+f(z-\mathbf{n}\cdot\mathbf{\omega})\delta_{\mathbf{n}\mathbf{n}^{\prime}},\ \varepsilon\in\mathbb{R}, \tag{1.2}\] where the off-diagonal part (i.e., the hopping term) \(T_{\phi}\) satisfies \[(T_{\phi}u)(\mathbf{n})=\sum_{\mathbf{m}\neq\mathbf{n}}\phi(\mathbf{n}-\mathbf{m})u(\mathbf{m}),\ \phi(\mathbf{0})=0,\ |\phi(\mathbf{n})|\leq|\mathbf{n}|^{-s}\] with some \(s>0\) and \(|\mathbf{n}|=\max\limits_{1\leq i\leq d}|n_{i}|\). In the diagonal part we assume the potential \(f\) is a \(1\)-periodic meromorphic function defined on \(D_{R}=\{z\in\mathbb{C}:\ |\Im z|<R\}\) satisfying the monotonicity condition (cf. (1.4) for details), which includes \(\tan(\pi z)\) and \(e^{2\pi\sqrt{-1}z}\) as special cases. We call \(\mathbf{\omega}\in[0,1]^{d}\) the frequency and \(z\) the phase. Typically, we assume \(\mathbf{\omega}\) satisfies the Diophantine condition, namely, there exist \(\tau>d\) and \(\gamma>0\) so that \[\|\mathbf{n}\cdot\mathbf{\omega}\|_{\mathbb{T}}=\inf_{l\in\mathbb{Z}}|l-\mathbf{n}\cdot\mathbf{\omega}|\geq\frac{\gamma}{|\mathbf{n}|^{\tau}}\ \text{for}\ \forall\ \mathbf{n}\in\mathbb{Z}^{d}\setminus\{\mathbf{0}\}.\] Obviously, if \(\varepsilon=0\), then \(H_{z}\) exhibits Anderson localization for all \(z\) such that \(z-\mathbf{n}\cdot\mathbf{\omega}\) is not a pole of \(f\) for all \(\mathbf{n}\in\mathbb{Z}^{d}\). So it is reasonable to expect the localization of \(H_{z}\) for \(0<|\varepsilon|\ll 1\). A natural approach is to diagonalize \(H_{z}\) via some unitary transformation. However, it is not totally trivial to carry out such a method because of the small divisors problem, i.e., \[\liminf_{|\mathbf{n}|\to\infty}|f(z-\mathbf{n}\cdot\mathbf{\omega})-f(z)|=0.\] In 1983 Craig [12] first proved an inverse Anderson localization result for some lattice almost-periodic Schrodinger operators by using a KAM type diagonalization method. Craig's method is to fix a diagonal operator \(d_{\mathbf{n}}\delta_{\mathbf{n}\mathbf{n}^{\prime}}\) (with \(d_{\mathbf{n}}=g(\mathbf{n}\cdot\mathbf{\omega})\) for some \(1\)-periodic function \(g\)) satisfying \[|d_{\mathbf{m}}-d_{\mathbf{n}}|>\frac{\gamma}{|\mathbf{m}-\mathbf{n}|^{\tau}}\ (\mathbf{m}\neq\mathbf{n}). \tag{1.3}\] An additional condition is imposed on \(g\) so that \(\{d_{\mathbf{n}}\}_{\mathbf{n}\in\mathbb{Z}^{d}}\) is almost-periodic.
For sufficiently small \(\varepsilon\), Craig constructed a unitary transformation \(Q\) and some \(1\)-periodic function \(g^{\prime}\) so that \[Q^{-1}(\varepsilon\Delta+g(\mathbf{n}\cdot\mathbf{\omega})\delta_{\mathbf{n}\mathbf{n}^{\prime}}+g^{\prime}(\mathbf{n}\cdot\mathbf{\omega})\delta_{\mathbf{n}\mathbf{n}^{\prime}})Q=g(\mathbf{n}\cdot\mathbf{\omega})\delta_{\mathbf{n}\mathbf{n}^{\prime}},\] where \(\Delta\) denotes the lattice Laplacian. The transformation \(Q\) is obtained via KAM iterations. It is important that the modulation operator \(g^{\prime}(\mathbf{n}\cdot\mathbf{\omega})\delta_{\mathbf{n}\mathbf{n}^{\prime}}\) is constructed so that the small-divisors \(d_{\mathbf{n}}-d_{\mathbf{m}}\) do not change in the iterations. However, it is much more difficult to deal with the direct problem, i.e., to diagonalize \(\varepsilon\Delta+d_{\mathbf{n}}\delta_{\mathbf{n}\mathbf{n}^{\prime}}\) via some unitary transformation. In this case one has to deal with small divisors of the form \(\tilde{d}_{\mathbf{n}}-\tilde{d}_{\mathbf{m}}\) with \(\tilde{d}_{\mathbf{n}}\) being the modulation of \(d_{\mathbf{n}}\). Generally, it is hard to determine whether the modulated sequence \(\{\tilde{d}_{\mathbf{n}}\}_{\mathbf{n}\in\mathbb{Z}^{d}}\) still satisfies the non-resonant condition (1.3) or not. For this reason, Bellissard-Lima-Scoppola [1] provided a class of meromorphic monotone functions which is stable under small analytic function perturbations. As a result, the non-resonant condition (1.3) can be essentially preserved under modulations. Then by using the KAM iteration method, [1] proved Anderson localization for (1.2) with \(\phi\) being exponentially decaying. Later, Poschel [16] developed a general approach to study both inverse and direct Anderson localization in the setting of translation-invariant Banach algebras. He also presented new examples of limit-periodic Schrodinger operators that exhibit Anderson localization. We want to remark that the methods of [10, 1, 12] can only handle operators with exponential decay hopping. Very recently, Shi [14] developed a Nash-Moser iteration type diagonalization approach to deal with almost-periodic operators with power-law decay hopping. The key new ingredients of the proof in [14] consist of introducing the smoothing operation in the iteration together with establishing the tame estimates for some operator norms induced by the power-law hopping. Consequently, Shi [14] generalized the results of Poschel [16] to the power-law hopping case. Finally, we also refer to [17, 18, 19, 20, 21] for important progress on the study of lattice quasi-periodic operators with monotone type potentials. We would like to remark that while the method of Shi [14] works for more general translation-invariant Banach algebras, if one applies it to the operators (1.2), the power-law localization of \(H_{z}\) only holds if \(|\varepsilon|\leq\varepsilon_{0}\) for some positive \(\varepsilon_{0}=\varepsilon_{0}(z)\) depending on \(z\). This single-phase type localization result is obviously not satisfactory and one expects uniform (in \(z\)) localization results like those obtained in [1]. In addition, if we want to investigate the dynamical localization and the regularity of the IDS, a uniform-in-\(z\) estimate of \(\varepsilon_{0}\) is required.
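The small-divisor and non-resonance issues discussed above can be probed numerically. The following sketch (illustrative only) takes the one-dimensional tangent potential \(d_{n}=\tan\pi(x-n\omega)\) with the golden-mean frequency, which is Diophantine, and compares how small the divisors \(|d_{n}-d_{m}|\) actually get with a power-law lower bound of the form (1.3); the exponent \(\tau=1\) used in the comparison and the particular phase \(x\) are assumptions made for this experiment.

```python
import numpy as np

omega = (np.sqrt(5.0) - 1.0) / 2.0      # golden mean: a Diophantine frequency (d = 1)
x = 0.12345                              # a fixed real phase, chosen arbitrarily
N = 400                                  # check sites |n| <= N

def dist_T(a):
    """Distance to the nearest integer, i.e. ||a||_T."""
    return np.abs(a - np.round(a))

n = np.arange(-N, N + 1)
d = np.tan(np.pi * (x - n * omega))      # on-site energies d_n = tan(pi(x - n*omega))

worst = np.inf
for i in range(len(n)):
    for j in range(i + 1, len(n)):
        gap = abs(d[i] - d[j]) * abs(n[i] - n[j])    # compare with (1.3) for tau = 1
        worst = min(worst, gap)

print(f"min of |d_n - d_m| * |n - m| over |n|, |m| <= {N}: {worst:.4f}")
print(f"lower bound 2 * min_k ||k*omega|| * |k|           : "
      f"{2 * min(dist_T(k * omega) * abs(k) for k in range(1, 2 * N + 1)):.4f}")
# The first quantity stays bounded away from zero, consistent with the elementary
# separation |tan(pi a) - tan(pi b)| >= 2 ||a - b||_T and the Diophantine bound on omega.
```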
In this paper we combine the Nash-Moser iteration diagonalization method of Shi [14] with the meromorphic monotone function estimates of Bellissard-Lima-Scoppola [1] to establish uniform power-law localization, uniform dynamical localization and Lipschitz continuity of the IDS for (1.2). One of the new ingredients of our proof is to introduce a norm which takes account of both the \(z\in D_{R}\) and the \(\mathbf{n}\in\mathbb{Z}^{d}\) directions. This new norm is of \(\ell^{1}\)-type rather than the \(\ell^{2}\)-type used in [14], which leads to a significant simplification of the proof. ### Main results We present our results in the language of the holomorphic kernel algebra introduced by Bellissard-Lima-Scoppola [1]. We first introduce the function class of potentials. For \(R>0\), \(\mathscr{H}_{R}\) denotes the set of period-\(1\) bounded holomorphic functions on \(D_{R}=\{z\in\mathbb{C}:\ |\Im z|<R\}\), equipped with the norm \(\|f\|_{R}=\sup\limits_{z\in D_{R}}|f(z)|\). Then \(\mathscr{P}_{R}\) denotes the set of period-\(1\) meromorphic functions \(g\) on \(D_{R}\) such that there is a constant \(c>0\) satisfying \[|g(z)-g(z-a)|\geq c\|a\|_{\mathbb{T}}\text{ for }\forall\ a\in\mathbb{R}\text{ and }\forall\ z\in D_{R} \tag{1.4}\] with \(\|a\|_{\mathbb{T}}=\inf\limits_{k\in\mathbb{Z}}\lvert a-k\rvert.\) Then \(|g|_{R}\) is defined as the largest possible value of \(c\) in (1.4). Let \(\mathscr{P}=\bigcup\limits_{R>0}\mathscr{P}_{R}\). We remark that both \(g_{1}(z)=e^{2\pi\sqrt{-1}z}\) (cf. [1]) and \(g_{2}(z)=\tan(\pi z)\) are in \(\mathscr{P}\) (cf. [1] for details). We then consider the holomorphic kernels constituting the hopping operators. Let \[\boldsymbol{\omega}=(\omega_{1},\cdots,\omega_{d})\in\mathbb{R}^{d},\ \boldsymbol{n}=(n_{1},\cdots,n_{d})\in\mathbb{Z}^{d}.\] Denote \(\boldsymbol{n}\cdot\boldsymbol{\omega}=\sum\limits_{i=1}^{d}n_{i}\omega_{i}\) and \(|\boldsymbol{n}|=\max\limits_{1\leq i\leq d}\lvert n_{i}\rvert\). For \(R>0\) and \(s\geq 0\), denote by \(\mathscr{U}_{R,s}\) the set of kernels \(\mathcal{M}=(\mathcal{M}(z,\boldsymbol{n}))_{\boldsymbol{n}\in\mathbb{Z}^{d},z\in D_{R}}\) with \(\mathcal{M}(z,\boldsymbol{n})\in\mathscr{H}_{R}\) for each \(\boldsymbol{n}\in\mathbb{Z}^{d}\), and \[\|\mathcal{M}\|_{R,s}=\sup\limits_{z\in D_{R}}\sum\limits_{\boldsymbol{n}\in\mathbb{Z}^{d}}|\mathcal{M}(z,\boldsymbol{n})|\langle\boldsymbol{n}\rangle^{s}<\infty,\ \langle\boldsymbol{n}\rangle=\max\{1,|\boldsymbol{n}|\}. \tag{1.5}\] Then \(\mathscr{U}_{R,s}\) is a Banach space. Given \(\boldsymbol{\omega}\in\mathbb{R}^{d}\), define an algebraic structure with respect to (w.r.t) \(\boldsymbol{\omega}\) by2 \[(\mathcal{M}_{1}\mathcal{M}_{2})(z,\boldsymbol{n})=\sum\limits_{\boldsymbol{l}\in\mathbb{Z}^{d}}\mathcal{M}_{1}(z,\boldsymbol{l})\mathcal{M}_{2}(z-\boldsymbol{l}\cdot\boldsymbol{\omega},\boldsymbol{n}-\boldsymbol{l}). \tag{1.6}\] Footnote 2: This definition is also valid for elements that are not in \(\mathscr{U}_{R,s}\), e.g., \(\mathcal{M}=(\mathcal{M}(z,\boldsymbol{n}))_{\boldsymbol{n}\in\mathbb{Z}^{d},z\in D_{R}}\) with \(\mathcal{M}(z,\boldsymbol{n})\in\mathscr{P}_{R}\) for each \(\boldsymbol{n}\in\mathbb{Z}^{d}\). An involution w.r.t \(\boldsymbol{\omega}\) is given by \[(\mathcal{M}^{*})(z,\boldsymbol{n})=\overline{\mathcal{M}(\bar{z}-\boldsymbol{n}\cdot\boldsymbol{\omega},-\boldsymbol{n})}.
\tag{1.7}\] By (1.5) and (1.7), we have \((\mathcal{M}_{1}\mathcal{M}_{2})^{*}=\mathcal{M}_{2}^{*}\mathcal{M}_{1}^{*}\) and \(\|\mathcal{M}\|_{R,s}=\|\mathcal{M}^{*}\|_{R,s}\). If \(f\) is defined in some domain of \(\mathbb{C}\), we also write \(f^{*}(z)=\bar{f}(\bar{z})\). Then \(\mathscr{U}_{R,s}\) equipped with the above two structures w.r.t \(\boldsymbol{\omega}\) is denoted by \(\mathscr{U}_{R,s}^{\boldsymbol{\omega}}\). **Remark 1.1**.: We remark that 1. If \(f:D_{R}\to\mathbb{C}\), then \(f\) can be considered as a diagonal kernel by defining \(f(z,\boldsymbol{n})\equiv f(z)\delta_{\boldsymbol{n}\boldsymbol{0}}\). So if \(V\in\mathscr{P}\), we can still define the product \(V\mathcal{M}\) via (1.6) for \(\mathcal{M}\in\mathscr{U}_{R,s}^{\boldsymbol{\omega}}\). We say \(V\in\mathscr{P}\) is self-adjoint if \(V^{*}=V.\) 2. If \(\boldsymbol{e}\in\mathbb{Z}^{d}\), \(\mathcal{U}_{\boldsymbol{e}}\) is the kernel \(\mathcal{U}_{\boldsymbol{e}}(z,\boldsymbol{n})=\delta_{\boldsymbol{n}\boldsymbol{e}}\), and the Laplace kernel is then given by \(\boldsymbol{\delta}=\sum\limits_{|\boldsymbol{e}|_{1}=1}\mathcal{U}_{\boldsymbol{e}}\), where \(|\boldsymbol{e}|_{1}=\sum\limits_{i=1}^{d}|e_{i}|\). 3. The unit in \(\mathscr{U}_{R,s}^{\boldsymbol{\omega}}\) is denoted by \(\boldsymbol{1}=\mathcal{U}_{\boldsymbol{0}}\). Then \(\mathcal{M}\in\mathscr{U}_{R,s}^{\boldsymbol{\omega}}\) is called unitary (resp. self-adjoint) if \(\mathcal{M}\mathcal{M}^{*}=\mathcal{M}^{*}\mathcal{M}=\boldsymbol{1}\) (resp. \(\mathcal{M}=\mathcal{M}^{*}\)). Finally, we define the Diophantine condition. Fix \(\gamma>0\) and \(\tau>d\). We say \(\mathbf{\omega}\in[0,1]^{d}\) satisfies the \((\tau,\gamma)\)-Diophantine condition if \[\|\mathbf{n}\cdot\mathbf{\omega}\|_{\mathbb{T}}\geq\frac{\gamma}{\langle\mathbf{n}\rangle^{\tau}}\ \text{for}\ \forall\ \mathbf{n}\in\mathbb{Z}^{d}\setminus\{\mathbf{0}\}.\] Denote by \(\mathrm{DC}_{\tau,\gamma}\) the set of all \((\tau,\gamma)\)-Diophantine frequencies. Throughout this paper we always assume \(\mathbf{\omega}\in\mathrm{DC}_{\tau,\gamma}.\) This assumption is reasonable since the long-range hopping in our model has a power-law off-diagonal decay. More precisely, since we use a KAM type perturbation method, imposing a Diophantine condition on the frequency allows a controllable loss of derivatives (cf. Lemma 3.3 in this paper) when solving the homological equations. Without this assumption, we cannot have effective estimates on the solutions of the homological equations, and the iteration scheme will become invalid. In contrast, in the work of [11, 12] the Diophantine frequency can be relaxed to the Bruno-Russmann one3 since the hopping operators in [11, 12] are of exponential off-diagonal decay. However, if the power-law lower bound in the Diophantine condition is replaced by an exponential one (i.e., the Liouville frequency case), to the best of our knowledge, there are simply no localization results for quasi-periodic operators on \(\mathbb{Z}^{d}\). Footnote 3: For example, this condition includes the case that the power-law bound in the Diophantine condition is replaced by a sub-exponential one. We are now able to state our main results. #### 1.1.1. Diagonalization We first introduce the diagonalization theorem. **Theorem 1.1**.: _Let \(\mathbf{\omega}\in\mathrm{DC}_{\tau,\gamma}\). Fix \(\delta>0\), \(\alpha_{0}>0\) and \(R>0\).
Let_ \[V\in\mathscr{P}_{R},\ \mathcal{M}\in\mathscr{U}_{R,\alpha+3\delta}^{\mathbf{ \omega}}.\] _Then for_ \[\alpha>\alpha_{0}+\tau+4\delta,\] _there is some \(\varepsilon_{0}=\varepsilon_{0}(R,\alpha,\alpha_{0},\gamma,\tau,|V|_{R},\delta)>0\) such that the following holds true. If \(\|\mathcal{M}\|_{R,\alpha+3\delta}<\varepsilon_{0}\), then there exist an invertible element \(\mathcal{U}\in\mathscr{U}_{\frac{R}{2},\alpha-\tau-4\delta}^{\mathbf{\omega}}\) and some \(\hat{V}\in\mathscr{P}_{\frac{R}{2}}\) so that_ \[\mathcal{U}(V+\mathcal{M})\mathcal{U}^{-1}=\hat{V}, \tag{1.8}\] \[\|\mathcal{U}^{\pm 1}-\mathbf{1}\|_{\frac{R}{2},\alpha-\tau-4\delta} \leq K_{1}\|\mathcal{M}\|_{R,\alpha+3\delta}^{\frac{3\delta}{\alpha-\alpha_{0} }},\] (1.9) \[V-\hat{V}\in\mathscr{H}_{\frac{R}{2}},\ |\hat{V}|_{\frac{R}{2}} \geq\frac{1}{2}|V|_{R},\] (1.10) \[\|V-\hat{V}\|_{\frac{R}{2},0}\leq K_{2}\|\mathcal{M}\|_{R,\alpha+ 3\delta},\] _where \(K_{1}>0\) and \(K_{2}>0\) only depend on \(\alpha_{0},\alpha,\delta\). Moreover, if both \(\mathcal{M}\) and \(V\) are self-adjoint, then \(\mathcal{U}\) is unitary and \(\hat{V}^{*}=\hat{V}\)._ **Remark 1.2**.: In this theorem it suffices to assume \(\mathcal{M}\in\mathscr{U}_{R,\alpha}^{\mathbf{\omega}}\) for \(\alpha>\tau\) since \(\delta>0\) and \(\alpha_{0}>0\) can be arbitrarily small. #### 1.1.2. Power-law Localization Now we can apply the above theorem to obtain the (uniform) power-law localization. A representation of \(\mathcal{M}\in\mathscr{U}_{R,s}^{\boldsymbol{\omega}}\) in \(\ell^{2}(\mathbb{Z}^{d})\) is given by \[\left(T_{\mathcal{M}}(z)\psi\right)(\boldsymbol{n})=\sum_{\boldsymbol{l}\in \mathbb{Z}^{d}}\mathcal{M}(z-\boldsymbol{n}\cdot\boldsymbol{\omega}, \boldsymbol{l}-\boldsymbol{n})\psi(\boldsymbol{l}), \tag{1.11}\] where \(\psi\in\ell^{2}(\mathbb{Z}^{d})\) and \(z\in D_{R}\). Fix \(V\in\mathscr{P}_{R}\). Define for \(0\leq R^{\prime}\leq R\) the set \[\mathcal{Z}_{R^{\prime}}=\bigcap_{\boldsymbol{n}\in\mathbb{Z}^{d}}\{z\in \mathbb{C}:\ |\Im z|\leq R^{\prime}\ \text{and}\ z-\boldsymbol{n}\cdot\boldsymbol{\omega}\ \text{is not a pole of}\ V\}. \tag{1.12}\] Since \(V\) is meromorphic, the set \(D_{R}\setminus\mathcal{Z}_{R}\) is at most countable. For \(V\in\mathscr{P}_{R}\) and \(z\in\mathcal{Z}_{R}\), denote by \(T_{V}(z)=V(z-\boldsymbol{n}\cdot\boldsymbol{\omega})\delta_{\boldsymbol{n} \boldsymbol{n}^{\prime}}\) the multiplication operator. Then we have **Theorem 1.2**.: _Let \(\boldsymbol{\omega}\in\mathrm{DC}_{\tau,\gamma}\). Fix \(\delta>0\), \(\alpha_{0}>0\) and \(R>0\). Let \(V\in\mathscr{P}_{R}\) and \(\mathcal{M}\in\mathscr{U}_{R,s}^{\boldsymbol{\omega}}\) with_ \[s>\alpha_{0}+\tau+\frac{d}{2}+7\delta.\] _Then there is some \(\varepsilon_{0}=\varepsilon_{0}(R,\alpha_{0},\gamma,\tau,|V|_{R},\delta,s,d)>0\) such that the following holds true. If \(\|\mathcal{M}\|_{R,s}<\varepsilon_{0}\), then the operator \(H_{z}=T_{\mathcal{M}}(z)+T_{V}(z)\) has a complete set of eigenfunctions \(\{\varphi_{\boldsymbol{n}}\}_{\boldsymbol{n}\in\mathbb{Z}^{d}}\) obeying \(|\varphi_{\boldsymbol{n}}(\boldsymbol{i})|\leq 2\langle\boldsymbol{n}- \boldsymbol{i}\rangle^{-s+\tau+7\delta}\) for all \(\boldsymbol{n}\in\mathbb{Z}^{d}\), \(\boldsymbol{i}\in\mathbb{Z}^{d}\) and \(z\in\mathcal{Z}_{R/2}\). 
In addition, if both \(\mathcal{M}\) and \(V\) are self-adjoint, then \(H_{x}\) is self-adjoint and its spectrum is equal to \(\mathbb{R}\) for \(x\in\mathcal{Z}_{0}\)._ **Remark 1.3**.: We first mention that the perturbation strength \(\varepsilon_{0}\) is independent of \(x\) for \(x\in\mathcal{Z}_{0}\). **Remark 1.4**.: We have explicit descriptions of localization centers. Moreover, we in fact establish uniform (power-law) localization (cf. [10] for the definition of uniform localization). **Remark 1.5**.: Let \(H_{x}=\varepsilon T_{\phi}+\tan\pi(x-\boldsymbol{n}\cdot\boldsymbol{\omega})\delta_{\boldsymbol{n}\boldsymbol{n}^{\prime}}\) for some sequence \(\phi=\{\phi(\boldsymbol{n})\}_{\boldsymbol{n}\in\mathbb{Z}^{d}}\) satisfying \(\phi(\boldsymbol{0})=0\), and \(|\phi(\boldsymbol{n})|\leq|\boldsymbol{n}|^{-s}\) for \(\boldsymbol{n}\neq\boldsymbol{0}\). If \(\boldsymbol{\omega}\in\mathrm{DC}_{\tau,\gamma},\ s>d+\tau\) and \(|\varepsilon|\leq\varepsilon_{0}(s,\tau,\gamma,d)>0\), then \(H_{x}\) has uniform power-law localization for all \(x\not\in\frac{1}{2}+\mathbb{Z}+\boldsymbol{\omega}\cdot\mathbb{Z}^{d}.\) This extends the perturbative results of [1] to the power-law hopping case. #### 1.1.3. Uniform dynamical localization In this section we apply our diagonalization theorem to study dynamical localization. For \(\psi\in\mathbb{C}^{\mathbb{Z}^{d}}\) and \(s\geq 0\), define \[\|\psi\|_{s}^{2}=\sum_{\boldsymbol{n}\in\mathbb{Z}^{d}}|\psi(\boldsymbol{n})|^{2}\langle\boldsymbol{n}\rangle^{2s}.\] Let \(\ell_{s}^{2}(\mathbb{Z}^{d})\) denote the set of all \(\psi=\{\psi(\boldsymbol{n})\}_{\boldsymbol{n}\in\mathbb{Z}^{d}}\) satisfying \(\|\psi\|_{s}<\infty\). Given the family \((H_{x})_{x\in\mathbb{T}}\) of self-adjoint operators defined on \(\ell^{2}(\mathbb{Z}^{d})\), we are interested in the estimate of \[\|e^{-\sqrt{-1}tH_{x}}\psi\|_{q}\ \text{for}\ \psi\in\ell_{q}^{2}(\mathbb{Z}^{d}).\] We have **Theorem 1.3**.: _Let \(\mathbf{\omega}\in\mathrm{DC}_{\tau,\gamma}\). Fix \(\delta>0\), \(\alpha_{0}>0\), \(R>0\) and \(q\geq 0\). Let both \(V\in\mathscr{P}_{R}\) and \(\mathcal{M}\in\mathscr{U}_{R,s}^{\mathbf{\omega}}\) be self-adjoint. Assume_ \[s>\alpha_{0}+\tau+q+\frac{d}{2}+7\delta.\] _Then there is some \(\varepsilon_{0}=\varepsilon_{0}(R,\alpha_{0},\gamma,\tau,|V|_{R},\delta,s,d,q)>0\) such that the following holds true. If \(\|\mathcal{M}\|_{R,s}<\varepsilon_{0}\), then for \(\forall\ \psi\in\ell_{q}^{2}(\mathbb{Z}^{d})\), we have_ \[\sup_{x\in\mathcal{Z}_{0}}\sup_{t\in\mathbb{R}}\|e^{-\sqrt{-1}tH_{x}}\psi\|_{q}<\infty,\] _where \(H_{x}=T_{\mathcal{M}}(x)+T_{V}(x)\)._ **Remark 1.6**.: Since \(\mathbb{R}\setminus\mathcal{Z}_{0}\) is at most countable, we have for \(\forall\ \psi\in\ell_{q}^{2}(\mathbb{Z}^{d})\), \[\int_{\mathbb{T}}\sup_{t\in\mathbb{R}}\|e^{-\sqrt{-1}tH_{x}}\psi\|_{q}dx<\infty,\] which amounts to strong dynamical localization. #### 1.1.4. Lipschitz continuity of the IDS In this section we prove the Lipschitz continuity of the IDS. Let \(V\in\mathscr{P}_{R}\) and \(\mathcal{M}\in\mathscr{U}_{R,s}^{\mathbf{\omega}}\) with \(V^{*}=V\), \(\mathcal{M}^{*}=\mathcal{M}\). Let \(H_{x}=T_{\mathcal{M}}(x)+T_{V}(x)\) for \(x\in\mathcal{Z}_{0}.\) Denote by \(\mathbb{P}_{(-\infty,E]}(H_{x})\) the spectral resolution of \(H_{x}\), and by \(\chi_{L}\) the projection \[(\chi_{L}\psi)(\mathbf{n})=\left\{\begin{array}{cl}\psi(\mathbf{n})&\text{if }|\mathbf{n}|\leq L,\\ 0&\text{otherwise,}\end{array}\right.\] respectively.
If \(\mathbf{\omega}\in\mathrm{DC}_{\tau,\gamma}\), then the limit \[\kappa(E)=\lim_{L\to\infty}\frac{1}{(2L+1)^{d}}\mathrm{tr}(\chi_{L}\mathbb{P}_{(-\infty,E]}(H_{x}))\] exists and is independent of \(x\) for a.e. \(x\in\mathbb{T}\). We have **Theorem 1.4**.: _Let \(\mathbf{\omega}\in\mathrm{DC}_{\tau,\gamma}\). Fix \(\delta>0\), \(\alpha_{0}>0\) and \(R>0\). Assume further that_ \[s>\alpha_{0}+\tau+d+7\delta.\] _Then there is some \(\varepsilon_{0}=\varepsilon_{0}(R,\alpha_{0},\gamma,\tau,|V|_{R},\delta,s,d)>0\) such that for \(\|\mathcal{M}\|_{R,s}<\varepsilon_{0}\) and \(E_{1},E_{2}\in\mathbb{R}\),_ \[|\kappa(E_{1})-\kappa(E_{2})|\leq\frac{2}{|V|_{R}}|E_{1}-E_{2}|.\] **Remark 1.7**.: We refer to [19, 18] for _all couplings_ results on the Lipschitz continuity of the IDS for \(1D\) lattice quasi-periodic Schrodinger operators with Lipschitz monotone potentials. ### Structure of the paper The paper is organized as follows. Some preliminaries, including the tame estimate and the smoothing operator, are introduced in §2. The Nash-Moser iteration theorem is established in §3. In §4 we prove the convergence of the iteration scheme, and then finish the proof of Theorem 1.1. The proofs of Theorems 1.2, 1.3 and 1.4 are completed in §5, §6 and §7, respectively. Some technical estimates are included in the appendix. ## 2. Preliminaries ### Tame property The norm defined by (1.5) has the following important tame property. **Lemma 2.1**.: _For any \(s\geq 0\) and \(\mathcal{M}_{1},\mathcal{M}_{2}\in\mathscr{U}_{R,s}^{\boldsymbol{\omega}}\), we have_ \[\|\mathcal{M}_{1}\mathcal{M}_{2}\|_{R,s}\leq K(s)(\|\mathcal{M}_{1}\|_{R,0}\|\mathcal{M}_{2}\|_{R,s}+\|\mathcal{M}_{1}\|_{R,s}\|\mathcal{M}_{2}\|_{R,0}), \tag{2.1}\] _where \(K(s)=2^{\max(0,s-1)}\). In particular,_ \[\|\mathcal{M}_{1}\mathcal{M}_{2}\|_{R,0}\leq\|\mathcal{M}_{1}\|_{R,0}\|\mathcal{M}_{2}\|_{R,0}. \tag{2.2}\] Proof.: For a detailed proof, we refer to the appendix. ### Smoothing operator The smoothing operator plays an essential role in the Nash-Moser iteration scheme. In the present context we have **Definition 2.2**.: Fix \(\theta\geq 0\). Define the smoothing operator \(S_{\theta}\) by \[(S_{\theta}\mathcal{M})(z,\boldsymbol{n})=\mathcal{M}(z,\boldsymbol{n})\text{ for }|\boldsymbol{n}|\leq\theta,\quad(S_{\theta}\mathcal{M})(z,\boldsymbol{n})=0\text{ for }|\boldsymbol{n}|>\theta.\] Given a sequence \(\{\theta_{l}\}_{l=0}^{\infty}\) with \(\theta_{l+1}>\theta_{l}\geq 0\) and \(\lim\limits_{l\to\infty}\theta_{l}=+\infty\), define \[\mathcal{M}^{(0)}=S_{\theta_{0}}\mathcal{M},\ \mathcal{M}^{(l)}=(S_{\theta_{l}}-S_{\theta_{l-1}})\mathcal{M}\text{ for }l\geq 1.\] Then \(\mathcal{M}^{(l)}\) is called the \(l\)-section of \(\mathcal{M}\) w.r.t \(\{\theta_{l}\}_{l=0}^{\infty}\). **Lemma 2.3**.: _Fix \(\theta\geq 0\). Then for \(\mathcal{M}\in\mathscr{U}_{R,s}^{\boldsymbol{\omega}}\), we have_ \[\|S_{\theta}\mathcal{M}\|_{R,s}\leq\langle\theta\rangle^{s-s^{\prime}}\|\mathcal{M}\|_{R,s^{\prime}}\text{ for }0\leq s^{\prime}\leq s, \tag{2.3}\] \[\|(I-S_{\theta})\mathcal{M}\|_{R,s}\leq\langle\theta\rangle^{s-s^{\prime}}\|\mathcal{M}\|_{R,s^{\prime}}\text{ for }0\leq s\leq s^{\prime}, \tag{2.4}\] _where \(I\) denotes the identity operator.
In particular, if \(\mathcal{M}^{(l)}\) is the \(l\)-section of \(\mathcal{M}\) w.r.t \(\{\theta_{l}\}_{l=0}^{\infty}\), we have_ \[\|\mathcal{M}^{(l)}\|_{R,s}\leq\langle\theta_{l}\rangle^{s-s^{\prime}}\|\mathcal{M}\|_{R,s^{\prime}}\text{ for }0\leq s^{\prime}\leq s, \tag{2.5}\] \[\|\mathcal{M}^{(l)}\|_{R,s}\leq\langle\theta_{l-1}\rangle^{s-s^{\prime}}\|\mathcal{M}\|_{R,s^{\prime}}\text{ for }0\leq s\leq s^{\prime}. \tag{2.6}\] Proof.: The proof follows immediately from Definition 2.2 and (1.5). ## 3. The Nash-Moser iteration In this section we will prove a Nash-Moser iteration theorem. The main strategy is based on the iteration scheme outlined in [11] combined with the meromorphic function estimates of [11]. The final transformation \(\mathcal{U}\) will be obtained as the limit of the product \(\mathcal{U}_{l}=\prod\limits_{i=l}^{0}e^{\mathcal{W}_{i}}\) with a sequence of transformations \(\mathcal{W}_{i}\) (\(0\leq i\leq l\)). More precisely, at the \(l\)-th iteration step we will find \(\mathcal{W}_{l}\in\mathscr{U}_{R_{l},s}^{\boldsymbol{\omega}}\), \(V_{l}\in\mathscr{P}_{R_{l}}\) and \(\mathcal{R}_{l}\in\mathscr{U}_{R_{l},s}^{\boldsymbol{\omega}}\) so that \[\prod\limits_{i=l}^{0}e^{\mathcal{W}_{i}}\left(V+\sum\limits_{i=0}^{l-1}\mathcal{M}^{(i)}\right)\prod\limits_{i=0}^{l}e^{-\mathcal{W}_{i}}=V_{l}+\mathcal{R}_{l}\] and \(\|\mathcal{R}_{l}\|_{R_{l},s}=o(\|\mathcal{R}_{l-1}\|_{R_{l-1},s})\), where \(\mathcal{M}^{(l)}\) is the \(l\)-section of \(\mathcal{M}\) w.r.t \(\{\theta_{l}=\theta_{0}\Theta^{l}\}_{l=0}^{\infty}\) with \(\theta_{0}\), \(\Theta>1\) being specified later. The sequence \(\{R_{l}\}_{l=0}^{\infty}\) satisfies \(R_{l}\searrow R_{\infty}\geq R_{0}/2.\) To clarify the iteration scheme, we set \[\mathcal{R}_{0}=\mathcal{W}_{0}=\mathcal{M}_{-1}=0,\ V_{0}=V_{-1}=V,\quad\mathcal{M}_{l}=\mathcal{U}_{l}\mathcal{M}^{(l)}\mathcal{U}_{l}^{-1}+\mathcal{R}_{l}.\] Equivalently, at the \(l\)-th iteration step we aim to find \(\mathcal{W}_{l}\in\mathscr{U}_{R_{l},s}^{\boldsymbol{\omega}}\), \(V_{l}\in\mathscr{P}_{R_{l}}\) and \(\mathcal{R}_{l}\in\mathscr{U}_{R_{l},s}^{\boldsymbol{\omega}}\) so that \[e^{\mathcal{W}_{l}}\left(V_{l-1}+\mathcal{M}_{l-1}\right)e^{-\mathcal{W}_{l}}=V_{l}+\mathcal{R}_{l},\] with \(\|\mathcal{R}_{l}\|_{R_{l},s}=o(\|\mathcal{R}_{l-1}\|_{R_{l-1},s})\). For this purpose, we need to eliminate terms of order \(O(\|\mathcal{R}_{l-1}\|_{R_{l-1},s})\), which leads to solving the following homological equations \[V_{l}(z)=V_{l-1}(z)+\mathcal{M}_{l-1}(z,\boldsymbol{0}),\ \mathcal{W}_{l}(z,\boldsymbol{0})=0,\] \[(\mathcal{W}_{l}V_{l}-V_{l}\mathcal{W}_{l})(z,\boldsymbol{n})=-(S_{\theta_{l}}\tilde{\mathcal{M}}_{l-1})(z,\boldsymbol{n})\ \text{for}\ \boldsymbol{n}\neq\boldsymbol{0},\] where \(\tilde{\mathcal{M}}_{l-1}=(I-S_{0})\mathcal{M}_{l-1}\). To solve the homological equations, we have to address the following issues. First, the non-resonant property of \(V_{l}\) should be preserved, which requires the use of a quantitative version of the meromorphic monotone function estimates established by Bellissard-Lima-Scoppola [10]. Second, since in the iteration steps we need to estimate the \(\|\cdot\|_{R,s}\) norm of products of elements in \(\mathscr{U}_{R,s}^{\boldsymbol{\omega}}\), the tame property of the norm (cf. (2.1)) plays an essential role. This section is then organized as follows. We first provide some useful estimates in §3.1. The Nash-Moser iteration theorem is then proved in §3.2.
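To make the homological equations above concrete, here is a minimal sketch (with \(d=1\), the tangent potential, and a hand-picked finitely supported kernel, all of which are illustrative assumptions): it implements the twisted product (1.6), solves \(\mathcal{W}(z,n)=(S_{\theta}\tilde{\mathcal{M}})(z,n)/(V(z)-V(z-n\omega))\), and checks the commutator identity \(\mathcal{W}V-V\mathcal{W}=-S_{\theta}\tilde{\mathcal{M}}\) at a sample phase. It is a sanity check of the algebraic identities, not an implementation of the full iteration.

```python
import numpy as np

omega = (np.sqrt(5.0) - 1.0) / 2.0            # golden-mean frequency (d = 1), Diophantine

def V(z):                                     # tangent potential, an element of P_R
    return np.tan(np.pi * z)

# A kernel is represented as a dict {n: function of the phase z}, finitely supported in n.
def product(A, B):
    """Twisted product (1.6): (AB)(z, n) = sum_l A(z, l) * B(z - l*omega, n - l)."""
    terms = {}
    for l, a in A.items():
        for m, b in B.items():
            terms.setdefault(l + m, []).append(
                lambda z, a=a, b=b, l=l: a(z) * b(z - l * omega))
    return {n: (lambda z, fs=fs: sum(f(z) for f in fs)) for n, fs in terms.items()}

eps, theta = 1e-3, 5
# A small hand-picked perturbation kernel M with M(z, 0) = 0 and power-law decay in n;
# since its support already lies in |n| <= theta, here S_theta M~ = M.
M = {n: (lambda z, n=n: eps * np.exp(2j * np.pi * z) / max(1, abs(n)) ** 4)
     for n in range(-theta, theta + 1) if n != 0}

# Homological equation: W(z, n) = (S_theta M~)(z, n) / (V(z) - V(z - n*omega)), W(z, 0) = 0.
W = {n: (lambda z, n=n, m=m: m(z) / (V(z) - V(z - n * omega))) for n, m in M.items()}

# Check the commutator identity (W V - V W)(z, n) = -(S_theta M~)(z, n) at a sample phase.
Vker = {0: V}                                 # the potential viewed as a diagonal kernel
lhs, rhs = product(W, Vker), product(Vker, W)
z0 = 0.1234
residual = max(abs(lhs[n](z0) - rhs[n](z0) + M[n](z0)) for n in M)
print(f"max commutator residual at z = {z0}: {residual:.2e}")    # ~ machine precision
```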
### Some useful estimates Let \(V^{\prime}\in\mathscr{P}_{R^{\prime}}\) and \(\mathcal{M}^{\prime}\in\mathscr{U}_{R^{\prime},s}^{\boldsymbol{\omega}}\). We define \(\bar{V^{\prime}}\) and \(\tilde{\mathcal{M}^{\prime}}\) as follows \[\bar{V^{\prime}}(z) =V^{\prime}(z)+\mathcal{M}^{\prime}(z,\boldsymbol{0}), \tag{3.1}\] \[\tilde{\mathcal{M}^{\prime}}(z,\boldsymbol{n}) =((I-S_{0})\mathcal{M}^{\prime})(z,\boldsymbol{n})=\left\{ \begin{array}{cl}\mathcal{M}^{\prime}(z,\boldsymbol{n})&\text{if}\ \boldsymbol{n}\neq\boldsymbol{0},\\ 0&\text{if}\ \boldsymbol{n}=\boldsymbol{0}.\end{array}\right.\] We have **Lemma 3.1**.: _If \(R^{\prime}>Q^{\prime}>0\) is such that_ \[\|\mathcal{M}^{\prime}\|_{R^{\prime},0}<Q^{\prime}|V^{\prime}|_{R^{\prime}}, \tag{3.2}\] _then \(\bar{V^{\prime}}\in\mathscr{P}_{R^{\prime}-Q^{\prime}}\) and_ \[|\bar{V^{\prime}}|_{R^{\prime}-Q^{\prime}}\geq|V^{\prime}|_{R^{\prime}}-\frac{ \|\mathcal{M}^{\prime}\|_{R^{\prime},0}}{Q^{\prime}}>0. \tag{3.3}\] Proof.: The proof follows from **Lemma 3.2** (Lemma I.2, [10]).: _Let \(g\) be in \(\mathscr{P}_{R^{\prime}}\) and \(f\) be in \(\mathscr{H}_{R^{\prime}}\). If \(R^{\prime}>Q^{\prime}>0\) is such that \(\|f\|_{R^{\prime}}<Q^{\prime}|g|_{R^{\prime}}\), then \(f+g\in\mathscr{P}_{R^{\prime}-Q^{\prime}}\) and_ \[\left|\left|f+g|_{R^{\prime}-Q^{\prime}}-|g|_{R^{\prime}-Q^{\prime}}\right| \leq\left(Q^{\prime}\right)^{-1}\|f\|_{R^{\prime}}\,.\] Hence it suffices to apply the above lemma with \(g(z)=V^{\prime}(z)\) and \(f(z)=\mathcal{M}^{\prime}(z,\boldsymbol{0})\) For \(\theta\geq 0\), we define the kernel \(\mathcal{W}^{\prime}\) as \[\mathcal{W}^{\prime}(z,\mathbf{0})=0,\ \mathcal{W}^{\prime}(z,\mathbf{n})= \frac{(S_{\theta}\tilde{\mathcal{M}}^{\prime})(z,\mathbf{n})}{\tilde{V}^{\prime}(z)- \tilde{V}^{\prime}(z-\mathbf{n}\cdot\mathbf{\omega})}\ \text{for}\ \mathbf{n}\neq\mathbf{0}. \tag{3.4}\] We have **Lemma 3.3**.: _For any \(\theta\geq 0\) and \(\mathcal{W}^{\prime}\in\mathscr{U}^{\mathbf{\omega}}_{R^{\prime}-Q^{\prime},s}\), we have_ \[\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},s}\leq\frac{ \langle\theta\rangle^{\tau}}{\gamma|\tilde{V}^{\prime}|_{R^{\prime}-Q^{\prime }}}\|\mathcal{M}^{\prime}\|_{R^{\prime},s}, \tag{3.5}\] _Moreover, if \((\mathcal{M}^{\prime})^{*}=\mathcal{M}^{\prime}\) and \((V^{\prime})^{*}=V^{\prime}\), then \((\mathcal{W}^{\prime})^{*}=-\mathcal{W}^{\prime}\)._ Proof.: Since (3.4), (2.3) and \(\mathbf{\omega}\in\mathrm{DC}_{\tau,\gamma}\), we get \[\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},s}= \sup_{z\in D_{R^{\prime}-Q^{\prime}}}\sum_{\mathbf{n}\in\mathbb{Z}^{d} }|\mathcal{W}^{\prime}(z,\mathbf{n})|\langle\mathbf{n}\rangle^{s}\] \[\leq \frac{1}{\gamma|\tilde{V}^{\prime}|_{R^{\prime}-Q^{\prime}}}\sup _{z\in D_{R^{\prime}}}\sum_{\mathbf{n}\in\mathbb{Z}^{d}}|(S_{\theta}\tilde{ \mathcal{M}}^{\prime})(z,\mathbf{n})|\langle\mathbf{n}\rangle^{s+\tau}\] \[= \frac{1}{\gamma|\tilde{V}^{\prime}|_{R^{\prime}-Q^{\prime}}}\|S_{ \theta}\tilde{\mathcal{M}}^{\prime}\|_{R^{\prime},s+\tau}\leq\frac{\langle \theta\rangle^{\tau}}{\gamma|\tilde{V}^{\prime}|_{R^{\prime}-Q^{\prime}}}\| \tilde{\mathcal{M}}^{\prime}\|_{R^{\prime},s}\] \[\leq \frac{\langle\theta\rangle^{\tau}}{\gamma|\tilde{V}|_{R^{\prime}- Q^{\prime}}}\|\mathcal{M}^{\prime}\|_{R^{\prime},s},\] which implies (3.5). The last assertion of the lemma follows directly from (1.7). The following elementary inequality plays an important role in the proof of tame estimate. **Lemma 3.4**.: _Let \((x,y,s)\in\mathbb{R}^{3}_{+}\setminus\{(0,0,0)\}\). 
Then we have_ \[(x+y)^{s}\leq K(s)(x^{s}+y^{s}), \tag{3.6}\] _where_ \[K(s)=2^{\max(0,s-1)}\geq 1. \tag{3.7}\] We then introduce a key lemma concerning tame property. Recalling Lemma 2.1, we have **Lemma 3.5**.: _Let \(K(s)\) be given by (3.7). Then for any \(n\geq 1\) and \(s\geq 0\), we have_ \[\left\|\prod_{i=1}^{n}\mathcal{N}_{i}\right\|_{R,0}\leq \prod_{i=1}^{n}\|\mathcal{N}_{i}\|_{R,0}, \tag{3.8}\] \[\left\|\prod_{i=1}^{n}\mathcal{N}_{i}\right\|_{R,s}\leq (K(s))^{n-1}\sum_{i=1}^{n}\left(\prod_{j\neq i}\|\mathcal{N}_{j} \|_{R,0}\right)\|\mathcal{N}_{i}\|_{R,s}. \tag{3.9}\] _In particular,_ \[\|\mathcal{N}^{n}\|_{R,0}\leq \|\mathcal{N}\|_{R,0}^{n}, \tag{3.10}\] \[\|\mathcal{N}^{n}\|_{R,s}\leq n(K(s))^{n-1}\|\mathcal{N}\|_{R,0}^{n-1}\|\mathcal{N}\|_{R,s}. \tag{3.11}\] **Remark 3.1**.: In fact, imitating the proof in Lemma 2.1, we can get a better estimate \[\left\|\prod_{i=1}^{n}\mathcal{N}_{i}\right\|_{R,s}\leq K(s,n)\sum_{i=1}^{n}\left(\prod_{j\neq i}\|\mathcal{N}_{j}\|_{R,0} \right)\|\mathcal{N}_{i}\|_{R,s},\] where \(K(s,n)=\max\{1,n^{s-1}\}\). It may be useful elsewhere, but it is not necessary in this paper. Proof.: The proof follows directly from an induction (on \(n\)) argument using Lemma 2.1. We refer to the proof of Lemma 4.2 in [11] for details. Under the above preparations, we can prove **Lemma 3.6**.: _We have_ \[e^{\mathcal{W}^{\prime}}(V^{\prime}+\mathcal{M}^{\prime})e^{-\mathcal{W}^{ \prime}}=\bar{V}^{\prime}+\mathcal{R}^{\prime},\] _where \(\mathcal{R}^{\prime}\in\mathscr{U}^{\boldsymbol{\omega}}_{R^{\prime}-Q^{ \prime},s}\) for \(\forall\ \theta\geq 0\), and_ \[\|\mathcal{R}^{\prime}\|_{R^{\prime}-Q^{\prime},s}\] \[\leq 4K(s)e^{2K(s)\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},0 }}\left(\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},0}\|\mathcal{M}^{ \prime}\|_{R^{\prime},s}+\|\mathcal{M}^{\prime}\|_{R^{\prime},0}\|\mathcal{W}^ {\prime}\|_{R^{\prime}-Q^{\prime},s}\right)\] \[+\|(I-S_{\theta})\mathcal{M}^{\prime}\|_{R^{\prime},s}. 
\tag{3.12}\] Proof.: We define for any \(\mathcal{P}=(\mathcal{P}(z,\boldsymbol{n}))_{\boldsymbol{n}\in\mathbb{Z}^{d},z\in D_{R}}\) and \(k\geq 0\), \[A^{k}_{\mathcal{W}^{\prime}}(\mathcal{P})\equiv\sum_{i=0}^{k}\binom{k}{i}( \mathcal{W}^{\prime})^{k-i}\mathcal{P}(-\mathcal{W}^{\prime})^{i}.\] Formally, we have by using the Taylor series expansion \[e^{\mathcal{W}^{\prime}}\mathcal{P}e^{-\mathcal{W}^{\prime}}=\sum_{k=0}^{ \infty}\frac{A^{k}_{\mathcal{W}^{\prime}}(\mathcal{P})}{k!}.\] From (3.4), we can obtain \[\mathcal{W}^{\prime}\bar{V}^{\prime}-\bar{V}^{\prime}\mathcal{W}^{\prime}=-S_ {\theta}\tilde{\mathcal{M}}^{\prime},\] and then for \(k\geq 1\), \[A^{k}_{\mathcal{W}^{\prime}}(\bar{V})=A^{k-1}_{\mathcal{W}^{\prime}}(-S_{ \theta}\tilde{\mathcal{M}}^{\prime})=-A^{k-1}_{\mathcal{W}^{\prime}}(S_{ \theta}\tilde{\mathcal{M}}^{\prime}).\] As a result, we get \[e^{\mathcal{W}^{\prime}}(V^{\prime}+\mathcal{M}^{\prime})e^{- \mathcal{W}^{\prime}}=e^{\mathcal{W}^{\prime}}(\bar{V}^{\prime}+\tilde{ \mathcal{M}}^{\prime})e^{-\mathcal{W}^{\prime}}\] \[=\bar{V}^{\prime}+\sum_{k=2}^{\infty}\frac{A^{k}_{\mathcal{W}^{ \prime}}(\bar{V}^{\prime})}{k!}+\sum_{k=1}^{\infty}\frac{A^{k}_{\mathcal{W}^{ \prime}}(S_{\theta}\tilde{\mathcal{M}}^{\prime})}{k!}+\sum_{k=0}^{\infty} \frac{A^{k}_{\mathcal{W}^{\prime}}((I-S_{\theta})\tilde{\mathcal{M}}^{\prime })}{k!}\] \[=\bar{V}^{\prime}+\sum_{k=1}^{\infty}\frac{A^{k}_{\mathcal{W}^{ \prime}}(S_{\theta}\tilde{\mathcal{M}}^{\prime})}{(k-1)!(k+1)}+\sum_{k=0}^{ \infty}\frac{A^{k}_{\mathcal{W}^{\prime}}((I-S_{\theta})\tilde{\mathcal{M}}^ {\prime})}{k!}\] \[=\bar{V}^{\prime}+\mathcal{R}^{\prime}. \tag{3.13}\] Next, we try to establish (3.12). Using (3.9) yields for \(k\geq 1\), \[\left\|A^{k}_{\mathcal{W}^{\prime}}(\mathcal{P})\right\|_{R^{\prime }-Q^{\prime},s}\] \[\leq\sum_{i=0}^{k}\left\|\binom{k}{i}(\mathcal{W}^{\prime})^{k-i} \mathcal{P}(-\mathcal{W}^{\prime})^{i}\right\|_{R^{\prime}-Q^{\prime},s}\] \[\leq\sum_{i=0}^{k}\binom{k}{i}(K(s))^{k}\left(\|\mathcal{W}^{ \prime}\|^{k}_{R^{\prime}-Q^{\prime},0}\|\mathcal{P}\|_{R^{\prime}-Q^{\prime},s}+\|\mathcal{W}^{\prime}\|^{k-1}_{R^{\prime}-Q^{\prime},0}\|\mathcal{P}\|_ {R^{\prime}-Q^{\prime},0}\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},s}\right)\] \[=2^{k}(K(s))^{k}\left(\|\mathcal{W}^{\prime}\|^{k}_{R^{\prime}-Q^ {\prime},0}\|\mathcal{P}\|_{R^{\prime}-Q^{\prime},s}+\|\mathcal{W}^{\prime}\| ^{k-1}_{R^{\prime}-Q^{\prime},0}\|\mathcal{P}\|_{R^{\prime}-Q^{\prime},0}\| \mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},s}\right).\] Therefore, we have \[\left\|\sum_{k=1}^{\infty}\frac{A^{k}_{\mathcal{W}^{\prime}}(S_{ \theta}\tilde{\mathcal{M}}^{\prime})}{(k-1)!(k+1)}\right\|_{R^{\prime}-Q^{ \prime},s}\] \[\leq 2K(s)\left(\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},0} \|S_{\theta}\tilde{\mathcal{M}}^{\prime}\|_{R^{\prime}-Q^{\prime},s}+\|S_{ \theta}\tilde{\mathcal{M}}^{\prime}\|_{R^{\prime}-Q^{\prime},0}\|\mathcal{W}^ {\prime}\|_{R^{\prime}-Q^{\prime},s}\right)\] \[\quad\times\sum_{k=1}^{\infty}\frac{(2K(s))^{k-1}\,\|\mathcal{W} ^{\prime}\|^{k-1}_{R^{\prime}-Q^{\prime},0}}{(k-1)!}\] \[\leq 2K(s)e^{2K(s)\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},0} }\left(\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},0}\|S_{\theta}\tilde{ \mathcal{M}}^{\prime}\|_{R^{\prime}-Q^{\prime},s}+\|S_{\theta}\tilde{ \mathcal{M}}^{\prime}\|_{R^{\prime}-Q^{\prime},0}\|\mathcal{W}^{\prime}\|_{R^ {\prime}-Q^{\prime},s}\right)\] \[\leq 2K(s)e^{2K(s)\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},0} 
}\left(\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},0}\|S_{\theta}\mathcal{ M}^{\prime}\|_{R^{\prime},s}+\|S_{\theta}\mathcal{M}^{\prime}\|_{R^{\prime},0}\| \mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},s}\right). \tag{3.14}\] Similarly, we get \[\left\|\sum_{k=0}^{\infty}\frac{A^{k}_{\mathcal{W}^{\prime}}((I- S_{\theta})\tilde{\mathcal{M}}^{\prime})}{k!}\right\|_{R^{\prime}-Q^{\prime},s}\] \[\leq 2K(s)\|(I-S_{\theta})\tilde{\mathcal{M}}^{\prime}\|_{R^{ \prime}-Q^{\prime},s}\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},0}\sum_{ k=1}^{\infty}\frac{2^{k-1}(K(s))^{k-1}\|\mathcal{W}^{\prime}\|^{k-1}_{R^{ \prime}-Q^{\prime},0}}{(k-1)!}\] \[\quad+2K(s)\|(I-S_{\theta})\tilde{\mathcal{M}}^{\prime}\|_{R^{ \prime}-Q^{\prime},0}\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},s}\sum_{ k=1}^{\infty}\frac{2^{k-1}(K(s))^{k-1}\|\mathcal{W}^{\prime}\|^{k-1}_{R^{ \prime}-Q^{\prime},0}}{(k-1)!}\] \[\quad+\|(I-S_{\theta})\tilde{\mathcal{M}}^{\prime}\|_{R^{\prime}- Q^{\prime},s}\] \[\leq 2K(s)e^{2K(s)\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},0} }\left(\|(I-S_{\theta})\mathcal{M}^{\prime}\|_{R^{\prime},s}\|\mathcal{W}^{ \prime}\|_{R^{\prime}-Q^{\prime},0}+\|(I-S_{\theta})\mathcal{M}^{\prime}\|_{R^ {\prime},0}\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},s}\right)\] \[\quad+\|(I-S_{\theta})\mathcal{M}^{\prime}\|_{R^{\prime},s}. \tag{3.15}\] Recalling the Definition 2.2 and (1.5), we obtain \[\|S_{\theta}\mathcal{M}^{\prime}\|_{R^{\prime},s}+\|(I-S_{\theta}) \mathcal{M}^{\prime}\|_{R^{\prime},s} \leq 2\|\mathcal{M}^{\prime}\|_{R^{\prime},s},\] \[\|S_{\theta}\mathcal{M}^{\prime}\|_{R^{\prime},0}+\|(I-S_{\theta}) \mathcal{M}^{\prime}\|_{R^{\prime},0} \leq 2\|\mathcal{M}^{\prime}\|_{R^{\prime},0},\] which together with (3.13), (3.14) and (3.15) implies \[\|\mathcal{R}^{\prime}\|_{R^{\prime}-Q^{\prime},s}\] \[\leq\left\|\sum_{k=1}^{\infty}\frac{A_{\mathcal{W}^{\prime}}^{k}(S _{\theta}\tilde{\mathcal{M}}^{\prime})}{(k-1)!(k+1)}\right\|_{R^{\prime}-Q^{ \prime},s}+\left\|\sum_{k=0}^{\infty}\frac{A_{\mathcal{W}^{\prime}}^{k}((I-S_{ \theta})\tilde{\mathcal{M}}^{\prime})}{k!}\right\|_{R^{\prime}-Q^{\prime},s}\] \[\leq 2K(s)e^{2K(s)\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},0 }}\left(\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},0}\|S_{\theta} \mathcal{M}^{\prime}\|_{R^{\prime},s}+\|S_{\theta}\mathcal{M}^{\prime}\|_{R^{ \prime},0}\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},s}\right)\] \[\quad+2K(s)e^{2K(s)\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{ \prime},0}}\left(\|(I-S_{\theta})\mathcal{M}^{\prime}\|_{R^{\prime},s}\| \mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},0}+\|(I-S_{\theta})\mathcal{M}^ {\prime}\|_{R^{\prime},0}\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},s}\right)\] \[\quad+\|(I-S_{\theta})\mathcal{M}^{\prime}\|_{R^{\prime},s}\] \[\leq 4K(s)e^{2K(s)\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{ \prime},0}}\left(\|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},0}\| \mathcal{M}^{\prime}\|_{R^{\prime},s}+\|\mathcal{M}^{\prime}\|_{R^{\prime},0} \|\mathcal{W}^{\prime}\|_{R^{\prime}-Q^{\prime},s}\right)\] \[\quad+\|(I-S_{\theta})\mathcal{M}^{\prime}\|_{R^{\prime},s}.\] This completes the proof. ### The Nash-Moser iteration theorem In this subsection we try to establish the iteration theorem. We first introduce some parameters. * Fix \(\delta>0\) and \(\alpha_{0}>0\). 
Let \[\alpha>\alpha_{0}+\tau+4\delta.\] (3.16) * Fix any \(\alpha_{1}=\alpha_{1}(\alpha,\delta)>0\) so that \[\alpha_{1}>2\alpha+\delta.\] (3.17) * Let \(\Theta=\Theta(\alpha_{0},\alpha,\delta)>0\) satisfy \[\Theta^{-\delta} \leq\frac{1}{4}e^{-2K(\alpha_{1})}\leq\frac{1}{4},\] (3.18) \[\Theta^{-\alpha_{0}} \leq\frac{1}{4}.\] (3.19) * Let \(\eta_{0}=\eta_{0}(R,\Theta,\alpha,\alpha_{0},\gamma,\tau,|V|_{R},\delta)>0\) be the minimum value of \(\eta\) satisfying \[\eta^{\frac{\alpha_{0}-\alpha}{2}} \leq\frac{R|V|_{R}}{8}\left(1-\Theta^{\frac{\alpha_{0}-\alpha}{2 }}\right),\;\eta^{\frac{\alpha_{0}-\alpha}{2}}\leq 1-\Theta^{\frac{\alpha_{0}- \alpha}{2}},\] (3.20) \[\eta^{-\delta} \leq\frac{\gamma|V|_{R}}{4\Theta^{\tau}},\;\eta^{-\delta}\leq(32 K(\alpha_{1}))^{-1}e^{-2K(\alpha_{1})}\Theta^{\alpha_{0}-\alpha},\] (3.21) \[\eta^{-1} \leq\Theta^{-1},\;\eta^{\alpha_{0}-\alpha}\leq\Theta^{\alpha_{0} -\alpha-3\delta},\] (3.22) \[\eta^{-\delta} \leq(12(K(\alpha_{1}))^{2})^{-1}.\] (3.23) * Let \(\varepsilon_{0}=\varepsilon_{0}(R,\alpha,\alpha_{0},\gamma,\tau,|V|_{R}, \delta)>0\) satisfy \[\varepsilon_{0}\leq\eta_{0}^{\alpha_{0}-\alpha}\leq 1.\] (3.24) * Let \(\theta_{0}\geq\eta_{0}\) and \(\theta_{l}=\theta_{0}\Theta^{l}\). Denote by \(\mathcal{M}^{(l)}\) the \(l\)-section of \(\mathcal{M}\) w.r.t \(\{\theta_{l}\}_{l=0}^{\infty}\). * Let \[R_{0} =R,\;Q_{l}=\frac{4}{|V|_{R}}\theta_{l}^{\frac{\alpha_{0}-\alpha}{2 }},\] (3.25) \[R_{l+1} =R_{l}-Q_{l}\geq\frac{R}{2}\left(1+\Theta^{(\frac{\alpha_{0}- \alpha}{2})(l+1)}\right)(\text{since \eqref{eq:Moser}}).\] We start with a useful lemma. **Lemma 3.7**.: _For all \(\mathcal{N}_{m}\in\mathscr{U}_{R^{\prime},s}^{\omega}\) with \(1\leq m\leq k\), we have_ \[\big{\|}e^{\mathcal{N}_{1}}\cdots e^{\mathcal{N}_{k}}-\mathbf{1}\big{\|}_{R^{ \prime},s}\leq e^{K(s)\left(\sum\limits_{m=1}^{k}\|\mathcal{N}_{m}\|_{R^{ \prime},0}\right)}\left(\sum\limits_{m=1}^{k}\|\mathcal{N}_{m}\|_{R^{\prime},s }\right). \tag{3.26}\] Proof.: We refer to the appendix for a detailed proof. We are able to state our iteration theorem. **Theorem 3.8**.: _If \(\|\mathcal{M}\|_{R,\alpha+3\delta}<\varepsilon_{0}\), then there exists a sequence_ \[(V_{l},\mathcal{M}_{l},\mathcal{W}_{l+1},\mathcal{R}_{l+1})_{l=0}^{\infty}\in \mathscr{P}_{R_{l}}\times\mathscr{U}_{R_{l},s}^{\omega}\times\mathscr{U}_{R_ {l+1},s}^{\omega}\times\mathscr{U}_{R_{l+1},s}^{\omega}\ (s\in[\alpha_{0},\alpha_{1}])\] _satisfying_ \[e^{\mathcal{W}_{l+1}}(V_{l}+\mathcal{M}_{l})e^{-\mathcal{W}_{l+1 }}=V_{l+1}+\mathcal{R}_{l+1},\] \[\mathcal{U}_{l+1}=\prod\limits_{i=l+1}^{1}e^{\mathcal{W}_{i}},\ V_ {0}=V,\ \mathcal{M}_{0}=\mathcal{M}^{(0)}, \tag{3.27}\] \[V_{l+1}=\bar{V}_{l},\ \mathcal{M}_{l+1}=\mathcal{U}_{l+1}\mathcal{M} ^{(l+1)}\mathcal{U}_{l+1}^{-1}+\mathcal{R}_{l+1}, \tag{3.28}\] _so that_ \[\|\mathcal{M}_{l}\|_{R_{l},s} \leq 2\theta_{l}^{s-\alpha}\ \text{for}\ s\in[\alpha_{0},\alpha_{1}], \tag{3.29}\] \[|V_{l}|_{R_{l}} \geq|V|_{R}-\sum\limits_{j=0}^{l-1}\frac{\|\mathcal{M}_{j}\|_{R_ {j},0}}{Q_{j}}\geq\frac{|V|_{R}}{2}\ (l\geq 1),\] (3.30) \[\|\mathcal{W}_{l+1}\|_{R_{l+1},s} \leq\theta_{l}^{s-\alpha+\tau+\delta}\ \text{for}\ s\in[\alpha_{0},\alpha_{1}],\] (3.31) \[\|\mathcal{U}_{l+1}-\mathbf{1}\|_{R_{l+1},s} \leq\theta_{l}^{(s-\alpha+\tau+\delta)_{+}+\delta}\ \text{for}\ s\in[\alpha_{0},\alpha_{1}],\] (3.32) \[\|\mathcal{R}_{l+1}\|_{R_{l+1},s} \leq\theta_{l+1}^{s-\alpha}\ \text{for}\ s\in[\alpha_{0},\alpha_{1}],\] _where \(x_{+}=x\) if \(x\geq 0\) and \(x_{+}=0\) if \(x<0\). 
In addition, if both \(\mathcal{M}\) and \(V\) are self-adjoint, then for each \(l\geq 0\), \(\mathcal{U}_{l+1}\) is unitary and \(V_{l}^{*}=V_{l}\)._ Proof of Theorem 3.8.: We first check (3.29) and (3.30) hold true for \(\mathcal{M}_{0}\) and \(V_{0}\) respectively. Let \(V\) and \(\mathcal{M}\) be as in Theorem 1.1. Since (2.3), (3.24) and \(\|\mathcal{M}\|_{R,\alpha+3\delta}<\varepsilon_{0}\), we obtain for \(s\geq\alpha+3\delta\), \[\|\mathcal{M}_{0}\|_{R,s}\leq\theta_{0}^{s-\alpha-3\delta}\|\mathcal{M}\|_{R, \alpha+3\delta}\leq\theta_{0}^{s-\alpha-3\delta}. \tag{3.33}\] If \(\alpha_{0}\leq s<\alpha+3\delta\), we have \[\|\mathcal{M}_{0}\|_{R,s}\leq\|\mathcal{M}\|_{R,\alpha+3\delta}\leq\theta_{0} ^{\alpha_{0}-\alpha}\leq\theta_{0}^{s-\alpha},\] which combined with (3.33) implies \[\|\mathcal{M}_{0}\|_{R,s}\leq 2\theta_{0}^{s-\alpha}\ \text{for}\ s\in[\alpha_{0}, \alpha_{1}].\] Obviously, we have \(|V_{0}|_{R_{0}}=|V|_{R}\geq\frac{1}{2}|V|_{R}\). Next, assume that for \(0\leq l\leq L\), we have constructed \(V_{l},\mathcal{M}_{l}\) using (3.27) and (3.28) so that both (3.30) and (3.29) hold true. We want to use (3.27) and (3.28) to construct \(V_{L+1}\), \(\mathcal{M}_{L+1}\) so that (3.30) and (3.29) hold true again. Recalling (3.1), we let \[V_{L+1}(z)=\bar{V}_{L}(z)=V_{L}(z)+\mathcal{M}_{L}(z,\mathbf{0}).\] According to (3.16), (3.25), (3.29) and (3.30), we get \[\|\mathcal{M}_{L}\|_{R_{L},0}\leq\|\mathcal{M}_{L}\|_{R_{L},\alpha_{0}}\leq 2 \theta_{L}^{\alpha_{0}-\alpha}<Q_{L}|V|_{R_{L}},\] which together with Lemma 3.1, (3.20) and (3.30) implies \[|V_{L+1}|_{R_{L+1}} \geq|V_{L}|_{R_{L}}-\frac{\|\mathcal{M}_{L}\|_{R_{L},0}}{Q_{L}} \geq|V|_{R}-\sum_{j=0}^{L}\frac{\|\mathcal{M}_{j}\|_{R_{j},0}}{Q_{j}}\] \[\geq|V|_{R}-\frac{|V|_{R}}{2}\sum_{j=0}^{\infty}\theta_{j}^{\frac {\alpha_{0}-\alpha}{2}}\] \[=|V|_{R}-\frac{|V|_{R}}{2}\frac{\theta_{0}^{\frac{\alpha_{0}- \alpha}{2}}}{1-\Theta^{\frac{\alpha_{0}-\alpha}{2}}}\geq\frac{|V|_{R}}{2}. \tag{3.34}\] Next, \(\mathcal{W}_{L+1}\) is obtained by setting \(\bar{V}^{\prime}=V_{L+1},\mathcal{M}^{\prime}=\mathcal{M}_{L}\) and \(\theta=\theta_{L+1}\) via (3.4). Since (3.5), (3.21), (3.29) and (3.3), we get \[\|\mathcal{W}_{L+1}\|_{R_{L+1},s} \leq\frac{\theta_{L+1}^{\tau}}{\gamma|V_{L+1}|_{R_{L+1}}}\left\| \mathcal{M}_{L}\right\|_{R_{L},s}\] \[\leq\frac{4\Theta^{\tau}}{\gamma|V|_{R}}\theta_{0}^{-\delta} \theta_{L}^{s-\alpha+\tau+\delta}\] \[\leq\theta_{L}^{s-\alpha+\tau+\delta}\text{ for }s\in[\alpha_{0}, \alpha_{1}].\] In the following we provide corresponding estimates. To estimate \(\|\mathcal{R}_{l+1}\|_{R_{l+1},s}\), we first deal with \(\|(I-S_{\theta_{L+1}})\mathcal{M}_{L}\|_{R_{L},s}\). We have two cases. **Case 1.**: \(s\in[\alpha+\delta,\alpha_{1}]\). From (3.18) and (3.29), we have \[\|(I-S_{\theta_{L+1}})\mathcal{M}_{L}\|_{R_{L},s}\leq \|\mathcal{M}_{L}\|_{R_{L},s}\leq 2\theta_{L}^{s-\alpha}\] \[\leq 2\Theta^{\alpha-s}\theta_{L+1}^{s-\alpha}\leq 2\Theta^{-\delta} \theta_{L+1}^{s-\alpha}\] \[\leq \frac{1}{2}\theta_{L+1}^{s-\alpha}.\] **Case 2.**: \(s\in[\alpha_{0},\alpha+\delta)\). Since (2.4), (3.17), (3.19) and (3.29), we have \[\|(I-S_{\theta_{L+1}})\mathcal{M}_{L}\|_{R_{L},s}\leq \theta_{L+1}^{-\alpha}\|\mathcal{M}_{L}\|_{R_{L},s+\alpha}\] \[\leq 2\theta_{L+1}^{-\alpha}\theta_{L}^{s}\leq 2\Theta^{-\alpha_{0}} \theta_{L+1}^{s-\alpha}\] \[\leq \frac{1}{2}\theta_{L+1}^{s-\alpha}.\] To sum up, one has \[\|(I-S_{\theta_{L+1}})\mathcal{M}_{L}\|_{R_{L},s}\leq\frac{1}{2}\theta_{L+1}^ {s-\alpha}\text{ for }s\in[\alpha_{0},\alpha_{1}]. 
\tag{3.35}\] Hence from (3.12), (3.16), (3.21), (3.31), (3.29) and (3.35), we have \[\|\mathcal{R}_{L+1}\|_{R_{L+1},s}\] \[\leq 4K(s)e^{2K(s)\|\mathcal{W}_{L+1}\|_{R_{L+1},0}}\left(\| \mathcal{W}_{L+1}\|_{R_{L+1},0}\|\mathcal{M}_{L}\|_{R_{L},s}+\|\mathcal{M}_{L} \|_{R_{L},0}\|\mathcal{W}_{L+1}\|_{R_{L+1},s}\right)\] \[+\|(I-S_{\theta_{L+1}})\mathcal{M}_{L}\|_{R_{L},s}\] \[\leq 16K(\alpha_{1})e^{2K(\alpha_{1})}\theta_{L}^{s-2\alpha+ \alpha_{0}+\tau+\delta}+\frac{1}{2}\theta_{L+1}^{s-\alpha}\] \[\leq 16K(\alpha_{1})e^{2K(\alpha_{1})}\theta_{0}^{-\delta} \theta_{L}^{s-\alpha}+\frac{1}{2}\theta_{L+1}^{s-\alpha}\] \[\leq\theta_{L+1}^{s-\alpha}\text{ for }s\in[\alpha_{0}, \alpha_{1}]. \tag{3.36}\] By (3.26) and (3.31), we can obtain \[\|\mathcal{U}_{L+1}-\mathbf{1}\|_{R_{L+1},s} \leq e^{K(s)\left(\sum\limits_{j=0}^{L}\|\mathcal{W}_{j+1}\|_{R_{ j+1},0}\right)}\left(\sum\limits_{j=0}^{L}\|\mathcal{W}_{j+1}\|_{R_{j+1},s}\right)\] \[\leq e^{K(s)\left(\sum\limits_{j=0}^{L}\|\mathcal{W}_{j+1}\|_{R_{ j+1},\alpha_{0}}\right)}\left(\sum\limits_{j=0}^{L}\|\mathcal{W}_{j+1}\|_{R_{ j+1},s}\right)\] \[\leq(L+1)e^{(L+1)K(\alpha_{1})}\max\limits_{0\leq j\leq L}\| \mathcal{W}_{j+1}\|_{R_{j+1},s}. \tag{3.37}\] Since (3.18) and \(L+1\leq 2^{L+1}\), we have \[(L+1)e^{(L+1)K(\alpha_{1})}\leq\left(2e^{K(\alpha_{1})}\right)^{L+1}\leq \theta_{0}^{\frac{\delta}{2}}(\Theta^{\frac{\delta}{2}})^{L+1}=\theta_{L+1}^{ \frac{\delta}{2}}.\] Together with (3.22), (3.31) and (3.37), we can obtain for \(\alpha_{0}\leq s<\alpha-\tau-\delta\) that \[\|\mathcal{U}_{L+1}-\mathbf{1}\|_{R_{L+1},s}\leq\theta_{L+1}^{\frac{\delta}{2} }\theta_{0}^{s-\alpha+\tau+\delta}\leq\theta_{L}^{\delta}\theta_{0}^{-\frac{ \delta}{2}}\Theta^{\frac{\delta}{2}}\leq\theta_{L}^{\delta}. \tag{3.38}\] Similarly, for \(\alpha-\tau-\delta\leq s\leq\alpha_{1}\), we obtain \[\|\mathcal{U}_{L+1}-\mathbf{1}\|_{R_{L+1},s}\leq\theta_{L+1}^{\frac{\delta}{2} }\theta_{L}^{s-\alpha+\tau+\delta}\leq\theta_{0}^{-\frac{\delta}{2}}\Theta^{ \frac{\delta}{2}}\theta_{L}^{s-\alpha+\tau+2\delta}\leq\theta_{L}^{s-\alpha+ \tau+2\delta}. \tag{3.39}\] Combining (3.38) and (3.39) implies (3.32). Hence \[\|\mathcal{U}_{L+1}\|_{R_{L+1},s}\leq 1+\theta_{L}^{(s-\alpha+\tau+\delta)_{+}+ \delta}\leq 2\theta_{L}^{(s-\alpha+\tau+\delta)_{+}+\delta}. \tag{3.40}\] In the same way, we can get \[\|\mathcal{U}_{L+1}^{-1}\|_{R_{L+1},s}\leq 2\theta_{L}^{(s-\alpha+\tau+\delta)_{+ }+\delta}. \tag{3.41}\] Similar to (3.33), combining (2.5) and (3.24), we have for \(s\geq\alpha+3\delta\), \[\|\mathcal{M}^{(L+1)}\|_{R_{L+1},s} \leq\theta_{L+1}^{s-\alpha-3\delta}\|\mathcal{M}\|_{R_{L+1},\alpha +3\delta}\leq\theta_{L+1}^{s-\alpha-3\delta}\|\mathcal{M}\|_{R,\alpha+3\delta}\] \[\leq\theta_{L+1}^{s-\alpha-3\delta}. \tag{3.42}\] From (2.6), (3.22) and (3.24), we obtain for \(\alpha_{0}\leq s<\alpha+3\delta\), \[\|\mathcal{M}^{(L+1)}\|_{R_{L+1},s} \leq\theta_{L}^{s-\alpha-3\delta}\|\mathcal{M}\|_{R_{L+1},\alpha +3\delta}\leq\theta_{L}^{s-\alpha-3\delta}\|\mathcal{M}\|_{R,\alpha+3\delta}\] \[\leq\theta_{L+1}^{s-\alpha-3\delta}\Theta^{\alpha+3\delta-\alpha_ {0}}\theta_{0}^{\alpha_{0}-\alpha}\] \[\leq\theta_{L+1}^{s-\alpha-3\delta}. \tag{3.43}\] Combining (3.42) and (3.43) yields \[\|\mathcal{M}^{(L+1)}\|_{R_{L+1},s}\leq\theta_{L+1}^{s-\alpha-3\delta}\text{ for }s\in[\alpha_{0},\alpha_{1}]. 
\tag{3.44}\] If \(\alpha_{0}\leq s<\alpha-\tau-\delta\), then \[(s-\alpha+\tau+\delta)_{+}+\alpha_{0}-\alpha-\delta=\alpha_{0}-\alpha-\delta \leq s-\alpha-\delta.\] If \(\alpha-\tau-\delta\leq s\leq\alpha_{1}\), then \[(s-\alpha+\tau+\delta)_{+}+\alpha_{0}-\alpha-\delta \leq s-\alpha-\delta+(\alpha_{0}-\alpha+\tau+\delta)\] \[\leq s-\alpha-\delta\text{ (since \eqref{eq:2.1.1})}.\] As a result, we have \[(s-\alpha+\tau+\delta)_{+}+\alpha_{0}-\alpha-\delta\leq s-\alpha-\delta\text{ for }s\in[\alpha_{0},\alpha_{1}]. \tag{3.45}\] Since (3.40), (3.41), (3.44) and (3.45), we get \[\|\mathcal{U}_{L+1}\|_{R_{L+1},0}\|\mathcal{M}^{(L+1)}\|_{R_{L+1 },0}\|\mathcal{U}_{L+1}^{-1}\|_{R_{L+1},s}\] \[\leq 4\theta_{L+1}^{\alpha_{0}-\alpha-3\delta}\theta_{L}^{(s- \alpha+\tau+\delta)_{+}+2\delta}\leq 4\theta_{L+1}^{(s-\alpha+\tau+\delta)_{+}+ \alpha_{0}-\alpha-\delta}\] \[\leq 4\theta_{L+1}^{s-\alpha-\delta},\] \[\|\mathcal{U}_{L+1}\|_{R_{L+1},s}\|\mathcal{M}^{(L+1)}\|_{R_{L+1 },0}\|\mathcal{U}_{L+1}^{-1}\|_{R_{L+1},s}\] \[\leq 4\theta_{L+1}^{s-\alpha-\delta},\] and \[\|\mathcal{U}_{L+1}\|_{R_{L+1},0}\|\mathcal{M}^{(L+1)}\|_{R_{L+1},s}\|\mathcal{U}_{L+1}^{-1}\|_{R_{L+1},0}\] \[\leq 4\theta_{L+1}^{s-\alpha-3\delta}\theta_{L}^{2\delta}\leq 4 \theta_{L+1}^{s-\alpha-\delta}.\] According to (3.9), we obtain for \(s\in[\alpha_{0},\alpha_{1}]\), \[\|\mathcal{U}_{L+1}\mathcal{M}^{(L+1)}\mathcal{U}_{L+1}^{-1}\|_ {R_{L+1},s} \leq(K(s))^{2}\|\mathcal{U}_{L+1}\|_{R_{L+1},0}\|\mathcal{M}^{(L+1 )}\|_{R_{L+1},0}\|\mathcal{U}_{L+1}^{-1}\|_{R_{L+1},s}\] \[\quad+(K(s))^{2}\|\mathcal{U}_{L+1}\|_{R_{L+1},0}\|\mathcal{M}^{( L+1)}\|_{R_{L+1},s}\|\mathcal{U}_{L+1}^{-1}\|_{R_{L+1},0}\] \[\leq 12(K(\alpha_{1}))^{2}\theta_{L+1}^{s-\alpha-\delta}.\] Thus by combining (3.23) and (3.36), we have \[\|\mathcal{M}_{L+1}\|_{R_{L+1},s} \leq\|\mathcal{U}_{L+1}\mathcal{M}^{(L+1)}\mathcal{U}_{L+1}^{-1}\| _{R_{L+1},s}+\|\mathcal{R}_{L+1}\|_{R_{L+1},s}\] \[\leq 12(K(\alpha_{1}))^{2}\theta_{L+1}^{s-\alpha-\delta}+\theta_{L +1}^{s-\alpha}\] \[\leq 12(K(\alpha_{1}))^{2}\theta_{0}^{-\delta}\theta_{L+1}^{s- \alpha}+\theta_{L+1}^{s-\alpha}\] \[\leq 2\theta_{L+1}^{s-\alpha}.\] Finally, we prove for each \(l\geq 0\), \(\mathcal{U}_{l+1}\) is unitary and \(V_{l}^{*}=V_{l}\) by induction. For \(l=0\), since \(V^{*}=V\) and \(\mathcal{M}^{*}=\mathcal{M}\), we have \(\mathcal{M}_{0}^{*}=\mathcal{M}_{0}\) and \(V_{1}^{*}=V_{1}\), which combined with Lemma 3.3 implies \(\mathcal{W}_{1}^{*}=-\mathcal{W}_{1}\). 
Thus \[\mathcal{R}_{1}^{*} =\left(\sum_{k=1}^{\infty}\frac{A_{\mathcal{W}_{1}}^{k}(S_{\theta} \tilde{\mathcal{M}}_{0})}{(k-1)!(k+1)}+\sum_{k=0}^{\infty}\frac{A_{\mathcal{W}_ {1}}^{k}((I-S_{\theta})\tilde{\mathcal{M}}_{0})}{k!}\right)^{*}\] \[=\sum_{k=1}^{\infty}\frac{A_{\mathcal{W}_{1}}^{k}(S_{\theta} \tilde{\mathcal{M}}_{0}^{*})}{(k-1)!(k+1)}+\sum_{k=0}^{\infty}\frac{A_{\mathcal{ W}_{1}}^{k}((I-S_{\theta})\tilde{\mathcal{M}}_{0}^{*})}{k!}\] \[=\sum_{k=1}^{\infty}\frac{A_{\mathcal{W}_{1}}^{k}(S_{\theta} \tilde{\mathcal{M}}_{0})}{(k-1)!(k+1)}+\sum_{k=0}^{\infty}\frac{A_{\mathcal{ W}_{1}}^{k}((I-S_{\theta})\tilde{\mathcal{M}}_{0})}{k!}=\mathcal{R}_{1}\] and \[(e^{\mathcal{W}_{1}})^{*}e^{\mathcal{W}_{1}} =e^{-\mathcal{W}_{1}}e^{\mathcal{W}_{1}}=\mathbf{1},\] \[e^{\mathcal{W}_{1}}(e^{\mathcal{W}_{1}})^{*} =e^{\mathcal{W}_{1}}e^{-\mathcal{W}_{1}}=\mathbf{1}.\] Therefore, \[\mathcal{M}_{1}^{*} =\left(e^{\mathcal{W}_{1}}\mathcal{M}^{(1)}e^{-\mathcal{W}_{1}} +\mathcal{R}_{1}\right)^{*}\] \[=e^{\mathcal{W}_{1}}\mathcal{M}^{(1)}e^{-\mathcal{W}_{1}}+ \mathcal{R}_{1}\] \[=\mathcal{U}_{1}\mathcal{M}^{(1)}\mathcal{U}_{1}^{-1}+\mathcal{R }_{1}=\mathcal{M}_{1}.\] Now, we assume \(V_{L}^{*}=V_{L},\mathcal{M}_{L}^{*}=\mathcal{M}_{L},\mathcal{R}_{L}^{*}= \mathcal{R}_{L}\) and \(\mathcal{U}_{L}\) is unitary for \(L\geq 1\). Recalling (3.28), we have \[V_{L+1}^{*}=V_{L}^{*}+\mathcal{M}_{L}^{*}(z,\mathbf{0})=V_{L}+\mathcal{M}_{L}( z,\mathbf{0})=V_{L+1}.\] Similar to the above arguments, we have \[\mathcal{W}_{L+1}^{*}=-\mathcal{W}_{L+1},\ \mathcal{R}_{L+1}^{*}=\mathcal{R}_{L+1}.\] Thus \[\mathcal{U}_{L+1}^{*}\mathcal{U}_{L+1} =e^{-\mathcal{W}_{L+1}}\mathcal{U}_{L}^{*}\mathcal{U}_{L}e^{ \mathcal{W}_{L+1}}=e^{-\mathcal{W}_{L+1}}e^{\mathcal{W}_{L+1}}=\mathbf{1},\] \[\mathcal{U}_{L+1}\mathcal{U}_{L+1}^{*} =e^{\mathcal{W}_{L+1}}\mathcal{U}_{L}\mathcal{U}_{L}^{*}e^{- \mathcal{W}_{L+1}}=e^{\mathcal{W}_{L+1}}e^{-\mathcal{W}_{L+1}}=\mathbf{1},\] and \[\mathcal{M}_{L+1}^{*} =\left(\mathcal{U}_{L+1}\mathcal{M}^{(L+1)}\mathcal{U}_{L+1}^{- 1}+\mathcal{R}_{L+1}\right)^{*}\] \[=\mathcal{U}_{L+1}\mathcal{M}^{(L+1)}\mathcal{U}_{L+1}^{-1}+ \mathcal{R}_{L+1}=\mathcal{M}_{L+1}.\] This finishes the proof. ## 4. Proof of Theorem 1.1 In this section we prove the convergence of the iterations (cf. Theorem 3.8) in the previous section and thus finish the proof of Theorem 1.1. Proof of Theorem 1.1.: Recalling Theorem 3.8, then for \[\theta_{0}^{-\delta}=\|\mathcal{M}\|_{R,\alpha+3\delta}^{\frac{\delta}{\alpha- \alpha_{0}}}\] and \(\theta_{0}>\eta_{0}\), we have \[\|\mathcal{M}\|_{R,\alpha+3\delta}=\theta_{0}^{\alpha_{0}-\alpha}<\eta_{0}^{ \alpha_{0}-\alpha}=\varepsilon_{0}.\] So applying Theorem 3.8 yields the existence of the sequence \((V_{l},\mathcal{M}_{l},\mathcal{W}_{l+1},\mathcal{R}_{l+1})_{l=0}^{\infty}\). Thus \(\hat{V}(z)-V(z)=\sum\limits_{j=0}^{\infty}\mathcal{M}_{j}(z,\mathbf{0})\) exists in \(\mathscr{H}_{\frac{R}{2}}\) and \[\|\hat{V}-V\|_{\frac{R}{2},0} =\|\hat{V}-V\|_{\frac{R}{2},\alpha_{0}}\leq\sum\limits_{j=0}^{ \infty}2\theta_{j}^{\alpha_{0}-\alpha}\] \[=\frac{2\theta_{0}^{\alpha_{0}-\alpha}}{1-\Theta^{\alpha_{0}- \alpha}}=K_{2}\|\mathcal{M}\|_{R,\alpha+3\delta},\] where \(K_{2}=K_{2}(\alpha,\alpha_{0},\delta)=\frac{2}{1-\Theta^{\alpha_{0}-\alpha}}\). Next, we prove the convergences of \(\mathcal{U}_{l+1}\) and \(\mathcal{U}_{l+1}^{-1}\) with \(\mathcal{U}_{l+1}=e^{\mathcal{W}_{l+1}}\cdots e^{\mathcal{W}_{1}}\). 
Recalling (3.31), we can obtain \[\|\mathcal{W}_{l+1}\|_{R_{l+1},0}\leq\|\mathcal{W}_{l+1}\|_{R_{l+1},\alpha_{0} }\leq\theta_{l}^{\alpha_{0}-\alpha+\tau+\delta}\leq\theta_{l}^{-3\delta}\] and \[\|\mathcal{W}_{l+1}\|_{R_{l+1},\alpha-\tau-4\delta}\leq\theta_{l}^{-3\delta}.\] Hence \[\sum\limits_{l=0}^{\infty}\|\mathcal{W}_{l+1}\|_{R_{l+1},0} \leq\sum\limits_{l=0}^{\infty}\theta_{l}^{-3\delta}=\frac{\theta_{0}^{-3 \delta}}{1-\Theta^{-3\delta}}\leq\frac{1}{1-\Theta^{-3\delta}}:=C<+\infty,\] \[\sum\limits_{l=0}^{\infty}\|\mathcal{W}_{l+1}\|_{R_{l+1},\alpha- \tau-4\delta}\leq\sum\limits_{l=0}^{\infty}\theta_{l}^{-3\delta}\leq C.\] Then for \(\forall\ \varepsilon>0\), there is \(N(\varepsilon)\in\mathbb{N}\) so that for all \(n\geq N\) and \(p\in\mathbb{N}\), \[\sum\limits_{l=n}^{n+p}\|\mathcal{W}_{l+1}\|_{R_{l+1},0} <\frac{\varepsilon}{4(1+C)K(\alpha)e^{K(\alpha)C}},\] \[\sum\limits_{l=n}^{n+p}\|\mathcal{W}_{l+1}\|_{R_{l+1},\alpha- \tau-4\delta} <\frac{\varepsilon}{4(1+C)K(\alpha)e^{K(\alpha)C}}.\] Using (3.26) implies \[\|e^{\mathcal{W}_{n+p+1}}\cdots e^{\mathcal{W}_{n+1}}-\mathbf{1} \|_{\frac{R}{2},\alpha-\tau-4\delta}\] \[\leq e^{K(\alpha-\tau-4\delta)\sum\limits_{l=n}^{n+p}\|\mathcal{W }_{l+1}\|_{R_{l+1},0}}\sum\limits_{l=n}^{n+p}\|\mathcal{W}_{l+1}\|_{R_{l+1}, \alpha-\tau-4\delta}\] \[\leq e^{K(\alpha)\sum\limits_{l=n}^{n+p}\|\mathcal{W}_{l+1}\|_{R_ {l+1},0}}\sum\limits_{l=n}^{n+p}\|\mathcal{W}_{l+1}\|_{R_{l+1},\alpha-\tau-4 \delta},\] \[\|\mathcal{U}_{n}\|_{\frac{R}{2},0} \leq 1+\|e^{\mathcal{W}_{n}}\cdots e^{\mathcal{V}_{1}}-\mathbf{1} \|_{\frac{R}{2},0}\] \[\leq 1+e^{K(\alpha)\sum\limits_{l=0}^{n-1}\|\mathcal{W}_{l+1}\|_{ R_{l+1},0}}\sum\limits_{l=0}^{n-1}\|\mathcal{W}_{l+1}\|_{R_{l+1},0}.\] As a result, we have \[\|e^{\mathcal{W}_{n+p+1}}\cdots e^{\mathcal{W}_{n+1}}-\mathbf{1}\|_{ \frac{R}{2},\alpha-\tau-4\delta}\|\mathcal{U}_{n}\|_{\frac{R}{2},0}\] \[\leq e^{K(\alpha)\sum\limits_{l=n}^{n+p}\|\mathcal{W}_{l+1}\|_{R_{ l+1},0}}\sum\limits_{l=n}^{n+p}\|\mathcal{W}_{l+1}\|_{R_{l+1},\alpha-\tau-4\delta}\] \[+e^{K(\alpha)\sum\limits_{l=0}^{n+p}\|\mathcal{W}_{l+1}\|_{R_{l+1 },0}}\left(\sum\limits_{l=0}^{n-1}\|\mathcal{W}_{l+1}\|_{R_{l+1},0}\right) \left(\sum\limits_{l=n}^{n+p}\|\mathcal{W}_{l+1}\|_{R_{l+1},\alpha-\tau-4 \delta}\right)\] \[\leq\frac{\varepsilon}{2K(\alpha)}.\] Similarly, we get \[\|e^{\mathcal{W}_{n+p+1}}\cdots e^{\mathcal{W}_{n+1}}-\mathbf{1}\|_{\frac{R}{ 2},0}\|\mathcal{U}_{n}\|_{\frac{R}{2},\alpha-\tau-4\delta}\leq\frac{ \varepsilon}{2K(\alpha)}.\] From (3.9), we obtain \[\|\mathcal{U}_{n+p+1}-\mathcal{U}_{n}\|_{\frac{R}{2},\alpha-\tau- 4\delta} \leq K(\alpha-\tau-4\delta)\|e^{\mathcal{W}_{n+p+1}}\cdots e^{ \mathcal{W}_{n+1}}-\mathbf{1}\|_{\frac{R}{2},\alpha-\tau-4\delta}\|\mathcal{U }_{n}\|_{\frac{R}{2},0}\] \[\quad+K(\alpha-\tau-4\delta)\|e^{\mathcal{W}_{n+p+1}}\cdots e^{ \mathcal{W}_{n+1}}-\mathbf{1}\|_{\frac{R}{2},0}\|\mathcal{U}_{n}\|_{\frac{R}{ 2},\alpha-\tau-4\delta}\] \[\leq K(\alpha)\left(\frac{\varepsilon}{2K(\alpha)}+\frac{ \varepsilon}{2K(\alpha)}\right)=\varepsilon,\] which implies the product \(\mathcal{U}_{l+1}\) converges to some \(\mathcal{U}\in\mathscr{U}_{\frac{R}{2},\alpha-\tau-4\delta}^{\omega}\) as \(l\to\infty\). Similarly, one can show \(\lim\limits_{l\to\infty}\|\mathcal{U}_{l+1}^{-1}-\mathcal{U}^{-1}\|_{\frac{R}{ 2},\alpha-\tau-4\delta}=0\). 
In addition, we have \[\|\mathcal{U}^{\pm 1}-\mathbf{1}\|_{\frac{R}{2},\alpha-\tau-4\delta} \leq\left(\sum\limits_{l=0}^{\infty}\|\mathcal{W}_{l+1}\|_{R_{l+ 1},\alpha-\tau-4\delta}\right)e^{K(\alpha)\sum\limits_{l=0}^{\infty}\| \mathcal{W}_{l+1}\|_{R_{l+1},0}}\] \[\leq Ce^{K(\alpha)C}\theta_{0}^{-3\delta}=K_{1}\|\mathcal{M}\|_{ \frac{3\delta}{R,\alpha+3\delta}}^{\frac{3\delta}{\alpha-\alpha}},\] where \(K_{1}=K_{1}(\alpha_{0},\alpha,\delta)=Ce^{K(\alpha)C}\). Next, considering \(\sum\limits_{j=0}^{l}\mathcal{M}^{(j)}\), we have \(\sum\limits_{j=0}^{l}\mathcal{M}^{(j)}=S_{\theta_{l}}\mathcal{M}\), which implies \[\|\mathcal{M}-\sum\limits_{j=0}^{l}\mathcal{M}^{(j)}\|_{\frac{R}{ 2},\alpha-\tau-4\delta} =\|(I-S_{\theta_{l}}\mathcal{M})\|_{\frac{R}{2},\alpha-\tau-4 \delta}\leq\theta_{l}^{-\tau-7\delta}\|\mathcal{M}\|_{R,\alpha+3\delta}\] \[\leq\theta_{l}^{-\tau-7\delta}\to 0\text{ (as }l\to\infty).\] Obviously, we have \[\|\mathcal{R}_{l+1}\|_{\frac{R}{2},\alpha-\tau-4\delta} \leq\|\mathcal{R}_{l+1}\|_{R_{l+1},\alpha-\tau-4\delta}\] \[\leq\theta_{l+1}^{-\tau-4\delta}\to 0\text{ (as }l\to\infty).\] This finishes the proof of convergence of the iteration scheme. Moreover, from (3.30), we have \(|\hat{V}|_{\frac{R}{2}}\geq\frac{1}{2}|V|_{R}\). The remaining is to show if both \(V\) and \(\mathcal{M}\) are self-adjoint, then \(\mathcal{U}\) can be improved to become unitary and \(\hat{V}^{*}=\hat{V}\). Suppose now \[V^{*}=V,\ \mathcal{M}^{*}=\mathcal{M}.\] From Theorem 3.8, we show \(\mathcal{U}_{l+1}\) is unitary and \(V_{l}^{*}=V_{l}\) for each \(l\geq 0\). Therefore, \[\mathcal{U}^{*}\mathcal{U}=\lim_{l\to\infty}\mathcal{U}_{l+1}^{*} \mathcal{U}_{l+1}=\mathbf{1},\] \[\mathcal{U}\mathcal{U}^{*}=\lim_{l\to\infty}\mathcal{U}_{l+1} \mathcal{U}_{l+1}^{*}=\mathbf{1},\] \[\hat{V}^{*}=\lim_{l\to\infty}V_{l}^{*}=\lim_{l\to\infty}V_{l}= \hat{V},\] which implies \(\mathcal{U}\) is unitary and \(\hat{V}^{*}=\hat{V}\). This completes the whole proof of Theorem 1.1. ## 5. Proof of Theorem 1.2 In this section we will prove the power-law localization (cf. Theorem 1.2) by using Theorem 1.1. We begin with a useful lemma. **Lemma 5.1**.: _For any \(\mathcal{M},\mathcal{M}_{1},\mathcal{M}_{2}\in\mathscr{U}_{R,s}^{\omega}\) and \(z\in D_{R}\), we have_ \[T_{\mathcal{M}^{*}}(z) =\left(T_{\mathcal{M}}(\bar{z})\right)^{*}, \tag{5.1}\] \[T_{\mathcal{M}_{1}\mathcal{M}_{2}}(z) =T_{\mathcal{M}_{1}}(z)T_{\mathcal{M}_{2}}(z). \tag{5.2}\] _If in addition \(s>q+\frac{d}{2}\), then_ \[\|T_{\mathcal{M}}(z)\|_{q}\leq X(s,q)\|\mathcal{M}\|_{R,s}, \tag{5.3}\] _where_ \[X(s,q)=\sqrt{K(2q)(Y^{2}(s)+Y^{2}(s-q))}>0, \tag{5.4}\] \(K(s)\) _is given by (3.7), \(Y(s)=\sqrt{\sum\limits_{\boldsymbol{n}\in\mathbb{Z}^{d}}\langle\boldsymbol{n} \rangle^{-2s}}\) and the norm of \(T_{\mathcal{M}}(z)\) denotes the standard operator norm on \(\ell_{q}^{2}(\mathbb{Z}^{d})\)._ Proof.: Let \((\cdot,\cdot)\) denote the standard inner product on \(\ell^{2}(\mathbb{Z}^{d})\). 
First, for \(\forall\)\(\psi\) and \(\varphi\in\ell^{2}(\mathbb{Z}^{d})\), \[(T_{\mathcal{M}^{*}}(z)\psi,\varphi) =\sum_{\boldsymbol{n}\in\mathbb{Z}^{d}}\sum_{\boldsymbol{l}\in \mathbb{Z}^{d}}\mathcal{M}^{*}(z-\boldsymbol{n}\cdot\boldsymbol{\omega}, \boldsymbol{l}-\boldsymbol{n})\psi(\boldsymbol{l})\overline{\varphi( \boldsymbol{n})}\] \[=\sum_{\boldsymbol{n}\in\mathbb{Z}^{d}}\sum_{\boldsymbol{l}\in \mathbb{Z}^{d}}\psi(\boldsymbol{l})\overline{\mathcal{M}(\bar{z}-\boldsymbol {l}\cdot\boldsymbol{\omega},\boldsymbol{n}-\boldsymbol{l})\varphi(\boldsymbol {n})}\] \[=\left(\psi,T_{\mathcal{M}}(\bar{z})\varphi\right)=\left(\left(T_{ \mathcal{M}}(\bar{z})\right)^{*}\psi,\varphi\right),\] which shows (5.1). Next, for \(\forall\)\(\psi\in\ell^{2}(\mathbb{Z}^{d})\) and \(\forall\)\(\boldsymbol{n}\in\mathbb{Z}^{d}\), we have \[\left(T_{\mathcal{M}_{1}\mathcal{M}_{2}}(z)\psi\right)(\boldsymbol {n}) =\sum_{\boldsymbol{l}\in\mathbb{Z}^{d}}\sum_{\boldsymbol{k}\in \mathbb{Z}^{d}}\mathcal{M}_{1}(z-\boldsymbol{n}\cdot\boldsymbol{\omega}, \boldsymbol{k})\mathcal{M}_{2}(z-(\boldsymbol{n}+\boldsymbol{k})\cdot \boldsymbol{\omega},\boldsymbol{l}-(\boldsymbol{n}+\boldsymbol{k}))\psi( \boldsymbol{l})\] \[=\sum_{\boldsymbol{k}\in\mathbb{Z}^{d}}\mathcal{M}_{1}(z- \boldsymbol{n}\cdot\boldsymbol{\omega},\boldsymbol{k})\left(T_{\mathcal{M}_ {2}}\psi\right)(\boldsymbol{n}+\boldsymbol{k})\] \[=\sum_{\boldsymbol{k}\in\mathbb{Z}^{d}}\mathcal{M}_{1}(z- \boldsymbol{n}\cdot\boldsymbol{\omega},\boldsymbol{k}-\boldsymbol{n})\left(T_ {\mathcal{M}_{2}}\psi\right)(\boldsymbol{k})\] \[=\left(T_{\mathcal{M}_{1}}\left(T_{\mathcal{M}_{2}}\psi\right) \right)(\boldsymbol{n})=\left(\left(T_{\mathcal{M}_{1}}T_{\mathcal{M}_{2}} \right)\psi\right)(\boldsymbol{n}),\] which implies (5.2). If in addition \(s>q+\frac{d}{2}\) for \(\forall\ \psi\in\ell_{q}(\mathbb{Z}^{d})\), we get \[\|T_{\mathcal{M}}(z)\psi\|_{q}^{2} =\sum_{\mathbf{n}\in\mathbb{Z}^{d}}\left|\sum_{\mathbf{l}\in\mathbb{Z}^{d}} \mathcal{M}(z-\mathbf{n}\cdot\mathbf{\omega},\mathbf{l}-\mathbf{n})\psi(\mathbf{l})\right|^{2} \langle\mathbf{n}\rangle^{2q}\] \[\leq\sum_{\mathbf{n}\in\mathbb{Z}^{d}}\left(\sum_{\mathbf{l}\in\mathbb{Z} ^{d}}|\mathcal{M}(z-\mathbf{n}\cdot\mathbf{\omega},\mathbf{l}-\mathbf{n})||\psi(\mathbf{l})|\right)^ {2}\langle\mathbf{n}\rangle^{2q}.\] Recalling (1.5), we obtain \[|\mathcal{M}(z-\mathbf{n}\cdot\mathbf{\omega},\mathbf{l}-\mathbf{n})|\leq\langle\mathbf{l}-\mathbf{n} \rangle^{-s}\|\mathcal{M}\|_{R,s}\] By Cauchy-Schwarz inequality, we have \[\left(\sum_{\mathbf{l}\in\mathbb{Z}^{d}}|\mathcal{M}(z-\mathbf{n}\cdot \mathbf{\omega},\mathbf{l}-\mathbf{n})||\psi(\mathbf{l})|\right)^{2} \leq\left(\sum_{\mathbf{l}\in\mathbb{Z}^{d}}|\mathcal{M}(z-\mathbf{n} \cdot\mathbf{\omega},\mathbf{l}-\mathbf{n})|\langle\mathbf{l}-\mathbf{n}\rangle^{s}\right)\] \[\quad\times\left(\sum_{\mathbf{l}\in\mathbb{Z}^{d}}|\mathcal{M}(z- \mathbf{n}\cdot\mathbf{\omega},\mathbf{l}-\mathbf{n})|\langle\mathbf{l}-\mathbf{n}\rangle^{-s}|\psi( \mathbf{l})|^{2}\right)\] \[\leq\|\mathcal{M}\|_{R,s}^{2}\left(\sum_{\mathbf{l}\in\mathbb{Z}^{d} }\langle\mathbf{l}-\mathbf{n}\rangle^{-2s}|\psi(\mathbf{l})|^{2}\right),\] which implies \[\|T_{\mathcal{M}}(z)\psi\|_{q}^{2}\leq\|\mathcal{M}\|_{R,s}^{2}\sum_{\mathbf{n} \in\mathbb{Z}^{d}}\left(\sum_{\mathbf{l}\in\mathbb{Z}^{d}}\langle\mathbf{l}-\mathbf{n} \rangle^{-2s}|\psi(\mathbf{l})|^{2}\langle\mathbf{n}\rangle^{2q}\right).\] According to (3.6), we have \[\sum_{\mathbf{n}\in\mathbb{Z}^{d}}\left(\sum_{\mathbf{l}\in\mathbb{Z}^{d 
}}\langle\mathbf{l}-\mathbf{n}\rangle^{-2s}|\psi(\mathbf{l})|^{2}\langle\mathbf{n}\rangle^{2q}\right)\] \[\leq K(2q)\sum_{\mathbf{n}\in\mathbb{Z}^{d}}\left(\sum_{\mathbf{l}\in \mathbb{Z}^{d}}\langle\mathbf{l}-\mathbf{n}\rangle^{-2s}|\psi(\mathbf{l})|^{2}\left( \langle\mathbf{l}\rangle^{2q}+\langle\mathbf{l}-\mathbf{n}\rangle^{2q}\right)\right)\] \[=K(2q)\left(\sum_{\mathbf{n}\in\mathbb{Z}^{d}}\langle\mathbf{n}\rangle^{-2 s}\sum_{\mathbf{l}\in\mathbb{Z}^{d}}|\psi(\mathbf{l})|^{2}\langle\mathbf{l}\rangle^{2q}+\sum_{ \mathbf{n}\in\mathbb{Z}^{d}}\langle\mathbf{n}\rangle^{-(2s-2q)}\sum_{\mathbf{l}\in\mathbb{ Z}^{d}}|\psi(\mathbf{l})|^{2}\right)\] \[=K(2q)(Y^{2}(s)\|\psi\|_{q}^{2}+Y^{2}(s-q)\|\psi\|_{0}^{2})\] \[\leq K(2q)(Y^{2}(s)+Y^{2}(s-q))\|\psi\|_{q}^{2}.\] Therefore, \[\|T_{\mathcal{M}}(z)\psi\|_{q}^{2}\leq\|\mathcal{M}\|_{R,s}^{2}K(2q)(Y^{2}(s) +Y^{2}(s-q))\|\psi\|_{q}^{2},\] that is \[\|T_{\mathcal{M}}(z)\|_{q}\leq X(s,q)\|\mathcal{M}\|_{R,s}.\] We can now prove Theorem 1.2. Proof of Theorem 1.2.: We apply Theorem 1.1 with \[\alpha=s-3\delta. \tag{5.5}\] Since (5.5), we have \[\mathcal{M}\in\mathscr{U}_{R,s}^{\boldsymbol{\omega}}=\mathscr{U}_{R,\alpha+3 \delta}^{\boldsymbol{\omega}},\] \[\alpha=s-3\delta>\alpha_{0}+\tau+4\delta+\frac{d}{2}>\alpha_{0}+\tau+4\delta.\] Hence applying Theorem 1.1 implies that if \(\|\mathcal{M}\|_{R,s}\ll 1\), there are \(\mathcal{U}\in\mathscr{U}_{\frac{R}{2},s-\tau-7\delta}^{\frac{3}{2}}\) and \(\hat{V}\in\mathscr{P}_{\frac{R}{2}}\) so that \[\mathcal{U}(V+\mathcal{M})\mathcal{U}^{-1} =\hat{V},\] \[\|\mathcal{U}^{\pm 1}-\mathbf{1}\|_{\frac{R}{2},s-\tau-7\delta} \leq K_{1}\|\mathcal{M}\|_{R,s}^{\frac{3\delta}{s-\alpha_{0}-3\delta}}.\] Letting \(U_{z}=T_{\mathcal{U}}(z)\), according to (5.3) and \(s-\tau-7\delta>\frac{d}{2}\), we obtain \[\|U_{z}^{\pm 1}\|_{0} \leq 1+\|U_{z}^{\pm 1}-I\|_{0}\leq 1+X(s-\tau-7\delta,0)\|\mathcal{U} ^{\pm 1}-\mathbf{1}\|_{\frac{R}{2},s-\tau-7\delta}\] \[\leq 1+X(s-\tau-7\delta,0)K_{1}\|\mathcal{M}\|_{R,s}^{\frac{3 \delta}{s-\alpha_{0}-3\delta}},\] where \(X(s-\tau-7\delta,0)\) and \(K_{1}\) are given by (5.4) and (1.9) respectively. Therefore \(U_{z}\) is a bounded invertible operator. By (5.2), we get for \(z\in\mathcal{Z}_{R/2}\) (cf. (1.12)), \[U_{z}H_{z}U_{z}^{-1}=T_{\hat{V}}(z).\] Note that \(T_{\hat{V}}(z)\) is a diagonal operator for \(z\in\mathcal{Z}_{R/2}\). Then the standard basis \(\{\delta_{\boldsymbol{n}}\}_{\boldsymbol{n}\in\mathbb{Z}^{d}}\) of \(\ell^{2}(\mathbb{Z}^{d})\) is a complete set of eigenfunctions of \(T_{\hat{V}}(z)\) with eigenvalues \(\{\hat{V}(z-\boldsymbol{n}\cdot\boldsymbol{\omega})\}_{\boldsymbol{n}\in \mathbb{Z}^{d}}\) for \(z\in\mathcal{Z}_{R/2}\). Letting \(\varphi_{\boldsymbol{n}}=U_{z}^{-1}\delta_{\boldsymbol{n}}\), then \[H_{z}\varphi_{\boldsymbol{n}}=\hat{V}(z-\boldsymbol{n}\cdot\boldsymbol{ \omega})\varphi_{\boldsymbol{n}},\] which combined with the boundedness of \(U_{z}^{-1}\) will imply \(\{\varphi_{\boldsymbol{n}}\}_{\boldsymbol{n}\in\mathbb{Z}^{d}}\) is a _complete set of eigenfunctions_ of \(H_{z}\) for \(z\in\mathcal{Z}_{R/2}\). 
In fact, we have \[\varphi_{\boldsymbol{n}}(\boldsymbol{i})=\sum_{\boldsymbol{l}\in\mathbb{Z}^{ d}}\mathcal{U}^{-1}(z-\boldsymbol{i}\cdot\boldsymbol{\omega},\boldsymbol{l}- \boldsymbol{i})\delta_{\boldsymbol{n}}(\boldsymbol{l})=\mathcal{U}^{-1}(z- \boldsymbol{i}\cdot\boldsymbol{\omega},\boldsymbol{n}-\boldsymbol{i}).\] Since \(\mathcal{U}^{-1}\in\mathscr{U}_{\frac{R}{2},s-\tau-7\delta}^{\frac{3\delta}{s -\alpha_{0}-3\delta}}\), we obtain \[|\varphi_{\boldsymbol{n}}(\boldsymbol{i})| \leq\|\mathcal{U}^{-1}\|_{\frac{R}{2},s-\tau-7\delta}\langle \boldsymbol{n}-\boldsymbol{i}\rangle^{-s+\tau+7\delta}\] \[\leq(1+\|\mathcal{U}^{-1}-\mathbf{1}\|_{\frac{R}{2},s-\tau-7 \delta})\langle\boldsymbol{n}-\boldsymbol{i}\rangle^{-s+\tau+7\delta}\] \[\leq(1+K_{1}\|\mathcal{M}\|_{R,s}^{\frac{3\delta}{s-\alpha_{0}-3 \delta}})\langle\boldsymbol{n}-\boldsymbol{i}\rangle^{-s+\tau+7\delta}\] \[\leq 2\langle\boldsymbol{n}-\boldsymbol{i}\rangle^{-s+\tau+7\delta}\] provided \(K_{1}\|\mathcal{M}\|_{R,s}^{\frac{3\delta}{s-\alpha_{0}-3\delta}}\leq 1\). This shows particularly \(\varphi_{\boldsymbol{n}}\in\ell^{2}(\mathbb{Z}^{d}).\) We then prove the _completeness_. Suppose for all \(\boldsymbol{n}\in\mathbb{Z}^{d}\), \((\psi,\varphi_{\boldsymbol{n}})=0\). It suffices to show \(\psi=0\). For \(\forall\ \boldsymbol{n}\in\mathbb{Z}^{d}\), we have \[0=(\psi,\varphi_{\boldsymbol{n}})=\psi(\boldsymbol{n})+(\psi,(U_{z}^{-1}-I) \delta_{\boldsymbol{n}}),\] which combined with (5.1) implies \[\|\psi\|_{0}^{2}=\sum_{\boldsymbol{n}\in\mathbb{Z}^{d}}|\psi( \boldsymbol{n})|^{2} =\sum_{\boldsymbol{n}\in\mathbb{Z}^{d}}|(\psi,(U_{z}^{-1}-I)\delta_{ \boldsymbol{n}})|^{2}\] \[=\sum_{\boldsymbol{n}\in\mathbb{Z}^{d}}|((U_{z}^{-1}-I)^{*}\psi, \delta_{\boldsymbol{n}})|^{2}\] \[=\sum_{\boldsymbol{n}\in\mathbb{Z}^{d}}\left|\left(T_{(\mathcal{U }^{-1}-\boldsymbol{1})^{*}}(\bar{z})\psi\right)(\boldsymbol{n})\right|^{2}\] \[=\|T_{(\mathcal{U}^{-1}-\boldsymbol{1})^{*}}(\bar{z})\psi\|_{0}^ {2}.\] By using (5.3) and \(\|\mathcal{M}\|_{R^{\prime},s^{\prime}}=\|\mathcal{M}^{*}\|_{R^{\prime},s^{ \prime}}\), we obtain \[\|T_{(\mathcal{U}^{-1}-\boldsymbol{1})^{*}}(\bar{z})\psi\|_{0}^{2} \leq X^{2}(s-\tau-7\delta,0)\|(\mathcal{U}^{-1}-\boldsymbol{1})^{ *}\|_{\frac{\bar{z}}{2},s-\tau-7\delta}^{2}\|\psi\|_{0}^{2}\] \[=X^{2}(s-\tau-7\delta,0)\|(\mathcal{U}^{-1}-\boldsymbol{1})\|_{ \frac{\bar{z}}{2},s-\tau-7\delta}^{2}\|\psi\|_{0}^{2}\] \[\leq K_{1}^{2}X^{2}(s-\tau-7\delta,0)\|\mathcal{M}\|_{R,s}^{\frac {6\delta}{\tau-\alpha\phi-3\delta}}\|\psi\|_{0}^{2}.\] Thus, we have \[\|\psi\|_{0}^{2}\leq K_{1}^{2}X^{2}(s-\tau-7\delta,0)\|\mathcal{M}\|_{R,s}^{ \frac{6\delta}{\tau-\alpha\phi-3\delta}}\|\psi\|_{0}^{2}\leq\frac{1}{2}\|\psi \|_{0}^{2}\] provided \(K_{1}^{2}X^{2}(s-\tau-7\delta,0)\|\mathcal{M}\|_{R,s}^{\frac{6\delta}{\tau- \alpha\phi-3\delta}}\leq\frac{1}{2}\). This shows \(\psi=0\) and thus the _completeness_ of \(\{\varphi_{\boldsymbol{n}}\}_{\boldsymbol{n}\in\mathbb{Z}^{d}}\). Finally, if \(\mathcal{M}\) and \(V\) are self-adjoint, then \(\mathcal{U}\) is unitary and \(\hat{V}^{*}=\hat{V}\). Therefore, by (5.2) and (5.1), we have for \(x\in\mathcal{Z}_{0}\) \[(H_{x})^{*} =T_{\mathcal{M}^{*}}(\bar{x})+T_{V^{*}}(\bar{x})=T_{\mathcal{M}} (x)+T_{V}(x)=H_{x},\] \[(U_{x})^{*}U_{x} =T_{\mathcal{U}^{*}}(\bar{x})T_{\mathcal{U}}(x)=T_{\mathcal{U}^{ *}\mathcal{U}}(x)=I,\] \[U_{x}(U_{x})^{*} =T_{\mathcal{U}}(x)T_{\mathcal{U}^{*}}(\bar{x})=T_{\mathcal{U} \mathcal{U}^{*}}(x)=I,\] which implies \(H_{x}\) is a self-adjoint operator and \(U_{x}\) is a unitary operator. 
Hence the spectrum of \(H_{x}\) is equal to that of \(T_{\hat{V}}(x)\). At this stage, we recall a result proven in [1]. **Lemma 5.2**.: _If \(g\in\mathscr{P}_{R}\), \(g^{*}=g\), then there is a unique \(x\in[0,1)\) such that the real poles of \(g\) are \(\{x+n:n\in\mathbb{Z}\}\). Moreover, \(g\) is strictly monotone in each interval \((x+n,x+n+1)\) and \(\mathbb{R}=\{g(x):x\in\mathbb{R}\}\)._ Since \(\{\boldsymbol{n}\cdot\boldsymbol{\omega}\mod 1:\ \boldsymbol{n}\in\mathbb{Z}^{d}\}\) is dense in \([0,1]\), we have \[\sigma(T_{\hat{V}}(x))=\mathbb{R},\] Consequently, we obtain \(\sigma(H_{x})=\mathbb{R}\) for \(x\in\mathcal{Z}_{0}\). This proves Theorem 1.2. ## 6. Proof of Theorem 1.3 In this section we prove the uniform dynamical localization by applying Theorem 1.1. Proof of Theorem 1.3.: From Theorem 1.1, there exist \(\mathcal{U}\in\mathscr{U}_{\frac{B}{2},s-\tau-7\delta}^{\boldsymbol{\omega}}\) and \(\hat{V}\in\mathscr{P}_{\frac{B}{2}}\) so that \[\mathcal{U}(V+\mathcal{M})\mathcal{U}^{-1} =\hat{V},\] \[\|V-\hat{V}\|_{\frac{B}{2},0} \leq K_{2}\|\mathcal{M}\|_{R,\alpha+3\delta},\] Moreover, for \(\forall\ x\in\mathcal{Z}_{0}\), \[U_{x}H_{x}U_{x}^{-1}=T_{\hat{V}}(x),\] where \(\hat{V}(x)\) is real valued. Thus for \(\forall\ \psi\in\ell_{q}^{2}(\mathbb{Z}^{d})\) and \(x\in\mathcal{Z}_{0}\), we have \[\|e^{-\sqrt{-1}tT_{\hat{V}}(x)}\psi\|_{q}^{2}=\sum_{\boldsymbol{n}\in\mathbb{Z }^{d}}|e^{-\sqrt{-1}t\hat{V}(x-\boldsymbol{n}\cdot\boldsymbol{\omega})}\psi( \boldsymbol{n})|^{2}\langle\boldsymbol{n}\rangle^{2q}=\|\psi\|_{q}^{2}.\] Recalling (5.3) and \(s-\tau-7\delta>q+\frac{d}{2}\), \(U_{x}\) is a bounded invertible operator on \(\ell_{q}^{2}(\mathbb{Z}^{d})\). So for \(\forall\ \psi\in\ell_{q}^{2}(\mathbb{Z}^{d})\) and \(x\in\mathcal{Z}_{0}\), we have \[\|e^{-\sqrt{-1}tH_{x}}\psi\|_{q}^{2} =\|U_{x}^{-1}e^{-\sqrt{-1}tT_{\hat{V}}(x)}U_{x}\psi\|_{q}^{2}\leq \|U_{x}^{-1}\|_{q}^{2}\|e^{-\sqrt{-1}tT_{\hat{V}}(x)}U_{x}\psi\|_{q}^{2}\] \[=\|U_{x}^{-1}\|_{q}^{2}\sum_{\boldsymbol{n}\in\mathbb{Z}^{d}}|e^{ -\sqrt{-1}t\hat{V}(x-\boldsymbol{n}\cdot\boldsymbol{\omega})}(U_{x}\psi)( \boldsymbol{n})|^{2}\langle\boldsymbol{n}\rangle^{2q}\] \[\leq\|U_{x}^{-1}\|_{q}^{2}\|U_{x}\psi\|_{q}^{2}\leq\|U_{x}^{-1} \|_{q}^{2}\|U_{x}\|_{q}^{2}\|\psi\|_{q}^{2}\] \[\leq X^{4}(s-\tau-7\delta,q)\|\mathcal{U}\|_{\frac{q}{2},s-\tau-7 \delta}^{4}\|\psi\|_{q}^{2}<\infty.\] This proves Theorem 1.3. ## 7. Proof of Theorem 1.4 In this section we prove Lipschitz continuity of the IDS by using Theorem 1.1. We first prove the invariance of IDS under unitary transformation that is also nearly identical. For any self-adjoint operator \(H\) defined on \(\ell^{2}(\mathbb{Z}^{d})\) and \(E\in\mathbb{R}\), let \[\kappa_{H}(E)=\lim_{L\to\infty}\frac{1}{(2L+1)^{d}}\mathrm{tr}(\chi_{L}\mathbb{ P}_{(-\infty,E]}(H)).\] We have **Lemma 7.1**.: _Let \(H\) be a self-adjoint operator on \(\ell^{2}(\mathbb{Z}^{d})\) and let \(UHU^{-1}=D\), where \(U\) is unitary and \(D\) is diagonal. Assume that the matrix elements \(u_{\boldsymbol{i}\boldsymbol{j}}=(\delta_{\boldsymbol{i}},U\delta_{ \boldsymbol{j}})\) of \(U\) satisfy_ \[|u_{\boldsymbol{i}\boldsymbol{j}}-\delta_{\boldsymbol{i}\boldsymbol{j}}|\leq c _{1}\langle\boldsymbol{i}-\boldsymbol{j}\rangle^{-r},\] _where \(r>d\) and \(c_{1}>0\). 
Then_ \[\kappa_{H}(E)=\kappa_{D}(E)\] _if one of them exists._ Proof.: By \(UHU^{-1}=D\), we have \[\mathrm{tr}(\chi_{L}\mathbb{P}_{(-\infty,E]}(H)) =\mathrm{tr}(\chi_{L}U^{-1}\mathbb{P}_{(-\infty,E]}(D)U)\] \[=\mathrm{tr}(U\chi_{L}U^{-1}\mathbb{P}_{(-\infty,E]}(D))\] \[=\mathrm{tr}(\chi_{L}\mathbb{P}_{(-\infty,E]}(D)+(U\chi_{L}-\chi_{ L}U)U^{-1}\mathbb{P}_{(-\infty,E]}(D)).\] Direct computations yield \[(U\chi_{L}-\chi_{L}U)_{\mathbf{ij}}=\left\{\begin{array}{cl}0&\text{if $|\mathbf{i}|>L$ and $|\mathbf{j}|>L$,}\\ 0&\text{if $|\mathbf{i}|\leq L$ and $|\mathbf{j}|\leq L$,}\\ -u_{\mathbf{ij}}&\text{if $|\mathbf{i}|\leq L$ and $|\mathbf{j}|>L$,}\\ u_{\mathbf{ij}}&\text{if $|\mathbf{i}|>L$ and $|\mathbf{j}|\leq L$.}\end{array}\right.\] Denote \((U^{-1}\mathbb{P}_{(-\infty,E]}(D))_{\mathbf{ij}}=c_{\mathbf{ij}}\). Then \(\sup_{\mathbf{i},\mathbf{j}\in\mathbb{Z}^{d}}|c_{\mathbf{ij}}|\leq c_{2}\) for some \(c_{2}>0\). Note also that \[|((U\chi_{L}-\chi_{L}U)U^{-1}\mathbb{P}_{(-\infty,E]}(D))_{\mathbf{ij}}|\leq\left\{ \begin{array}{cl}\sum\limits_{|\mathbf{k}|>L}|u_{\mathbf{ik}}c_{\mathbf{kj}}|&\text{if $|\mathbf{i}|\leq L$,}\\ \sum\limits_{|\mathbf{k}|\leq L}|u_{\mathbf{ik}}c_{\mathbf{kj}}|&\text{if $|\mathbf{i}|>L$.} \end{array}\right.\] As a result, the decay property of \(u_{\mathbf{ij}}\) allows us to obtain \[|\text{tr}((U\chi_{L}-\chi_{L}U)U^{-1}\mathbb{P}_{(-\infty,E]}(D))| \leq c_{2}\sum_{|\mathbf{i}|\leq L}\sum_{|\mathbf{k}|>L}|u_{\mathbf{ik}}|+c_{2 }\sum_{|\mathbf{i}|>L}\sum_{|\mathbf{k}|\leq L}|u_{\mathbf{ik}}|\] \[\leq 2c_{1}c_{2}\sum_{|\mathbf{i}|\leq L}\sum_{|\mathbf{k}|>L}\langle\mathbf{i }-\mathbf{k}\rangle^{-r}.\] To control the above sum, we have the following two cases. **Case 1.**\(|\mathbf{i}|<L-\sqrt{L}\). In this case \(\langle\mathbf{i}-\mathbf{k}\rangle\geq\sqrt{L}+1\) for \(|\mathbf{k}|>L\). Note that for any \(\mathbf{i}\in\mathbb{Z}^{d}\) and \(t\in\mathbb{N}\), \[\#\{\mathbf{k}\in\mathbb{Z}^{d}:\ \langle\mathbf{i}-\mathbf{k}\rangle=t,|\mathbf{k}|>L\}\leq(2t +1)^{d}-(2t-1)^{d}\leq 2d(3t)^{d-1}.\] Hence, for a fixed \(\mathbf{i}\in\mathbb{Z}^{d}\) with \(|\mathbf{i}|<L-\sqrt{L}\), we get \[\sum_{|\mathbf{k}|>L}\langle\mathbf{i}-\mathbf{k}\rangle^{-r} =\sum_{t\geq\sqrt{L}+1}\sum_{\langle\mathbf{i}-\mathbf{k}\rangle=t,|\mathbf{ k}|>L}\langle\mathbf{i}-\mathbf{k}\rangle^{-r}\] \[=\sum_{t\geq\sqrt{L}+1}\frac{\#\{\mathbf{k}\in\mathbb{Z}^{d}:\ \langle\mathbf{i}-\mathbf{k} \rangle=t,|\mathbf{k}|>L\}}{t^{r}}\] \[\leq\sum_{t\geq\sqrt{L}+1}\frac{2d3^{d-1}}{t^{r-d+1}}.\] Since \(r>d\) and fixing any \(\eta\in(0,r-d)\), we have \[\sum_{|\mathbf{i}|\leq L-\sqrt{L}}\sum_{|\mathbf{k}|>L}\langle\mathbf{i}-\mathbf{ k}\rangle^{-r} \leq\#\{\mathbf{i}\in\mathbb{Z}^{d}:\ |\mathbf{i}|\leq L\}\sum_{t\geq\sqrt{L}+1}\frac{2d3^{d-1}}{t^{t-d+1}}\] \[\leq\frac{(2L+1)^{d}}{\sqrt{L}^{\eta}}\sum_{t\geq\sqrt{L}+1}\frac {2d3^{d-1}}{t^{r-d-\eta+1}}\] \[\leq\frac{(2L+1)^{d}}{\sqrt{L}^{\eta}}\sum_{t=1}^{\infty}\frac{2d 3^{d-1}}{t^{r-d-\eta+1}}=O(L^{d-\eta/2}).\] **Case 2.**\(L-\sqrt{L}\leq|\mathbf{i}|\leq L\). 
In this case we have \[\sum_{|\mathbf{k}|>L}\langle\dot{\mathbf{i}}-\mathbf{k}\rangle^{-r} =\sum_{t=1}^{\infty}\sum_{(\mathbf{i}-\mathbf{k})=t,|\mathbf{k}|>L}\langle\dot{ \mathbf{i}}-\mathbf{k}\rangle^{-r}\] \[=\sum_{t=1}^{\infty}\frac{\#\{\mathbf{k}\in\mathbb{Z}^{d}:\ \langle\dot{\mathbf{i}}-\mathbf{k} \rangle=t,|\mathbf{k}|>L\}}{t^{r}}\] \[\leq\sum_{t=1}^{\infty}\frac{2d3^{d-1}}{t^{r-d+1}}.\] Thus, we obtain \[\sum_{L-\sqrt{L}\leq|\mathbf{i}|\leq L}\sum_{|\mathbf{k}|>L}\langle\dot{ \mathbf{i}}-\mathbf{k}\rangle^{-r} \leq\#\{\mathbf{i}\in\mathbb{Z}^{d}:\ L-\sqrt{L}\leq|\mathbf{i}|\leq L \}\sum_{t=1}^{\infty}\frac{2d3^{d-1}}{t^{t-d+1}}=O(L^{d-1/2}).\] So combining **Case 1** and **Case 2** implies \[\lim_{L\to\infty}\frac{1}{(2L+1)^{d}}\text{tr}((U\chi_{L}-\chi_{ L}U)U^{-1}\mathbb{P}_{(-\infty,E]}(D))=0,\] which concludes the proof of this lemma. We are ready to prove Theorem 1.4. Proof of Theorem 1.4.: From Theorem 1.1, there exist \(\mathcal{U}\in\mathscr{U}_{\frac{R}{2},s-\tau-7\delta}^{\mathbf{\omega}}\) and \(\hat{V}\in\mathscr{P}_{\frac{R}{2}}\) so that \[\mathcal{U}(V+\mathcal{M})\mathcal{U}^{-1} =\hat{V},\] \[U_{x}H_{x}U_{x}^{-1} =T_{\hat{V}}(x),\] where \(U_{x}\) is unitary and \(T_{\hat{V}}(x)\) is diagonal for \(x\in\mathcal{Z}_{0}\). Recalling (1.9), we obtain \[|(U_{x})_{\mathbf{i}\mathbf{j}}-\delta_{\mathbf{i}\mathbf{j}}| =|(\mathcal{U}-\mathbf{1})(x-\mathbf{i}\cdot\mathbf{\omega},\mathbf{j}-\mathbf{i})|\] \[\leq\|\mathcal{U}-\mathbf{1}\|_{\frac{R}{2},s-\tau-7\delta}\langle \dot{\mathbf{i}}-\mathbf{j}\rangle^{-s+\tau+7\delta}\] \[\leq K_{1}\|\mathcal{M}\|_{R,s}^{\frac{\delta\tau}{s-\alpha_{0}- 3\delta}}\langle\dot{\mathbf{i}}-\mathbf{j}\rangle^{-(s-\tau-7\delta)},\] where \(s-\tau-7\delta>d\). Denote by \(\kappa_{0}(E)\) the IDS of \(T_{\hat{V}}(x)\). From Lemma 7.1 and \(\mathbf{\omega}\in\text{DC}_{\tau,\gamma}\), it follows that \[\kappa(E)=\kappa_{0}(E)=\text{mes}\{\theta\in\mathbb{T}:\ \hat{V}( \theta)\leq E\}, \tag{7.1}\] where \(\text{mes}(\cdot)\) denotes the Lebesgue measure. Recalling Lemma 5.2 and \(\hat{V}^{*}=\hat{V}\), we assume \(\hat{V}\) is non-decreasing in \((0,1)\) without loss of generality. Then \(\hat{V}\) has an inverse function defined on \(\mathbb{R}\) which is denoted by \(\hat{V}^{-1}\). Since \(\hat{V}\in\mathscr{P}_{\frac{R}{2}}\) and (1.10), we have \[\inf_{x\in(0,1)}\frac{d\hat{V}(x)}{dx}\geq|\hat{V}|_{\frac{R}{2}} \geq\frac{1}{2}|V|_{R}>0,\] which combined with (7.1) implies for \(E_{1}<E_{2}\), \[|\kappa(E_{2})-\kappa(E_{1})| =\operatorname{mes}\{\theta\in\mathbb{T}:\ E_{1}<\hat{V}(\theta) \leq E_{2}\}\] \[=\operatorname{mes}\{\theta\in\mathbb{T}:\ \hat{V}^{-1}(E_{1})< \theta\leq\hat{V}^{-1}(E_{2})\}\] \[=\hat{V}^{-1}(E_{2})-\hat{V}^{-1}(E_{1})\] \[\leq\frac{2}{|V|_{R}}(E_{2}-E_{1}).\] This proves Theorem 1.4. ## Acknowledgments This work is partially supported by the NSF of China (No. 12271380). 
## Appendix A Proof of Lemma 2.1.: Recalling (1.5) and (3.6), we have for any \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\in\mathscr{U}_{R,s}^{\boldsymbol{\omega}}\), \[\|\mathcal{M}_{1}\mathcal{M}_{2}\|_{R,s}\leq \sup_{z\in D_{R}}\sum_{\boldsymbol{l}\in\mathbb{Z}^{d}}\sum_{ \boldsymbol{n}\in\mathbb{Z}^{d}}|\mathcal{M}_{1}(z,\boldsymbol{l})|| \mathcal{M}_{2}(z-\boldsymbol{l}\cdot\boldsymbol{\omega},\boldsymbol{n}- \boldsymbol{l})|\left(\langle\boldsymbol{l}\rangle+\langle\boldsymbol{n}- \boldsymbol{l}\rangle\right)^{s}\] \[\leq K(s)\sup_{z\in D_{R}}\sum_{\boldsymbol{l}\in\mathbb{Z}^{d}} \sum_{\boldsymbol{n}\in\mathbb{Z}^{d}}|\mathcal{M}_{1}(z,\boldsymbol{l})|| \mathcal{M}_{2}(z-\boldsymbol{l}\cdot\boldsymbol{\omega},\boldsymbol{n}- \boldsymbol{l})|\left(\langle\boldsymbol{l}\rangle^{s}+\langle\boldsymbol{n}- \boldsymbol{l}\rangle^{s}\right)\] \[\leq K(s)\left(\sup_{z\in D_{R}}\sum_{\boldsymbol{l}\in\mathbb{Z}^{ d}}|\mathcal{M}_{1}(z,\boldsymbol{l})||\langle\boldsymbol{l}\rangle^{s}\| \mathcal{M}_{2}\|_{R,0}+\sup_{z\in D_{R}}\sum_{\boldsymbol{l}\in\mathbb{Z}^{d} }|\mathcal{M}_{1}(z,\boldsymbol{l})|\|\mathcal{M}_{2}\|_{R,s}\right)\] \[\leq K(s)\left(\|\mathcal{M}_{1}\|_{R,s}\|\mathcal{M}_{2}\|_{R,0}+\| \mathcal{M}_{1}\|_{R,0}\|\mathcal{M}_{2}\|_{R,s}\right),\] which implies Lemma 2.1. Proof of Lemma 3.7.: Recalling (3.9), we have \[\|\mathcal{N}_{1}^{l_{1}}\cdots\mathcal{N}_{k}^{l_{k}}\|_{R^{\prime},s}\leq(K (s))^{l_{1}+\cdots+l_{k}-1}\sum_{m=1}^{k}l_{m}\left(\prod_{j\neq m}\|\mathcal{ N}_{j}\|_{R^{\prime},0}^{l_{j}}\right)\|\mathcal{N}_{m}\|_{R^{\prime},0}^{l_{m}-1} \|\mathcal{N}_{m}\|_{R^{\prime},s}.\] which implies \[\left\|e^{\mathcal{N}_{1}}\cdots e^{\mathcal{N}_{k}}-\boldsymbol{ 1}\right\|_{R^{\prime},s}\] \[= \left\|\sum_{l=1}^{\infty}\sum_{l_{1}+\cdots+l_{k}=l}\frac{ \mathcal{N}_{1}^{l_{1}}\cdots\mathcal{N}_{k}^{l_{k}}}{l_{1}!\cdots l_{k}!} \right\|_{R^{\prime},s}\] \[\leq \sum_{l=1}^{\infty}(K(s))^{l-1}\sum_{l_{1}+\cdots+l_{k}=l}\sum_{ m=1}^{k}\frac{l_{m}\left(\prod_{j\neq m}\|\mathcal{N}_{j}\|_{R^{\prime},0}^{l_{j}} \right)\|\mathcal{N}_{m}\|_{R^{\prime},0}^{l_{m}-1}\|\mathcal{N}_{m}\|_{R^{ \prime},s}}{l_{1}!\cdots l_{k}!}\] \[= \sum_{m=1}^{k}\sum_{l=1}^{\infty}\frac{(K(s))^{l-1}}{(l-1)!} \left(\sum_{j=1}^{k}\|\mathcal{N}_{j}\|_{R^{\prime},0}\right)^{l-1}\|\mathcal{ N}_{m}\|_{R^{\prime},s}\] \[\leq e^{K(s)\left(\sum\sum\limits_{m=1}^{k}\|\mathcal{N}_{m}\|_{R^{ \prime},0}\right)}\left(\sum_{m=1}^{k}\|\mathcal{N}_{m}\|_{R^{\prime},s} \right).\]
2301.06437
Do I Belong? Modeling Sense of Virtual Community Among Linux Kernel Contributors
The sense of belonging to a community is a basic human need that impacts an individual's behavior, long-term engagement, and job satisfaction, as revealed by research in disciplines such as psychology, healthcare, and education. Despite much research on how to retain developers in Open Source Software projects and other virtual, peer-production communities, there is a paucity of research investigating what might contribute to a sense of belonging in these communities. To that end, we develop a theoretical model that seeks to understand the link between OSS developer motives and a Sense of Virtual Community. We test the model with a dataset collected in the Linux Kernel developer community, using structural equation modeling techniques. Our results for this case study show that intrinsic motivations (social or hedonic motives) are positively associated with a sense of virtual community, but living in an authoritative country and being paid to contribute can reduce the sense of virtual community. Based on these results, we offer suggestions for open source projects to foster a sense of virtual community, with a view to retaining contributors and improving projects' sustainability.
Bianca Trinkenreich, Klaas-Jan Stol, Anita Sarma, Daniel M. German, Marco A. Gerosa, Igor Steinmacher
2023-01-16T13:56:28Z
http://arxiv.org/abs/2301.06437v3
# Do I Belong? Modeling Sense of Virtual Community Among Linux Kernel Contributors ###### Abstract The sense of belonging to a community is a basic human need that impacts an individual's behavior, long-term engagement, and job satisfaction, as revealed by research in disciplines such as psychology, healthcare, and education. Despite much research on how to retain developers in Open Source Software (OSS) projects and other virtual, peer-production communities, there is a paucity of research investigating what might contribute to a sense of belonging in these communities. To that end, we develop a theoretical model that seeks to understand the link between OSS developer motives and a Sense of Virtual Community (SVC). We test the model with a dataset collected in the Linux Kernel developer community (N=225), using structural equation modeling techniques. Our results for this case study show that intrinsic motivations (social or hedonic motives) are positively associated with a sense of virtual community, but living in an authoritative country and being paid to contribute can reduce the sense of virtual community. Based on these results, we offer suggestions for open source projects to foster a sense of virtual community, with a view to retaining contributors and improving projects' sustainability. sense of virtual community, belonging, open source, software developers, human factors, survey, PLS-SEM ## I Introduction The sustainability and long-term survival of Open Source Software (OSS) projects depend not only on attracting but, more crucially, retaining motivated developers [1]. The reasons behind a developer's decision to stay or leave an OSS project can depend on different intrinsic or extrinsic factors, including an individual's feelings of identity and belonging to the community [2]. Hagerty et al. defined a sense of belonging as _"the experience of personal involvement in a system or environment so that persons feel themselves to be an integral part of that system or environment"_[3]. The need to belong is a powerful, fundamental, and pervasive force that has multiple strong effects on emotional patterns and cognitive processes across all cultures and different types of people [4]. Maslow [5] positioned 'belonging' as a basic human need, and Hagerty et al. [6] posited that a sense of belonging represents a unique mental health concept. A sense of belonging is key to productivity, satisfaction, and engagement [4], and can help to avoid attrition [7]. In Science, Technology, Engineering, and Mathematics (STEM), a sense of belonging is strongly related to retention [8], especially for underrepresented groups [9]. The sense of belonging that members have towards others within a certain group is known as a _sense of community_[10]. The dimensions of a sense of community include feelings of membership and attachment to a group [11], a feeling that members matter to one another and to the group [12]. The concept of sense of virtual community (SVC) was developed by observing that virtual communities represent a new form of community, in which social relationships are predominantly forged in cyberspace [13]. Understanding SVC in OSS is relevant as it can influence the vitality and sustainability of a community [14, 15], and is linked to more satisfied, involved, and committed contributors [16]. 
Individuals who develop a psychological and relational contract with a community are supported by a state of being involved, rather than external factors such as earning something or climbing a career ladder and therefore tend to develop a deeper, reciprocal relationship with that community [10]. Since sustainability is a key concern for OSS projects, we must understand SVC in OSS communities. While several studies have investigated different motivations to contribute to OSS [17, 18, 19, 20, 21, 22], none have modeled how these factors can help or hinder in creating a sense of virtual community. Without a deeper understanding of how the different factors interplay to create a sense of community, strategies that aim to promote individual factors will likely be unsuccessful in creating a sustainable community. Understanding how different factors work together or against each other can help communities strategize how to retain their contributors. Therefore, in this paper, we ask the following research question: **Research Question:**_How does a sense of virtual community develop in Open Source Software projects?_ We answer our research question by first developing a theoretical model of SVC grounded in prior literature (Sec. III). We then evaluate our model through a sample (N=225) of Linux Kernel project contributors, using partial least squares structural equation modeling (PLS-SEM) (Sec. IV). The results of our analysis provide empirical support for part of our model, showing that _hedonism_ (motivation that aims to maximize pleasure and fun and minimize pain [23]) and _social motives_ (motivation that aims to maximize joint gains and others' gains [24]) have a positive association with a sense of virtual community, which can be weakened when contributors are _being paid_ or are surrounded by an authoritative culture, i.e., national culture with a high index of power distance (Sec. V). We conclude the paper by discussing the implications of our findings, and threats to validity (Sec. VI). ## II Background ### _Sense of Virtual Community_ While numerous definitions of the term 'community' exist, a common theme is that it involves human relationships based on some common characteristics [25]. The classical McMillan and Chavis [12] definition of 'Sense of Community' includes four characteristics: (1) feelings of membership (belonging to, and identifying with, the community), (2) feelings of influence (having an influence on, and being influenced by the community), (3) integration and fulfillment of needs (being supported by others in the community while also supporting them), and (4) shared emotional connection (relationships, shared history, and a'spirit' of the community). _Virtual_ communities typify a relatively new form of interaction whereby community members share information and knowledge in the virtual space for mutual learning, collaboration, or problem solving [13]. The development of OSS involves distributed problem solving within a virtual community [26]. Virtual communities are a particularly important type of virtual group because they are self-sustaining social systems in which members engage and connect with each other, developing a Sense of Virtual Community (SVC) [27]. The sense of community includes membership, identity, belonging, and attachment to a group that primarily interacts through electronic communication [28, 29, 11]. SVC has been tailored to virtual communities by deriving from McMillan's theory of sense of community [12]. 
The goal of measuring SVC is to assess the "community-ness" of a virtual community [11]. Community managers can assess and promote SVC to fulfill a core set of members' needs [30], so they feel they belong to a unique group. Such meaningful relationships are associated with increased satisfaction and communication with the virtual community, trust [31], and social capital in the project [32]. SVC has been shown to lead to an occupational commitment [33], and ultimately can help retain contributors and further attract potential newcomers [11, 34], who are critical to the sustainability of OSS projects [35]. ### _Motivations to Contribute to Open Source Software_ The software engineering literature suggests that, by managing developers' motivation and satisfaction, a software organization can achieve higher productivity levels and avoid turnover, budget overflows, and delivery delays [36]. Motivations for joining Open Source has been the topic of considerable research [17, 18, 19, 20, 21, 22]; motivations can be extrinsic or intrinsic. Extrinsic motivations are based on outside incentives that make people change their actions due to an external intervention [37]. As many companies, including Microsoft, Google, and IBM, hire or sponsor OSS contributors [38], career ambition and payment have become common extrinsic motivations [39]. However, intrinsic motivations also explain much of contributors' motivations [22], moving a person to act for fun or enjoy a challenge, kinship, altruistic reasons, or ideology, rather than in response to external pressures or rewards [40]. Previous research showed that several forces influence the decision of an OSS contributor to join, remain, or leave an OSS project [41, 42, 43]. Despite the extensive attention this topic has received, there are still no studies investigating how OSS contributors driven by different motivations develop a sense of virtual community. We argue that understanding how a sense of virtual community develops in OSS involves understanding the relationship between individual characteristics and motivations and the resulting community-related feelings. ## III Theory Development Feelings of belonging in an online community can be influenced by several individual characteristics and factors of the surrounding environment [44]. In the education literature, researchers [45, 46] found associations between students' sense of belonging and a range of motivational variables. Motivational factors can be regarded as expectations related to the interaction with a virtual community (answering _why_ users behave). Integration and fulfillment of needs refer to the idea that common needs, goals, and beliefs provide an integrative force for a cohesive community that can meet collective and individual needs. Thus, meeting members' needs is a primary function of a strong community [12]. Individuals who develop a psychological relationship contract with a community because it is focused on a state of being involved--rather than earning something or getting somewhere--tend to develop a sense of community [10]. Previous research on online communities also showed that individuals who are driven by _social motives_[47] tend to develop a sense of virtual community [28, 48]. Based on the Fundamental Social Motives Inventory, we included both kinship and altruism as social motives [47] and propose the following hypothesis: **Hypothesis 1 (H1).** Open Source contributors motivated by social reasons have a higher sense of virtual community. 
Most of the respondents in Gerosa et al.'s study (91%) agreed (or strongly agreed) that they contribute to OSS for entertainment (fun) [22]. Hedonic motivation is a type of motivation that aims to maximize pleasure and fun and minimize pain. It is an umbrella term that includes hedonic expectancy, perceived enjoyment, and playfulness [23]. Considering that expectations of enjoyable experiences, feelings of amusement, and being mentally or intellectually stimulated by interactions are associated with a sense of virtual community [13, 15], and that changes in the perceived fulfillment of their entertainment needs can determine the change of their sense of virtual community [30], we propose the following hypothesis: **Hypothesis 2 (H2).** Open Source contributors motivated by hedonic reasons have a higher sense of virtual community. It is known that some open source contributors have a strong ideological basis for their actions [49], believing, for example, that source code should be freely available. Recently, Gerosa et al.'s study showed that, however, ideology is not a popular motivation--especially for young contributors [22]. Historically, the group-based morality of 'fighting' a shared dominant opponent incites the sense of virtual community among contributors [50]. This feeling was quite common in the 1990s, when big corporations characterized Open Source as 'communism' [51] and Linux as a 'cancer' [52]. Besides ideology, we include reciprocity in moral motives, as it represents the moral desire of contributors who aim for social justice by giving back to the community [53]. According to the Social Identity theory [54], sharing a moral vision is positively associated with feelings of belonging. Moreover, a homogeneous ideology throughout a religion was shown as being positively associated with a sense of virtual community [55]. Hence, we posit that: **Hypothesis 3 (H3).** Open Source contributors motivated by moral reasons have a higher sense of virtual community. Motivations may not always be strong enough to sustain an OSS contributor's participation [56]. Motivations may vary for different groups of people, depending on contextual factors. This implies the existence of moderating factors that change the relationship between motivations and a sense of virtual community. Cognitive Evaluation Theory suggests that feelings of autonomy are positively associated with intrinsic motivations and belonging, while tangible rewards negatively affect intrinsic motivating factors [57]. We evaluated the role of a feeling of autonomy using the variable of _power distance_ from Hofstede's framework of Country Culture [58] as a proxy; a lower power distance would reflect in higher autonomy. We also evaluated the exposition to tangible rewards using the variable _is paid_. People in societies exhibiting a large degree of power distance tend to accept a hierarchical order [59]. In high power distance cultures (where a high power differential between individuals is accepted and considered normal), information flows are usually constrained by hierarchy [58]. As an important cultural value describing the acquiescent acceptance of authority, power distance has received increasing attention in many domains [60, 61]. Prior research showed that, when surrounded by cultures with a high degree of power distance, students reported a lower sense of belonging to their school [62]. 
Therefore, in hierarchical cultures, leaders need control over the information flow, and the desire to restrict autonomy and access to critical information by lower-level team members could lead to significant organizational barriers to sharing knowledge and working in a community [63]. Thus, we define the following moderation hypotheses: **Hypothesis 4a (H4a).** Power distance moderates the association between Open Source contributors' social motives and their sense of virtual community. **Hypothesis 4b (H4b).** Power distance moderates the association between Open Source contributors' hedonic motives and their sense of virtual community. **Hypothesis 4c (H4c).** Power distance moderates the association between Open Source contributors' moral motives and their sense of virtual community. The traditional notion that OSS developers are all volunteers is now long outdated; many OSS contributors are currently paid, usually employed by a company, to contribute [64, 39, 65]. Indeed, many Linux Kernel contributors are paid to make their contributions, compensated by firms that have business models relying on the Linux Kernel [66, 67, 68]. In contrast to traditional paid software development work, and despite its benefits to OSS contributors, introducing financial incentives in OSS communities creates complex feelings among OSS developers [69]. For example, developers on the Debian project expressed negative emotion because they felt payment went against the project's espoused values [70]. On the other hand, not receiving pay for work that supports their livelihoods can frustrate OSS developers and affect their contributions [69]. Despite compensation, OSS contributors may be driven towards a project simultaneously by feelings of belonging (intrinsic) and payment (extrinsic) [1, 39]. Nevertheless, there is no research examining the complex impact of receiving payment on intrinsic factors associated with SVC. As many OSS developers are currently paid, we would expect that the behavior of those who are paid and those who are not (volunteers) would diverge. Hence, we propose the following three moderating hypotheses: **Hypothesis 5a (H5a).** Being paid moderates the association between Open Source contributors' social motives and their sense of virtual community. **Hypothesis 5b (H5b).** Being paid moderates the association between Open Source contributors' hedonic motives and their sense of virtual community. **Hypothesis 5c (H5c).** Being paid moderates the association between Open Source contributors' moral motives and their sense of virtual community. ## IV Research Design The research design is summarized in Fig. 1. We conducted a survey among Linux Kernel contributors to evaluate our theoretical model. We studied one specific community to avoid confounding factors related to differences that each OSS community can pose. Introduced in 1991, the Linux Kernel represents one of the largest and most active OSS projects [71], boasting over ten million source lines of code and more than 20,000 contributors from different countries and cultural backgrounds, including volunteers and paid developers from more than 200 companies [72, 73]. The Linux Kernel's impact is perceived in terms of processes and infrastructure tools that emerged from the community [73].
While the Linux Kernel Mailing List is known for its uncivil comments and toxic discussions that tend to discourage people from joining the community [74], community leaders aim to change the project's image and increase the sense of community among members. We closely collaborated with contributors and maintainers of the Linux Foundation involved with Linux Kernel, who had a crucial role in designing the data collection instrument and reaching out to potential participants. They engaged in several meetings with the team and reviewed the items of the questionnaires to provide their feedback, making sure that the instrument was appropriate for the study goals. They also distributed the survey to the Linux kernel community, playing an essential role in recruiting the participants for this study. We used Partial Least Squares-Structural Equation Modeling (PLS-SEM) to analyze the relationships between motivations [75] and a sense of virtual community. SEM is a second-generation multivariate data analysis method; a recent survey (which also provides an introduction to the method) indicates that PLS-SEM has been used to study a variety of phenomena in software engineering [76]. SEM facilitates the simultaneous analysis of relationships among constructs, each measured by one or more indicator variables. The main advantage of SEM is being able to measure complex model relationships while accounting for measurement error when using latent variables (e.g., Sense of Virtual Community). PLS-SEM has previously been used in literature to evaluate factors that impact the sense of belonging in other contexts [77, 78]. In the following, we discuss the measurement model (i.e., operationalization of constructs), data collection, and analysis. ### _Measurement model_ The theoretical model comprising the hypotheses is based on a number of theoretical concepts; some of the concepts may be directly observed (e.g., 'is paid'), but others cannot (e.g., sense of virtual community)--these concepts are represented as _latent_ variables. A latent variable cannot be directly measured or observed but instead is measured through a set of indicators or manifest variables. For the latent variables in this study, we adapted existing measurement instruments. **Sense of Virtual Community**: We adapted items about feelings of a virtual community from Blanchard's [11] instrument of sense of virtual community to better fit with the context of OSS contributions. In collaboration with a group of Linux Kernel community managers, we analyzed the items proposed by Blanchard et al. [33] and decided to use a subset of questions to compose a shorter version of the instrument to cover the dimensions of SVC. The subset was discussed synchronously by researchers and managers, and the items were considered appropriate and meaningful to represent SVC to the Linux Kernel contributors. **Intrinsic Motivations**: We created items based on Gerosa et al. [22]'s instrument, which was built upon previous studies of motivations in OSS [18, 19, 20]. Following the community managers' request to make the questionnaire as short as possible, we grouped the intrinsic motivation factors from Gerosa et al.'s study [17] into three factors: 1. Social Motives (Kinship and Altruism) [47]; 2. Hedonic Motives (Joy and Fun) [23]; and 3. Moral motives (Ideology and Reciprocity) [53]. 
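To make the construct-to-indicator structure concrete, the sketch below lays out one possible organization of the latent variables introduced so far (SVC and the three grouped motivation factors). This is a minimal illustration under stated assumptions, not the study's actual instrument: the column names are hypothetical placeholders, and the unweighted means are only a rough inspection aid, whereas the PLS-SEM analysis estimates weighted composite scores for each latent variable.

```python
# Illustrative sketch only: hypothetical indicator columns for each latent
# construct; the real item wording is in the paper's online appendix.
import pandas as pd

measurement_model = {
    "SVC":     ["svc_1", "svc_2", "svc_3", "svc_4"],   # sense of virtual community (4 items)
    "Social":  ["mot_kinship", "mot_altruism"],        # social motives
    "Hedonic": ["mot_joy", "mot_fun"],                 # hedonic motives
    "Moral":   ["mot_ideology", "mot_reciprocity"],    # moral motives
}

def crude_construct_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Unweighted mean of each construct's 1-5 Likert indicators.

    Only a rough proxy for eyeballing the data; PLS-SEM estimates weighted
    composites rather than plain means.
    """
    return pd.DataFrame(
        {name: responses[items].mean(axis=1)
         for name, items in measurement_model.items()}
    )
```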
**English Confidence**: We reused four questions from a previous survey [79] about self-confidence in speaking and writing fluency during technical and non-technical interactions. **Power Distance:** We asked in which country the respondent lived and used the value per country proposed by Hofstede's framework [58] as the Power Distance dimension in the model. For the demographic questions, we adapted questions from surveys used in OSS communities to ask about tenure, self-identified gender, and compensation [80, 81]. Tenure was shown in 10-year slices in Table I for presentation purposes, but was included as a continuous variable in our analysis. We provided a dropdown list of years since 1991 (the year when the Linux Kernel was launched) for respondents to indicate the year they started contributing to the project. ### _Data Collection and Analysis_ We administered the online questionnaire using LimeSurvey, a leading Open Source survey software, to survey Linux Kernel contributors. We explored their motivations and their sense of virtual community. Our online appendix provides the instrument and replication package [82]. #### Iii-B1 Designing the instrument The questions were discussed during 12 online meetings between October 2020 and February 2021 in a group of five researchers experienced in OSS and survey studies and two members of the Linux community. The group discussed each of the questions until reaching a consensus. The questionnaire presented an informed consent form, followed by closed questions about the importance of each motivation factor as a reason to contribute to the Linux Kernel and questions about their feelings about the Linux Kernel community. Finally, we added demographic questions aiming to segment the analysis and understand the phenomenon considering the different dimensions of our participants, and an open question for additional comments. Investigating the forces that push people with different individual characteristics can help us better support a diverse community [22]. The questions included gender identity, financial compensation, the year they started contributing to the Linux Kernel, and country of residence. 
Fig. 1: Research Design and Phases for Results' Analysis 
#### Iii-B2 Piloting the questionnaire After designing the instrument, we piloted the questionnaire before distributing it to the population of interest. In the first round, our collaborators from the Linux Foundation recruited a group of Linux Kernel maintainers, who answered the questionnaire and provided feedback. Although the feedback was overall positive, maintainers suggested reverse-coding some items for the SVC construct, i.e., items worded as negative statements (a low score indicates agreement). Inverse, negative, or reverse-coded items can be defined as those having a directionality opposed to the logic of the construct being measured [83]. We agreed with the suggestion and inverted two of the four items, as this can help to mitigate acquiescence bias [84], which can occur when participants tend to agree with statements without regard for their actual content or due to laziness, indifference, or automatic accommodation to a response pattern [85]. The item _I feel at home in the group_ was changed to _I don't feel at home in the group_. We inverted and adapted the question _I feel that my contribution is valued_ to _I want to contribute more but I do not feel valued_. After the first pilot, we ran two more pilot sessions with two researchers who are Open Source contributors. 
We used the think-aloud method [86] and recorded their suggestions while answering the questions. We made minor changes to the questionnaire and increased font size for better readability on different devices. #### Iv-B3 Recruiting participants The Linux Foundation contributors who collaborated in this study took the lead in recruiting the participants from the Linux Kernel. They reached out to maintainers and contributors using mailing lists from the Linux Kernel community and interacted with maintainers to ask for engagement. Further, we presented the study motivation during the first day of the Linux Plumbers annual conference ([https://lpc.events/event/11/](https://lpc.events/event/11/)), inviting participants to answer the questionnaire. The survey was available between August 12 and September 21, 2021. #### Iv-B4 Sample Analysis We received 318 responses and carefully filtered the data to consider only valid responses. Respondents who did not complete the whole questionnaire were dropped (n=26). We also dropped the participants who answered "I'm not sure" to any of the items included with the five-point Likert scale for Motivations (n=16) and Sense of Virtual Community (n=51). In addition to the 5-point Likert scale, we included a 6th alternative ("I'm not sure") for participants who either preferred not to, or did not know how to, answer the question (to avoid forcing them), which is different from being neutral--based on the dissonance between ignorance and indifference [87, 88]. Therefore, we considered these responses as missing data. The efficacy of imputation methods has not yet been validated when using FIMIX-PLS; Sarstedt et al. [89] recommend removing all responses with missing values for any question before segmenting data into clusters. After applying these filters, we retained 225 valid responses from residents of five different continents with a broad tenure distribution. The majority identified as men (84.4%), from Europe (52.9%), and paid to contribute (65.4%), matching previously reported distributions of OSS contributors [81]. Table I presents a summary of the demographics. To establish an appropriate sample size, we conducted a power analysis using the free G*Power tool [90] (see online appendix for details). The maximum number of predictors in our model is six (three motivations and three control variables to SVC). This calculation indicated a minimum sample size of 62 and our sample of 225 exceeded that number considerably. We used the software package SmartPLS version 4 for the analyses. The analysis procedures for PLS-SEM comprise two main steps, with tests and procedures in each step. The first step is to evaluate the measurement model, which empirically assesses the relationships between the constructs and indicators (see Sec. V-A). The second step is to evaluate the theoretical (or structural) model that represents the hypotheses (see Sec. V-B). ## V Analysis and Results In this section, we describe our results, which include the evaluation of the measurement model (Sec. V-A), followed by the hypotheses evaluation in the structural model (Sec. V-B), both computed through our survey data. We assess the significance of our model by following the evaluation protocol proposed by previous research [76, 91] to make results consistent with our claims. The path weighting scheme was estimated using SmartPLS 4 [92]. Two tests are recommended to ensure that a dataset is suitable for factor analysis [93, 94]. 
We first conducted Bartlett's test of sphericity [93] on all constructs. We found a p-value \(<\).01 (p values less than.05 indicate that factor analysis may be useful). Second, we calculated the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy. Our result (.81) is well above the recommended threshold of.60 [94]. ### _Evaluation of the Measurement Model_ Some of the constructs in the theoretical model (see Fig. 2) are modeled as latent variables, i.e., measured by more than one observed variable (i.e., item/question on the survey). The first step in evaluating a structural equation model is to assess the soundness of the measurement of these latent variables--this is referred to as evaluating the'measurement model' [91]. We present the assessment of several criteria. #### Iv-A1 Convergent Validity First, we assess whether the questions (indicators) that represent each latent variable are understood by the respondents in the same way as they were intended by the designers of the questions [95], i.e., we assess the convergent validity of the measurement instrument. The assessment of convergent validity relates to the degree to which a measure correlates positively with alternative measures of the same construct. Our model contains two latent variables, both of which are reflective (not formative), as functions of the latent construct. Changes in the theoretical, latent construct are reflected in changes in the indicator variables [91]. We used two metrics to assess convergent validity: the Average Variance Extracted (AVE) and the loading of an indicator onto its construct (the outer loading). The AVE is equivalent to a construct's communality [91], which is the proportion of variance that is shared across indicators. A reflective construct is assumed to reflect (or "cause") any change in its indicators. The AVE should be at least.50, indicating that it explains most of the variation (i.e. 50% or more) in its indicators [91]. This variance is indicated by taking the squared value of an indicator's loading. As Table II shows, all AVE values for both latent constructs in our model are above this threshold of.50. A latent variable is measured by two or more indicators; indicators with loading below.4 should be removed because this implies that a change in the latent construct that it purportedly represents (or'reflects') does not get reflected in a sufficiently large change in the indicator [91]. Outer loading of.7 is widely considered sufficient, and.6 is considered sufficient for exploratory studies [91]. We followed an iterative process to evaluate the outer loading of the latent constructs; the indicators of the construct English Confidence all exceeded.7, but SVC had two indicators below.7. We removed the SVC indicator which had a loading below.4 (_svc6: a majority of the developers and I want the same thing_). After removing this indicator, the AVE value of SVC (now with five indicators) increased from.44 to.51 and all outer loadings were above.60. #### Iv-A2 Internal Consistency Reliability Second, we verified how well the different indicators are consistent with one another and can reliably and consistently measure the constructs, i.e., we assess the internal consistency reliability. A high degree of consistency means that the indicators refer to the same construct. There are several tests to measure internal consistency reliability. 
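For readers who wish to reproduce these measurement-model statistics outside of SmartPLS, the sketch below shows how Bartlett's test of sphericity, the KMO measure, the AVE, and the internal-consistency statistics discussed next (Cronbach's \(\alpha\) and composite reliability) can be computed in Python. The variable names (`items`, `loadings`) and the use of the `factor_analyzer` package are our own illustrative assumptions; the analysis reported in this paper was performed in SmartPLS 4.

```python
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical item responses for one reflective construct (e.g., five SVC items),
# one row per respondent, one column per indicator, coded 1-5.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(225, 5)),
                     columns=[f"svc{i}" for i in range(1, 6)])

# Suitability of the data for factor analysis.
chi2, p_value = calculate_bartlett_sphericity(items)  # p < .05 -> factor analysis useful
kmo_per_item, kmo_total = calculate_kmo(items)        # KMO > .60 -> adequate sampling

# Convergent validity: AVE is the mean of the squared (standardized) outer loadings.
loadings = np.array([0.71, 0.68, 0.75, 0.66, 0.73])   # illustrative outer loadings
ave = np.mean(loadings ** 2)                          # should be >= .50

# Internal consistency reliability.
def cronbach_alpha(df: pd.DataFrame) -> float:
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(lam: np.ndarray) -> float:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    errors = 1 - lam ** 2
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

print(f"Bartlett p={p_value:.3f}, KMO={kmo_total:.2f}, AVE={ave:.2f}, "
      f"alpha={cronbach_alpha(items):.2f}, CR={composite_reliability(loadings):.2f}")
```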
We performed both the Cronbach's \(\alpha\) and Composite Reliability tests; Cronbach's \(\alpha\) frequently shows lower values, whereas the Composite Reliability (CR) is a more liberal test, which sometimes overestimates the values [91]. A desirable range of values for both Cronbach's \(\alpha\) and CR is between .7 and .9 [91]. Values below .6 suggest a lack of internal consistency reliability, whereas values over .95 suggest that indicators are too similar and thus not desirable. The Cronbach's \(\alpha\) and CR values for both latent variables fell in the range .75-.95; only the CR for English Confidence was slightly over at .951. AVE values were both higher than .50. #### Iv-A3 Discriminant Validity Third, we verified whether each construct represents characteristics not measured by other constructs, i.e., we assessed the discriminant validity of the instrument (indicating the distinctiveness of the constructs). Our model includes two latent variables (SVC and English Confidence). A primary means to assess discriminant validity is to investigate the Heterotrait-Monotrait (HTMT) ratio of correlations, developed by Henseler et al. [96]. Discriminant validity could be considered problematic if the HTMT ratio exceeds .9 [96]; some scholars recommend a more conservative cut-off of .85 [91]. The HTMT ratio between the two latent constructs (SVC and English Confidence) was .24. We also assessed the cross-loadings of indicators and the Fornell-Larcker criterion. Items should only load onto their 'native' construct, the one they purportedly represent (Table III). For the sake of completeness, we report the Fornell-Larcker procedure in the online appendix [82]. ### _Evaluation of the Theoretical Model_ We now evaluate and discuss the theoretical model, which includes the evaluation of the hypotheses. #### Iv-B1 Assessing Collinearity Our theoretical model has three exogenous variables representing intrinsic motivations, the moderators 'Compensation' and 'Power Distance,' and the control variables 'English Confidence,' 'Gender,' and 'Tenure.' We hypothesized that the exogenous variables are associated with the endogenous variable Sense of Virtual Community. To ensure that the three exogenous constructs are independent, we calculated their collinearity using the Variance Inflation Factor (VIF). A widely accepted cut-off value for the VIF is 5 [91], and in our model, all VIF values are below 5. #### Iv-B2 Path Coefficients and Significance PLS does not make strong assumptions about the distribution of the data (such as a Normal distribution), so parametric tests of significance should not be used. To evaluate whether path coefficients are statistically significant, PLS packages employ a bootstrapping procedure. This involves drawing a large number (usually five thousand) of random subsamples with replacement. The replacement is needed to guarantee that all subsamples have the same number of observations as the original data set. The PLS path model is estimated for each subsample. From the resulting bootstrap distribution, a standard error can be determined [91], which can subsequently be used to make statistical inferences. The mean path coefficient determined by bootstrapping can differ slightly from the path coefficient calculated directly from the sample; this variability is captured in the standard error of the sampling distribution of the mean. 
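As a concrete illustration of this bootstrapping procedure, the sketch below estimates the standard error and percentile confidence interval of a standardized regression (path) coefficient by resampling respondents with replacement. It is a simplified stand-in for the full PLS-SEM bootstrap performed by SmartPLS, and the data and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 225  # sample size, as in our survey

# Hypothetical standardized scores for one exogenous construct (e.g., hedonic motives)
# and the endogenous construct (SVC); in the real analysis these are latent variable scores.
x = rng.normal(size=n)
y = 0.42 * x + rng.normal(scale=0.9, size=n)

def standardized_path(x, y):
    xs = (x - x.mean()) / x.std(ddof=1)
    ys = (y - y.mean()) / y.std(ddof=1)
    # With standardized variables, the OLS slope is the standardized path coefficient.
    return np.polyfit(xs, ys, deg=1)[0]

b_sample = standardized_path(x, y)

# Draw 5,000 bootstrap subsamples of the same size, with replacement.
boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)
    boot[i] = standardized_path(x[idx], y[idx])

se = boot.std(ddof=1)
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"B={boot.mean():.3f} (sample estimate {b_sample:.3f}), SD={se:.3f}, "
      f"95% CI=[{ci_low:.3f}, {ci_high:.3f}]")
```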
Table IV shows the results for our hypotheses, including the mean of the bootstrap distribution (\(B\)), the standard deviation (\(SD\)), the 95% confidence interval, and the p-values. The path coefficients in Fig. 2 and Table IV are interpreted as standardized regression coefficients, indicating the direct effect of one variable on another. Each hypothesis is represented by an arrow in the diagram in Fig. 2. For example, the arrow pointing from Hedonic Motives to SVC represents H2. Given its positive path coefficient (0.421), Hedonic Motives are positively associated with SVC. The path coefficient is 0.421; this means that when the score for "Hedonic" motives increases by one standard deviation unit, the score for "Sense of Virtual Community" increases by 0.421 standard deviation units (the standard deviation is the amount of variation of a set of values). Based on these results, we found support for Hypotheses H1 (p=.002), H2 (p=.000), H4a (p=.045), and H5b (p=.023). Hypothesis H3 was not supported, nor were H4b, H4c, H5a, or H5c (all p-values \(>\) .2). The three control variables all have significant associations with SVC: English confidence, gender, and tenure (p \(<\) .05). #### Iv-B3 Coefficient of Determination We assessed the relationship between constructs and the predictive capabilities of the model. The \(R^{2}\) value of the endogenous variable in our model (SVC) was 0.4, which is considered weak-moderate [91, 97]. We also inspected Stone-Geisser's \(Q^{2}\) [98] value, which is a measure of external validity, as an indicator of the model's predictive relevance [91]; it can be obtained through a so-called blindfolding procedure (available within the SmartPLS software). Blindfolding is a resampling technique that omits certain data, predicts the omitted data points, then uses the prediction error to cross-validate the model estimates [99]. \(Q^{2}\) values are calculated only for SVC, the reflective endogenous construct of our model, with a value of .17. Values larger than 0 indicate the construct has predictive relevance, while negative values show the model does not perform better than the simple average of the endogenous variable would do. The Standardized Root Mean Square Residual (SRMR) is a common fit measure that is appropriate to detect misspecification of PLS-SEM models [76]. SRMR is the square root of the sum of the squared differences between the model-implied and the empirical correlation matrix, or the Euclidean distance between the two matrices [100]. A value of 0 for SRMR would indicate a perfect fit, and a cut-off value of 0.08 is considered adequate [101]. Our results suggest a good fit of the empirical data in the theoretical model (SRMR = 0.06). 
Fig. 2: Item loadings and path coefficients (p \(<\) 0.05 indicated by a full line). Non-significant links are indicated with a dashed line. 
#### Iv-B4 Moderating Factors We examined our data to determine if the impact of each intrinsic motivation on a sense of virtual community would change when contributors are exposed to a high Power Distance culture or when they are financially compensated to contribute. Only results significant at the 0.05 level are reported, with confidence intervals calculated through bootstrapping. * Power Distance Country Culture: Being surrounded by a high power distance culture, in which leaders impose a high level of control and restrict the information flow [58], has been reported to negatively affect the sense of virtual community [63]. 
We did not find significant correlations between Power Distance and SVC for hedonic or moral motivations. Still, we found it for social motivations, which has a moderating effect on our model. Hence, we found support for H4a but do not support H4b and H4c. * Compensation: Being paid to contribute reduces the sense of virtual community of contributors driven by hedonic motivations but not by social motivations and neither by moral motivations. Hence, we found support for H5b but rejected H5a and H5c. Fig. 3 presents an interaction diagram showing the simple slopes for the relationship between the exogenous variable Social Motives and the endogenous variable SVC. All three slopes are positive, indicating a positive relationship; the top line (in green) is at +1 standard deviation of the moderator, Power distance; the bottom slope (in red) is at \(-1\) standard deviation of the moderator. The middle slope (in blue) represents the relationship at the mean level of Power distance. The figure shows that given a higher level of Power distance, the relationship between social motives and SVC is _dampened_ (flatter), whereas with lower levels of Power distance, the relationship is _strengthened_ (steeper). #### Iv-B5 Control Variables We also examined our data to determine if being part of gender minorities, tenure, or English Confidence could strengthen or weaken the sense of virtual community. We found that participants who identify as gender minorities tend to have a lower sense of virtual community, while participants with higher tenure and English Confidence reported a higher sense of virtual community. ### _Cluster Analysis: Detecting Unobserved Heterogeneity_ While moderators and context factors capture _observed_ heterogeneity (see Sec. V-B4), there may also be _unobserved heterogeneity_, or _latent classes_ of respondents, the presence of which could threaten the validity of results and conclusions [89]. Latent classes of respondents refer to some groupings of respondents on one or more criteria that were not measured. The hypothesis results may differ for different groups. We adopted Becker et al.'s approach [102], which jointly applies PLS-POS and FIMIX algorithms to identify latent classes. In Step 1, we used the minimum sample size for the maximum number of segments and ran FIMIX to find the optimal number of segments. In Step 2, we ran PLS-POS to compute the segmentation. In Step 3, we ran a multi-group analysis (PLS-MGA) and evaluated whether the segments were distinguishable. In Step 4, we checked if the resulting groups were plausible. We discuss the steps in more detail. In Step 1, we assessed the maximum number of segments according to the minimum sample size (see Sec. IV-B4). Dividing the sample size (225) by the minimum sample size (62) yields a theoretical upper bound of three segments; each segment should satisfy the minimum sample size. We ran FIMIX for one (meaning, treating the original sample as a single segment), two, and three segments [89]. The results were compared using several different retention criteria (see Table V) [89]. For each criterion, the optimal solution is the number of segments with the lowest value (in _italics_ in Table V), except in terms of criterion 'EN,' where higher values indicate a better separation of segments. Sarstedt et al. 
[103] argue that researchers should start the fit analysis by jointly considering the _combination_ of modified Akaike's Information Criterion with factor 3 (AIC3) and Consistent AIC (CAIC) (Group 1 in Table V): when _both_ criteria suggest the same number of segments, this result is likely to be most appropriate. As this is not the case here (AIC3 suggests 3 segments, CAIC suggests 1 segment), a second evaluation considers whether modified Akaike's Information Criterion with factor 4 (AIC4) and the Bayesian Information Criterion (BIC) suggest the same number of segments (Group 2 in Table V). Again, this is not the case (AIC4 suggests 3 segments, and BIC suggests 1 segment). The third evaluation (Group 3) considers the joint analysis of Akaike's Information Criterion (AIC) and Minimum Description Length with factor 5 (MDL5); first, consider the number of segments indicated by the lowest values of AIC (3 segments) and MDL5 (1 segment). The appropriate number of segments should be lower than suggested by AIC (because it tends to overestimate) and higher than the number of segments suggested by MDL5 (because it tends to underestimate). Hence, this combination suggests that a 2-segment solution is appropriate because 2 is lower than the 3 suggested by AIC and higher than the 1 suggested by MDL5. The value of EN is highest for the 2-segment solution. 
Fig. 3: Power distance as a moderator of Social motives \(\rightarrow\) SVC 
In Step 2, we evaluated the segment sizes of the 2-segment solution and the proportions of data to check whether groups were substantial or candidates for exclusion. A segment is not substantial if its size is considerably lower in proportion (e.g., a 2% segment size) or below the minimum sample size [102]. The 2-segment solution divided the dataset into groups with 158 (70.2%) and 67 (29.8%) observations; both are considerable portions and larger than the minimum sample size [102]. In Step 3, we ran a multi-group analysis (PLS-MGA) with parametric tests to verify whether the segments were distinguishable [102], i.e., whether the results were different for the two segments. We found significant differences in hypotheses H4b-c, H5a-c, and on the control variables Tenure and English Confidence (see Table VI); thus, we conclude that these two segments represent two different groups of respondents. Both groups presented \(R^{2}\), goodness-of-fit (GoF), and SRMR [89] values equal to or more favorable than those of the original model. The values of the path coefficients and the explained variance of the endogenous variable SVC are shown in Table VI, which presents the results for the two segments, as well as the original estimates (see column \(B\) in Table IV). In Step 4, we examined whether the groups were "plausible" [102] by explaining the different segments (highlighted in gray in Table VI) to label the segments. This labeling is somewhat speculative and not definitive, not dissimilar to the labeling of emergent factors in exploratory factor analysis. Given that for Segment 1 only Hedonic motives are significant, we posit that this segment represents _Hedonists_ (\(B\)=.31); for Segment 2, we find that Social motives are significant (\(B\)=.22), thus we label Segment 2 as _Socially Motivated_. We note that moral motives were not significant in the original analysis (see column 'Orig.'), but this path did become significant, with a negative coefficient (\(B\)=-0.23), for Segment 2. For hedonists (Seg. 1), tenure (\(B\)=.43) is positively associated with SVC. When social motives are associated with SVC (Seg. 
2), English Confidence positively affects SVC (\(B\)=.88). Both hedonists (\(B\)=-0.50) and socially motivated (\(B\)=-0.61) contributors have the association with SVC weakened when they are paid. Both groups showed that being part of a gender minority is associated with less SVC. ## VI Discussion In this study, we developed a theoretical model grounded in psychology literature to map the relationship between a Sense of Virtual Community and intrinsic motivations in OSS. The theoretical model includes a number of salient factors that have been shown to be important in belonging to an online community in general but not yet within the OSS domain. Over the past two decades, the nature of OSS communities (as a specific type of online community) has changed; traditionally, OSS was male-dominated and primarily volunteer-based, but being paid to contribute is now common, and increasingly we observe the participation of what we refer to as "minorities" in the broadest sense of the word, including women [104]. Our analysis highlights a number of key findings and implications; as we discuss these quantitative results, we bring exemplar quotes from the respondents' responses to the final open question of the survey to illustrate the discussion. **H1. Social motives \(\rightarrow\) SVC:** Social motives have a positive association with SVC. The intrinsic social motivations of kinship and altruism are positively associated with a sense of virtual community in OSS. This finding was corroborated by one of our respondents in the final open question, who associated SVC with social motivations: _"I did not fit in, in a big way. I was never able to create enough social capital to make networking effective, no matter who I tried to connect with."_ Another respondent mentioned _"not being able to relate to colleagues"_ and named their perceived lack of SVC as _"a sense of otherness that never goes away."_ However, the cluster analysis (Sec. V-C) indicated Segment 1 (which we labeled 'hedonic') to be non-significant, but Segment 2 (labeled 'social') is significant. We also found that for the 'socially motivated', English confidence is much more strongly related (\(B\)=.88 instead of .13) to SVC. This is intuitive because socially motivated people seek interaction, and English is the primary language within the Linux Kernel community. **H2. Hedonic Motives \(\rightarrow\) SVC:** Hedonic motives have a positive association with a Sense of Virtual Community. OSS communities should seek to prevent toxic and other types of undesirable behavior that might reduce contributors' enjoyment; communities could also consider setting clearer community codes of conduct [74, 105, 106]. The cluster analysis showed that when only hedonism (not social motives) is associated with SVC (Seg. 1), Tenure is also associated with SVC. Hedonically motivated contributors from our sample are also the ones for whom longer tenure is associated with SVC. Those contributors may have surpassed the initial barriers [107] and find enjoyment, or as mentioned by another respondent: _"It is therapeutic. When I feel bad about myself, [-] it calms me down emotionally to do Kernel development when I feel like that."_ **H3. Moral Motives \(\rightarrow\) SVC:** The cluster analysis does not support H3. While social motives are positively associated with SVC (Seg. 2), moral motives are negatively associated with and reduce SVC. The first association is expected and not surprising [47]. 
People motivated by kinship or because they are happy to help others are keener to be part of the team and feel good in a community [28, 48]. Interestingly, SVC presented a negative association with moral motivation. We argue that people motivated by ideological reasons may contribute regardless of how they feel about belonging there. They do it because they feel it is the right thing to do, either because it is the most ethical choice, as advocated by the Free Software Foundation ([https://www.sfsf.org/](https://www.sfsf.org/)), or because they have a moral debt to the software project that they use, so they pay it back [53]. Future research can investigate how strong the ties between these people and the community are and what roles they play in building SVC for the rest of the community. **H4a/b. Power Distance moderates the relationship between (a) Social and (b) Hedonic motives and SVC:** Being surrounded by a culture with a high level of power distance weakens SVC for socially motivated contributors (when we consider all contributors). Still, if we consider Segment 1 (Hedonic) in the cluster analysis, we observe that power distance also weakens the SVC associated with hedonism. These results align with Cognitive Evaluation Theory [57]; contributors driven by hedonic (Seg. 1) or social motives (Seg. 2) need more autonomy (through less hierarchy--less Power Distance) to develop a Sense of Virtual Community. When not exerted in toxic and harsh ways to discipline community members, concerted control of communications can also ultimately play a pro-social role in increasing the SVC by increasing cohesiveness, commitment, and conformity [108]. **H5a/b. Payment moderates the relationship between (a) Social and (b) Hedonic motives and SVC:** Being paid to contribute weakens the association with SVC for hedonically motivated contributors. The cluster analysis shows that being paid to contribute also weakens the SVC associated with social motives. Paid contributors, even those driven by hedonic or social motivations, showed a lower Sense of Virtual Community towards the Linux Kernel. This result aligns with Cognitive Evaluation Theory [57] and might be explained by the conflicting identities and divided loyalties that paid contributors have to both their sponsoring firms and the Linux Kernel community [39]. We hypothesize that these contributors would leave the community if there were no payment to compensate for their participation. **Implications for OSS communities to retain contributors** SVC is associated with practices of _exchanging support_ [15, 33], _creating identities and making identifications_ [33], _producing mutual cognitive and affective trust_ amongst members of a community [33], and establishing norms and a _"concertive (but not enforced) control"_ [108], in which members of the community become responsible for directing their work and monitoring themselves. OSS communities can provide not only online interest groups for members, chat rooms, instant messaging, and discussion forums to encourage community involvement [109], but also online tools with shared spaces for contributors to work "together" on issues to be able to discuss and collaborate on similar interests. Better interactions can strengthen contributors' Sense of Virtual Community, especially for those seeking social relationships. 
When the information being exchanged surpasses the technical content and includes socio-emotional support, it shows personal relationships among group members, and finally brings feelings of acceptance by members [33]. OSS communities should foster exchanging support among members to bring a positive impact on developing SVC [15]. The exchanging support includes technical and social support and happens through comments in pull requests and participation in mailing lists (by either reading or posting messages). Communities can manage pull requests and mailing lists to guarantee that members' posts are not being missed [74] and that the communication adheres to the code of conduct. **Implications for OSS communities to attract newcomers.** Exchanging information and providing support to other community members are practices associated with positive feelings toward the community, and members' stronger attachment to the community [110]. Community members can encourage newcomers to become more active and move beyond the stage of 'lurker,' enticing them to participate in mailing lists [15] and to start making social connections to establish mutual trust, be known by other contributors, and facilitate the development of their Sense of Virtual Community. Conferences and meetups can help hedonic and socially motivated contributors have fun and increase their social capital. **Implications for Research.** This study suggests a positive link between Social and Hedonic motivations and a Sense of Virtual Community. Further, the cluster analysis has detected unobserved heterogeneity within our sample, suggesting that there are different subgroups within the community for which different motivations play a more prominent role. Future work could explore how the challenges faced by contributors influence the development of a Sense of Virtual Community and how a Sense of Virtual Community influences the decisions to stay or leave a project. While we included three control variables, future work can consider additional variables, for example, demographic variables such as age. Our study focuses on the Linux Kernel community, which is a limitation to the generalizability of this study; we suggest that our findings provide a useful starting point to conduct similar studies across other specific communities or across OSS developers regardless of which community they partake in. When considering other projects, we also suggest that different project governance models might also play a role in SVC. This study has also demonstrated that payment plays a role in SVC and that minorities and marginalized individuals feel less part of the community. Finally, our study has focused on the antecedents of a sense of virtual community in OSS but not on the consequences of it, and this could be a further area of focus in future work. Future work can investigate whether SVC is related to contributor satisfaction and whether a reduced SVC leads to contributors' attrition, thus jeopardizing a community's sustainability. ## VII Limitations and Threats to Validity _Construct Validity._ We adopted and tailored existing measurement instruments for some constructs based on prior literature. Our analysis of the measurement model confirmed that our constructs were internally consistent and scored satisfactorily on convergent and discriminant validity tests. In this study we have used respondents' country of residence as a proxy for Power Distance as a dimension of culture as defined by Hofstede [59]. 
While also used in other studies [62], we acknowledge it is an approximation and not a perfect measure. One potential issue is that we do not know how _long_ respondents have lived in their current country of residence. Another potential issue is that the culture in which contributors grew up may differ from their current culture. This is why we report the metric as being surrounded by a specific culture instead of having a specific culture. Measuring culture in a more precise way is an important avenue for future work in general. _Internal Validity._ Our hypotheses propose associations between different constructs rather than causal relationships, as the present study is a cross-sectional sample study [111]. We acknowledge the limitation that our respondents comprise contributors who are more likely to have a sense of virtual community, as they dedicated their time to answering the questionnaire, suggesting a response bias. While it is clear that contributors motivated by some intrinsic-social reasons tend to experience a sense of virtual community and that power distance and financial compensation can influence those associations, a theoretical model such as ours cannot capture a complete and exhaustive list of factors. Other factors can play a role, and our results represent a starting point for future studies. _External Validity._ The Linux Kernel is a mature project that has attracted contributors for its value over the years, and while studied frequently and sometimes positioned as a 'quintessential' open source project, open source projects can vary in many ways. The specific context of the Linux Kernel project may therefore have impacted the results, which are not necessarily generalizable to other OSS projects. Nevertheless, theory-building is a continuous and iterative process of proposing, testing, and modifying theory [112], in which a single case study is a first step towards constructing theories. In that sense, it is more valuable to interpret these results as a starting point and seek theoretical generalizability rather than statistical generalizability. Further, given the very important role that the Linux Kernel project plays in the software industry, we argue this project is an appropriate starting point; further replication studies can validate and extend our theoretical model. Our survey was conducted online and anonymously, but the numbers are aligned with the overall distribution of the Linux Kernel contributors. The Linux Kernel includes contributions from more than 20,000 developers [73], most of whom are paid to contribute [66, 68]. According to previous research, around 10% of contributors to the Linux Kernel identify themselves as women [80], and the majority is from the USA, which is aligned with our sample. The responses were sufficiently consistent to find full or partial empirical support for four hypotheses. 
We found evidence that a subset of intrinsic motivations (social and hedonic motives) are positively associated with SVC; however, other extrinsic factors such as the country's culture and being paid to contribute can lessen SVC among contributors. Additionally, those with higher English confidence feel a higher sense of belonging in the community, and contributors who identify as part of a gender minority (non-men), tend to feel less of a sense of virtual community. Our results also show heterogeneity in our respondents, suggesting that there are different subgroups within the community for whom different motivations play a more prominent role. This suggests that a "one size fits all" approach would not work when designing interventions to create an inclusive, welcoming community. Our SVC model can help researchers and community design interventions by highlighting the different factors that interplay in creating a sense of virtual community in OSS for different subgroups of contributors. ## Acknowledgments We are indebted to Kate Stewart, Vice President of Dependable Embedded Systems at the Linux Foundation, and Shuah Khan, maintainer and contributor of the Linux Kernel, for their invaluable support and assistance in the survey design and distribution. We much appreciate the extensive efforts and time they spent in meetings and reviews, and their thoughtful input to the survey design, and reaching out to the Linux Kernel community members. We are deeply grateful to all respondents of this survey. The National Science Foundation partially supports this work under Grant Numbers 1900903, 1901031, 2236198, 2235601, and Science Foundation Ireland grants no. 13/RC/2094-P2 to Lero, the SFI Research Centre for Software, and no. 15/SIRG/3293.
2303.07150
Multi PILOT: Learned Feasible Multiple Acquisition Trajectories for Dynamic MRI
Dynamic Magnetic Resonance Imaging (MRI) is known to be a powerful and reliable technique for the dynamic imaging of internal organs and tissues, making it a leading diagnostic tool. A major difficulty in using MRI in this setting is the relatively long acquisition time (and, hence, increased cost) required for imaging in high spatio-temporal resolution, leading to the appearance of related motion artifacts and decrease in resolution. Compressed Sensing (CS) techniques have become a common tool to reduce MRI acquisition time by subsampling images in the k-space according to some acquisition trajectory. Several studies have particularly focused on applying deep learning techniques to learn these acquisition trajectories in order to attain better image reconstruction, rather than using some predefined set of trajectories. To the best of our knowledge, learning acquisition trajectories has been only explored in the context of static MRI. In this study, we consider acquisition trajectory learning in the dynamic imaging setting. We design an end-to-end pipeline for the joint optimization of multiple per-frame acquisition trajectories along with a reconstruction neural network, and demonstrate improved image reconstruction quality in shorter acquisition times. The code for reproducing all experiments is accessible at https://github.com/tamirshor7/MultiPILOT.
Tamir Shor, Tomer Weiss, Dor Noti, Alex Bronstein
2023-03-13T14:23:39Z
http://arxiv.org/abs/2303.07150v2
# Multi PILOT: Learned Feasible Multiple Acquisition Trajectories for Dynamic MRI ###### Abstract Dynamic Magnetic Resonance Imaging (MRI) is known to be a powerful and reliable technique for the dynamic imaging of internal organs and tissues, making it a leading diagnostic tool. A major difficulty in using MRI in this setting is the relatively long acquisition time (and, hence, increased cost) required for imaging in high spatio-temporal resolution, leading to the appearance of related motion artifacts and decrease in resolution. Compressed Sensing (CS) techniques have become a common tool to reduce MRI acquisition time by subsampling images in the \(k\)-space according to some acquisition trajectory. Several studies have particularly focused on applying deep learning techniques to learn these acquisition trajectories in order to attain better image reconstruction, rather than using some predefined set of trajectories. To the best of our knowledge, learning acquisition trajectories has been only explored in the context of static MRI. In this study, we consider acquisition trajectory learning in the dynamic imaging setting. We design an end-to-end pipeline for the joint optimization of multiple per-frame acquisition trajectories along with a reconstruction neural network, and demonstrate improved image reconstruction quality in shorter acquisition times. The code for reproducing all experiments is accessible at [https://github.com/tamirshor7/MultiPILOT](https://github.com/tamirshor7/MultiPILOT). Magnetic Resonance Imaging MRI MRI Magnetic Resonance Imaging MRI MRI ## 1 Introduction Magnetic Resonance Imaging (MRI) has become one of the most popular medical imaging techniques. It is often favored over other technologies due to its non-invasiveness, lack of harmful radiation, and excellent soft-tissue contrast. In particular, for some tasks, _dynamic_ MRI was shown to be substantially better applicable than static MRI. Such tasks include but are not limited to cardiac MRI, tissue motion, and cerebrospinal fluid (CSF) flow analysis. A major drawback of MRI, however, is that it requires relatively long scan times. This not only makes MRI scans expensive but also requires patients to remain still for long periods of time. Aside from causing discomfort, prolonged scanning is more susceptible to the appearance of imaging artifacts originating from the patient's movement. In the setting of dynamic MRI, reducing frame acquisition time directly increases the temporal resolution and reduces the in-frame motion artifact of the organ of interest (e.g., the heart). A popular approach for reducing scan time is Compressed Sensing (CS) techniques - these methods subsample the Fourier space (\(k\)-space) of the image according to some predefined trajectory. CS is usually used in a pipeline prior to applying some reconstruction logic for the recovery of information lost in subsampling and for filtering blurring and aliasing artifacts caused by violating the Nyquist sampling criterion (Zaitsev et al., 2015). 
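To make the idea of \(k\)-space undersampling concrete, the toy sketch below retrospectively subsamples the 2D Fourier transform of an image with a Cartesian row mask and performs a zero-filled reconstruction; the aliasing this produces is what the reconstruction logic must remove. This is only an illustration of the general CS setting with made-up data; the acquisition model used later in this paper is non-Cartesian and implemented with the NUFFT.

```python
import numpy as np

# A synthetic 2D "image" standing in for a single MRI frame.
rng = np.random.default_rng(0)
image = np.zeros((128, 128))
image[32:96, 48:80] = 1.0
image += 0.05 * rng.normal(size=image.shape)

# Full k-space (2D Fourier transform of the image).
kspace = np.fft.fftshift(np.fft.fft2(image))

# Cartesian undersampling mask: keep every 4th phase-encoding line plus a fully
# sampled low-frequency band (a common compressed-sensing pattern).
mask = np.zeros_like(image, dtype=bool)
mask[::4, :] = True
mask[60:68, :] = True
undersampled = kspace * mask
acceleration = image.size / mask.sum()

# Zero-filled reconstruction: inverse FFT of the masked k-space.
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
mse = np.mean((zero_filled - image) ** 2)
print(f"acceleration ~{acceleration:.1f}x, zero-filled MSE = {mse:.4f}")
```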
**Previous work.** Following recent years' developments in deep learning and its applicability to inverse problem solving (Ongie et al., 2020), many recent studies opted for fixing some predefined handcrafted acquisition trajectory (e.g., Cartesian, Radial, Golden Angle; henceforth collectively referred to as _fixed trajectories_ in this paper) and focusing on developing deep learning models for denoising and restoring the image data lost in undersampling (Hammernik et al., 2018; Hyun et al., 2018), or on performing super-resolution reconstruction (Chen et al., 2020; Masutani et al., 2020). Extensive research has also been devoted to designing good handcrafted acquisition trajectories, both in the context of static (Larson et al., 2007; Yiasemis et al., 2023) and dynamic (Utzschneider et al., 2021; Bliesener et al., 2020) MRI. Despite its crucial impact on the resulting image, _learning_ the acquisition trajectories within the \(k\)-space has so far been studied to a much lesser extent. While trajectory optimization can be performed over a set of Cartesian subsampling schemes (Weiss et al., 2020; Bahadir et al., 2020), recent research unveiled the potential of optimizing over more general, non-Cartesian acquisition trajectories (Alush-Aben et al., 2020; Weiss et al., 2021; Wang et al., 2021; Chaithya et al., 2022). The latter case is considered more complex, as the optimization procedure must impose hardware-dictated kinematic feasibility constraints that every sampling trajectory must satisfy. Without constraining the optimization, trajectories could violate these requirements and be unrealizable on real MRI machines. Our work focuses on expanding PILOT, an end-to-end framework for the joint optimization of physically feasible \(k\)-space acquisition trajectories and an image reconstruction neural network, previously introduced by Weiss et al. (2021) in the static MR imaging setting. We extend this framework to the _dynamic_ MRI setting. While one can naively adapt PILOT for dynamic MRI image reconstruction by using a single learned trajectory across multiple consecutive frames, learning distinct trajectories across the frames holds the potential for improved image reconstruction, which is exploited and demonstrated in the present study. From this perspective, dynamic MRI differs from its static counterpart in two important aspects. Firstly, in dynamic MRI, each data sample consists of some integer number \(n\) of frames. This implies generalizing the trajectory learning problem to learning \(n\) independent trajectories. As we later show, jointly learning \(n\) feasible trajectories along with a reconstruction network is a harder optimization problem that requires non-trivial extensions of the initial pipeline and training regime presented in PILOT. Secondly, dynamic MRI samples images of the same organ across time, resulting in high cross-frame redundancy, which we exploit for more efficient sampling. **Contributions.** This paper makes the following contributions: 1. We present Multi-PILOT, an end-to-end pipeline for the joint optimization of multiple per-frame feasible acquisition trajectories along with a reconstruction model capable of taking cross-frame data redundancy into consideration. We demonstrate Multi-PILOT's ability to achieve superior cross-frame image reconstruction results compared to those of PILOT (which learns a single trajectory shared by all frames) and to those of constant trajectory-based reconstruction. 
Our improvement is shown to be expressed both in sampling time and in reconstruction quality. 2. We present two trajectory learning-related training techniques that we refer to as _trajectory freezing_ and _reconstruction resets_. While we demonstrate the contribution of these methods to reconstruction results for static and dynamic MRI using only our pipeline, these techniques are generalizable to other joint sampling-reconstruction optimization tasks. 3. Our work demonstrates the intricacy of jointly learning the acquisition trajectories and the reconstruction network. We present quantitative evidence for the shortcomings of 'naive' learning of independent per-frame trajectories without incorporating any of the additional considerations proposed in this paper. ## 2 Methods We adopt the approach employed by (Weiss et al., 2021) in PILOT: a pipeline consisting of a subsampling layer simulating the data acquisition, a regridding layer creating the subsampled image on a Cartesian grid, and a reconstruction layer for recovering the subsampled image. The subsampling and regridding layers are parametrized by the \(k\)-space acquisition trajectory coordinates, which are jointly learned with the reconstruction parameters in order to find the optimal trajectory. In the remainder of this section, we present how each layer is used within our end-to-end pipeline for multi-trajectory learning and detail the training regime. 
Figure 1: **Multi-PILOT Pipeline.** Fully sampled frames \(\mathbf{Z}\) are fed into our model, subsampled based on our trajectories, and reconstructed. 
### Subsampling layer Given a data sample composed of \(n\) fully sampled frames, \(\mathbf{Z}=[\mathbf{Z}_{1},\mathbf{Z}_{2}...\mathbf{Z}_{n}]\), this layer returns a set \(\tilde{\mathbf{X}}=\hat{\mathcal{F}}_{\mathbf{K}}(\mathbf{Z})\) of the \(n\) subsampled frames in the frequency domain. We note that treating the input as a discrete set of frames is a simplifying assumption, more applicable to the case of prospectively gated MRI acquisition. To perform subsampling, the layer maintains a representation of the acquisition trajectory \(\mathbf{K}\in\mathbb{R}^{N_{\mathrm{frames}}\times N_{\mathrm{shots}}\times m}\) as learnable parameters, where \(N_{\mathrm{frames}}\) is the number of frames per data sample (along the temporal dimension), \(N_{\text{shots}}\) is the number of RF excitations, and \(m\) is the number of sampling points within each shot. For each frame, we use the non-uniform FFT (NUFFT) algorithm (Dutt and Rokhlin, 1993) to obtain the subsampled image in the frequency domain at the non-Cartesian locations. To impose machine-related kinematic constraints for each sampling trajectory, we employ the projection algorithm proposed by (Chauffert et al., 2016; Radhakrishna and Ciuciu, 2023). ### Regridding layer We use the adjoint NUFFT (Dutt and Rokhlin, 1993) to transform our subsampled \(k\)-space data points into \(n\) subsampled frames in the image domain, \(\tilde{\mathbf{Z}}=\hat{\mathcal{F}}_{\mathbf{K}}^{*}(\tilde{\mathbf{X}})\). 
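The sketch below illustrates how per-frame trajectories can be held as learnable parameters and used for differentiable subsampling and regridding. For brevity it replaces the NUFFT with an explicit (and much slower) non-uniform DFT and omits the kinematic-feasibility projection; the class and variable names are ours and do not correspond to the released Multi-PILOT code.

```python
import math
import torch
import torch.nn as nn

class PerFrameTrajectorySampler(nn.Module):
    """Differentiable per-frame k-space sampling and regridding.

    The trajectory is a learnable tensor of shape (n_frames, n_points, 2) holding
    (kx, ky) coordinates in radians. The forward/adjoint operators are written as
    explicit non-uniform DFTs, O(N*M), purely for illustration; the paper uses the
    much faster NUFFT and projects the trajectory onto the feasible set.
    """

    def __init__(self, n_frames: int, n_points: int, im_size: int):
        super().__init__()
        self.traj = nn.Parameter(2 * math.pi * (torch.rand(n_frames, n_points, 2) - 0.5))
        # Pixel coordinate grid, flattened to (im_size*im_size, 2).
        ys, xs = torch.meshgrid(torch.arange(im_size), torch.arange(im_size), indexing="ij")
        grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2).float()
        self.register_buffer("grid", grid)

    def _dft_matrix(self, frame_idx: int) -> torch.Tensor:
        # (n_points, n_pixels) matrix of exp(-i k . r) for this frame's trajectory.
        phase = self.traj[frame_idx] @ self.grid.T
        return torch.exp(torch.complex(torch.zeros_like(phase), -phase))

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        """frames: complex tensor (n_frames, H, W) -> regridded images (n_frames, H, W)."""
        n_frames, h, w = frames.shape
        out = []
        for t in range(n_frames):
            a = self._dft_matrix(t)                  # sampling operator for frame t
            kdata = a @ frames[t].reshape(-1)        # subsampling: non-Cartesian k-space samples
            img = (a.conj().T @ kdata) / a.shape[1]  # adjoint (regridding) back to the image domain
            out.append(img.reshape(h, w))
        return torch.stack(out)

# Example: 8 frames, 512 samples per frame; gradients flow to the per-frame trajectories.
sampler = PerFrameTrajectorySampler(n_frames=8, n_points=512, im_size=32)
frames = torch.randn(8, 32, 32, dtype=torch.complex64)
regridded = sampler(frames)
loss = (regridded - frames).abs().pow(2).mean()
loss.backward()
print(sampler.traj.grad.shape)  # torch.Size([8, 512, 2])
```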
### Reconstruction model The goal of the reconstruction model is, given the downsampled frames \(\tilde{\mathbf{Z}}\) in the image domain, to output a set of reconstructed frames \(\hat{\mathbf{Z}}=R_{\boldsymbol{\theta}}(\tilde{\mathbf{Z}})\), where \(R_{\boldsymbol{\theta}}\) is the reconstruction network parametrized with learnable weights \(\boldsymbol{\theta}\). Note that the network reconstructs a sequence of \(n\) frames at once (collectively denoted as \(\hat{\mathbf{Z}}\)) given \(n\) corresponding inputs (collectively denoted as \(\tilde{\mathbf{Z}}\)). To embody \(R_{\boldsymbol{\theta}}\), we use the ACNN model proposed by (Du et al., 2021). ACNN is essentially a U-Net (Ronneberger et al., 2015) model with attention and batch normalization layers applied throughout the pipeline, aimed at learning the optimal \(k\)-space interpolation for recovering data lost in undersampling the frequency domain. As mentioned above, an important aspect of dynamic MRI reconstruction is the opportunity to utilize data redundancy across different frames. ACNN addresses this need by learning the \(k\)-space interpolation for each frame based on several adjacent frames. Although ACNN was initially presented as a model for the undersampled reconstruction of static 3D MRI samples, in our work we adapt ACNN to reconstruct temporal sequences of two-dimensional images. It is important to emphasize that the principal focus of this work is not a specific reconstruction model; the proposed algorithm can be used with any differentiable model. ### Training regime The training of the trajectories and the reconstruction model is performed by solving the following optimization problem \[\min_{\mathbf{K},\boldsymbol{\theta}}\,\sum_{i}\mathcal{L}(R_{\boldsymbol{\theta}}(\hat{\mathcal{F}}_{\mathbf{K}}^{*}(\hat{\mathcal{F}}_{\mathbf{K}}(\mathbf{Z}_{i}))),\mathbf{Z}_{i}), \tag{1}\] where the loss \(\mathcal{L}\) (MSE in our experiments) is summed over a training set of fully sampled sequences \(\mathbf{Z}_{i}\), each comprising \(n\) frames. We emphasize that our goal was not to find an optimal loss function, having in mind that the proposed algorithm can be used with more complicated, possibly task-specific, loss functions. The principal goal of training in such a multi-frame setting is to exploit the similarities across frames in order to achieve subsampling and reconstruction results superior to those of using a single subsampling trajectory shared across all frames. As we later show in Section 3, naively feeding sequences of frames through our pipeline fails to achieve that. We believe this is due to the increased complexity of jointly optimizing a set of independent trajectories along with a reconstruction network. We found that the two training techniques described in the sequel were particularly beneficial in overcoming this difficulty. **Reconstruction resets.** Jointly optimizing acquisition trajectories and the reconstruction model is a complicated optimization task, even within the setting of static MRI. Each optimization step over one component also induces an update in the other. This means, for example, that the current state of the reconstruction model is inherently dependent not only on the current subsampling trajectories but also, possibly, on much earlier states of the subsampling model. This dependency is not desired, as we mainly want the reconstruction model to perform best with the recent states of the acquisition model. Furthermore, this joint optimization is more susceptible to local minima. For this reason, in Multi-PILOT, we chose to reset the weights of our reconstruction model every \(c\) training epochs, with \(c\) being a hyper-parameter kept fixed throughout the training procedure. In Section 3, we show that this method is not only beneficial in the dynamic setting but also improves results in the static case. 
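A schematic version of the training procedure of Eq. (1) with periodic reconstruction resets (and the trajectory freezing schedule described next) could look as follows. It reuses the hypothetical `PerFrameTrajectorySampler` from the earlier sketch, stands in a tiny convolutional network for ACNN, and is meant only to show where the resets and the per-frame freezing enter the loop; it is not the released training code.

```python
import torch
import torch.nn as nn

def make_recon_model() -> nn.Module:
    # Tiny stand-in for the ACNN reconstruction network; real and imaginary parts as two channels.
    return nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 2, 3, padding=1))

def train(sampler, loader, n_epochs=315, reset_every=35, freeze_epochs=35, freeze=True):
    recon = make_recon_model()
    opt_traj = torch.optim.Adam(sampler.parameters(), lr=0.2)
    opt_recon = torch.optim.Adam(recon.parameters(), lr=1e-4)
    n_frames = sampler.traj.shape[0]
    for epoch in range(n_epochs):
        # Reconstruction resets: re-initialize the reconstruction network every `reset_every` epochs.
        if epoch > 0 and epoch % reset_every == 0:
            recon = make_recon_model()
            opt_recon = torch.optim.Adam(recon.parameters(), lr=1e-4)
        # Trajectory freezing: optimize one frame's trajectory at a time (chronological order),
        # then fine-tune all trajectories jointly during the remaining epochs.
        stage = epoch // freeze_epochs
        active = stage if (freeze and stage < n_frames) else None  # None -> joint fine-tuning
        for z in loader:                         # z: complex tensor (n_frames, H, W), fully sampled
            tilde_z = sampler(z)                 # subsample and regrid
            inp = torch.stack([tilde_z.real, tilde_z.imag], dim=1)
            target = torch.stack([z.real, z.imag], dim=1)
            loss = nn.functional.mse_loss(recon(inp), target)  # MSE loss of Eq. (1)
            opt_traj.zero_grad()
            opt_recon.zero_grad()
            loss.backward()
            if active is not None:               # zero the gradients of all frozen trajectories
                with torch.no_grad():
                    mask = torch.zeros_like(sampler.traj.grad)
                    mask[active] = 1.0
                    sampler.traj.grad *= mask
            opt_traj.step()
            opt_recon.step()
    return sampler, recon
```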
We believe that a similar technique can be applied in various joint sampling-reconstruction optimization efforts. Trajectory freezingAs previously stated, a key goal in multi-trajectory learning is to utilize cross-frame data redundancy and adjust learned trajectories to capture the unique features required for each frame. We observed that applying the optimization step over all trajectories at once complicates the task, as each trajectory is optimized under constant variations of the data acquired for neighboring frames. As a remedy, given some set of frames \(\mathbf{Z}=[\mathbf{Z}_{1},\mathbf{Z}_{2}...\mathbf{Z}_{n}]\), we propose to optimize each trajectory within our set of frames separately. Every trajectory is only optimized for some given number of epochs, and during that time all other trajectories are fixed. After optimizing each trajectory separately, we jointly fine-tune all trajectories for several epochs. The exact details of trajectory freezing method are further explicated in Appendix B. In our experiments, we restricted the freezing schedule to chronological order, deferring the investigation of the optimal optimization order for future work. ## 3 Experimental evaluation ### Dataset We used the OCMR dataset (Chen et al., 2020), containing a total of 265 anonymized cardiovascular MRI (CMR) scans, both fully sampled and undersampled. Each sample consists of a set of a sequence of \(384\times 144\) 2D images with a variable number of frames. The data was acquired using Siemens MAGNETOM's Prisma, Avanto and Sola scanners. From this dataset, we only included a total of 62 scans containing fully-sampled multi-coil data. Given the small amount of fully-sampled data, we augmented the data using vertical and horizontal flips, image re-scaling, and modulation of the frames with Gaussian masks to highlight varying image regions. Each of the augmentation operations was applied independently at random with a probability of 0.4 in each sample. Finally, we created our training, test, and evaluation samples by splitting all of the available videos into units of 8 frames. This was done to allow training on a larger set of data samples. We note that inference with a higher number of frames currently requires retraining the model. We aim to improve this in future work. After augmentation and splitting, 4170 samples were obtained. 80% of the data were allocated for training, 17.5% for testing, and 2.5% for the validation set. ### Training setting All experiments were run on a single Nvidia RTX A4000 GPU. Optimization in all experiments was done using the Adam optimizer (Kingma and Ba, 2014). For the reconstruction model, we applied dropout with a probability of 0.1 and used an initial learning rate of \(10^{-4}\) with a decay of \(5\times 10^{-3}\) every 30 epochs. For trajectory learning, we used an initial learning rate of 0.2, decaying by a factor of 0.7 every 3 epochs. When applying reconstruction resets, we reset the reconstruction model every 35 epochs. When applying trajectory freezing, we optimized each trajectory for 35 epochs. When neither was applied, we executed each experiment for 315 epochs, so that all of our experiments ran for a total of 315 epochs (with or without resets and freezing). In all experiments batches of 12 samples each containing 8 \(384\times 144\) frames. Using this setting, our GPU memory consumption is up to 9.2GB, single epoch training time is around 13 minutes. 
Machine physical constraints for all experiments were \(G_{\max}=40\) mT/m for the peak gradient, \(S_{\max}=200\) T/m/s for the maximum slew rate, and \(dt=10\,\mu\)s for the sampling time. ### Quality metrics For quantitative evaluation, we used the peak signal-to-noise ratio (PSNR), visual information fidelity (VIF) (Sheikh and Bovik, 2006), and the feature similarity index (FSIM) (Zhang et al., 2011). PSNR measures pixel-wise similarity between images. VIF relies on statistical attributes expected to be shared between the target and the reconstructed image. FSIM compares images based on phase congruency and gradient magnitude, which are known to be related to dominant features in the human visual system. According to the study of Mason et al. (2019), VIF and FSIM are the metrics best correlated with the image quality scores assigned to a set of reconstructed MR images by expert radiologists. In spite of its popularity in other imaging and vision tasks, we chose not to include the structural similarity index measure (SSIM) among our metrics. This is because recent evaluations (Pambrun and Noumeir, 2015; Mason et al., 2019), as well as our own findings, point to SSIM's inability to credibly represent similarity in medical imaging tasks. ### Reconstruction results In this section, we provide an ablation study (Table 1) showing that Multi-PILOT achieves superior per-frame image reconstruction compared to two baselines: 1. Golden Angle rotated stack of stars (GAR) \(k\)-space acquisition (Zhou et al., 2017) - a fixed (non-learned) \(k\)-space subsampling strategy that, according to the study of Bliesener et al. (2020), provides state-of-the-art results in non-Cartesian dynamic MRI subsampling; and 2. PILOT (with a single trajectory applied to the acquisition of all frames), which serves as the trajectory-learning baseline that most resembles ours, only without our proposed adaptations to the dynamic case. We additionally conduct an ablation study to assess the contribution of reconstruction resets and trajectory freezing. In all of our experiments that include PILOT, we used the projection algorithm proposed by Chauffert et al. (2016) to impose kinematic constraints, despite the fact that the original PILOT algorithm was penalty-based. This choice is dictated by the improved performance of the projection-based version of PILOT, and also provides better grounds for comparison with Multi-PILOT, which also utilizes the projection algorithm. In all experiments other than GAR, radial trajectory initialization was used. Golden Angle initialization was also attempted; however, it empirically led to sub-optimal results with our method. Table 1 summarizes the performance of the compared reconstruction algorithms. The following conclusions are evident from the table. Firstly, the performance of the proposed method, trained with trajectory freezing and reconstruction resets, surpasses both of our baselines according to all evaluated metrics. Our method achieves a 2 dB PSNR improvement, a 0.05 point VIF score improvement, and a 0.03 point FSIM score improvement compared to those achieved by our baselines. This suggests that independent per-frame trajectory learning achieves better reconstruction quality in dynamic MRI using our pipeline. Secondly, the evaluation demonstrates the advantage of using reconstruction resets for both single- and multi-trajectory learning. For single trajectory learning, we observed a 0.85 dB PSNR improvement. 
For multi-trajectory learning, the improvement exceeded 3.12 dB and 5.28 dB with and without trajectory freezing, respectively. Similar trends are manifested in the VIF and FSIM scores. The incorporation of trajectory freezing increased the metrics by 1.54 dB PSNR, 0.02 VIF points, and 0.03 FSIM points. Thirdly, the evaluation shows that 'naively' learning multiple per-frame acquisition trajectories (without using reconstruction resets or trajectory freezing) achieves inferior reconstruction capabilities. This is true even in comparison to learning a single trajectory shared amongst all frames. We view this outcome as surprising, since the solution to our optimization problem found by PILOT resides within the solution space of learning independent per-frame trajectories. We hypothesize that the reason for this result is the increased complexity of solving the optimization problem in its generalized multi-trajectory version. This assumption is supported by the noticeable improvement seen when incorporating reconstruction resets and trajectory freezing. Nonetheless, we believe that further investigation is required to elucidate this effect. The favorable performance of our method is also shown in Figure 2. Compared to PILOT's reconstruction, Multi-PILOT exhibits significantly less imaging noise and fewer artifacts. The corresponding acquisition trajectories and their correlation with the visual results are further explained in Appendix C.7. Additional visual results are presented in Appendix A.1. ### Acquisition time minimization In this section, we evaluate Multi-PILOT's potential for reducing MRI scan acquisition times by comparing the number of shots required for a certain reconstruction quality. As mentioned before, we sample every frame's \(k\)-space using a constant pre-defined number of shots - each shot comprises a sequence of 512 independently acquired frequency samples. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & Learned & Recon. & Traj. & PSNR & VIF & FSIM \\ & traj. & Resets & Freeze & & & \\ \hline GAR & ✗ & ✗ & ✗ & \(34.30\pm 0.61\) & \(0.772\pm 0.011\) & \(0.822\pm 0.009\) \\ PILOT & Single & ✗ & ✗ & \(35.87\pm 0.74\) & \(0.699\pm 0.015\) & \(0.8554\pm 0.006\) \\ & Single & ✓ & ✗ & \(36.72\pm 0.74\) & \(0.705\pm 0.013\) & \(0.871\pm 0.005\) \\ & Multi & ✗ & ✗ & \(34.06\pm 0.62\) & \(0.725\pm 0.015\) & \(0.806\pm 0.011\) \\ & Multi & ✗ & ✓ & \(33.44\pm 0.63\) & \(0.684\pm 0.01\) & \(0.790\pm 0.009\) \\ & Multi & ✓ & ✗ & \(37.18\pm 0.72\) & \(0.806\pm 0.009\) & \(0.875\pm 0.008\) \\ Ours & Multi & ✓ & ✓ & **38.72 \(\pm\) 0.77** & **0.823 \(\pm\) 0.009** & **0.906 \(\pm\) 0.006** \\ \hline \end{tabular} \end{table} Table 1: **Reconstruction results comparison.** The proposed Multi-PILOT method (denoted as _Ours_) shows favorable reconstruction in all evaluation metrics. For our comparison, we explore the performance of our method and that of the next-best baseline: PILOT, which uses a single trajectory shared across all frames, with reconstruction resets employed during training. We vary the number of shots used to sample the \(k\)-space. Evaluation results are summarized in Table 2. Our primary conclusion concerns the potential of our method to reduce acquisition times. For example, in all metrics, a \(12\)-shot version of Multi-PILOT achieves better reconstruction than a \(16\)-shot PILOT, while \(10\)-shot Multi-PILOT achieves comparable performance. 
This means that for a given required level of reconstruction, our method can use \(25-35\%\) fewer shots/sample points compared to what our baseline would have to use. The results in Table 2 also support our results from Section 3.4 and show that our method provides substantially better reconstruction PSNR, VIF, and FSIM values compared to our baseline in additional settings. Visual reconstruction results are presented in Section A.2, and the depiction of corresponding learned trajectories can be found in Appendix C. ## 4 Conclusion We investigated the task of dynamic MRI subsampling and restoration and discussed some of the challenges and unique considerations required when approaching this problem. As our solution to this problem, we proposed Multi-PILOT - an end-to-end pipeline for jointly learning optimal per-frame feasible \(k\)-space acquisition trajectories along with a multi-frame reconstruction model. Multi-PILOT is designed to address the distinct features of our problem within the dynamic setting (cross-frame data redundancy and complex optimization landscape). Our evaluation showed Multi-PILOT's potential for improving the reconstruction quality of dynamic MRI and reducing its acquisition time. We furthermore introduced reconstruction resets and trajectory freezing - two training methods that consistently and substantially improved reconstruction results within PILOT and Multi-PILOT training, and could be applicable to other subsampling and restoration pipelines. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{\(N_{\text{shots}}\)} & \multicolumn{2}{c|}{PILOT} & \multicolumn{3}{c}{Multi-PILOT} \\ \cline{2-7} & PSNR & VIF & FSIM & PSNR & VIF & FSIM \\ \hline 10 & \(35.33\pm 0.752\) & \(0.629\pm 0.014\) & \(0.841\pm 0.006\) & \(36.32\pm 0.70\) & \(0.753\pm 0.011\) & \(0.855\pm 0.008\) \\ \hline 12 & \(35.94\pm 0.73\) & \(0.672\pm 0.014\) & \(0.854\pm 0.006\) & \(37.35\pm 0.72\) & \(0.774\pm 0.01\) & \(0.878\pm 0.008\) \\ \hline 14 & \(36.45\pm 0.74\) & \(0.685\pm 0.012\) & \(0.866\pm 0.006\) & \(37.16\pm 0.72\) & \(0.775\pm 0.011\) & \(0.873\pm 0.009\) \\ \hline 16 & \(36.72\pm 0.743\) & \(0.705\pm 0.013\) & \(0.871\pm 0.005\) & \(38.72\pm 0.77\) & \(0.823\pm 0.009\) & \(0.906\pm 0.006\) \\ \hline \end{tabular} \end{table} Table 2: **Acquisition time minimization.** Using \(10-12\) shots, our method achieves reconstruction quality similar to that of our \(16\) shot baseline. Figure 2: **Representative visual reconstruction results**. Fully sampled frame (A); reconstruction from undersampled data using Multi-PILOT (B), PILOT (C) and GAR (D).
2301.09704
Improving Estimation Efficiency In Structural Equation Models By An Easy Empirical Likelihood Approach
In this article, we construct empirical likelihood (EL)-weighted estimators of linear functionals of a probability measure in the presence of side information. Motivated by nuisance parameters in semiparametric models with possibly infinite dimensions, we consider the use of estimated constraint functions and allow the number of constraints to grow with the sample size. We study the asymptotic properties and efficiency gains. The results are used to construct improved estimators of parameters in structural equation models. The EL-weighted estimators of parameters are shown to have reduced variances in a SEM in the presence of side information of stochastic independence of the random error and random covariate. Some simulation results on efficiency gain are reported.
Shan Wang, Hanxiang Peng
2023-01-23T20:24:05Z
http://arxiv.org/abs/2301.09704v1
Improving Estimation Efficiency In Structural Equation Models By An Easy Empirical Likelihood Approach ###### Abstract In this article, we construct empirical likelihood (EL)-weighted estimators of linear functionals of a probability measure in the presence of side information. Motivated by nuisance parameters in semiparametric models with possibly infinite dimension, we consider the use of estimated constraint functions and allow the number of constraints to grow with the sample size. We study the asymptotic properties and efficiency gains. The results are used to construct improved estimators of parameters in structural equation models. The EL-weighted estimators of parameters are shown to have reduced variances in a SEM in the presence of side information of stochastic independence of the random error and random covariate. Some simulation results on efficiency gain are reported. **AMS 2000 subject classifications:** Primary 62D05; secondary 62J05, 62F12, 62F40. **Keywords and phrases:** Estimated constraints; Infinitely many constraints; Maximum empirical likelihood estimators; Side information; Structural equation models. ## 1 Introduction Structural equation modeling (SEM) is a popular multivariate technique for analyzing data in behavioral, medical and social sciences. It is an analysis of moment structures in which the variance-covariance matrix \(\Sigma=\mbox{Var}(\mathbf{Z})\) of a random vector \(\mathbf{Z}\in\mathcal{R}^{p}\) is specified by a parametric matrix function, \(\Sigma=\Sigma(\boldsymbol{\vartheta})\), \(\boldsymbol{\vartheta}\in\Theta\), for some subset \(\Theta\) of \(\mathcal{R}^{q}\). Given independent and identically distributed (i.i.d.) observations \(\mathbf{Z}_{1},\ldots,\mathbf{Z}_{n}\) of \(\mathbf{Z}\), one focuses on estimating the parameter vector \(\boldsymbol{\vartheta}\). A moment-type estimator of \(\boldsymbol{\vartheta}\) is based on the criterion of a _minimum discrepancy function (MDF)_, see e.g. Shapiro (2007)[17]. For a \(p\times p\) matrix \(\mathbf{M}\), denote by \(\mbox{vecs}(\mathbf{M})\) the \(p(p+1)/2\)-dimensional vector formed by stacking the columns of its upper triangular part. Let \(\Xi\) be a subset of \(\mathcal{R}^{p(p+1)/2}\) consisting of [...] where \(\tilde{\mathbf{s}}_{n}=\mathrm{vecs}(\tilde{\mathbb{S}}_{n})\). In many semiparametric models, \(\mathbf{g}(\mathbf{z})\) involves nuisance parameters which must be estimated, leading to a plug-in estimator \(\hat{\mathbf{g}}(\mathbf{z})\). Using it, we work with \[\hat{\mathbb{S}}_{n}=\frac{1}{n}\sum_{i=1}^{n}\frac{(\mathbf{Z}_{i}-\bar{ \mathbf{Z}})(\mathbf{Z}_{i}-\bar{\mathbf{Z}})^{\top}}{1+\hat{\mathbf{g}}^{\top }(\mathbf{Z}_{i})\hat{\boldsymbol{\zeta}}}, \tag{1.9}\] where \(\hat{\boldsymbol{\zeta}}\) is the solution to Eq. (1.7) with \(\mathbf{g}(\mathbf{Z}_{i})\) replaced by \(\hat{\mathbf{g}}(\mathbf{Z}_{i})\). As a result, an improved MDF estimator \(\hat{\boldsymbol{\vartheta}}_{n}\) of \(\boldsymbol{\vartheta}\) is any value in \(\Theta\) that satisfies \[F(\hat{\mathbf{s}}_{n},\sigma(\hat{\boldsymbol{\vartheta}}_{n}))=\inf_{ \boldsymbol{\vartheta}\in\Theta}F(\hat{\mathbf{s}}_{n},\sigma(\boldsymbol{ \vartheta})), \tag{1.10}\] where \(\hat{\mathbf{s}}_{n}=\mathrm{vecs}(\hat{\mathbb{S}}_{n})\). The improved MDF estimator \(\hat{\boldsymbol{\vartheta}}_{n}\) is more efficient than the usual MDF estimator \(\boldsymbol{\vartheta}_{n}\). 
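To illustrate how the side information enters the covariance estimate, the following NumPy sketch (our own illustration, not code from the paper) computes the EL-weighted sample covariance matrix of (1.9), assuming the estimated constraint values \(\hat{\mathbf{g}}(\mathbf{Z}_{i})\) and the multiplier \(\hat{\boldsymbol{\zeta}}\) are already available; solving for the multiplier itself is sketched in Section 2 below.

```python
import numpy as np

def el_weighted_covariance(Z, G, zeta):
    """EL-weighted sample covariance of Eq. (1.9).

    Z    : (n, p) array of observations Z_1, ..., Z_n.
    G    : (n, m) array whose i-th row is the estimated constraint g_hat(Z_i).
    zeta : (m,)  multiplier solving the estimated version of Eq. (1.7).
    """
    n = Z.shape[0]
    w = 1.0 / (n * (1.0 + G @ zeta))     # weights (n (1 + g_hat(Z_i)' zeta))^{-1}
    Zc = Z - Z.mean(axis=0)              # center at the sample mean Z-bar
    # sum_i w_i (Z_i - Zbar)(Z_i - Zbar)^T, as in (1.9)
    return (Zc * w[:, None]).T @ Zc
```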
The efficiency criteria used are that of a least dispersed regular estimator or that of a locally asymptotically minimax estimator, and are based on the convolution theorems and on the lower bounds of the local asymptotic risk in LAN and LAMN families; see the monograph by Bickel et al. (1993)[1]. The side information contained in (1.5) is carried by the EL-weights \((n(1+\mathbf{g}(\mathbf{Z}_{i})^{\top}\bar{\boldsymbol{\zeta}}))^{-1}\), based on the principle of maximum empirical likelihood. There is an extensive amount of literature on the empirical likelihood. It was introduced by Owen (1990, 1991) [10, 11] to construct confidence intervals in a non-parametric setting. Soon it was used to construct point estimators. Qin and Lawless (1994) [14] studied maximum empirical likelihood estimators (MELE). Bravo (2010)[3] studied a class of M-estimators based on generalized empirical likelihood with side information and showed that the resulting class of estimators is efficient in the sense that it achieves the same asymptotic lower bound as that of the efficient GMM estimator with the same side information. Parente and Smith (2011)[12] investigated generalized empirical likelihood estimators for irregular constraints. Estimators of the preceding EL-weighted form were investigated in Zhang (1995, 1997) [21, 22] for M-estimation and quantile processes in the presence of auxiliary information. Hellerstein and Imbens (1999) [8] exploited such estimators for the least squares estimators in a linear regression model. Yuan et al. (2012) [20] explored such estimators in U-statistics. Tang and Leng (2012) [19] utilized the form to construct improved estimators of parameters in quantile regression. Asymptotic properties of the EL-weighted estimators were obtained for a finite number of known constraints. Motivated by nuisance parameters in semiparametric models and the infinite dimension of such models, Peng and Schick (2013)[13] considered the use of estimated constraint functions and studied a growing number of constraints in MELE. MELE enjoy high efficiency and are particularly convenient for incorporating side information. Like any other optimization problem, however, it is not trivial to find MELE numerically, especially for a large number of constraints. Peng and Schick [13] employed one-step estimators to construct MELE. The EL-weighted approach reduces the number of constraints and is thus computationally easier than the general MELE. This article uses the EL-weighted approach to construct efficient estimators of linear functionals of a probability measure in the presence of side information for two cases, viz., known marginal distributions and equal but unknown marginals, each of which is equivalent to infinitely many constraints. The rest of the article is organized as follows. In Section 2, we construct the EL-weighted estimator of a linear functional of a probability measure in the presence of side information expressed by a finite or infinite number of known or estimated constraints, and present its asymptotic properties. In Section 3, we give examples of side information and study the asymptotic properties of the improved estimators in SEM. The form of the SEM can be extended to a great extent in a variety of ways. We shall focus on the extensions that have been described in Bollen (1989)[2] as well as in the LISREL software manual (Joreskog and Sorbom (1996)[9]). 
The components present in a general SEM are a path analysis, the conceptual synthesis of latent variable and measurement models, and general estimation procedures. In SEM, only information up to the second moments is used, while other forms of information such as higher order moments, independence or symmetry of the random errors are ignored; this extra information can be used by the EL-weighting method to improve efficiency. In Section 4, we report simulation results. Technical details are collected in Section 5. ## 2 The main results Suppose that \(Z_{1},\ldots,Z_{n}\) are i.i.d. random variables with a common distribution \(Q\) taking values in a measurable space \(\mathcal{Z}\). We are interested in efficient estimation of the linear functional \(\boldsymbol{\theta}=\int\boldsymbol{\psi}\,dQ\) of \(Q\) for some square-integrable function \(\boldsymbol{\psi}\) from \(\mathcal{Z}\) to \(\mathcal{R}^{r}\) when side information is available through * (C) \(\mathbf{u}\) is a measurable function from \(\mathcal{Z}\) to \(\mathcal{R}^{m}\) such that \(\int\mathbf{u}\,dQ=0\) and the variance-covariance matrix \(\mathbf{W}=\int\mathbf{u}\mathbf{u}^{\top}\,dQ\) is nonsingular. To utilize the information contained in (C), consider the empirical likelihood, \[\mathscr{R}_{n}=\sup\Big{\{}\prod_{j=1}^{n}n\pi_{j}:\boldsymbol{\pi}\in \mathscr{P}_{n},\;\sum_{j=1}^{n}\pi_{j}\mathbf{u}(Z_{j})=0\Big{\}},\] where \(\mathscr{P}_{n}=\{\pi\in[0,\,1]^{n}:\sum_{j=1}^{n}\pi_{j}=1\}\) is the unit probability simplex. Following Owen (1990)[10], one uses Lagrange multipliers to get the maximizers, \[\tilde{\pi}_{j}=\frac{1}{n}\frac{1}{1+\mathbf{u}(Z_{j})^{\top}\tilde{\boldsymbol {\zeta}}},\quad j=1,\ldots,n, \tag{2.1}\] where \(\tilde{\boldsymbol{\zeta}}\) is the solution to the equation \[\frac{1}{n}\sum_{j=1}^{n}\frac{\mathbf{u}(Z_{j})}{1+\mathbf{u}(Z_{j})^{\top} \tilde{\boldsymbol{\zeta}}}=0. \tag{2.2}\] These \(\tilde{\pi}_{j}\)'s incorporate the side information, and a natural estimator of \(\mathbf{\theta}=\int\mathbf{\psi}\,dQ\) is the EL-weighted estimator, \[\mathbf{\tilde{\theta}}=\sum_{j=1}^{n}\tilde{\pi}_{j}\mathbf{\psi}(Z_{j})=\frac{1}{n} \sum_{j=1}^{n}\frac{\mathbf{\psi}(Z_{j})}{1+\mathbf{u}(Z_{j})^{\top}\tilde{\boldsymbol{\zeta}}}. \tag{2.3}\] Taking \(\mathbf{\psi}_{\mathbf{t}}(\mathbf{z})=\mathbf{1}[\mathbf{z}\leq\mathbf{t}]\) for fixed \(\mathbf{t}\in\mathcal{R}^{p}\), one obtains the distribution function \(\theta=P(\mathbf{Z}\leq\mathbf{t})\). For \(\psi(\mathbf{z})=z_{1}\cdots z_{p}\), \(\theta=E(Z_{1}\cdots Z_{p})\) is the mixed moment. Write \(\|\mathbf{a}\|\) for the Euclidean norm of \(\mathbf{a}\) and \(\mathbf{a}\otimes\mathbf{b}\) for the Kronecker product of \(\mathbf{a}\) and \(\mathbf{b}\). For \(\mathbf{x}=(x_{1},\ldots,x_{p}),\mathbf{y}=(y_{1},\ldots,y_{p})\), write \(\mathbf{x}\leq\mathbf{y}\) for \(x_{1}\leq y_{1},\ldots,x_{p}\leq y_{p}\). Let \(L_{2}^{m}(Q)=\left\{\mathbf{f}=(f_{1},\ldots,f_{m})^{\top}:\int\|\mathbf{f}\|^ {2}\,dQ^{m}<\infty\right\}\), and let \(L_{2,0}^{m}(Q)=\left\{\mathbf{f}\in L_{2}^{m}(Q):\int\mathbf{f}\,dQ^{m}=0\right\}\). For \(\mathbf{f}\in L_{2}^{m}(Q)\), write \([\mathbf{f}]\) for the closed linear span of the components \(f_{1}\),..., \(f_{m}\) in \(L_{2}(Q)\). Let \(Z\) be an i.i.d. copy of \(Z_{1}\). Let \(\mathbf{\phi}_{0}\) be the projection of \(\mathbf{\psi}\) onto the closed linear span \([\mathbf{u}]\) of \(\mathbf{u}\), so that \(\mathbf{\phi}_{0}=\Pi(\mathbf{\psi}|[\mathbf{u}])=E(\mathbf{\psi}(Z)\otimes\mathbf{u}^{ \top}(Z))\mathbf{W}^{-1}\mathbf{u}\). 
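For a concrete illustration of (2.1)-(2.3), the sketch below solves the estimating equation (2.2) for \(\tilde{\boldsymbol{\zeta}}\) by a plain Newton iteration and then forms the EL weights and the EL-weighted estimator. This is a minimal NumPy sketch rather than the authors' implementation; the choice of Newton's method, the starting value \(\tilde{\boldsymbol{\zeta}}=0\), and the tolerance are our own assumptions, and no safeguards (e.g., step halving or convex-hull checks) are included.

```python
import numpy as np

def solve_zeta(U, n_iter=50, tol=1e-10):
    """Solve (1/n) sum_j u(Z_j) / (1 + u(Z_j)' zeta) = 0   (Eq. 2.2) by Newton's method."""
    n, m = U.shape
    zeta = np.zeros(m)
    for _ in range(n_iter):
        denom = 1.0 + U @ zeta                        # (n,)
        grad = (U / denom[:, None]).mean(axis=0)      # left-hand side of (2.2)
        # Jacobian of (2.2): -(1/n) sum_j u_j u_j' / (1 + u_j' zeta)^2
        jac = -(U / denom[:, None] ** 2).T @ U / n
        zeta = zeta + np.linalg.solve(jac, -grad)     # Newton step
        if np.linalg.norm(grad) < tol:
            break
    return zeta

def el_weighted_estimator(Psi, U):
    """EL-weighted estimator of theta = int psi dQ   (Eqs. 2.1 and 2.3).

    Psi : (n, r) array with rows psi(Z_j); U : (n, m) array with rows u(Z_j).
    """
    n = U.shape[0]
    zeta = solve_zeta(U)
    pi = 1.0 / (n * (1.0 + U @ zeta))                 # EL weights of (2.1)
    return Psi.T @ pi                                 # sum_j pi_j psi(Z_j)

# Usage sketch: U[j] encodes the side information (C) for observation Z_j,
# and Psi[j] = psi(Z_j); then theta_tilde = el_weighted_estimator(Psi, U).
```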
Let \(\Sigma_{0}=\operatorname{Var}(\mathbf{\psi}(Z))-\operatorname{Var}(\mathbf{\phi}_{0}( Z))\). We now give the first result with the proof delayed in Section 5. **Theorem 2.1**.: _Assume (C) with \(m\) fixed. Then \(\mathbf{\tilde{\theta}}\) given in (2.3) satisfies the stochastic expansion,_ \[\mathbf{\tilde{\theta}}=\mathbf{\bar{\psi}}-\mathbf{\bar{\phi}}_{0}+o_{p}(n^{-1/2}), \tag{2.4}\] _Thus if \(\Sigma_{0}=\operatorname{Var}(\mathbf{\psi}(Z))-\operatorname{Var}(\mathbf{\phi}_{0}( Z))\) is nonsingular then \(\sqrt{n}(\mathbf{\tilde{\theta}}-\mathbf{\theta})\) is asymptotically normal with mean zero and asymptotic covariance matrix \(\Sigma_{0}\), that is,_ \[\sqrt{n}(\mathbf{\tilde{\theta}}-\mathbf{\theta}){\Longrightarrow}\mathscr{N}(0, \Sigma_{0}).\] Theorem 2.1 exhibits that the EL-weighted estimator \(\mathbf{\tilde{\theta}}\) has a smaller asymptotic variance than that of the sample mean \(\bar{\mathbf{\psi}}\), and the amount of reduction is \(\operatorname{Var}(\mathbf{\phi}_{0}(Z))\). It is, in fact, the MELE of \(\mathbf{\theta}\). **Remark 2.1**.: Haberman (1984)[7] studied minimum Kullback-Leibler divergence -type estimators for the linear functionals of a probability measure, and more general problems involving a fixed number of side information. The EL-weighted estimator \(\mathbf{\tilde{\theta}}\) in Theorem 2.1 is asymptotically equivalent to Haberman's estimator, see his page 976. This shows that Haberman's estimator is semiparametrically efficient. In semiparametric models, the constraint function \(\mathbf{u}\) contains nuisance parameters and must be estimated. Let \(\hat{\mathbf{u}}=(\hat{u}_{1},\ldots,\hat{u}_{m})^{\top}\) be an estimate of \(\mathbf{u}\). With it we now work with the EL-weights \[\hat{\pi}_{j}=\frac{1}{n}\frac{1}{1+\hat{\mathbf{u}}(Z_{j})^{\top}\mathbf{\hat{ \zeta}}},\quad j=1,\ldots,n, \tag{2.5}\] where \(\mathbf{\hat{\zeta}}\) is the solution to the equation (2.2) with \(\mathbf{u}=\hat{\mathbf{u}}\). In the same fashion, a natural estimate \(\mathbf{\hat{\theta}}\) of \(\mathbf{\theta}\) is given by \[\mathbf{\hat{\theta}}=\sum_{j=1}^{n}\hat{\pi}_{j}\mathbf{\psi}(Z_{j})=\frac{1}{n}\sum_ {j=1}^{n}\frac{\mathbf{\psi}(Z_{j})}{1+\hat{\mathbf{u}}(Z_{j})^{\top}\mathbf{\hat{ \zeta}}}. \tag{2.6}\] Set \(\hat{\mathbf{W}}=n^{-1}\sum_{j=1}^{n}\hat{\mathbf{u}}\hat{\mathbf{u}}^{\top}(Z_{j})\). Let \(|\mathbf{W}|_{o}\) denote the spectral norm (largest eigenvalue) of a matrix \(\mathbf{W}\). We have **Theorem 2.2**.: _Assume (C) with \(m\) fixed. Let \(\hat{\mathbf{u}}\) be an estimator of \(\mathbf{u}\) such that_ \[\max_{1\leq j\leq n}\|\hat{\mathbf{u}}(Z_{j})\|=o_{p}(n^{1/2}), \tag{2.7}\] \[|\hat{\mathbf{W}}-\mathbf{W}|_{o}=o_{p}(1), \tag{2.8}\] \[\frac{1}{n}\sum_{j=1}^{n}\big{(}\boldsymbol{\psi}(Z_{j})\otimes\hat{\mathbf{u }}(Z_{j})-E\big{(}\boldsymbol{\psi}(Z_{j})\otimes\hat{\mathbf{u}}(Z_{j})\big{)} \big{)}=o_{p}(1), \tag{2.9}\] _and that there exists some measurable function \(\mathbf{v}\) that satisfies (C) such that_ \[\frac{1}{n}\sum_{j=1}^{n}E\left(\|\hat{\mathbf{u}}(Z_{j})-\mathbf{v}(Z_{j})\| ^{2}\right)=o(1), \tag{2.10}\] \[\frac{1}{n}\sum_{j=1}^{n}\hat{\mathbf{u}}(Z_{j})=\frac{1}{n}\sum_{j=1}^{n} \mathbf{v}(Z_{j})+o_{p}(n^{-1/2}). \tag{2.11}\] _Then \(\hat{\boldsymbol{\theta}}\) given in (2.6) satisfies the stochastic expansion,_ \[\hat{\boldsymbol{\theta}}=\bar{\boldsymbol{\psi}}-\bar{\boldsymbol{\phi}}+o_ {p}(n^{-1/2}), \tag{2.12}\] _where \(\boldsymbol{\phi}=\Pi(\boldsymbol{\psi}|[\mathbf{v}])\). 
Thus if \(\Sigma=\mathrm{Var}(\boldsymbol{\psi}(Z))-\mathrm{Var}(\boldsymbol{\phi}(Z))\) is nonsingular then_ \[\sqrt{n}(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}){\Longrightarrow}{ \mathcal{N}}(0,\Sigma).\] We now allow the number of constraints to depend on \(n\), \(m=m_{n}\), and grow to infinity with increasing \(n\). To stress the dependence, let us write \[\mathbf{u}_{n}=(u_{1},\dots,u_{m_{n}})^{\top},\quad\hat{\mathbf{u}}_{n}=(\hat {u}_{1},\dots,\hat{u}_{m_{n}})^{\top},\] and \(\bar{\boldsymbol{\theta}}_{n}=\bar{\boldsymbol{\theta}}\), \(\hat{\boldsymbol{\theta}}_{n}=\hat{\boldsymbol{\theta}}\) for the corresponding estimators of \(\boldsymbol{\theta}\), that is, \[\bar{\boldsymbol{\theta}}_{n}=\frac{1}{n}\sum_{j=1}^{n}\frac{\boldsymbol{\psi }(Z_{j})}{1+\mathbf{u}_{n}(Z_{j})^{\top}\bar{\boldsymbol{\zeta}}_{n}}\quad \text{and}\quad\hat{\boldsymbol{\theta}}_{n}=\frac{1}{n}\sum_{j=1}^{n}\frac{ \boldsymbol{\psi}(Z_{j})}{1+\hat{\mathbf{u}}_{n}(Z_{j})^{\top}\bar{\boldsymbol {\zeta}}_{n}}, \tag{2.13}\] where \(\bar{\boldsymbol{\zeta}}_{n}\) and \(\hat{\boldsymbol{\zeta}}_{n}\) solves Eqt (2.2) with \(\mathbf{u}=\mathbf{u}_{n}\) and \(\mathbf{u}=\hat{\mathbf{u}}_{n}\), respectively. Denote by \([\mathbf{u}_{\infty}]\) the closed linear span of \(\mathbf{u}_{\infty}=(u_{1},u_{2},\dots)\). Set \[\mathbf{W}_{n}=\mathrm{Var}(\mathbf{u}_{n}(Z)),\quad\bar{\mathbf{W}}_{n}= \frac{1}{n}\sum_{j=1}^{n}\mathbf{u}_{n}\mathbf{u}_{n}^{\top}(Z_{j}),\quad\hat{ \mathbf{W}}_{n}=\frac{1}{n}\sum_{j=1}^{n}\hat{\mathbf{u}}_{n}\hat{\mathbf{u}}_ {n}^{\top}(Z_{j}).\] Peng and Schick (2013) [13] introduced that a sequence \(\mathbf{W}_{n}\) of \(m_{n}\times m_{n}\) dispersion matrices is _regular_ if \[0<\inf_{n}\inf_{\|\mathbf{u}\|=1}\mathbf{u}^{\top}\mathbf{W}_{n}\mathbf{u}\leq \sup_{n}\sup_{\|\mathbf{u}\|=1}\mathbf{u}^{\top}\mathbf{W}_{n}\mathbf{u}<\infty.\] Note that if \(\mathbf{W}=\mathbf{W}_{n}\) is independent of \(n\) then the regularity of \(\mathbf{W}\) simplifies to its nonsingularity. We have **Theorem 2.3**.: _Suppose that \({\bf u}_{n}=(u_{1},\ldots,u_{m_{n}})^{\top}\) satisfies (C) for each \(m=m_{n}\) such that_ \[\max_{1\leq j\leq n}\|{\bf u}_{n}(Z_{j})\|=o_{p}(m_{n}^{-3/2}n^{1/2}), \tag{2.14}\] _that the sequence of \(m_{n}\times m_{n}\) dispersion matrices \({\bf W}_{n}\) is regular and satisfies_ \[|\tilde{\bf W}_{n}-{\bf W}_{n}|_{o}=o_{p}(m_{n}^{-1}), \tag{2.15}\] \[\frac{1}{n}\sum_{j=1}^{n}\big{(}\boldsymbol{\psi}(Z_{j})\otimes{\bf u}_{n}(Z_{ j})-E\big{(}\boldsymbol{\psi}(Z_{j})\otimes{\bf u}_{n}(Z_{j})\big{)}\big{)}=o_{p} (m_{n}^{-1/2}). \tag{2.16}\] _Then \(\boldsymbol{\tilde{\theta}}_{n}\) satisfies, as \(m_{n}\) grows to infinity with \(n\), the stochastic expansion,_ \[\boldsymbol{\tilde{\theta}}_{n}=\boldsymbol{\bar{\psi}}-\boldsymbol{\bar{ \varphi}}_{0}+o_{p}(n^{-1/2}), \tag{2.17}\] _where \(\boldsymbol{\varphi}_{0}=\Pi(\boldsymbol{\psi}|[{\bf u}_{\infty}])\). Thus if \(\Sigma_{0}=\operatorname{Var}(\boldsymbol{\psi}(Z))-\operatorname{Var}( \boldsymbol{\varphi}_{0}(Z))\) is nonsingular,_ \[\sqrt{n}(\boldsymbol{\tilde{\theta}}_{n}-\boldsymbol{\theta}){\Longrightarrow} \mathcal{N}(0,\Sigma_{0}).\] **Theorem 2.4**.: _Suppose that \({\bf u}_{n}=(u_{1},\ldots,u_{m_{n}})^{\top}\) satisfies (C) for each \(m=m_{n}\). 
Let \(\hat{\bf u}_{n}\) be an estimator of \({\bf u}_{n}\) such that_ \[\max_{1\leq j\leq n}\|\hat{\bf u}_{n}(Z_{j})\|=o_{p}(m_{n}^{-3/2}n^{1/2}), \tag{2.18}\] \[|\hat{\bf W}_{n}-{\bf W}_{n}|_{o}=o_{p}(m_{n}^{-1}) \tag{2.19}\] _for which the \(m_{n}\times m_{n}\) dispersion matrices \({\bf W}_{n}\) is regular,_ \[\frac{1}{n}\sum_{j=1}^{n}\big{(}\boldsymbol{\psi}(Z_{j})\otimes\hat{\bf u}_{n }(Z_{j})-E\big{(}\boldsymbol{\psi}(Z_{j})\otimes\hat{\bf u}_{n}(Z_{j})\big{)} \big{)}=o_{p}(m_{n}^{-1/2}), \tag{2.20}\] _and that there exists some measurable function \({\bf v}_{n}\) from \(\mathcal{Z}\) into \(\mathcal{R}^{m_{n}}\) such that (C) is met for every \(m=m_{n}\), the dispersion matrix \({\bf U}_{n}={\bf W}_{n}^{-1/2}\int{\bf v}_{n}{\bf v}_{n}^{\top}\,dQ{\bf W}_{n} ^{-\top/2}\) satisfies \(|{\bf U}_{n}|_{o}=O(1)\), and_ \[\frac{1}{n}\sum_{j=1}^{n}E\left(\|\hat{\bf u}_{n}(Z_{j})-{\bf v}_{n}(Z_{j})\|^ {2}\right)=o(m_{n}^{-1}),\quad\text{and} \tag{2.21}\] \[\frac{1}{n}\sum_{j=1}^{n}\hat{\bf u}_{n}(Z_{j})=\frac{1}{n}\sum_{j=1}^{n}{\bf v }_{n}(Z_{j})+o_{p}(m_{n}^{-1/2}n^{-1/2}). \tag{2.22}\] _Then \(\boldsymbol{\hat{\theta}}\) satisfies, as \(m_{n}\) tends to infinity, the stochastic expansion,_ \[\boldsymbol{\hat{\theta}}_{n}=\boldsymbol{\bar{\psi}}-\boldsymbol{\bar{\varphi }}+o_{p}(n^{-1/2}), \tag{2.23}\] _where \(\boldsymbol{\varphi}=\Pi(\boldsymbol{\psi}|[{\bf v}_{\infty}])\). Thus if \(\Sigma=\operatorname{Var}(\boldsymbol{\psi}(Z))-\operatorname{Var}( \boldsymbol{\varphi}(Z))\) is nonsingular,_ \[\sqrt{n}(\boldsymbol{\hat{\theta}}_{n}-\boldsymbol{\theta}){\Longrightarrow} \mathcal{N}(0,\Sigma).\] ## 3 The asymptotic properties of EL-weighted MDF estimators In this section, we give two examples of side information and discuss the reduction in the asymptotic covariance matrix of the improved MDF estimator \(\hat{\boldsymbol{\vartheta}}_{n}\) given in (1.10) compared with the sample MDF estimator \(\boldsymbol{\vartheta}_{n}\). ### Examples and side information We introduce the SEM and discuss side information. 
**Example 1**.: Consider the _combined model_ of latent variable and measurement error, \[\boldsymbol{\eta}=\ \mathbf{B}\boldsymbol{\eta}+\Gamma\boldsymbol{\xi}+ \boldsymbol{\zeta},\quad\mathbf{Y}-\boldsymbol{\mu}_{y}=\Lambda_{y}\boldsymbol {\eta}+\boldsymbol{\epsilon},\quad\mathbf{X}-\boldsymbol{\mu}_{x}=\Lambda_{x} \boldsymbol{\xi}+\boldsymbol{\delta}, \tag{3.1}\] where \(\mathbf{B}\), \(\Gamma\), \(\Lambda_{x}\), \(\Lambda_{y}\), \(\boldsymbol{\mu}_{x}\) and \(\boldsymbol{\mu}_{y}\) are compatible parameter matrices and vectors, \(\mathbf{X}\) and \(\mathbf{Y}\) are random vectors having finite fourth moments, \(\boldsymbol{\eta}\) and \(\boldsymbol{\xi}\) are latent endogenous and exogenous random vectors, respectively, and \(\boldsymbol{\zeta}\), \(\boldsymbol{\epsilon}\) and \(\boldsymbol{\delta}\) are disturbances (random errors) that satisfy \[\begin{split}& E(\boldsymbol{\zeta})=0,\ E(\boldsymbol{\epsilon})=0, \ E(\boldsymbol{\delta})=0,\quad\mathrm{Cov}(\boldsymbol{\epsilon}, \boldsymbol{\eta})=0,\quad\mathrm{Cov}(\boldsymbol{\delta},\boldsymbol{\xi})= 0,\\ &\mathrm{Cov}(\boldsymbol{\xi},\,\boldsymbol{\zeta})=0,\quad \mathrm{Cov}(\boldsymbol{\epsilon},\,\boldsymbol{\zeta})=0,\quad\mathrm{Cov}( \boldsymbol{\delta},\,\boldsymbol{\zeta})=0,\quad\mathrm{Cov}(\boldsymbol{ \epsilon},\,\boldsymbol{\delta})=0.\end{split} \tag{3.2}\] Let \(\Phi=E(\boldsymbol{\xi}\boldsymbol{\xi}^{\top})\), \(\Psi=E(\boldsymbol{\zeta}\boldsymbol{\zeta}^{\top})\), \(\Theta_{\epsilon}=E(\boldsymbol{\epsilon}\boldsymbol{\epsilon}^{\top})\) and \(\Theta_{\delta}=E(\boldsymbol{\delta}\boldsymbol{\delta}^{\top})\). The parameter vector then is \(\boldsymbol{\vartheta}=\mathrm{vecs}(\boldsymbol{\mu}_{x},\boldsymbol{\mu}_{y },\mathbf{B},\Gamma,\Lambda_{x},\Lambda_{y},\Phi,\Psi,\Theta_{\epsilon}, \Theta_{\delta})\), denoted by \(q\) the dimension. Let \(\Sigma_{yy}(\boldsymbol{\vartheta})\) be the structured variance-covariance of \(\mathbf{Y}\), and let \(\mathbf{A}=\mathbf{I}_{d}-\mathbf{B}\). Based on the relationships in (3.1) - (3.2), one derives \[\Sigma_{yy}(\boldsymbol{\vartheta})=\Lambda_{y}\mathbf{A}^{-1}(\Gamma\Phi \Gamma^{\top}+\Psi)\mathbf{A}^{-\top}\Lambda_{y}^{\top}+\Theta_{\epsilon},\] assuming that \(\mathbf{A}\) is invertible. Similarly, one derives the structured covariance matrix \(\Sigma_{yx}(\boldsymbol{\vartheta})\) of \(\mathbf{Y}\) and \(\mathbf{X}\) and the variance-covariance \(\Sigma_{xx}(\boldsymbol{\vartheta})\) of \(\mathbf{X}\), \[\Sigma_{yx}(\boldsymbol{\vartheta})=\Lambda_{y}\mathbf{A}^{-1}\Gamma\Phi \Lambda_{x}^{\top}=\Sigma_{xy}(\boldsymbol{\vartheta})^{\top},\quad\Sigma_{ xx}(\boldsymbol{\vartheta})=\Lambda_{x}\Phi\Lambda_{x}^{\top}+\Theta_{\delta}.\] The structured variance-covariance \(\Sigma(\boldsymbol{\vartheta})\) of \(\mathbf{Z}=(\mathbf{Y}^{\top},\mathbf{X}^{\top})^{\top}\) then is \[\Sigma(\boldsymbol{\vartheta})=\begin{pmatrix}\Sigma_{yy}(\boldsymbol{ \vartheta})&\Sigma_{yx}(\boldsymbol{\vartheta})\\ \Sigma_{xy}(\boldsymbol{\vartheta})&\Sigma_{xx}(\boldsymbol{\vartheta})\end{pmatrix}.\] These formulas can be found in literature, but we would mention that they are implied by the structural relationships in (3.1) - (3.2). 
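As an illustration of Example 1, the short NumPy sketch below assembles the structured covariance \(\Sigma(\boldsymbol{\vartheta})\) from the parameter matrices of (3.1)-(3.2); it is our own illustration rather than part of the paper, and it simply assumes that \(\mathbf{A}=\mathbf{I}_{d}-\mathbf{B}\) is invertible, as in the text.

```python
import numpy as np

def structured_sigma(B, Gamma, Lambda_y, Lambda_x, Phi, Psi, Theta_eps, Theta_delta):
    """Assemble Sigma(vartheta) for the combined model (3.1)-(3.2)."""
    d = B.shape[0]
    A_inv = np.linalg.inv(np.eye(d) - B)        # A^{-1} = (I_d - B)^{-1}
    core = Gamma @ Phi @ Gamma.T + Psi          # Var(Gamma xi + zeta)
    S_yy = Lambda_y @ A_inv @ core @ A_inv.T @ Lambda_y.T + Theta_eps
    S_yx = Lambda_y @ A_inv @ Gamma @ Phi @ Lambda_x.T
    S_xx = Lambda_x @ Phi @ Lambda_x.T + Theta_delta
    # Block structure of Sigma(vartheta) for Z = (Y', X')'
    return np.block([[S_yy, S_yx],
                     [S_yx.T, S_xx]])
```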
While the unstructured sample variance-covariance matrix estimator \(\mathbb{S}_{n}\) in (1.1) of the unstructured variance-covariance \(\Sigma\) of \(\mathbf{Z}\) ignores the information contained in (3.2), the EL-weighted estimator \(\hat{\mathbb{S}}_{n}\) of \(\Sigma\) in (1.9) utilizes this information and results in an improved estimator of \(\boldsymbol{\vartheta}\) determined by (1.10). **Example 2**.: In the combined model in Example 1, consider \(\Lambda_{y}=\mathbf{I}_{d}\), \(\Lambda_{x}=\mathbf{I}_{c}\), \(\mathrm{Var}(\boldsymbol{\delta})=0\) and \(\mathrm{Var}(\boldsymbol{\epsilon})=0\). This is a SEM, and (3.1) - (3.2) simplify to \[\mathbf{Y}=\mathbf{B}\mathbf{Y}+\Gamma\mathbf{X}+\boldsymbol{\zeta},\quad E( \boldsymbol{\zeta})=0,\quad\mathrm{Cov}(\mathbf{X},\,\boldsymbol{\zeta})=0. \tag{3.3}\] Identification is crucial for the consistency and asymptotic normality of the MDF estimators. Necessary and sufficient conditions can be found in the literature, e.g., Bollen (1989)[2], Brito and Pearl (2002)[4] and Drton et al. (2011)[5]. In particular, the Null B Rule and the Recursive Rules are sufficient conditions for the identifiability of the parameters. The former states that if \(\mathbf{B}=0\) then the parameters can be identified, while the latter says that if \(\mathbf{B}\) can be written as a lower triangular matrix with zero diagonal and the covariance matrix \(\Psi\) of the error \(\boldsymbol{\zeta}\) is diagonal, then the parameters are identifiable. An example of the latter case is the model given by \[\mathbf{Y}=\mathbf{B}\mathbf{Y}+\Lambda\mathbf{X}+\boldsymbol{\epsilon}, \tag{3.4}\] where \(\mathbf{B},\Lambda\) are \(2\times 2\) matrices, with \(\mathbf{B}\) having all entries equal to \(0\) except for the \((2,1)\) entry equal to \(\beta\), and \(\Lambda\) having the (1, 1) entry equal to \(0\) and the (1, 2), (2, 1) and (2, 2) entries equal to \(\lambda_{i},i=1,2,3\), respectively. The path diagram is shown in Fig. 1. Figure 1: The path diagram for SEM (3.4). **Side information**. SEMs make use of the information up to the second moments, whereas other information is completely ignored. For example, random errors are modeled as uncorrelated with covariates. It is common that the random error \(\epsilon\) is modeled as independent of the random covariate \(\mathbf{X}\). The information contained in the independence can be utilized by the vector constraint function, \[\mathbf{g}(\mathbf{Z})=\boldsymbol{\Phi}_{m}(F(\varepsilon))\otimes\boldsymbol {\Phi}_{m}(G(\mathbf{X})), \tag{3.5}\] where \(\boldsymbol{\Phi}_{m}(t)=\sqrt{2}(\cos(\pi t),\ldots,\cos(m\pi t))^{\top}\) is a vector of the first \(m\) terms of the trigonometric basis, and \(F\) and \(G\) are the respective distribution functions (DF) of the linear combination \(\varepsilon=\mathbf{a}^{\top}\boldsymbol{\epsilon}\) of \(\boldsymbol{\epsilon}\) and of \(\mathbf{X}\). Here \(\mathbf{a}\) is a known constant vector and \(\otimes\) denotes the Kronecker product. See Example 1 of Peng and Schick (2013)[13] for more details. As \(F,G\) are unknown, we estimate them by the empirical DFs (EDF) \(F_{n},G_{n}\). We replace \(\boldsymbol{\epsilon}\) with \(\hat{\boldsymbol{\epsilon}}=\mathbf{Y}-\hat{\mathbf{B}}\mathbf{Y}-\hat{\Gamma }\mathbf{X}\), where \(\hat{\mathbf{B}}\) and \(\hat{\Gamma}\) are the MDF estimators of \(\mathbf{B}\) and \(\Gamma\); a schematic numerical version of this construction is sketched below. 
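The following NumPy sketch makes the construction of the independence constraints concrete: it evaluates \(\boldsymbol{\Phi}_{m}(F_{n}(\hat{\varepsilon}_{j}))\otimes\boldsymbol{\Phi}_{m}(G_{n}(\mathbf{X}_{j}))\) for each observation, with the unknown distribution functions replaced by empirical ones as just described. It is our own illustration, not the authors' code; in particular, reading \(G_{n}\) as the multivariate EDF of \(\mathbf{X}\) (the fraction of sample points dominated componentwise) is an assumption.

```python
import numpy as np

def trig_basis(t, m):
    """Rows are Phi_m(t_j) = sqrt(2) * (cos(pi t_j), ..., cos(m pi t_j)) for each t_j in t."""
    k = np.arange(1, m + 1)
    return np.sqrt(2.0) * np.cos(np.pi * np.outer(t, k))          # shape (n, m)

def independence_constraints(eps_hat, X, m):
    """Estimated constraints g_hat(Z_j) = Phi_m(F_n(eps_hat_j)) kron Phi_m(G_n(X_j)),  cf. (3.5)-(3.6).

    eps_hat : (n,)   residual linear combinations a' eps_hat_j
    X       : (n, q) covariate vectors
    Returns : (n, m*m) matrix whose j-th row is g_hat(Z_j).
    """
    n = len(eps_hat)
    # Scalar EDF of the residuals, evaluated at the residuals themselves.
    F_n = np.array([np.mean(eps_hat <= e) for e in eps_hat])
    # Multivariate EDF of X (our reading of G_n): fraction of points dominated componentwise.
    G_n = np.array([np.mean(np.all(X <= x, axis=1)) for x in X])
    A = trig_basis(F_n, m)                                        # (n, m)
    B = trig_basis(G_n, m)                                        # (n, m)
    # Row-wise Kronecker product of the two basis evaluations.
    return np.einsum('ij,ik->ijk', A, B).reshape(n, m * m)
```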
Substitution of them in (3.5) yields the estimated constraint function, \[\hat{\mathbf{g}}(\mathbf{Z})=\mathbf{\Phi}_{m}(F_{n}(\hat{\varepsilon}))\otimes \mathbf{\Phi}_{m}(G_{n}(\mathbf{X})), \tag{3.6}\] This is a semiparametric model with (infinite dimensional) nuisance parameters \(F,G\), and the plug-in estimators of \(F_{n},G_{n}\) lead to the estimated constraints. Another example of side information is that the marginal medians (or means) \(m_{01}\) and \(m_{02}\) of \(\mathbf{X}\) are _known_. Such marginal information is often possible such as from the past data. In this case, the constraint function is \[\mathbf{g}(\mathbf{Z}_{j})=(\mathbf{1}[X_{1j}\leq m_{01}]-0.5,\,\mathbf{1}[X_{ 2j}\leq m_{02}]-0.5)^{\top},\,j=1,\ldots,n. \tag{3.7}\] ### The asymptotic properties We need some results from Shapiro (2007)[17]. Let \(\boldsymbol{\vartheta}_{0}\) be the true value of parameter \(\boldsymbol{\vartheta}\) and \(\boldsymbol{\xi}_{0}=\boldsymbol{\sigma}(\boldsymbol{\vartheta}_{0})\). By the Taylor expansion it is not difficult to show that a discrepancy function \(F\) satisfies \[2\mathbf{H}_{0}:=\frac{\partial^{2}F(\boldsymbol{\xi}_{0},\boldsymbol{\xi}_{0 })}{\partial\mathbf{t}\partial\mathbf{t}^{\top}}=\frac{\partial^{2}F( \boldsymbol{\xi}_{0},\boldsymbol{\xi}_{0})}{\partial\boldsymbol{\xi}\partial \boldsymbol{\xi}^{\top}}=-\frac{\partial^{2}F(\boldsymbol{\xi}_{0}, \boldsymbol{\xi}_{0})}{\partial\mathbf{t}\partial\boldsymbol{\xi}^{\top}}, \tag{3.8}\] and \(\mathbf{H}_{0}\) is positive definite, see also Shapiro (2007)[17]. In particular, for both \(F_{ML}\) and \(F_{GLS}\) (in the case of \(W=\mathbb{S}_{\ltimes}\)), one has \[\mathbf{H}_{0}=\Sigma_{0}^{-1}\otimes\Sigma_{0}^{-1}, \tag{3.9}\] where \(\Sigma_{0}=\Sigma(\boldsymbol{\vartheta}_{0})\). Formally, set \(\Delta(\boldsymbol{\vartheta})=\partial\boldsymbol{\sigma}(\boldsymbol{ \vartheta})/\partial\boldsymbol{\vartheta}^{\top}\) with \(\Delta_{0}=\Delta(\boldsymbol{\vartheta}_{0})\) and \[\begin{split}\mathbf{w}(\mathbf{z})&=\mathrm{vecs} \big{(}(\mathbf{z}-\boldsymbol{\mu}_{0})(\mathbf{z}-\boldsymbol{\mu}_{0})^{ \top}-\Sigma_{0}\big{)},\quad\mathbf{z}\in\mathcal{R}^{p},\\ \mathbf{v}(\mathbf{z})&=\mathbf{g}(\mathbf{z})+E( \dot{\mathbf{g}}(\mathbf{Z}))\Psi(\mathbf{z}),\quad\Psi(\mathbf{z})=(\Delta_ {0}^{\top}\mathbf{H}_{0}\Delta_{0})^{-1}\Delta_{0}^{\top}\mathbf{H}_{0} \mathbf{w}(\mathbf{z}).\end{split} \tag{3.10}\] Summarizing Shapiro's results, we have **Lemma 3.1**.: _Let \(\mathbf{Z},\mathbf{Z}_{1},\ldots,\mathbf{Z}_{n}\) be i.i.d. random vectors with finite and nonsingular covariance matrix \(\mathrm{Var}(\mathbf{w}(\mathbf{Z}))\). Assume that \(\boldsymbol{\vartheta}_{0}\) is an interior point of \(\Theta\) which is compact and can be approximated at \(\boldsymbol{\vartheta}_{0}\) by \(\mathcal{R}^{q}\). Suppose that \(F\) is a discrepancy function. Suppose that \(\boldsymbol{\sigma}(\boldsymbol{\vartheta})\) is twice continuously differentiable with gradient \(\Delta(\boldsymbol{\vartheta})\) of full rank \(q\) in a neighborhood of \(\boldsymbol{\vartheta}_{0}\). Suppose that the model is locally identifiable, i.e., \(\boldsymbol{\sigma}(\boldsymbol{\vartheta})=\boldsymbol{\sigma}(\boldsymbol{ \vartheta}_{0})\) implies \(\boldsymbol{\vartheta}=\boldsymbol{\vartheta}_{0}\) for \(\boldsymbol{\vartheta}\) in a neighborhood of \(\boldsymbol{\vartheta}_{0}\). 
Then_ \[\sqrt{n}(\boldsymbol{\tilde{\vartheta}}-\boldsymbol{\vartheta}_{0}){\Longrightarrow }{\mathcal{N}}(0,\mathbf{V}_{0}), \tag{3.11}\] _where \(\mathbf{V}_{0}=(\Delta_{0}^{\top}\mathbf{H}_{0}\Delta_{0})^{-1}\Delta_{0}^{ \top}\mathbf{H}_{0}\,\mathrm{Var}(\mathbf{w}(\mathbf{Z}))\mathbf{H}_{0}\Delta _{0}(\Delta_{0}^{\top}\mathbf{H}_{0}\Delta_{0})^{-1}\)._ **Remark 3.1**.: Lemma 3.1 implies \(\boldsymbol{\tilde{\vartheta}}-\boldsymbol{\vartheta}_{0}=O_{p}(n^{-1/2})\). Consequently, each residual satisfies \(\hat{\boldsymbol{\epsilon}}_{i}-\boldsymbol{\epsilon}_{i}=O_{p}(n^{-1/2})\) for \(\mathbf{Z}_{i}\) of bounded second moment. We shall impose a stronger assumption of \(E\|\hat{\boldsymbol{\epsilon}}_{i}-\boldsymbol{\epsilon}_{i}\|^{2}=O(n^{-1})\) uniformly in \(i\). Proof of Lemma 3.1.: We shall present the proof based on Theorem 5.5 of Shapiro (2007)[17]. To this end, we first verify the conditions of his Proposition 4.2 to show \(\boldsymbol{\tilde{\vartheta}}\) is a consistent estimator of \(\boldsymbol{\vartheta}_{0}\). Note that \(\mathbf{s}_{n}=\mathrm{vecs}(\mathbb{S}_{n})\) is clearly a (strongly) consistent estimator of \(\boldsymbol{\sigma}_{0}=\boldsymbol{\sigma}(\boldsymbol{\vartheta}_{0})\) since \(\mathbb{S}_{n}\) is a (strongly) consistent estimator \(\Sigma_{0}=\Sigma(\boldsymbol{\vartheta}_{0})\). The local identifiability of \(\boldsymbol{\sigma}(\boldsymbol{\vartheta})\) at \(\boldsymbol{\vartheta}_{0}\) implies the uniqueness of the optimal solution (i.e. \(\boldsymbol{\vartheta}_{0}\)), hence his (4.4) is proved since \(\Theta\) is compact, see the last paragraph of his page 238. This establishes the consistency by his Proposition 4.2. It thus follows from his Theorem 5.5, (5.11) and (5.33) that \(\boldsymbol{\tilde{\vartheta}}\) satisfies \[\boldsymbol{\tilde{\vartheta}}=\boldsymbol{\vartheta}_{0}+(\Delta_{0}^{\top} \mathbf{H}_{0}\Delta_{0})^{-1}\Delta_{0}^{\top}\mathbf{H}_{0}(\mathbf{t}_{n}- \boldsymbol{\sigma}_{0})+o_{p}(n^{-1/2}), \tag{3.12}\] where \(\mathbf{t}_{n}=\mathrm{vecs}(\mathbf{T}_{n})\) with \(\mathbf{T}_{n}=n^{-1}\sum_{j=1}^{n}(\mathbf{Z}_{j}-\boldsymbol{\mu}_{0})( \mathbf{Z}_{j}-\boldsymbol{\mu}_{0})^{\top}\). Since \(\mathbf{Z}\) has finite fourth moment, it follows from the central limit theorem, \[\sqrt{n}(\mathbf{t}_{n}-\boldsymbol{\sigma}_{0}){\Longrightarrow}\mathscr{N} (0,\mathrm{Var}(\mathbf{w}(\mathbf{Z}))). \tag{3.13}\] The preceding two displays yield the desired (3.24) and end the proof. Revesz (1976)[15] investigated the approximation of the empirical distribution function in two dimension. Unlike in the case of one dimension in which the Kolmogorov-Smirnov statistic is asymptotically distribution free, the test in two dimension is not asymptotically distribution free, as shown in his Theorem 3, which is quoted in Lemma 3.2 below. Let \(\mathbf{Y}=(Y_{1},Y_{2})^{\top}\) be a random vector, and let \(\mathbf{T}\) be a transformation of \(\mathbf{Y}\) on \(\mathcal{R}^{2}\) such that \(\mathbf{T}\mathbf{Y}\) is uniformly distributed. Consider the transformation given by \(\mathbf{T}(y_{1},y_{2})=(H(y_{1}),G(y_{2}|y_{1}))^{\top}\), where \[H(y_{1})=P(Y_{1}\leq y_{1}),\quad G(y_{2}|y_{1})=P(Y_{2}\leq y_{2}|Y_{1}=y_{1}). \tag{3.14}\] **Lemma 3.2**.: _Let \(\mathbf{Y}_{1}=(Y_{11},Y_{12})\), \(\mathbf{Y}_{2}=(Y_{21},Y_{22})\),... be a sequence of i.i.d. rv's having a common DF \(F(\mathbf{y})=F(y_{1},y_{2})\). 
Suppose that \(F(y_{1},y_{2})\) is absolutely continuous and satisfies_ \[\Big{|}\frac{\partial G(y_{2}|H^{-1}(y_{1}))}{\partial y_{1}}\Big{|}\leq L,\, \Big{|}\frac{\partial^{2}G(y_{2}|H^{-1}(y_{1}))}{\partial y_{1}^{2}}\Big{|} \leq L,\,\mathbf{y}=(y_{1},y_{2})\in\mathcal{R}^{2}, \tag{3.15}\] _for some constant \(L>0\). Then we can define a sequence \(\{\bar{B}_{n}\}\) of Brownian Measures (B.M.) and a Kiefer Measure (K.M.) \(\bar{K}\) such that_ \[\begin{split}&\sup_{\mathbf{y}\in\mathcal{R}^{2}}|\beta_{n}( \mathbf{y})-\bar{B}_{n}(TD_{\mathbf{y}})|=O(n^{-\frac{1}{19}}),\quad a.s.\\ &\sup_{\mathbf{y}\in\mathcal{R}^{2}}|n^{\frac{1}{2}}\beta_{n}( \mathbf{y})-\bar{K}(TD_{\mathbf{y}};n)|=O(n^{\frac{1}{2}\frac{2}{5}}),\quad a.s.\end{split} \tag{3.16}\] _where \(\beta_{n}(\mathbf{y})=n^{\frac{1}{2}}(F_{n}(\mathbf{y})-F(\mathbf{y}))\) with \(F_{n}(\mathbf{y})\) the EDF and \(D_{\mathbf{y}}=[0,y_{1}]\times[0,y_{2}]\)._ **Remark 3.2**.: We shall assume \(\sup_{\mathbf{y}}|\bar{B}_{n}(TD_{\mathbf{y}})|=O(1)\) a.s. for the DF \(F\). Here \(\bar{B}\) and \(\bar{K}\) are the stochastically equivalent versions of the "measures" \(B\) and \(K\), and Revesz (1976)[15] remarked that "All the results here will be formulated and proved in the two-dimensional case only; it appears, however, that the generalization to higher dimensions is possible via the methods of this paper". To generalize the theorem to the d-dimensional case, we keep the definition of the Wiener Process \(W({\bf x})=W(x_{1},...,x_{d})\) to be a separable Gaussian process from Revesz's paper, and define \[B.M.: B(Q_{z})=W(Q_{z})-\lambda(Q_{z})W(1,...,1)\] \[K.M.: K(Q_{z};y)=W(Q_{z},y)-\lambda(Q_{z})W(1,...,1,y)\] where \(\lambda(\cdot)\) is a Lebesgue measure on \(\mathscr{Z}^{d}\). The d-dimensional transformation is defined by Rosenblatt (1952)[16]: Let \({\bf X}=(X_{1},...,X_{d})\) be a random vector with DF \(F(x_{1},...,x_{d})\). Let \({\bf z}=(z_{1},...,z_{d})=T{\bf x}=T(x_{1},...,x_{d})\), where \(T\) is the transformation given by \[z_{1}=P(X_{1}\leq x_{1})=F_{1}(x_{1}),\] \[z_{2}=P(X_{2}\leq x_{2}|X_{1}=x_{1})=F_{2}(x_{2}|x_{1}),\] \[\vdots\] \[z_{d}=P(X_{d}\leq x_{d}|X_{d-1}\leq x_{d-1},...,X_{1}=x_{1})=F_{d}(x_{d}|x_{ d-1},...,x_{1}).\] We generalize the conditions to the d-dimensional case (S). 1. \(F({\bf x})\) is absolutely continuous on \({\bf x}\in\mathcal{R}^{d}\). 2. For all \({\bf x}=(x_{1},...,x_{d})\in\mathcal{R}^{d}\), there exists a constant \(L>0\), \[\Big{|}\frac{\partial^{2}F_{2}(x_{2}|F_{1}^{-1}(x_{1}))}{\partial x_{1}^{2}} \Big{|}\leq L,\quad\Big{|}\frac{\partial F_{2}(x_{2}|F_{1}^{-1}(x_{1}))}{ \partial x_{1}}\Big{|}\leq L,\] 3. \[\Big{|}\frac{\partial^{2}F_{3}(x_{3}|F_{2}^{-1}(x_{2}|F_{1}^{-1}( x_{1})),F_{1}^{-1}(x_{1}))}{\partial x_{i}\partial x_{j}}\Big{|}\leq L,\quad i,j=1,2,\] \[\Big{|}\frac{\partial F_{3}(x_{3}|F_{2}^{-1}(x_{2}|F_{1}^{-1}( x_{1})),F_{1}^{-1}(x_{1}))}{\partial x_{i}}\Big{|}\leq L,\quad i=1,2,\] \[\vdots\] 4. For \(d>2\), \[\Big{|}\frac{\partial^{2}F_{d}(x_{d}|F_{d-1}^{-1}(x_{d}|F_{d-2}^{-1}( x_{d-2}|...),...,F_{1}^{-1}(x_{1}))}{\partial x_{i}\partial x_{j}}\Big{|}\leq L, \quad i,j=1,...,d,\] \[\Big{|}\frac{\partial F_{d}(x_{d}|F_{d-1}^{-1}(x_{d}|F_{d-2}^{-1}( x_{d-2}|...),...,F_{1}^{-1}(x_{1}))}{\partial x_{i}}\Big{|}\leq L,\quad i=1,...,d.\] **Theorem 3.1**.: _Let \({\bf X}_{1}=(X_{11},...,X_{1d})\), \({\bf X}_{2}=(X_{21},...,X_{2d})\),... be a sequence of i.i.d. rv's having a common distribution function \(F({\bf x})\). Assume (S). 
Then we _can define a sequence \(\{\bar{B}_{n}\}\) of Brownian Measures (B.M.) and a Kiefer Measure (K.M.) \(\bar{K}\) such that almost surely,_ \[\begin{split}&\sup_{\mathbf{x}\in\mathcal{R}^{d}}|\beta_{n}( \mathbf{x})-\bar{B}_{n}(TD_{\mathbf{x}})|=O(n^{-\frac{1}{19}}),\\ &\sup_{\mathbf{x}\in\mathcal{R}^{d}}|n^{\frac{1}{2}}\beta_{n}(x)- \bar{K}(TD_{\mathbf{x}};n)|=O(n^{\frac{1}{2}\frac{2}{5}}),\end{split} \tag{3.17}\] _where \(\beta_{n}(\mathbf{x})=n^{\frac{1}{2}}(F_{n}(\mathbf{x})-F(\mathbf{x}))\) with \(F_{n}(\mathbf{x})\) the EDF based on the sample \(\mathbf{X}_{1}\),..., \(\mathbf{X}_{n}\), and \(D_{\mathbf{x}}=[0,x_{1}]\times...\times[0,x_{d}]\)._ We need a property of U-statistics. Let \(\boldsymbol{\xi}_{1},\ldots,\boldsymbol{\xi}_{n}\) be i.i.d. rv taking values in a measurable space \(\mathcal{S}\). Let \(\mathbf{h}\) be a measurable function from \(\mathcal{S}^{2}\) to \(\mathcal{R}^{m}\) which is symmetric, i.e., \(\mathbf{h}(\mathbf{x},\mathbf{y})=\mathbf{h}(\mathbf{y},\mathbf{x}),\mathbf{ x},\mathbf{y}\in\mathcal{S}\). A multivariate U-statistic (of order 2) with kernel \(\mathbf{h}\) is defined as \[\mathbf{U}_{n}(\mathbf{h})=\binom{n}{2}^{-1}\sum_{1\leq i<j\leq n}\mathbf{h}( \boldsymbol{\xi}_{i},\boldsymbol{\xi}_{j}).\] Assume that \(\mathbf{h}\) is square-integrable. Let \(\boldsymbol{\mu}(\mathbf{h})=E(\mathbf{h}(\boldsymbol{\xi}_{1},\boldsymbol{ \xi}_{2}))\). Recall that a kernel \(\mathbf{k}\) is _degenerate_ if \(E(\mathbf{k}(\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2})|\boldsymbol{\xi}_{2})=0\) a.s. Let \(\tilde{\mathbf{h}}(\mathbf{x})=E(\mathbf{h}(\mathbf{x},\boldsymbol{\xi}_{2}))\), and \[\mathbf{h}^{*}(\mathbf{x},\mathbf{y})=\mathbf{h}(\mathbf{x},\mathbf{y})- \bar{\mathbf{h}}(\mathbf{x})-\bar{\mathbf{h}}(\mathbf{y})+\boldsymbol{\mu}( \mathbf{h}).\] Then \(\mathbf{h}^{*}\) is a degenerate kernel. Let \(\tilde{\mathbf{h}}=\mathbf{h}-\boldsymbol{\mu}(\mathbf{h})\). Then \[\mathbf{h}(\mathbf{x},\mathbf{y})=\boldsymbol{\mu}(\mathbf{h})+\tilde{\mathbf{ h}}(\mathbf{x})+\tilde{\mathbf{h}}(\mathbf{y})+\mathbf{h}^{*}(\mathbf{x}, \mathbf{y}).\] One thus obtains the Hoeffding decomposition for a multivariate U-statistic, \[\mathbf{U}_{n}(\mathbf{h})=\boldsymbol{\mu}(\mathbf{h})+\frac{2}{n}\sum_{j=1} ^{n}\tilde{\mathbf{h}}(\boldsymbol{\xi}_{j})+\mathbf{U}_{n}(\mathbf{h}^{*})=: \boldsymbol{\mu}(\mathbf{h})+\hat{\mathbf{U}}_{n}(\mathbf{h})+\mathbf{U}_{n}( \mathbf{h}^{*}),\quad a.s. \tag{3.18}\] Let \(\mathbf{k}\) be a degenerate kernel with \(E(\|\mathbf{k}(\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2})\|^{2})<\infty\). For \(i<j,k<l\), one has \(E(\mathbf{k}(\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{j})\mathbf{k}(\boldsymbol{ \xi}_{k},\boldsymbol{\xi}_{l})^{\top})=E(\mathbf{k}(\boldsymbol{\xi}_{1}, \boldsymbol{\xi}_{2})^{\otimes 2})\) if \(i=l,j=l\), and is equal to zero otherwise. Thus \[E(\mathbf{U}_{n}(\mathbf{k})^{\otimes 2})=\binom{n}{2}^{-1}E(\mathbf{k}( \boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2})^{\otimes 2}). \tag{3.19}\] It is easy to see \(E(\mathbf{h}^{*}(\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2})^{\otimes 2})\preceq E( \mathbf{h}(\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2})^{\otimes 2})\). Thus we prove **Lemma 3.3**.: _Suppose that \(\mathbf{h}\) is a kernel with \(E(\|\mathbf{h}(\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2})\|^{2})<\infty\). Then_ \[\mathbf{U}_{n}(\mathbf{h})-\boldsymbol{\mu}(\mathbf{h})-\hat{\mathbf{U}}_{n}( \mathbf{h})=O_{p}(n^{-1}\sqrt{E(\|\mathbf{h}(\boldsymbol{\xi}_{1},\boldsymbol{ \xi}_{2})\|^{2})}).\] We need a Lipschitz-type property. 1. 
Let \(\tilde{\boldsymbol{\vartheta}}^{(i)}\) be the estimator based on the observations with \(\mathbf{Z}_{i}\) left out. Assume that there is a constant \(L_{0}\) such that \[\max_{i}\|\tilde{\boldsymbol{\vartheta}}-\tilde{\boldsymbol{\vartheta}}^{(i)} \|\leq L_{0}/n.\] (3.20) Let \(\tilde{\boldsymbol{\vartheta}}^{(ij)}\) be the estimator based on the observations with \(\mathbf{Z}_{i},\mathbf{Z}_{j}\) left out. Applying (L) repeatedly, one has for some constant \(L_{0}^{\prime}\), \[\max_{ij}\|\tilde{\boldsymbol{\vartheta}}-\tilde{\boldsymbol{\vartheta}}^{(ij) }\|\leq L_{0}^{\prime}/n. \tag{3.21}\] Let \(\mathbf{v}_{n}(\mathbf{z}_{1})=\boldsymbol{\Phi}_{m_{n}}(F(\varepsilon_{1})) \otimes\boldsymbol{\Phi}_{m_{n}}(G(\mathbf{x}_{1}))+2(\mathbf{h}_{1,\mathbf{A} }(\mathbf{z}_{1})+\mathbf{h}_{1,\mathbf{B}}(\mathbf{z}_{1}))\), where \[\mathbf{h}_{1,\mathbf{A}}(\mathbf{z}_{1}) =E(\dot{\boldsymbol{\Phi}}_{m_{n}}(F(\varepsilon_{2}))\otimes \boldsymbol{\Phi}_{m_{n}}(G(\mathbf{X}_{2}))(\mathbf{1}[\varepsilon_{1}\leq \varepsilon_{2}]-F(\varepsilon_{2})|\mathbf{Z}_{1}=\mathbf{z}_{1}),\] \[\mathbf{h}_{1,\mathbf{B}}(\mathbf{z}_{1}) =E(\boldsymbol{\Phi}_{m_{n}}(F(\varepsilon_{2}))\otimes\dot{ \boldsymbol{\Phi}}_{m_{n}}(G(\mathbf{X}_{2}))(\mathbf{1}[\mathbf{x}_{1}\leq \mathbf{X}_{2}]-G(\mathbf{X}_{2}))).\] **Theorem 3.2**.: _Suppose that the assumptions in Lemma 3.1 hold. Assume (L), (S) and the assumptions in Remark 3.1 and Remark 3.2. Suppose that \(\varepsilon\) has a bounded density. Suppose that \(\mathbf{W}_{n2}=E(\boldsymbol{\Phi}_{m_{n}}(G(\mathbf{X}))^{\otimes 2})\) is regular and that \(\int\mathbf{v}\mathbf{v}^{\top}\,dQ\) is nonsingular. If both \(m_{n}\) and \(n\) tend to infinity such that \(m_{n}^{12}/n=o(1)\), then \(\hat{\mathbf{s}}_{n}\) satisfies the stochastic expansion,_ \[\hat{\mathbf{s}}_{n}=\mathbf{s}_{n}-\mathbf{c}\operatorname{Var}(\mathbf{v}( \mathbf{Z}))^{-1}\bar{\mathbf{v}}+o_{p}(n^{-1/2}), \tag{3.22}\] _where \(\mathbf{c}=E\big{(}\mathbf{w}(\mathbf{Z})\otimes\mathbf{v}^{\top}(\mathbf{Z} )\big{)}\). Thus, with \(\mathbf{D}=\operatorname{Var}(\mathbf{w}(\mathbf{Z}))-\mathbf{c}\operatorname {Var}(\mathbf{v}(\mathbf{Z}))^{-1}\mathbf{c}^{\top}\),_ \[\sqrt{n}(\hat{\mathbf{s}}_{n}-\boldsymbol{\sigma}(\boldsymbol{\vartheta}_{0}) ){\Longrightarrow}\mathscr{N}(0,\,\mathbf{D}), \tag{3.23}\] _As a consequence, \(\hat{\boldsymbol{\vartheta}}\) given in (1.10) satisfies_ \[\sqrt{n}(\hat{\boldsymbol{\vartheta}}-\boldsymbol{\vartheta}_{0}){ \Longrightarrow}\mathscr{N}(0,\mathbf{V}), \tag{3.24}\] _where \(\mathbf{V}=(\Delta_{0}^{\top}\mathbf{H}_{0}\Delta_{0})^{-1}\Delta_{0}^{\top} \mathbf{H}_{0}\mathbf{DH}_{0}\Delta_{0}(\Delta_{0}^{\top}\mathbf{H}_{0}\Delta_ {0})^{-1}\)._ Proof of Theorem 3.2. We apply Theorem 2.4 with \(\boldsymbol{\psi}(\mathbf{z})=\operatorname{vecs}((\mathbf{z}-\boldsymbol{ \mu})^{\otimes 2})\). Write \(m=m_{n}\). As \(\hat{\mathbf{u}}_{n}(\mathbf{Z}_{j})=\hat{\mathbf{g}}(\mathbf{Z}_{j})= \boldsymbol{\Phi}_{m}(F_{n}(\hat{\varepsilon}))\otimes\boldsymbol{\Phi}_{m}(G_ {n}(\mathbf{X}))\) and \(m^{7}/n=o(1)\), \[\max_{1\leq j\leq n}\|\mathbf{u}_{n}(\mathbf{Z}_{j})\|+\max_{1\leq j\leq n}\| \hat{\mathbf{u}}_{n}(\mathbf{Z}_{j})\|\leq 4m^{2}=o(m^{-3/2}n^{1/2}).\] This shows (2.18). 
Since \(\mathbf{W}_{n}=E(\mathbf{u}_{n}(\mathbf{Z})\mathbf{u}_{n}(\mathbf{Z})^{\top} )=\mathbf{I}_{m}\otimes\mathbf{W}_{n2}\) and \(\bar{\mathbf{W}}_{n}=\frac{1}{n}\sum_{j=1}^{n}\mathbf{u}_{n}(\mathbf{Z}_{j}) \mathbf{u}_{n}(\mathbf{Z}_{j})^{\top}\), it follows that \(\mathbf{W}_{n}\) is regular by the regularity of \(\mathbf{W}_{n2}\), and that \(|\bar{\mathbf{W}}_{n}-\mathbf{W}_{n}|_{o}=O_{p}(m^{2}n^{-1/2})\) as \[E|\bar{\mathbf{W}}_{n}-\mathbf{W}_{n}|_{o}^{2} \leq E\|\bar{\mathbf{W}}_{n}-\mathbf{W}_{n}\|^{2}=\operatorname{ trace}(E(\bar{\mathbf{W}}_{n}-\mathbf{W}_{n})^{\otimes 2})\] \[\leq n^{-1}E\|\mathbf{u}_{n}(\mathbf{Z})\|^{4}\leq m^{4}n^{-1}.\] One verifies that there exists some constant \(c_{0}>0\) such that for all \(t\). \[\|\boldsymbol{\Phi}_{m}(t)\|\leq c_{0}m^{1/2},\quad\|\dot{\boldsymbol{\Phi}}_{m} (t)\|\leq c_{0}m^{3/2},\quad\|\ddot{\boldsymbol{\Phi}}_{m}(t)\|\leq c_{0}m^{5/2}. \tag{3.25}\] By the MVT, one thus has \(|\hat{\mathbf{W}}_{n}-\bar{\mathbf{W}}|_{o}=O_{p}(m^{5}n^{-1/2}).\) Taken together one proves \(|\hat{\mathbf{W}}_{n}-\mathbf{W}_{n}|_{o}=o_{p}(m^{-1})\) as \(m^{12}/n=o(1),\) yielding (2.19). Moreover, it is not difficult to verify that \(\mathbf{U}_{n}=\mathbf{W}_{n}^{-1/2}\int\mathbf{v}_{n}\mathbf{v}_{n}^{\top}\,dQ \mathbf{W}_{n}^{-\top/2}=O(1).\) Write the left-hand-side average of (2.20) as \(\mathbf{J}_{n}+\mathbf{K}_{n}-E(\mathbf{J}_{n}+\mathbf{K}_{n}),\) where \[\mathbf{J}_{n}=\frac{1}{n}\sum_{j=1}^{n}\boldsymbol{\psi}(\mathbf{Z}_{j}) \otimes(\hat{\mathbf{u}}_{n}(\mathbf{Z}_{j})-\mathbf{u}_{n}(\mathbf{Z}_{j})),\] Note first that \[E(\|\mathbf{K}_{n}\|^{2})\leq n^{-1}E(|\boldsymbol{\psi}(\mathbf{Z})\|^{2}\| \mathbf{u}_{n}(\mathbf{Z})\|^{2})=O(m^{4}n^{-1}). \tag{3.26}\] We shall show next \[E(\|\mathbf{J}_{n}\|^{2})=O(m^{4}n^{-1}). \tag{3.27}\] Taken together we prove (2.20) as \(m^{5}/n=o(1).\) To show (3.27), using the inequality \(\|\mathbf{A}\otimes\mathbf{B}\|\leq\|\mathbf{A}\|\,\|\mathbf{B}\|\) and by (3.25), we get \[\|\hat{\mathbf{u}}_{n}(\mathbf{Z}_{j})-\mathbf{u}_{n}(\mathbf{Z}_ {j})\| \leq\|\mathbf{\Phi}_{m}(F_{n}(\hat{\varepsilon}_{j}))-\mathbf{ \Phi}_{m}(F(\varepsilon_{j}))\|\cdot\|\mathbf{\Phi}_{m}(G_{n}(\mathbf{X}_{j}))\|\] \[\quad+\|\mathbf{\Phi}_{m}(F(\varepsilon_{j})\|\cdot\|\mathbf{ \Phi}_{m}(G_{n}(\mathbf{X}_{j}))-\mathbf{\Phi}_{m}(G(\mathbf{X}_{j}))\|\] \[\leq c_{0}m^{2}(|F_{n}(\hat{\varepsilon}_{j})-F(\varepsilon_{j})| +|G_{n}(\mathbf{X}_{j})-G(\mathbf{X}_{j})|).\] Let \(D_{n}=\sup_{t}|F_{n}(t)-F(t)|=O_{p}(n^{-1/2})\) (Kolmogorov-Simirnov's statistic). As \(F\) has a bounded density (by \(c_{f}\)), we have \[|F_{n}(\hat{\varepsilon}_{j})-F(\varepsilon_{j})|\leq D_{n}+|F(\hat{ \varepsilon}_{j})-F(\varepsilon_{j})|\leq D_{n}+c_{f}|\hat{\varepsilon}_{j}- \varepsilon_{j}|, \tag{3.28}\] By (3.33) below and Remark 3.1, we thus obtain \[\frac{1}{n}\sum_{j=1}^{n}\|\hat{\mathbf{u}}(\mathbf{Z}_{j})-\mathbf{u}( \mathbf{Z}_{j})\|^{2}=O(m^{4}/n). \tag{3.29}\] Therefore (3.27) follows from \[E(\|\mathbf{J}_{n}\|^{2})\leq E(\|\boldsymbol{\psi}(\mathbf{Z})\|^{2})\frac{1 }{n}\sum_{j=1}^{n}E(\|\hat{\mathbf{u}}(\mathbf{Z}_{j})-\mathbf{u}(\mathbf{Z}_ {j})\|^{2})=O(m^{4}/n). \tag{3.30}\] We shall now show (2.21)-(2.22). 
Note \[\begin{split}&\mathbf{\Phi}_{m}(F_{n}(\hat{\varepsilon}_{j}))= \mathbf{\Phi}_{m}(F(\varepsilon_{j}))+\dot{\mathbf{\Phi}}_{m}(F(\varepsilon_ {j}))(F_{n}(\hat{\varepsilon}_{j})-F(\varepsilon_{j}))+\mathbf{R}_{1j}\\ &\mathbf{\Phi}_{m}(G_{n}(\mathbf{X}_{j}))=\mathbf{\Phi}_{m}(G( \mathbf{X}_{j}))+\dot{\mathbf{\Phi}}_{m}(G(\mathbf{X}_{j}))(G_{n}(\mathbf{X}_{ j})-G(\mathbf{X}_{j}))+\mathbf{R}_{2j},\end{split} \tag{3.31}\] where, by (3.28) and the assumption in Remark 3.1, we have \[\max_{1\leq j\leq n}\|\mathbf{R}_{1j}\|=O_{p}(m^{5/2}/n). \tag{3.32}\] By (3.16) and the assumption in Remark 3.2, we have \[\max_{1\leq j\leq n}|G_{n}(\mathbf{X}_{j})-G(\mathbf{X}_{j})|=O_{p}(n^{-1/2}). \tag{3.33}\] Similarly by Remark 3.2, \[\max_{j}\|\mathbf{R}_{2j}\|=O_{p}(m^{5/2}/n). \tag{3.34}\] By (3.31), \[\frac{1}{n}\sum_{j=1}^{n}\hat{\mathbf{u}}_{n}(\mathbf{Z}_{j})=\frac{1}{n}\sum_ {j=1}^{n}\mathbf{u}_{n}(\mathbf{Z}_{j})+\mathbf{A}+\mathbf{B}+\mathbf{R}, \tag{3.35}\] where \[\mathbf{A} =\frac{1}{n}\sum_{j=1}^{n}\dot{\mathbf{\Phi}}_{m}(F(\varepsilon_{j} ))\otimes\mathbf{\Phi}_{m}(G(\mathbf{X}_{j}))(F_{n}(\hat{\varepsilon}_{j})-F( \varepsilon_{j})),\] \[\mathbf{B} =\frac{1}{n}\sum_{j=1}^{n}\mathbf{\Phi}_{m}(F(\varepsilon_{j})) \otimes\dot{\Phi}_{m}(G(\mathbf{X}_{j}))(G_{n}(\mathbf{X}_{j})-G(\mathbf{X}_ {j})),\] \[\mathbf{R} =\frac{1}{n}\sum_{j=1}^{n}\mathbf{R}_{1j}\otimes\mathbf{\Phi}_{ m}(G_{n}(\mathbf{X}_{j}))+\frac{1}{n}\sum_{j=1}^{n}\mathbf{\Phi}_{m}(F_{n}( \hat{\varepsilon}_{j})\otimes\mathbf{R}_{2j}.\] By (3.32) and (3.34) and the first equality in (3.25), \(\|\mathbf{R}\|=O(m^{3}/n).\) Let \(\mathbf{b}(\mathbf{Z}_{i},\mathbf{Z}_{j})=\mathbf{\Phi}_{m}(F(\varepsilon_{j} ))\otimes\dot{\Phi}_{m}(G(\mathbf{X}_{j}))(\mathbf{1}[\mathbf{X}_{i}\leq \mathbf{X}_{j}]-G(\mathbf{X}_{j})).\) It then follows \(E(\kappa(\mathbf{Z}_{i},\mathbf{Z}_{j}))=0\) for all \(i,j\) from the independence of \(\varepsilon\) and \(\mathbf{X},\) and \(\mathbf{B}\) is approximately a multivariate U-statistic, i.e., \(\mathbf{B}=\mathbf{U}_{n}(\mathbf{h}_{\mathbf{B}})+O(m^{2}/n),\) where \(\mathbf{h}_{\mathbf{B}}(\mathbf{z}_{1},\mathbf{z}_{2})=\frac{1}{2}(\mathbf{b} (\mathbf{z}_{i},\mathbf{z}_{j})+\mathbf{b}(\mathbf{z}_{j},\mathbf{z}_{i})).\) Let \(\mathbf{h}_{1}(\mathbf{z}_{1})=E(\mathbf{h}(\mathbf{z}_{1},\mathbf{Z}_{2})).\) Then \[\mathbf{h}_{1,\mathbf{B}}(\mathbf{z}_{1})=E(\mathbf{\Phi}_{m}(F(\varepsilon_{2 }))\otimes\dot{\Phi}_{m}(G(\mathbf{X}_{2}))(\mathbf{1}[\mathbf{x}_{1}\leq \mathbf{X}_{2}]-G(\mathbf{X}_{2}))).\] By Lemma 3.3, \[\mathbf{B}=\frac{1}{n}\sum_{j=1}^{n}2\mathbf{h}_{1,\mathbf{B}}(\mathbf{Z}_{j} )+O_{p}(m^{2}/n). \tag{3.36}\] Write \(F_{n}(\hat{\varepsilon}_{j})-F(\varepsilon_{j})=(F_{n}(\hat{\varepsilon}_{j}) -F_{n}(\varepsilon_{j}))+(F_{n}(\varepsilon_{j})-F(\varepsilon_{j})),\) giving \(\mathbf{A}=\mathbf{A}_{1}+\mathbf{A}_{2}.\) Likewise, \(\mathbf{h}_{1,\mathbf{A}}(\mathbf{z}_{1})=E(\dot{\mathbf{\Phi}}_{m}(F( \varepsilon_{2}))\otimes\mathbf{\Phi}_{m}(G(\mathbf{X}_{2}))(\mathbf{1}[ \varepsilon_{1}\leq\varepsilon_{2}]-F(\varepsilon_{2})|\mathbf{Z}_{1}=\mathbf{ z}_{1}),\) \[\mathbf{A}_{2}=\frac{1}{n}\sum_{j=1}^{n}2\mathbf{h}_{1,\mathbf{A}}(\mathbf{Z}_ {j})+O_{p}(m^{2}/n). \tag{3.37}\] Using (3.20), one calculates \(E(\|\mathbf{A}_{1}\|^{2})=O(m^{4}/n^{2}).\) Hence \[\mathbf{A}_{1}=\frac{1}{n}\sum_{j=1}^{n}\dot{\Phi}_{m}(F(\varepsilon_{j})) \otimes\mathbf{\Phi}_{m}(G(\mathbf{X}_{j}))(F_{n}(\hat{\varepsilon}_{j})-F_{n }(\varepsilon_{j}))=O_{p}(m^{2}/n). 
\tag{3.38}\] This and (3.37) prove \[\mathbf{A}=\frac{1}{n}\sum_{j=1}^{n}2\mathbf{h}_{1,\mathbf{A}}(\mathbf{Z}_{j})+O_{p}(m^{2}/n). \tag{3.39}\] Taken together, we have shown that (2.22) holds with \(\mathbf{v}=\mathbf{u}_{n}+2(\mathbf{h}_{1,\mathbf{A}}+\mathbf{h}_{1,\mathbf{B}})\) as \(m_{n}^{7}/n=o(1)\). This also proves (2.21). ## 4 Simulation results We used the R package sem to carry out the simulations based on the SEM (3.4) with \(\beta=1\). The details of the package can be found in Fox (2006) [6]. In LISREL notation and using Fig. 1, the SEM can be written as \[y_{1}=\lambda_{1}x_{2}+\epsilon_{1},\quad y_{2}=y_{1}+\lambda_{3}x_{1}+\lambda_{2}x_{2}+\epsilon_{2}. \tag{4.1}\] The parameters to be estimated include all the regression coefficients \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\), and the measurement-error variances \(\psi_{1}=Var(\epsilon_{1})\) and \(\psi_{2}=Var(\epsilon_{2})\). We present the simulation results of estimating the coefficients \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) with true parameter values \(\lambda_{1}=1\), \(\lambda_{2}=-1\) and \(\lambda_{3}=0.5\). For \(n=30,50,100\) and based on 50 repetitions, we calculated the averages and medians of biases from the usual and the EL-weighted MDF estimators \(\lambda_{n,k}\) and \(\tilde{\lambda}_{k}\) of the true parameter value \(\lambda_{k}\), \(k=1,2,3\), the averages and medians of the variances of the usual MDF and the EL-weighted estimators \(v_{n,k}\) and \(\tilde{v}_{k}\), \(k=1,2,3\), and the ratios \(r_{1,k}=\tilde{v}_{k}/v_{n,k}\) and \(r_{2,k}=median(\tilde{v}_{k})/median(v_{n,k})\). The discrepancy function used is the ML discrepancy function given in (1.3). A ratio value less than one indicates a variance reduction of the EL-weighted estimator over the usual estimator. The results are reported in Tables 1-2. For Table 1, the side information is the _independence_ of \(\mathbf{X}\) and \(\boldsymbol{\epsilon}\), utilized via the constraint functions given in (3.6) for \(m=1,3,5\), where \(\boldsymbol{\epsilon}\) was generated from the normal mixture \(0.9*N(0,\mathbf{I}_{2})+0.1*N(0,5\mathbf{I}_{2})\), and \(\mathbf{X}\) from the bivariate exponential \(\mathrm{biexp}(1,3)\). For Table 2, the side information is _known marginal medians_ of \(\mathbf{X}\), utilized via the constraint functions given in (3.7), where \(\mathbf{X}\) was generated from the bivariate exponential with scale parameters \((\gamma_{1},\gamma_{2})\). One can see that the efficiency gain is substantial (around 40%). The ratios were stable, with a slightly decreasing trend as \(n\) increases, and larger values of the scale parameter gave larger efficiency gains. ## 5 Proofs In this section, we first give two useful general theorems. As applications, we prove the theorems presented in Section 2. Let \(\mathbf{x}_{1},...,\mathbf{x}_{n}\) be \(m\)-dimensional vectors. Set \[\bar{\mathbf{x}}=\frac{1}{n}\sum_{j=1}^{n}\mathbf{x}_{j},\quad x_{*}=\max_{1\leq j\leq n}\|\mathbf{x}_{j}\|,\quad\mathbb{S}=\frac{1}{n}\sum_{j=1}^{n}\mathbf{x}_{j}\mathbf{x}_{j}^{\top},\] _Simulated efficiency gain of the EL-weighted estimators in the SEM (3.4) using the side information in (3.6) of independence of \(\varepsilon\) and \(\mathbf{X}\) for a few values of \(n\) and \(m\). \(\bar{b}_{n,k}\) (\(m(b_{n,k})\)) and \(\bar{\bar{b}}_{k}\) (\(m(\bar{b}_{k})\)) are the averages (medians) of biases from the usual and the EL-weighted MDF estimators respectively. 
\(\bar{v}_{n,k}\) (\(m(v_{n,k})\)) and \(\bar{\bar{v}}_{k}\) (\(m(\bar{v}_{k})\)) are the averages (medians) of the variances of the usual and the EL-weighted MDF estimators respectively. \(r_{1,k}\) (\(r_{2,k}\)) are the ratios of the averages (medians) of the variances of the EL-weighted estimators to the usual ones. \(\boldsymbol{\epsilon}\sim 0.9*N(0,\mathbf{I}_{2})+0.1*N(0,5\mathbf{I}_{2})\), \(\mathbf{X}\sim\mathrm{biexp}(1,3)\)._ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{11}{|c|}{\(n=30\)} \\ \hline \(m\) & \(\lambda_{k}\) & \(\bar{b}_{n,k}\) & \(\bar{\bar{b}}_{k}\) & \(m(b_{n,k})\) & \(m(\bar{b}_{k})\) & \(\bar{v}_{n,k}\) & \(\bar{\bar{v}}_{k}\) & \(r_{1,k}\) & \(m(v_{n,k})\) & \(m(\bar{v}_{k})\) & \(r_{2,k}\) \\ \hline \multirow{3}{*}{1} & \(\lambda_{1}\) & 0.2181 & -0.2218 & 0.2876 & -0.2437 & 2.4766 & 2.1962 & 0.8868 & 1.8529 & 1.3835 & 0.7467 \\ & \(\lambda_{2}\) & -0.0783 & -0.0544 & -0.1018 & -0.0586 & 2.1048 & 1.9226 & 0.9134 & 1.6773 & 1.3444 & 0.8015 \\ & \(\lambda_{3}\) & 0.0333 & -0.1002 & -0.0855 & -0.1314 & 0.5620 & 0.5660 & 1.0071 & 0.3729 & 0.3550 & 0.9520 \\ \hline \multirow{3}{*}{3} & \(\lambda_{1}\) & -0.0893 & -0.0688 & 0.0027 & -0.0607 & 2.3394 & 1.7818 & 0.7616 & 1.5748 & 1.0300 & 0.6541 \\ & \(\lambda_{2}\) & 0.1005 & -0.1172 & -0.3671 & -0.3255 & 2.3664 & 1.8948 & 0.8007 & 1.8519 & 1.3011 & 0.7026 \\ & \(\lambda_{3}\) & 0.1303 & -0.0938 & 0.0153 & -0.0941 & 0.6303 & 0.5588 & 0.8866 & 0.5221 & 0.4538 & 0.8692 \\ \hline \multirow{3}{*}{5} & \(\lambda_{1}\) & 0.3742 & -0.0233 & 0.2705 & 0.0381 & 1.7745 & 1.5109 & 0.8515 & 1.0320 & 0.8891 & 0.8615 \\ & \(\lambda_{2}\) & -0.1992 & -0.1774 & -0.1107 & -0.1846 & 2.2048 & 1.5480 & 0.7021 & 1.0474 & 0.7961 & 0.7601 \\ & \(\lambda_{3}\) & -0.0555 & -0.1333 & -0.0797 & -0.0429 & 0.4901 & 0.3838 & 0.7831 & 0.2615 & 0.2143 & 0.8195 \\ \hline \multicolumn{11}{|c|}{\(n=50\)} \\ \hline \(m\) & \(\lambda_{k}\) & \(\bar{b}_{n,k}\) & \(\bar{\bar{b}}_{k}\) & \(m(b_{n,k})\) & \(m(\bar{b}_{k})\) & \(\bar{v}_{n,k}\) & \(\bar{\bar{v}}_{k}\) & \(r_{1,k}\) & \(m(v_{n,k})\) & \(m(\bar{v}_{k})\) & \(r_{2,k}\) \\ \hline \multirow{3}{*}{1} & \(\lambda_{1}\) & 0.0579 & -0.0702 & 0.0963 & -0.0546 & 1.4331 & 1.2840 & 0.8960 & 1.2614 & 0.9242 & 0.7327 \\ & \(\lambda_{2}\) & -0.1007 & -0.1425 & -0.1802 & -0.1571 & 1.3882 & 1.2730 & 0.9170 & 1.1588 & 1.0470 & 0.9035 \\ & \(\lambda_{3}\) & -0.2388 & -0.1862 & -0.1546 & -0.1966 & 0.3278 & 0.3130 & 0.9549 & 0.2709 & 0.2552 & 0.9420 \\ \hline \multirow{3}{*}{3} & \(\lambda_{1}\) & -0.0217 & -0.0245 & 0.0697 & -0.0398 & 1.2074 & 0.9621 & 0.7968 & 1.0255 & 0.7716 & 0.7524 \\ & \(\lambda_{2}\) & 0.1368 & -0.1279 & 0.1578 & -0.1023 & 1.3264 & 1.0642 & 0.8023 & 1.0684 & 0.9074 & 0.8493 \\ & \(\lambda_{3}\) & 0.0039 & -0.0187 & -0.0181 & -0.0118 & 0.2586 & 0.2127 & 0.8225 & 0.2367 & 0.1823 & 0.7702 \\ \hline \multirow{3}{*}{5} & \(\lambda_{1}\) & 0.1038 & -0.0706 & 0.0412 & -0.0716 & 1.1384 & 0.9720 & 0.8538 & 1.0185 & 0.8304 & 0.8153 \\ & \(\lambda_{2}\) & 0.2014 & -0.2178 & 0.1338 & -0.1635 & 1.3568 & 1.1220 & 0.8269 & 1.1851 & 0.9699 & 0.8184 \\ & \(\lambda_{3}\) & -0.0049 & -0.0213 & 0.0461 & -0.0224 & 0.3525 & 0.3362 & 0.9538 & 0.2649 & 0.2500 & 0.9438 \\ \hline \multicolumn{11}{|c|}{\(n=100\)} \\ \hline \(m\) & \(\lambda_{k}\) & \(\bar{b}_{n,k}\) & \(\bar{\bar{b}}_{k}\) & \(m(b_{n,k})\) & \(m(\bar{b}_{n,k})\) & \(\bar{v}_{n,k}\) & \(\bar{\bar{v}}_{k}\) & \(r_{1,k}\) & \(m(v_{n,k})\) & \(m(\bar{v}_{k})\) & \(r_{2,k}\) \\ \hline \multirow{3}{*}{1} & \(\lambda_{1}\) & -0.0780 & 
-0.0863 & -0.0389 & -0.0345 & 0.5978 & 0.5240 & 0.8765 & 0.5418 & 0.4167 & 0.7691 \\ & \(\lambda_{2}\) & 0.0550 & -0.0943 & 0.0390 & -0.0564 & 0.6421 & 0.5687 & 0.8857 & 0.6456 & 0.4629 & 0.7170 \\ & \(\lambda_{3}\) & 0.0102 & -0.0581 & 0.0033 & -0.0133 & 0.1508 & 0.1472 & 0.9761 & 0.1365 & 0.1281 & 0.9385 \\ \hline \multirow{3}{*}{3} & \(\lambda_{1}\) & 0.1536 & -0.0925 & 0.2299 & -0.1106 & 0.5943 & 0.4759 & 0 \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{11}{|c|}{\(n=30,m=1\)} \\ \hline \((\gamma_{1},\gamma_{2})\) & \(\lambda_{k}\) & \(\bar{b}_{n,k}\) & \(\bar{\bar{b}}_{k}\) & \(m(b_{n,k})\) & \(m(\bar{b}_{k})\) & \(\bar{v}_{n,k}\) & \(\bar{\bar{v}}_{k}\) & \(r_{1,k}\) & \(m(v_{n,k})\) & \(m(\tilde{v}_{k})\) & \(r_{2,k}\) \\ \hline \((2,2)\) & \(\lambda_{1}\) & -0.0720 & -0.0942 & -0.1217 & -0.1600 & 0.1731 & 0.1531 & 0.8845 & 0.1478 & 0.1333 & 0.9019 \\ \((2,2)\) & \(\lambda_{2}\) & 0.0413 & 0.0298 & -0.0104 & 0.0495 & 0.1934 & 0.1657 & 0.8568 & 0.1796 & 0.1425 & 0.7934 \\ & \(\lambda_{3}\) & 0.0453 & 0.0365 & 0.0885 & 0.0687 & 0.2067 & 0.1808 & 0.8747 & 0.1841 & 0.1640 & 0.8908 \\ \hline \((2,3)\) & \(\lambda_{1}\) & 0.0424 & 0.0087 & -0.0198 & -0.0177 & 0.3424 & 0.2126 & 0.6209 & 0.3016 & 0.1918 & 0.6359 \\ \((2,3)\) & \(\lambda_{2}\) & -0.0937 & -0.0527 & -0.1107 & -0.0952 & 0.4198 & 0.2852 & 0.6794 & 0.3813 & 0.2478 & 0.6499 \\ & \(\lambda_{3}\) & -0.0600 & -0.1512 & -0.1224 & -0.1159 & 0.2004 & 0.1819 & 0.9077 & 0.1626 & 0.1518 & 0.9336 \\ \hline \((2,4)\) & \(\lambda_{1}\) & 0.0209 & 0.0535 & -0.0327 & -0.1286 & 0.7113 & 0.2411 & 0.3390 & 0.6064 & 0.2367 & 0.3903 \\ \((2,4)\) & \(\lambda_{2}\) & 0.0860 & 0.1354 & 0.1475 & 0.1280 & 0.6681 & 0.3136 & 0.4694 & 0.5949 & 0.3068 & 0.5157 \\ & \(\lambda_{3}\) & 0.0825 & 0.0080 & 0.0064 & -0.1096 & 0.1741 & 0.1917 & 1.1011 & 0.1312 & 0.1411 & 1.0755 \\ \hline \multicolumn{11}{|c|}{\(n=50\)} \\ \hline \((\gamma_{1},\gamma_{2})\) & \(\lambda_{k}\) & \(\bar{b}_{n,k}\) & \(\bar{\bar{b}}_{k}\) & \(m(b_{n,k})\) & \(m(\tilde{b}_{k})\) & \(\bar{v}_{n,k}\) & \(\bar{\tilde{v}}_{k}\) & \(r_{1,k}\) & \(m(v_{n,k})\) & \(m(\tilde{v}_{k})\) & \(r_{2,k}\) \\ \hline \((2,2)\) & \(\lambda_{1}\) & 0.0196 & 0.0318 & 0.0746 & 0.0918 & 0.0874 & 0.0721 & 0.8249 & 0.0797 & 0.0678 & 0.8507 \\ \((2,2)\) & \(\lambda_{2}\) & -0.0547 & -0.0467 & -0.0646 & -0.0760 & 0.1021 & 0.0876 & 0.8580 & 0.0953 & 0.0830 & 0.8709 \\ & \(\lambda_{3}\) & -0.0847 & -0.0766 & -0.0894 & -0.1072 & 0.0997 & 0.0876 & 0.8786 & 0.0877 & 0.0811 & 0.9247 \\ \hline \((2,3)\) & \(\lambda_{1}\) & 0.0059 & 0.0069 & 0.0058 & 0.0237 & 0.2121 & 0.1190 & 0.5611 & 0.1757 & 0.1072 & 0.6101 \\ \((2,3)\) & \(\lambda_{2}\) & -0.0544 & -0.0326 & -0.0447 & -0.0061 & 0.2422 & 0.1489 & 0.6148 & 0.2288 & 0.1299 & 0.5677 \\ & \(\lambda_{3}\) & 0.0422 & 0.0342 & 0.0937 & 0.0515 & 0.1068 & 0.1026 & 0.9607 & 0.0954 & 0.0827 & 0.8669 \\ \hline \((2,4)\) & \(\lambda_{1}\) & 0.1288 & 0.1121 & -0.0007 & 0.0403 & 0.3427 & 0.1501 & 0.4380 & 0.3068 & 0.1376 & 0.4485 \\ \((2,4)\) & \(\lambda_{2}\) & 0.1900 & 0.1183 & 0.0930 & 0.1413 & 0.3781 & 0.1822 & 0.4819 & 0.3453 & 0.1700 & 0.4923 \\ & \(\lambda_{3}\) & -0.0182 & 0.0027 & -0.0527 & 0.0671 & 0.0978 & 0.0994 & 1.0164 & 0.0895 & 0.0867 & 0.9687 \\ \hline \multicolumn{11}{|c|}{\(n=100\)} \\ \hline \((\gamma_{1},\gamma_{2})\) & \(\lambda_{k}\) & \(\bar{b}_{n,k}\) & \(\bar{\bar{b}}_{k}\) & \(m(b_{n,k})\) & \(m(\tilde{b}_{k})\) & \(\bar{v}_{n,k}\) & \(\bar{\tilde{v}}_{k}\) & \(r_{1,k}\) & \(m(v_{n,k})\) & \(m(\tilde{v}_{k})\) & \(r_{2,k}\) 
\\ \hline \((2,2)\) & \(\lambda_{1}\) & 0.0353 & 0.0346 & 0.0460 & 0.0478 & 0.0369 & 0.0326 & 0.8837 & 0.0360 & 0.0326 & 0.9050 \\ \((2,2)\) & \(\lambda_{2}\) & -0.0511 & -0.0585 & -0.0298 & -0.0316 & 0.0428 & 0.0378 & 0.8836 & 0.0423 & 0.0358 & 0.8456 \\ & \(\lambda_{3}\) & -0.0482 & -0.0453 & -0.0882 & -0.0691 & 0.0431 & 0.0390 & 0.90 \[x^{(\nu)}=\sup_{\|\mathbf{u}\|=1}\Big{\|}\frac{1}{n}\sum_{j=1}^{n}(\mathbf{u}^{\top}\mathbf{x}_{j})^{\nu}\Big{\|},\quad\nu=3,4,\] and let \(\lambda\) and \(\Lambda\) denote the smallest and largest eigenvalues of the matrix \(\mathbb{S}\), \[\lambda=\inf_{\|\mathbf{u}\|=1}\mathbf{u}^{\top}\mathbb{S}\mathbf{u}\quad\text{and}\quad\Lambda=\sup_{\|\mathbf{u}\|=1}\mathbf{u}^{\top}\mathbb{S}\mathbf{u}.\] With these we associate the empirical likelihood \[\mathscr{R}=\sup\Big{\{}\prod_{j=1}^{n}n\pi_{j}:\boldsymbol{\pi}\in\mathscr{P}_{n},\ \sum_{j=1}^{n}\pi_{j}\mathbf{x}_{j}=0\Big{\}}.\] Peng and Schick [10] carefully examined the above maximization as a numerical problem and detailed some very useful properties. We quote their Lemma 5.2 below for our application. **Lemma 5.1**.: _The inequality \(\lambda>5\|\bar{\mathbf{x}}\|x_{*}\) implies that there is a unique \(\boldsymbol{\zeta}\) in \(\mathcal{R}^{m}\) satisfying (5.1) to (5.7) below._ \[1+\boldsymbol{\zeta}^{\top}\mathbf{x}_{j}>0,\quad j=1,\ldots,n, \tag{5.1}\] \[\sum_{j=1}^{n}\frac{\mathbf{x}_{j}}{1+\boldsymbol{\zeta}^{\top}\mathbf{x}_{j}}=0, \tag{5.2}\] \[\|\boldsymbol{\zeta}\|\leq\frac{\|\bar{\mathbf{x}}\|}{\lambda-\|\bar{\mathbf{x}}\|x_{*}}, \tag{5.3}\] \[\|\boldsymbol{\zeta}\|x_{*}\leq\frac{\|\bar{\mathbf{x}}\|x_{*}}{\lambda-\|\bar{\mathbf{x}}\|x_{*}}<\frac{1}{4}, \tag{5.4}\] \[\frac{1}{n}\sum_{j=1}^{n}(\boldsymbol{\zeta}^{\top}\mathbf{x}_{j})^{2}=\boldsymbol{\zeta}^{\top}\mathbb{S}\boldsymbol{\zeta}\leq\Lambda\|\boldsymbol{\zeta}\|^{2}\leq\frac{\Lambda\|\bar{\mathbf{x}}\|^{2}}{(\lambda-\|\bar{\mathbf{x}}\|x_{*})^{2}}, \tag{5.5}\] \[\Big{\|}\frac{1}{n}\sum_{j=1}^{n}\Big{(}\frac{\mathbf{r}_{j}}{1+\boldsymbol{\zeta}^{\top}\mathbf{x}_{j}}-\mathbf{r}_{j}+\mathbf{r}_{j}\mathbf{x}_{j}^{\top}\boldsymbol{\zeta}\Big{)}\Big{\|}\leq\Big{\|}\frac{1}{n}\sum_{j=1}^{n}\mathbf{r}_{j}(\boldsymbol{\zeta}^{\top}\mathbf{x}_{j})^{2}\Big{\|}+\frac{4}{3}\frac{1}{n}\sum_{j=1}^{n}\|\mathbf{r}_{j}\|\ |\boldsymbol{\zeta}^{\top}\mathbf{x}_{j}|^{3}, \tag{5.6}\] _for vectors \(\mathbf{r}_{1},\ldots,\mathbf{r}_{n}\) of the same dimension, and_ \[\|\boldsymbol{\zeta}-\mathbb{S}^{-1}\bar{\mathbf{x}}\|^{2}\leq 2\Big{(}\frac{1}{\lambda}+\frac{\Lambda}{9\lambda^{2}}\Big{)}\|\boldsymbol{\zeta}\|^{4}x^{(4)}. 
\tag{5.7}\] Now use the fact that \(\|\mathbf{x}\|=\sup_{\|\mathbf{v}\|=1}\mathbf{v}^{\top}\mathbf{x}\), the Cauchy-Schwarz inequality, (5.3), (5.4) and (5.5) to bound the square of the first term of the right-hand side of (5.6) by \[\frac{1}{n}\sum_{j=1}^{n}(\boldsymbol{\zeta}^{\top}\mathbf{x}_{j})^{4}\sup_{\|\mathbf{v}\|=1}\mathbf{v}^{\top}\Big{(}\frac{1}{n}\sum_{j=1}^{n}\mathbf{r}_{j}\mathbf{r}_{j}^{\top}\Big{)}\mathbf{v}\leq\|\boldsymbol{\zeta}\|^{4}x^{(4)}\Big{|}\frac{1}{n}\sum_{j=1}^{n}\mathbf{r}_{j}\mathbf{r}_{j}^{\top}\Big{|}_{o}\] and the square of the second term by \[\frac{16}{9}x_{*}^{2}\|\boldsymbol{\zeta}\|^{2}\frac{1}{n}\sum_{j=1}^{n}\|{\bf r}_{j}\|^{2}\frac{1}{n}\sum_{j=1}^{n}(\boldsymbol{\zeta}^{\top}{\bf x}_{j})^{4}\leq\frac{16d}{9}(x_{*}\|\boldsymbol{\zeta}\|)^{2}\|\boldsymbol{\zeta}\|^{4}x^{(4)}\Big{|}\frac{1}{n}\sum_{j=1}^{n}{\bf r}_{j}{\bf r}_{j}^{\top}\Big{|}_{o},\] where \(d\) is the dimension of \(\mathbf{r}_{j}\). Combining the above, we obtain \[\Big{\|}\frac{1}{n}\sum_{j=1}^{n}\Big{(}\frac{{\bf r}_{j}}{1+\boldsymbol{\zeta}^{\top}{\bf x}_{j}}-{\bf r}_{j}+{\bf r}_{j}{\bf x}_{j}^{\top}\boldsymbol{\zeta}\Big{)}\Big{\|}^{2}\leq\|\boldsymbol{\zeta}\|^{4}x^{(4)}\Big{|}\frac{1}{n}\sum_{j=1}^{n}{\bf r}_{j}{\bf r}_{j}^{\top}\Big{|}_{o}\big{[}1+\frac{16d}{9}(x_{*}\|\boldsymbol{\zeta}\|)^{2}\big{]}. \tag{5.8}\] We now apply the above results to random vectors. Let \({\bf T}_{n1},\ldots,{\bf T}_{nn}\) be \(m_{n}\)-dimensional random vectors. With these random vectors we associate the empirical likelihood \[\mathscr{R}_{n}=\sup\Big{\{}\prod_{j=1}^{n}n\pi_{j}:\boldsymbol{\pi}\in\mathscr{P}_{n},\ \sum_{j=1}^{n}\pi_{j}{\bf T}_{nj}=0\Big{\}}.\] To study the asymptotic behavior of \(\mathscr{R}_{n}\) we introduce \[T_{n}^{*}=\max_{1\leq j\leq n}\|{\bf T}_{nj}\|,\quad\bar{\bf T}_{n}=n^{-1}\sum_{j=1}^{n}{\bf T}_{nj},\quad T_{n}^{(\nu)}=\sup_{\|{\bf u}\|=1}\Big{\|}\frac{1}{n}\sum_{j=1}^{n}({\bf u}^{\top}{\bf T}_{nj})^{\nu}\Big{\|},\] and the matrix \[\mathbb{S}_{n}=\frac{1}{n}\sum_{j=1}^{n}{\bf T}_{nj}{\bf T}_{nj}^{\top},\] and let \(\lambda_{n}\) and \(\Lambda_{n}\) denote the smallest and largest eigenvalues of \(\mathbb{S}_{n}\), \[\lambda_{n}=\inf_{\|{\bf u}\|=1}{\bf u}^{\top}\mathbb{S}_{n}{\bf u}\quad\text{and}\quad\Lambda_{n}=\sup_{\|{\bf u}\|=1}{\bf u}^{\top}\mathbb{S}_{n}{\bf u}.\] We impose the following conditions (A1)-(A3) on \({\bf T}_{nj}\). (A1) \(T_{n}^{*}=o_{p}(m_{n}^{-3/2}n^{1/2})\). (A2) \(\|\bar{\bf T}_{n}\|=O_{p}(m_{n}^{1/2}n^{-1/2})\). (A3) There is a sequence of regular \(m_{n}\times m_{n}\) dispersion matrices \({\bf W}_{n}\) such that \[|\mathbb{S}_{n}-{\bf W}_{n}|_{o}=o_{p}(m_{n}^{-1}).\] We impose the following conditions (B1)-(B2) on \(\boldsymbol{\psi}\) and \({\bf T}_{nj}\). (B1) \(n^{-1}\sum_{j=1}^{n}\big{(}\boldsymbol{\psi}(Z_{j})\otimes{\bf T}_{nj}^{\top}-E\big{(}\boldsymbol{\psi}(Z_{j})\otimes{\bf T}_{nj}^{\top}\big{)}\big{)}=o_{p}(m_{n}^{-1/2})\). (B2) There exists some measurable function \(\boldsymbol{\chi}\) from \(\mathcal{Z}\) into \(\mathcal{R}^{d}\) such that \(\int\boldsymbol{\chi}\,dQ=0\), \(\int\|\boldsymbol{\chi}\|^{2}\,dQ<\infty\) and \[\frac{1}{n}\sum_{i=1}^{n}\big{(}{\bf A}_{n}{\bf W}_{n}^{-1}{\bf T}_{ni}-\boldsymbol{\chi}(Z_{i})\big{)}=o_{p}(n^{-1/2}),\] where \({\bf A}_{n}:=n^{-1}\sum_{j=1}^{n}E\big{(}\boldsymbol{\psi}(Z_{j})\otimes{\bf T}_{nj}^{\top}\big{)}\). Let us first consider the case that \(m_{n}\) tends to infinity with the sample size. We have the following result. **Theorem 5.1**.: _Suppose (A1)-(A3) and (B1)-(B2) hold. 
Then there exists unique \(\boldsymbol{\zeta}_{n}\) which satisfies_ \[1+\boldsymbol{\zeta}_{n}^{\top}\mathbf{T}_{nj}>0,\quad\frac{1}{n}\sum_{j=1}^{n }\frac{\mathbf{T}_{nj}}{1+\boldsymbol{\zeta}_{n}^{\top}\mathbf{T}_{nj}}=0, \tag{5.9}\] _such that as \(m_{n}\) tends to infinity,_ \[\boldsymbol{\theta}_{n}=:\frac{1}{n}\sum_{j=1}^{n}\frac{\boldsymbol{\psi}(Z_{j })}{1+\boldsymbol{\zeta}_{n}^{\top}\mathbf{T}_{nj}}=\bar{\boldsymbol{\psi}}- \bar{\boldsymbol{\chi}}+o_{p}(n^{-1/2}), \tag{5.10}\] _where \(\bar{\boldsymbol{\chi}}=n^{-1}\sum_{j=1}^{n}\boldsymbol{\chi}(Z_{j})\) with \(\boldsymbol{\chi}\) given in (B2)._ Proof.: It follows from (A1) and (A2) that \(T_{n}^{*}\|\bar{\mathbf{T}}_{n}\|=o_{p}(1)\), and from (A3) that there are positive numbers \(a<b\) such that \(P(a\leq\lambda_{n}\leq\Lambda_{n}\leq b)\to 1\). Thus all three conditions imply that the probability of the event \(\{\lambda_{n}>5T_{n}^{*}\|\bar{\mathbf{T}}_{n}\|\}\) tends to one. Consequently, by Lemma 5.1, there exists an \(m_{n}\)-dimensional random vector \(\boldsymbol{\zeta}\) which is uniquely determined on this event by the properties (5.1)-(5.8) including (5.9). To prove (5.10), we apply (5.8) with \(\mathbf{r}_{j}=\boldsymbol{\psi}(Z_{j})\). Note first that \[T_{n}^{(4)}\leq\Lambda_{n}(T_{n}^{*})^{2}. \tag{5.11}\] This, (5.3) and (A1)-(A2) imply that the right side of (5.8) is bounded by \[\Big{(}1+\frac{d}{9}\Big{)}\frac{\|\bar{\mathbf{T}}_{n}\|^{4}}{(\lambda_{n}- \|\bar{\mathbf{T}}_{n}\|T_{n}^{*})^{4}}\Lambda_{n}(T_{n}^{*})^{2}\big{|}\frac{ 1}{n}\sum_{j=1}^{n}\boldsymbol{\psi}(Z_{j})\boldsymbol{\psi}(Z_{j})^{\top} \big{|}_{o}=o_{p}(m_{n}^{-1}n^{-1}),\] where the equality holds since the spectral norm of the average is bounded due to the square-integrability of \(\boldsymbol{\psi}\). Thus from (5.8) it follows \[\boldsymbol{\theta}_{n}=\frac{1}{n}\sum_{j=1}^{n}\boldsymbol{\psi}(Z_{j})- \frac{1}{n}\sum_{j=1}^{n}\boldsymbol{\psi}(Z_{j})\otimes\mathbf{T}_{nj}^{\top }\boldsymbol{\zeta}_{n}+o_{p}(n^{-1/2}). \tag{5.12}\] In view of (B2) the desired (5.10) now follows from (5.13)-(5.15) below. \[\frac{1}{n}\sum_{j=1}^{n}\big{(}\boldsymbol{\psi}(Z_{j})\otimes\mathbf{T}_{nj }^{\top}-E\big{(}\boldsymbol{\psi}(Z_{j})\otimes\mathbf{T}_{nj}^{\top}\big{)} \big{)}\,\boldsymbol{\zeta}_{n}=o_{p}(n^{-1/2}), \tag{5.13}\] \[\frac{1}{n}\sum_{j=1}^{n}E\big{(}\boldsymbol{\psi}(Z_{j})\otimes\mathbf{T}_{ nj}^{\top}\big{)}\big{(}\boldsymbol{\zeta}_{n}-\mathbb{S}_{n}^{-1}\bar{ \mathbf{T}}_{n}\big{)}=o_{p}(n^{-1/2}), \tag{5.14}\] \[\frac{1}{n}\sum_{j=1}^{n}E\big{(}\boldsymbol{\psi}(Z_{j})\otimes\mathbf{T}_{nj }^{\top}\big{)}\big{(}\mathbb{S}_{n}^{-1}-\mathbf{W}_{n}^{-1}\big{)}\bar{ \mathbf{T}}_{n}=o_{p}(n^{-1/2}). \tag{5.15}\] Note first that (B1), (A2) and (5.3) imply (5.13). 
Next we show \[\mathbf{A}_{n}=\frac{1}{n}\sum_{j=1}^{n}E\big{(}\boldsymbol{\psi}(Z_{j})\otimes \mathbf{T}_{nj}^{\top}\big{)}=O(m_{n}^{1/2}).\] Indeed, by Cauchy inequality, \[\|\mathbf{A}_{n}\|^{2}\leq\frac{1}{n}\sum_{j=1}^{n}\|E\big{(} \boldsymbol{\psi}(Z_{j})\otimes\mathbf{T}_{nj}^{\top}\big{)}\|^{2} \leq E\big{(}\|\boldsymbol{\psi}(Z_{1})\|^{2}\big{)}\frac{1}{n}\sum_{j=1}^{n }E(\|\mathbf{T}_{nj}\|^{2}\big{)}\] \[=E\big{(}\|\boldsymbol{\psi}(Z_{1})\|^{2}\big{)}\text{trace}\left( E(\mathbb{S}_{n})\right).\] But by (A3), the above trace is bounded by \[\|\text{trace}\left(E(\mathbb{S}_{n}-\mathbf{W}_{n})\right)\|+\text{trace} \left(E(\mathbf{W}_{n})\right)\leq m_{n}E\left(|\mathbb{S}_{n}-\mathbf{W}_{n }|_{o}\right)+\Lambda_{n}m_{n},\] Thus \(\|\mathbf{A}_{n}\|^{2}=O(m_{n})\). This, the regularity of \(\mathbf{W}_{n}\) in (A3), (5.7), (5.11) and (A1) imply that the square of the right side of (5.14) is bounded by \[O(m_{n})O_{p}(\|\boldsymbol{\zeta}_{n}\|^{4}T_{n}^{(4)}) =O(m_{n})O_{p}(\|\bar{\mathbf{T}}_{n}\|^{4}(T_{n}^{*})^{2})\] \[=O(m_{n})o_{p}(m_{n}^{2}n^{-2}m_{n}^{-3}n)=o_{p}(m_{n}^{-1}n^{-1}),\] hence (5.14) is proved. Again the rate of \(A_{n}\) and (A2)-(A3) imply (5.15). This completes the proof. Examining the proof of Theorem 5.1 one can see the following holds. **Theorem 5.2**.: _Suppose (A1)-(A3) and (B1)-(B2) are met for fixed \(m_{n}=m\). Then the results in Theorem 5.1 hold as \(n\) tends to infinity._ Proof of Theorem 2.1. We verify the conditions of Theorem 5.2 with \(\mathbf{T}_{nj}=\mathbf{u}(Z_{j})\). Since \(\mathbf{u}\) is square-integrable, conditions (A1) - (A3) are satisfied with \(\mathbf{W}_{n}=\mathbf{W}=E(\mathbf{u}\mathbf{u}^{\top}(Z))\). The square-integrability of \(\boldsymbol{\psi}\) implies that (B1) - (B2) are met with \(\mathbf{A}_{n}=\mathbf{A}=E(\boldsymbol{\psi}(Z)\otimes\mathbf{u}(Z)^{\top})\) and \(\boldsymbol{\chi}=\mathbf{A}\mathbf{W}^{-1}\mathbf{u}\), by the weak law of large numbers. We now apply the result of Theorem 5.2 to complete the proof. Proof of Theorem 2.2. We shall apply Theorem 5.2 to prove the result with \(\mathbf{T}_{nj}=\hat{\mathbf{u}}(Z_{j})\). Clearly conditions (A1), (A3) and (B1) follows from (2.7) - (2.9) respectively, whereas (A2) is implied by (2.11) in view of the fact that the right-hand-side average of (2.11) is \(O_{p}(n^{-1/2})\). By Cauchy inequality, \[\big{\|}\frac{1}{n}\sum_{j=1}^{n}E\big{(}\boldsymbol{\psi}(Z_{j})\otimes(\hat {\mathbf{u}}(Z_{j})-\mathbf{v}(Z_{j}))\big{)}\big{\|}^{2}\leq E(\|\boldsymbol {\psi}(Z_{1})\|^{2})\frac{1}{n}\sum_{j=1}^{n}E\big{(}\|\hat{\mathbf{u}}(Z_{j} )-\mathbf{v}(Z_{j})\|^{2}\big{)},\] which is \(o(1)\) by (2.10). Hence the \(\mathbf{A}_{n}\) in (B2) is given by \[\mathbf{A}_{n}=\frac{1}{n}\sum_{j=1}^{n}E\big{(}\boldsymbol{\psi}(Z_{j}) \otimes\hat{\mathbf{u}}(Z_{j})\big{)}=E(\boldsymbol{\psi}(Z_{1})\mathbf{v}(Z _{1}))+o(1).\] Thus by (2.11), (B2) holds with \(\mathbf{\chi}=E(\mathbf{\psi}(Z_{1})\mathbf{v}(Z_{1}))\mathbf{W}^{-1}\mathbf{v}\). We now apply Theorem 5.2 to complete the proof. Proof of Theorem 2.3. We apply Theorem 5.1 with \(\mathbf{T}_{nj}=\mathbf{u}_{n}(Z_{j})\) to prove the result, i.e. verify its conditions (A1)-(A3) and (B1)-(B2). Obviously (2.14), (2.15) and (2.16) correspond to (A1), (A3) and (B1) respectively. It follows from the regularity of \(\mathbf{W}_{n}\) that \(\operatorname{trace}(\mathbf{W}_{n})\leq Bm_{n}\) for some constant \(B\). Thus from \(nE[\|\bar{\mathbf{T}}_{n}\|^{2}]=\operatorname{trace}(\mathbf{W}_{n})=O(m_{n})\) it yields (A2). 
We are now left to prove (B2). Noticing \(\mathbf{A}_{n}=E(\mathbf{\psi}(Z)\otimes\mathbf{u}_{n}^{\top}(Z))\) and \(\mathbf{W}_{n}=E(\mathbf{u}_{n}\mathbf{u}_{n}(Z)^{\top})\), and \(\mathbf{A}_{n}\mathbf{W}_{n}^{-1}\mathbf{u}_{n}\) is the projection of \(\mathbf{\psi}(Z)\) onto the closed linear span \([\mathbf{u}_{n}]\), so that \(\mathbf{A}_{n}\mathbf{W}_{n}^{-1}\mathbf{u}_{n}(Z)\) is the conditional expectation of \(\mathbf{\psi}(Z)\) given \(\mathbf{u}_{n}(Z)\), i.e., \[\mathbf{A}_{n}\mathbf{W}_{n}^{-1}\mathbf{u}_{n}(Z)=E(\mathbf{\psi}(Z)|\mathbf{u}_ {n}(Z)).\] Since \(E(\mathbf{\psi}(Z)|\mathbf{u}_{n}(Z)),n\geq 1\) forms a martingale with respect to the sigma algebras, \(\sigma(\mathbf{u}_{n}(Z)),n\geq 1\), generated by \(\mathbf{u}_{n}(Z)\), it follows from Levy's martingale convergence theorem (see e.g. page 510, Shiryaev [18]) that \[E(\mathbf{\psi}(Z)|\sigma(\mathbf{u}_{n}(Z)))\to E(\mathbf{\psi}(Z)|\sigma(\mathbf{u}_ {\infty}(Z))),\quad\text{a.s.}\quad n\to\infty.\] By the property of conditional expectation (see e.g. Proposition 1, page 430, Bickel, et al. [1]), the last conditional expectation is the projection of \(\mathbf{\psi}(Z)\) onto the closed linear span \([\mathbf{u}_{\infty}(Z)]\), i.e., \(E(\mathbf{\psi}(Z)|\sigma(\mathbf{u}_{\infty}(Z)))=\Pi(\mathbf{\psi}(Z)|[\mathbf{u}_ {\infty}(Z)])\), hence \[\mathbf{\varphi}(Z)=\Pi(\mathbf{\psi}(Z)|[\mathbf{u}_{\infty}(Z)])=E(\mathbf{\psi}(Z)| \mathbf{u}_{\infty}(Z)).\] Thus that (B2) is satisfied with \(\mathbf{\chi}=\mathbf{\varphi}\) follows from \[nE\Big{(}\|\frac{1}{n}\sum_{i=1}^{n}\big{(}\mathbf{A}_{n}\mathbf{W}_{n}^{-1} \mathbf{u}_{n}(Z_{i})-\mathbf{\varphi}(Z_{i})\big{)}\,\|^{2}\Big{)}=E\Big{(}\| \mathbf{A}_{n}\mathbf{W}_{n}^{-1}\mathbf{u}_{n}(Z)-\mathbf{\chi}(Z)\|^{2}\Big{)},\] which converges to zero as \(n\) tends to infinity by the property of the convergence of Fourier series. This completes the proof. Proof of Theorem 2.4. We prove the result by verifying conditions (A1)-(A3) and (B1)-(B2) of Theorem 5.1 with \(\mathbf{T}_{nj}=\hat{\mathbf{u}}_{n}(Z_{j})\). Clearly \((A1)\), \((A3)\) and (B1) correspond to (2.18), (2.19) and (2.20) respectively, while (A2) follows from (2.21) and (5.16) below. We are left to verify (B2). First, by Cauchy inequality and (2.21), \[\Big{\|}\frac{1}{n}\sum_{j=1}^{n}E\left(\mathbf{\psi}(Z_{j})\otimes( \hat{\mathbf{u}}_{n}(Z_{j})-\mathbf{v}_{n}(Z_{j}))\,\right\|^{2}\] \[\leq E(\|\mathbf{\psi}(Z)\|^{2})\frac{1}{n}\sum_{j=1}^{n}E\left(\| \hat{\mathbf{u}}_{n}(Z_{j})-\mathbf{v}_{n}(Z_{j})\|^{2}\right)=o(m_{n}^{-1}),\] so that the \(\mathbf{A}_{n}\) in (B2) satisfies \[\mathbf{A}_{n}=E\left(\mathbf{\psi}(Z)\otimes\mathbf{v}_{n}(Z)\right)+o(m_{n}^{-1 /2}).\] Note that \(\mathrm{trace}(\mathbf{U}_{n})\leq m_{n}|\mathbf{U}_{n}|_{o}=O(m_{n})\) and \[nE(\|\bar{\mathbf{v}}_{n}\|^{2}) =E(\|\mathbf{v}_{n}(Z)\|^{2})\leq|\mathbf{W}_{n}^{1/2}|_{o}^{2}E( \|\mathbf{W}_{n}^{-1/2}\mathbf{v}_{n}(Z)\|^{2})\] \[=|\mathbf{W}_{n}^{1/2}|_{o}^{2}\mathrm{trace}(\mathbf{U}_{n}).\] This shows \[\|\bar{\mathbf{v}}_{n}\|=O_{p}(n^{-1/2}m_{n}^{1/2}). 
\tag{5.16}\] By Cauchy inequality, \[\|E\left(\boldsymbol{\psi}(Z)\otimes\mathbf{v}_{n}(Z)\right)\|^{2}\leq E\big{(} \|\boldsymbol{\psi}(Z)\|^{2}\big{)}E\big{(}\|\mathbf{v}_{n}(Z)\|^{2}\big{)}.\] But \[E\big{(}\|\mathbf{v}_{n}(Z)\|^{2}\big{)} \leq|\mathbf{W}_{n}^{1/2}|_{o}^{2}E\big{(}\|\mathbf{W}_{n}^{-1/2} \mathbf{v}_{n}(Z)\|^{2}\big{)}\] \[=|\mathbf{W}_{n}^{1/2}|_{o}^{2}\mathrm{trace}(\mathbf{U}_{n})=O (m_{n}).\] Hence \[E\left(\boldsymbol{\psi}(Z)\otimes\mathbf{v}_{n}(Z)\right)=O(m_{n}^{1/2}).\] Therefore combining the above and in view of (2.22) we arrive at \[\frac{1}{n}\sum_{j=1}^{n}\mathbf{A}_{n}\mathbf{W}_{n}^{-1}\dot{ \mathbf{u}}_{n}(Z_{j}) =\big{(}E\left(\boldsymbol{\psi}(Z)\otimes\mathbf{v}_{n}(Z)\right) +o(m_{n}^{-1/2})\big{)}\mathbf{W}_{n}^{-1}\] \[\times\big{(}\bar{\mathbf{v}}_{n}+o_{p}(m_{n}^{-1/2}n^{-1/2}) \big{)}\] \[=E\left(\boldsymbol{\psi}(Z)\otimes\mathbf{v}_{n}(Z)\right) \mathbf{W}_{n}^{-1}\bar{\mathbf{v}}_{n}+o_{p}(n^{-1/2}),\] Analogous to the proof of (B2) in Theorem 2.3 (or applying \(\mathbf{u}_{n}=\mathbf{v}_{n}\)), we have \[E\left(\boldsymbol{\psi}(Z)\otimes\mathbf{v}_{n}(Z)\right)\mathbf{W}_{n}^{-1} \bar{\mathbf{v}}_{n}=\bar{\boldsymbol{\chi}}+o_{p}(n^{-1/2}),\] where \(\boldsymbol{\chi}=\Pi(\boldsymbol{\psi}|[\mathbf{v}_{\infty}])\) is the projection of \(\boldsymbol{\psi}(Z)\) onto the closed linear span \([\mathbf{v}_{\infty}].\) Clearly \(\int\boldsymbol{\chi}\,dQ=0\) and \(\int|\boldsymbol{\chi}|^{2}\,dQ<\infty.\) Thus (B2) is proved with \(\boldsymbol{\varphi}=\boldsymbol{\chi}=\Pi(\boldsymbol{\psi}|[\mathbf{v}_{ \infty}]).\) This finishes the proof.
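For readers who wish to experiment with the EL weights that drive the estimators above, the following is a minimal numerical sketch (Python with numpy; it is illustrative only and not part of the formal development, and the function name `el_weights` is ours). Given constraint vectors \(\mathbf{T}_{n1},\ldots,\mathbf{T}_{nn}\) (for instance \(\hat{\mathbf{u}}_{n}(Z_{j})\) as in Theorem 2.4), it solves the estimating equation in (5.2) for \(\boldsymbol{\zeta}\) by a damped Newton iteration, keeping \(1+\boldsymbol{\zeta}^{\top}\mathbf{T}_{nj}>0\) as in (5.1), and returns the probability weights \(\pi_{j}=1/(n(1+\boldsymbol{\zeta}^{\top}\mathbf{T}_{nj}))\) attaining the supremum defining \(\mathscr{R}_{n}\).

```python
import numpy as np

def el_weights(T, max_iter=100, tol=1e-10):
    """Empirical likelihood weights for an (n, m) array of constraint vectors T.

    Solves sum_j T_j / (1 + zeta'T_j) = 0 for zeta (cf. (5.2)) by a damped
    Newton iteration and returns zeta and the weights pi_j = 1/(n(1 + zeta'T_j)).
    """
    n, m = T.shape
    zeta = np.zeros(m)
    for _ in range(max_iter):
        denom = 1.0 + T @ zeta                   # the n values 1 + zeta'T_j
        g = (T / denom[:, None]).mean(axis=0)    # estimating equation (5.2), divided by n
        if np.linalg.norm(g) < tol:
            break
        J = -(T.T * denom**-2) @ T / n           # Jacobian of g with respect to zeta
        step = np.linalg.solve(J, -g)
        t = 1.0                                  # damping keeps (5.1) satisfied
        while np.any(1.0 + T @ (zeta + t * step) <= 0):
            t *= 0.5
        zeta = zeta + t * step
    pi = 1.0 / (n * (1.0 + T @ zeta))
    return zeta, pi

# Toy check: the weights sum to one and tilt the sample so that the
# weighted mean of the constraint vectors vanishes, sum_j pi_j T_j = 0.
rng = np.random.default_rng(0)
T = rng.normal(size=(200, 3)) + 0.1
zeta, pi = el_weights(T)
print(pi.sum(), np.abs(pi @ T).max())
```

The idea behind the EL-weighted estimators is then to replace the uniform weights \(1/n\) by \(\pi_{j}\) when forming the sample moments entering the MDF fit; the data above are synthetic and only meant to check the two defining properties of the weights.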
2305.15802
Asymptotic nonvanishing of syzygies of algebraic varieties
We establish precise nonvanishing results for asymptotic syzygies of smooth projective varieties. This refines Ein-Lazarsfeld's asymptotic nonvanishing theorem. Combining with the author's previous asymptotic vanishing result, we completely determine the asymptotic shapes of the minimal free resolutions of the graded section modules of a line bundle on a smooth projective variety as the positivity of the embedding line bundle grows.
Jinhyung Park
2023-05-25T07:34:29Z
http://arxiv.org/abs/2305.15802v2
# Asymptotic nonvanishing of syzygies ###### Abstract. We establish precise nonvanishing results for asymptotic syzygies of smooth projective varieties. This refines Ein-Lazarsfeld's asymptotic nonvanishing theorem. Combining with the author's previous asymptotic vanishing result, we completely determine the asymptotic shapes of the minimal free resolutions of the graded section modules of a line bundle on a smooth projective variety as the positivity of the embedding line bundle grows. Key words and phrases: asymptotic syzygies, Koszul cohomology, algebraic varieties, line bundles 2020 Mathematics Subject Classification: 14C20, 14J60, 13D02 J. Park was partially supported by the National Research Foundation (NRF) funded by the Korea government (MSIT) (NRF-2019R1A6A1A10073887 and NRF-2022M3C1C8094326). ## 1. Introduction The purpose of this paper is to address the main theme of [8] and [19]: _the asymptotic behavior of syzygies of algebraic varieties is surprisingly uniform_. After the pioneering work of Green [16], there has been a considerable amount of research to understand syzygies of algebraic varieties. It is an interesting problem to describe the overall asymptotic behavior of syzygies of graded section modules of a line bundle on a smooth projective variety as the positivity of the embedding line bundle grows (see [16, Problem 5.13] and [7, Problem 4.4]). The influential paper [8] of Ein-Lazarsfeld opens the door to asymptotic syzygies of algebraic varieties, and the asymptotic nonvanishing theorem was proved there. In the present paper, we provide a new approach to nonvanishing of asymptotic syzygies, and together with the author's asymptotic vanishing theorem [19], we exhibit the uniform behavior of all asymptotic syzygies. Throughout the paper, we work over an algebraically closed field \(\mathbf{k}\) of arbitrary characteristic. Let \(X\) be a smooth projective variety of dimension \(n\), and \(B\) be a line bundle on \(X\). For an integer \(d\geq 1\), set \[L_{d}:=\mathscr{O}_{X}(dA+P)\ \ \text{and}\ \ r_{d}:=h^{0}(X,L_{d})-1,\] where \(A\) is an ample divisor and \(P\) is an arbitrary divisor on \(X\). We assume that \(d\) is sufficiently large so that \(L_{d}\) is very ample and \(r_{d}=\Theta(d^{n})\). Here, for a nonnegative function \(f(d)\) defined for positive integers \(d\), we define: \[f(d)\geq\Theta(d^{k}) \Longleftrightarrow \text{there is a constant }C_{1}>0\text{ such that }\] \[f(d)\geq C_{1}d^{k}\text{ for any sufficiently large positive integer }d;\] \[f(d)\leq\Theta(d^{k}) \Longleftrightarrow \text{there is a constant }C_{2}>0\text{ such that }\] \[f(d)\leq C_{2}d^{k}\text{ for any sufficiently large positive integer }d;\] \[f(d)=\Theta(d^{k}) \Longleftrightarrow \text{there are constants }C_{1},C_{2}>0\text{ such that }\] \[C_{1}d^{k}\leq f(d)\leq C_{2}d^{k}\text{ for any sufficiently large positive integer }d.\] For simplicity, we write \(f(d)=\Theta(1)\) if \(f(d)\) is constant (possibly \(0\)) for any sufficiently large positive integer \(d\). Let \(S_{d}:=\bigoplus_{m\geq 0}S^{m}H^{0}(X,L_{d})\). By the Hilbert syzygy theorem, the finitely generated graded section \(S_{d}\)-module \[R_{d}=R(X,B;L_{d}):=\bigoplus_{m\in\mathbf{Z}}H^{0}(X,B\otimes L_{d}^{m})\] admits a minimal free resolution \[\cdots\longrightarrow E_{1}\longrightarrow E_{0}\longrightarrow R_{d}\longrightarrow 0,\] where \[E_{p}=\bigoplus_{q}K_{p,q}(X,B;L_{d})\otimes_{\mathbf{k}}S_{d}(-p-q).\] Here the _Koszul cohomology group_ \(K_{p,q}(X,B;L_{d})\) can be regarded as the space of \(p\)-th syzygies of weight \(q\). 
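For orientation, it may help to keep in mind a classical example that is not needed in the sequel: take \(X=\mathbf{P}^{1}\), \(B=\mathscr{O}_{X}\), and \(L_{d}=\mathscr{O}_{\mathbf{P}^{1}}(d)\), so that \(r_{d}=d\) and \(X\subseteq\mathbf{P}^{d}\) is the rational normal curve of degree \(d\). Its homogeneous ideal is resolved by the Eagon-Northcott complex of a \(2\times d\) matrix of linear forms, and one has \[\dim K_{p,1}(X,\mathscr{O}_{X};L_{d})=p\binom{d}{p+1}\ \ \text{for}\ 1\leq p\leq d-1,\] while \(K_{p,q}(X,\mathscr{O}_{X};L_{d})=0\) for all other pairs \((p,q)\) with \(q\geq 1\). Thus the weight-one syzygies are nonzero exactly in the range \(\Theta(1)\leq p\leq r_{d}-\Theta(1)\), which is the shape that Theorem 1.1 below predicts when \(n=q=1\).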
When \(X\subseteq\mathbf{P}H^{0}(X,L_{d})=\mathbf{P}^{r_{d}}\) is projectively normal, the minimal free resolution of \(R_{d}\) with \(B=\mathscr{O}_{X}\) contains all information about the defining equations of \(X\) in \(\mathbf{P}^{r_{d}}\) and their syzygies. Based on the experience of the case of curves, it was widely believed that the minimal free resolutions of \(R_{d}\) become simpler as \(d\) increases. However, Ein-Lazarsfeld [8] showed that this had been misleading, and they instead proposed that there would be a uniform asymptotic vanishing and nonvanishing behavior of \(K_{p,q}(X,B;L_{d})\) when \(d\) is sufficiently large. It is elementary to see that \[K_{p,q}(X,B;L_{d})=0\ \ \text{for}\ q\geq n+2.\] The cases \(q=0\) and \(q=n+1\) are well understood due to Green, Schreyer, Ottaviani-Paoletti, and Ein-Lazarsfeld: [8, Proposition 5.1 and Corollary 5.2] state that \[K_{p,0}(X,B;L_{d})\neq 0 \Longleftrightarrow 0\leq p\leq h^{0}(B)-1;\] \[K_{p,n+1}(X,B;L_{d})\neq 0 \Longleftrightarrow r_{d}-n-h^{0}(X,\omega_{X}\otimes B^{-1})+1\leq p\leq r_{d}-n.\] For \(1\leq q\leq n\), let \(c_{q}(d)\) be the number such that \[K_{c_{q}(d),q}(X,B;L_{d})\neq 0\ \ \text{and}\ \ K_{p,q}(X,B;L_{d})=0\ \ \text{for}\ 0\leq p\leq c_{q}(d)-1,\] and \(c_{q}^{\prime}(d)\) be the number such that \[K_{r_{d}-c_{q}^{\prime}(d),q}(X,B;L_{d})\neq 0\ \ \text{and}\ \ K_{p,q}(X,B;L_{d})=0\ \ \text{for}\ r_{d}-c_{q}^{\prime}(d)+1\leq p\leq r_{d}.\] After interesting nonvanishing results of Ottaviani-Paoletti [18] and Eisenbud-Green-Hulek-Popescu [12], Ein-Lazarsfeld proved the asymptotic nonvanishing theorem ([8, Theorem 4.1]): For each \(1\leq q\leq n\), if \(d\) is sufficiently large, then \[K_{p,q}(X,B;L_{d})\neq 0\ \ \text{for}\ \Theta(d^{q-1})\leq p\leq r_{d}- \Theta(d^{n-1}).\] In particular, \(c_{q}(d)\geq\Theta(d^{q-1})\) and \(c_{q}^{\prime}(d)\leq\Theta(d^{n-1})\). In [8, Conjecture 7.1], Ein-Lazarsfeld conjectured that \[K_{p,q}(X,B;L_{d})=0\ \ \text{for}\ 0\leq p\leq\Theta(d^{q-1}),\] and this was confirmed by the present author [19, Theorem 1.1] using Raicu's result in the appendix of [20]. In particular, \(c_{q}(d)=\Theta(d^{q-1})\). Despite the aforementioned results on asymptotic syzygies of algebraic varieties, at least two problems remain. First, it is unclear whether vanishing and nonvanishing of \(K_{p,q}(X,B;L_{d})\) can alternate in a few steps after \(c_{q}(d)\) or before \(r_{d}-c_{q}^{\prime}(d)\). Second, the previous results do not say anything about \(K_{p,q}(X,B;L_{d})\) for \(r_{d}-\Theta(d^{n-1})\leq p\leq r_{d}\). In this paper, we completely resolve these two issues: We show that vanishing and nonvanishing of \(K_{p,q}(X,B;L_{d})\) can alternate only at \(c_{q}(d)\) and \(r_{d}-c_{q}^{\prime}(d)\), and we give estimations for \(c_{q}(d)\) and \(c_{q}^{\prime}(d)\). Consequently, we could determine the precise vanishing and nonvanishing range of \(p\) for \(K_{p,q}(X,B;L_{d})\). It is worth noting that \(c^{\prime}_{q}(d)\) heavily depends on \(H^{q-1}(X,B)\) while \(c_{q}(d)\) depends only on \(d\) asymptotically. **Theorem 1.1**.: _Let \(X\) be a smooth projective variety of dimension \(n\geq 1\), and \(B\) be a line bundle on \(X\). For an integer \(d\geq 1\), set_ \[L_{d}:=\mathscr{O}_{X}(dA+P)\ \ \text{and}\ \ r_{d}:=h^{0}(X,L_{d})-1,\] _where \(A\) is an ample divisor and \(P\) is an arbitrary divisor on \(X\). Fix an index \(1\leq q\leq n\). 
Then there exist functions \(c_{q}(d)\) and \(c^{\prime}_{q}(d)\) with_ \[c_{q}(d)=\Theta(d^{q-1})\ \ \text{and}\ \ c^{\prime}_{q}(d)=\begin{cases}\Theta(d^{n-q})&\text{if }H^{q-1}(X,B)=0\text{ or }q=1\\ q-1&\text{if }H^{q-1}(X,B)\neq 0\text{ and }q\geq 2\end{cases} \tag{1.1}\] _such that if \(d\) is sufficiently large, then_ \[K_{p,q}(X,B;L_{d})\neq 0\iff c_{q}(d)\leq p\leq r_{d}-c^{\prime}_{q}(d). \tag{1.2}\] To prove Theorem 1.1, we do not use Ein-Lazarsfeld's asymptotic nonvanishing theorem [8, Theorem 4.1], but we adopt their strategy in [8]. Let \(H\) be a suitably positive very ample line bundle on \(X\), and choose a general member \(\overline{X}\in|H|\). Put \[\overline{L}_{d}:=L_{d}|_{\overline{X}},\ \overline{B}:=B|_{\overline{X}},\ \overline{H}:=H|_{\overline{X}},\ V^{\prime}_{d}:=H^{0}(X,L_{d}\otimes H^{-1}).\] We have noncanonical splittings \[K_{p,q}(X,\overline{B};L_{d})=\bigoplus_{j=0}^{p}\wedge^{j}V^{\prime}_{d}\otimes K_{p-j,q}(\overline{X},\overline{B};\overline{L}_{d})\ \ \text{and}\ \ K_{p,q}(X,\overline{B}\otimes\overline{H};L_{d})=\bigoplus_{j=0}^{p}\wedge^{j}V^{\prime}_{d}\otimes K_{p-j,q}(\overline{X},\overline{B}\otimes\overline{H};\overline{L}_{d}).\] There are natural maps \[\theta_{p,q}\colon K_{p+1,q-1}(X,\overline{B}\otimes\overline{H};L_{d})\longrightarrow K_{p,q}(X,B;L_{d})\ \text{and}\ \ \theta^{\prime}_{p,q}\colon K_{p,q}(X,\overline{B};L_{d})\longrightarrow K_{p,q}(X,B;L_{d}).\] When \(q=1\) or \(2\), the map \(\theta_{p,q}\) should be modified (see Section 3). In [8, Sections 3 and 4], the secant constructions were introduced to carry nonzero syzygies by highly secant planes. This essentially shows that the map \(\theta_{p,q}\) is nonzero for \(\Theta(d^{q-1})\leq p\leq r_{d}-\Theta(d^{n-1})\). Instead of utilizing the secant constructions, in this paper, we apply the asymptotic vanishing theorem [19, Theorem 1.1] to get the estimates of \(c_{q}(d)\) and \(c^{\prime}_{q}(d)\) (Proposition 4.4) and to see that the maps \(\theta_{c_{q}(d),q}\) and \(\theta^{\prime}_{r_{d}-c^{\prime}_{q}(d),q}\) are nonzero (Proposition 4.6). The latter statement means that there are syzygies \(\alpha\) and \(\beta\) in \[K_{c_{q}(d)+1-j_{0},q-1}(\overline{X},\overline{B}\otimes\overline{H};\overline{L}_{d})\ \ \text{and}\ \ K_{r_{d}-c^{\prime}_{q}(d)-j^{\prime}_{0},q}(\overline{X},\overline{B};\overline{L}_{d}) \tag{$\star$}\] for some \(0\leq j_{0}\leq c_{q}(d)+1\) and \(\dim V^{\prime}_{d}-c^{\prime}_{q}(d)\leq j^{\prime}_{0}\leq\dim V^{\prime}_{d}\) that are lifted to syzygies in \(K_{c_{q}(d),q}(X,B;L_{d})\) and \(K_{r_{d}-c^{\prime}_{q}(d),q}(X,B;L_{d})\) via the maps \(\theta_{c_{q}(d),q}\) and \(\theta^{\prime}_{r_{d}-c^{\prime}_{q}(d),q}\), respectively. Since (\(\star\)) survives in \[K_{p,q}(X,\overline{B}\otimes\overline{H};L_{d})\ \text{for}\ c_{q}(d)\leq p\leq r_{d}-\Theta(d^{n-1})\ \text{and}\ K_{p,q}(X,\overline{B};L_{d})\ \text{for}\ \Theta(d^{n-1})\leq p\leq r_{d}-c^{\prime}_{q}(d),\] we can argue that the syzygies \(\alpha\) and \(\beta\) in (\(\star\)) are also lifted to syzygies in \(K_{p,q}(X,B;L_{d})\) for \(c_{q}(d)\leq p\leq r_{d}-c^{\prime}_{q}(d)\) via the maps \(\theta_{p,q}\) and \(\theta^{\prime}_{p,q}\), respectively (Theorem 3.1). The paper is organized as follows. After recalling preliminary results on syzygies of algebraic varieties in Section 2, we show how to lift syzygies from hypersurfaces (Theorem 3.1) in Section 3. Section 4 is devoted to the proof of Theorem 1.1. 
Finally, in Section 5, we present complementary results and open problems on asymptotic syzygies of algebraic varieties. ### Acknowledgements The author is very grateful to Lawrence Ein, Yeongrak Kim, and Wenbo Niu for inspiring discussions. ## 2. Preliminaries In this section, we collect relevant basic facts on Koszul cohomology and Castelnuovo-Mumford regularity. ### Koszul Cohomology Let \(V\) be an \(r\)-dimensional vector space over an algebraically closed field \(\mathbf{k}\), and \(S:=\bigoplus_{m\geq 0}S^{m}V\). Consider a finitely generated graded \(S\)-module \(M\). The _Koszul cohomology group_\(K_{p,q}(M,V)\) is the cohomology of the Koszul-type complex \[\wedge^{p+1}V\otimes M_{q-1}\stackrel{{\delta}}{{\longrightarrow}} \wedge^{p}V\otimes M_{q}\stackrel{{\delta}}{{\longrightarrow}} \wedge^{p-1}V\otimes M_{q+1},\] where the Koszul differential \(\delta\) is given by \[\delta(s_{1}\wedge\cdots\wedge s_{p}\otimes t)\longmapsto\sum_{i=1}^{p}(-1)^{ i}s_{1}\wedge\cdots\wedge\widehat{s}_{i}\wedge\cdots\wedge s_{p}\otimes s_{i}t.\] Then \(M\) has a minimal free resolution where \[E_{p}=\bigoplus_{q}K_{p,q}(M,V)\otimes_{\mathbf{k}}S(-p-q).\] We may regard \(K_{p,q}(M,V)\) as the vector space of \(p\)-th syzygies of weight \(q\). Let \[0\longrightarrow M^{\prime}\longrightarrow M\longrightarrow M^{\prime \prime}\longrightarrow 0\] be a short exact sequence of finitely generated graded \(S\)-modules. By [16, Corollary (1.d.4)] (see also [2, Lemma 1.24]), this induces a long exact sequence \[\cdots\longrightarrow K_{p+1,q-1}(M,V)\longrightarrow K_{p+1,q-1}(M^{ \prime\prime},V)\longrightarrow K_{p,q}(M^{\prime},V)\longrightarrow K_{p,q }(M,V)\longrightarrow\cdots.\] Consider an injective map \[\iota^{\prime}\colon\wedge^{p+1}V\longrightarrow V\otimes\wedge^{p}V,\ s_{1} \wedge\cdots\wedge s_{p+1}\longmapsto\sum_{i=1}^{p+1}(-1)^{i}s_{i}\otimes s_{ 1}\wedge\cdots\wedge\widehat{s}_{i}\wedge\cdots\wedge s_{p+1}.\] It is straightforward to check that the following diagram commutes: Then the map \(\iota^{\prime}\) induces a map \[\iota\colon K_{p+1,q}(M,V)\longrightarrow V\otimes K_{p,q}(M,V).\] This map is the glueing of the evaluation maps \(\operatorname{ev}_{x}\colon K_{p+1,q}(M,V)\longrightarrow K_{p,q}(M,V)\) for \(x\in V^{\vee}\) in [2, Subsection 2.2.1]. We now turn to the geometric setting. Let \(X\) be a projective variety, \(B\) be a coherent sheaf on \(X\), and \(L\) be a very ample line bundle on \(X\). Put \(V:=H^{0}(X,L)\) and \(S:=\bigoplus_{m\geq 0}S^{m}V\). Then the section module \[R(X,B;L):=\bigoplus_{m\in\mathbf{Z}}H^{0}(X,B\otimes L^{m}),\] is finitely generated graded \(S\)-module. We define the _Koszul cohomology group_ as \[K_{p,q}(X,B;L):=K_{p,q}(R(X,B;L),V).\] In this paper, \(L\) is always assumed to be sufficiently positive, so we have \[H^{0}(X,B\otimes L^{-m})=0\ \ \text{for}\ m>0.\] Then \(K_{p,q}(X,B;L)=0\) for \(q<0\). It is clear that if \(K_{p_{0},q}(X,B;L)=0\) for \(0\leq q\leq q_{0}\), then \(K_{p,q}(X,B;L)=0\) for \(p\geq p_{0}\) and \(0\leq q\leq q_{0}\). Let \(M_{L}\) be the kernel bundle of the evaluation map \(\mathrm{ev}\colon H^{0}(X,L)\otimes\mathscr{O}_{X}\to L\). We have a short exact sequence \[0\longrightarrow M_{L}\longrightarrow H^{0}(X,L)\otimes\mathscr{O}_{X} \longrightarrow L\longrightarrow 0.\] We frequently use the following well-known facts. **Proposition 2.1** (cf. 
[8, Proposition 3.2, Corollary 3.3, Remark 3.4], [19, Proposition 2.1]).: _Assume that_ \[H^{i}(X,B\otimes L^{m})=0\ \ \text{for}\ i>0\ \text{and}\ m>0.\] _For any \(p\geq 0\), the following hold:_ \((1)\) _If_ \(q\geq 2\)_, then_ \[\begin{array}{rcl}K_{p,q}(X,B;L)&=&H^{1}(X,\wedge^{p+1}M_{L}\otimes B \otimes L^{q-1})\\ &=&H^{2}(X,\wedge^{p+2}M_{L}\otimes B\otimes L^{q-2})\\ &&\vdots\\ &=&H^{q-1}(X,\wedge^{p+q-1}M_{L}\otimes B\otimes L).\end{array}\] _Consequently,_ \(K_{p,q}(X,B;L_{d})=0\) _for_ \(p\geq r_{d}-q\)_, and_ \(K_{p,q}(X,B;L_{d})=0\) _for_ \(q\geq\dim X+2\)_._ \((2)\) _If_ \(q\geq 2\) _and_ \(H^{q-1}(X,B)=H^{q}(X,B)=0\)_, then_ \(K_{p,q}(X,B;L)=H^{q}(X,\wedge^{p+q}M_{L}\otimes B)\)_._ \((3)\) _If_ \(q=1\)_, then_ \[K_{p,1}(X,B;L)=\mathrm{coker}\,\big{(}\wedge^{p+1}H^{0}(X,L)\otimes H^{0}(X,B )\longrightarrow H^{0}(X,\wedge^{p}M_{L}\otimes B\otimes L)\big{)}.\] _If_ \(H^{1}(X,B)=0\)_, then_ \(K_{p,1}(X,B;L)=H^{1}(X,\wedge^{p+1}M_{L}\otimes B)\)_._ \((4)\) _If_ \(q=0\) _and_ \(H^{0}(X,B\otimes L^{-m})=0\) _for_ \(m>0\)_, then_ \(K_{p,0}(X,B;L)=H^{0}(X,\wedge^{p}M_{L}\otimes B)\)_._ Proof.: We have a short exact sequence \[0\longrightarrow\wedge^{p+1}M_{L}\longrightarrow\wedge^{p+1}H^{0}(X,L)\otimes \mathscr{O}_{X}\longrightarrow\wedge^{p}M_{L}\otimes L\longrightarrow 0. \tag{2.1}\] Using the Koszul-type complex and chasing through the diagram, we see that \[K_{p,q}(X,B;L)=\mathrm{coker}\,\big{(}\wedge^{p+1}H^{0}(X,L)\otimes H^{0}(X,B \otimes L^{q-1})\longrightarrow H^{0}(X,\wedge^{p}M_{L}\otimes B\otimes L^{q} )\big{)}.\] Then the proposition easily follows. See [2, Section 2.1], [7, Section 1], [8, Section 3]. **Proposition 2.2** (cf. [1, Proposition 2.4], [8, Proposition 3.5]).: _Put \(n:=\dim X\). Assume that \(X\) is smooth, \(B\) is a line bundle, and_ \[H^{i}(X,B\otimes L^{m})=0\ \ \text{for}\ i>0\ \text{and}\ m>0\ \text{or}\ i<n\ \text{and}\ m<0.\] _For any \(p\geq 0\), the following hold:_ \((1)\) _If_ \(q=n+1\)_, then_ \(K_{p,n+1}(X,B;L)=K_{r-p-n,0}(X,\omega_{X}\otimes B^{-1};L)^{\vee}\)_._ \((2)\) _If_ \(q=n\)_, then there is an exact sequence_ \[\wedge^{p+n}H^{0}(X,L)\otimes H^{n-1}(X,B)\longrightarrow K_{p,n}(X,B;L) \longrightarrow K_{r-p-n,1}(X,\omega_{X}\otimes B^{-1};L_{d})^{\vee} \longrightarrow 0.\] _If_ \(H^{n-1}(X,B)=0\)_, then_ \(K_{p,n}(X,B;L)=K_{r-p-n,1}(X,\omega_{X}\otimes B^{-1};L)^{\vee}\)_._ (3) _If_ \(2\leq q\leq n-1\)_, then there is an exact sequence_ \[\wedge^{p+q}H^{0}(X,L)\otimes H^{q-1}(X,B)\longrightarrow K_{p,q}(X,B;L)\] \[\qquad\longrightarrow K_{r-p-n,n+1-q}(X,\omega_{X}\otimes B^{-1}; L_{d})^{\vee}\longrightarrow\wedge^{p+q}H^{0}(X,L)\otimes H^{q}(X,B).\] _If_ \(H^{q-1}(X,B)=H^{q}(X,B)=0\)_, then_ \(K_{p,q}(X,B;L)=K_{r-p-n,n+1-q}(X,\omega_{X}\otimes B^{-1};L_{d})^{\vee}\)_._ (4) _If_ \(q=1\)_, then there is an exact sequence_ \[0\longrightarrow K_{p,1}(X,B;L)\longrightarrow K_{r-p-n,n}(X,\omega_{X} \otimes B^{-1};L_{d})^{\vee}\longrightarrow\wedge^{p+1}H^{0}(X,L)\otimes H^{1} (X,B)\] _If_ \(H^{1}(X,B)=0\)_, then_ \(K_{p,1}(X,B;L)=K_{r-p-n,n}(X,\omega_{X}\otimes B^{-1};L_{d})^{\vee}\)_._ (5) _If_ \(q=0\)_, then_ \(K_{p,0}(X,B;L)=K_{r-p-n,n+1}(X,\omega_{X}\otimes B^{-1};L)^{\vee}\)_._ Proof.: Let \(V:=H^{0}(X,L)\). 
From (2.1), we get an exact sequence \[\wedge^{p+q}V\otimes H^{q-1}(X,B)\longrightarrow H^{q-1}(X,\wedge^{p+q-1}M_{L }\otimes B\otimes L)\longrightarrow H^{q}(X,\wedge^{p+q}M_{L}\otimes B) \longrightarrow\wedge^{p+q}V\otimes H^{q}(X,B).\] By Proposition 2.1, \(H^{q-1}(X,\wedge^{p+q-1}M_{L}\otimes B\otimes L)=K_{p,q}(X,B;L)\) when \(q\geq 2\). By Serre duality, \[H^{q}(X,\wedge^{p+q}M_{L}\otimes B)=H^{n-q}(X,\wedge^{p+q}M_{L}^{\vee}\otimes \omega_{X}\otimes B^{-1})^{\vee}.\] Since \(\operatorname{rank}M_{L}=r\) and \(\det M_{L}=L^{-1}\), it follows that \(\wedge^{p+q}M_{L}^{\vee}=\wedge^{r-p-q}M_{L}\otimes L\). Thus \[H^{q}(X,\wedge^{p+q}M_{L}\otimes B)=H^{n-q}(X,\wedge^{r-p-q}M_{L}\otimes\omega _{X}\otimes B^{-1}\otimes L)^{\vee}.\] Using Proposition 2.1, the assertions easily follow. See [2, Section 2.3], [8, Section 3]. In the situation of Proposition 2.2, if we further assume \(H^{i}(X,B)=0\) for \(1\leq i\leq n-1\), i.e., \(R(X,B;L)\) and \(R(X,\omega_{X}\otimes B^{-1};L)\) are Cohen-Macaulay, then \[K_{p,q}(X,B;L)=K_{r-p-n,n+1-q}(X,\omega_{X}\otimes B^{-1};L)^{\vee}.\] **Lemma 2.3**.: _If \(H^{q}(X,M_{L}\otimes\wedge^{p}M_{L}\otimes B)=0\) and \(H^{q}(X,B)=0\), then \(H^{q}(X,\wedge^{p+1}M_{L}\otimes B)=0\)._ Proof.: Let \(V:=H^{0}(X,L)\). Consider the commutative diagram with exact rows which gives rise to the following commutative diagram with exact rows Since \(H^{q}(X,B)=0\), the middle vertical map is surjective. Thus the lemma follows. ### Castelnuovo-Mumford Regularity Let \(X\) be a projective variety, and \(L\) be a very ample line bundle on \(X\). A coherent sheaf \(\mathscr{F}\) on \(X\) is said to be _\(m\)-regular_ with respect to \(L\) if \[H^{i}(X,\mathscr{F}\otimes\mathscr{O}_{X}(m-i))=0\ \ \text{for}\ i>0.\] By Mumford's theorem ([17, Theorem 1.8.5]), if \(\mathscr{F}\) is \(m\)-regular with respect to \(L\), then \(\mathscr{F}\otimes L^{m+\ell}\) is globally generated, the multiplication map \[H^{0}(X,\mathscr{F}\otimes L^{m})\otimes H^{0}(X,L^{\ell})\longrightarrow H^{0 }(X,\mathscr{F}\otimes L^{m+\ell})\] is surjective, and \(\mathscr{F}\) is \((m+\ell)\)-regular with respect to \(L\) for every \(\ell\geq 0\). **Lemma 2.4** (cf. [3, Corollary 3.2]).: _If \(\mathscr{O}_{X}\) is \(k\)-regular with \(k\geq 1\) and \(\mathscr{F}\) is \(m\)-regular with respect to \(L\), then there are finite dimensional vector spaces \(W_{N},\ldots,W_{1},W_{0}\) over \(\mathbf{k}\) and a resolution of \(\mathscr{F}\) of the form_ \[W_{N}\otimes L^{-m-Nk}\longrightarrow\cdots\longrightarrow W_{1}\otimes L^{- m-k}\longrightarrow W_{0}\otimes L^{-m}\longrightarrow\mathscr{F}\longrightarrow 0.\] Proof.: By [17, Theorem 1.8.5], \(\mathscr{F}\otimes L^{m}\) is globally generated. Letting \(W_{0}:=H^{0}(X,\mathscr{F}\otimes L^{m})\), we have a short exact sequence \[0\longrightarrow M_{0}\longrightarrow W_{0}\otimes L^{-m}\longrightarrow \mathscr{F}\longrightarrow 0.\] By [17, Theorem 1.8.5], the map \[W_{0}\otimes H^{0}(X,L^{(m+k)-m-1})\longrightarrow H^{0}(X,\mathscr{F}\otimes L ^{(m+k)-1})\] is surjective. Note that \[H^{i}(X,L^{(m+k)-m-i})=0\ \ \text{for $i\geq 1$ and $H^{i-1}(X,\mathscr{F}\otimes L^{(m+k)-i})=0 $\ \ for $i\geq 2$}.\] Thus \(M_{0}\) is \((m+k)\)-regular with respect to \(L\). Replacing \(\mathscr{F}\) by \(M_{0}\) and continuing the arguments, we obtain the lemma. ## 3. Lifting Syzygies from Hypersurfaces The aim of this section is showing how to lift syzygies from hypersurfaces (see Theorem 3.1). This is the main ingredient of the proof of Theorem 1.1. 
We start by setting notations. Let \(X\) be a smooth projective variety, \(B\) be a line bundle on \(X\), and \(L\) be a very ample line bundle on \(X\). Assume that \(n:=\dim X\geq 2\). Take a very ample line bundle \(H\) on \(X\), and suppose that \[\begin{split}& H^{i}(X,B\otimes L^{m})=0\ \ \text{for $i>0$ and $m>0$ or $i<n$ and $m<0$};\\ & H^{i}(X,B\otimes H\otimes L^{m})=H^{i}(X,B\otimes H^{-1} \otimes L^{m})=0\ \ \text{for $1\leq i\leq n-1$ and $m\in\mathbf{Z}$};\\ & H^{0}(X,B\otimes H\otimes L^{m})=H^{0}(X,B\otimes H^{-1} \otimes L^{m})=0\ \ \text{for $m<0$};\\ & H^{n}(X,B\otimes H\otimes L^{m})=H^{n}(X,B\otimes H^{-1} \otimes L^{m})=0\ \ \text{for $m>0$}.\end{split} \tag{3.1}\] In particular, \(R(X,B\otimes H;L)\) and \(R(X,B\otimes H^{-1};L)\) are Cohen-Macaulay. Choose a general member \(\overline{X}\in|H|\), and put \[\begin{split}&\overline{L}:=L|_{\overline{X}},\ \overline{B}:=B|_{\overline{X}},\ \overline{H}:=H|_{\overline{X}};\\ & V:=H^{0}(X,L),\ V^{\prime}:=H^{0}(X,L\otimes H^{-1}),\ \overline{V}:=H^{0}(X,\overline{L});\\ & r:=\dim V-1,v^{\prime}:=\dim V^{\prime},\ \overline{r}:=\dim \overline{V}-1\end{split}\] so that \(r=v^{\prime}+\overline{r}\). Fix a splitting \(V=V^{\prime}\oplus\overline{V}\). As in [8, Lemma 3.12], we get \[\wedge^{p+1}M_{L}|_{\overline{X}}=\bigoplus_{j=0}^{p+1}\wedge^{j}V^{\prime} \otimes\wedge^{p+1-j}M_{\overline{L}}\ \ \text{for $p\geq 0$}. \tag{3.2}\] By Proposition 2.1, we obtain \[\begin{split}& K_{p,q}(X,\overline{B};L)=\bigoplus_{j=0}^{p} \wedge^{j}V^{\prime}\otimes K_{p-j,q}(\overline{X},\overline{B};\overline{L}) \ \ \text{for $p,q\geq 0$};\\ & K_{p,q}(X,\overline{B}\otimes\overline{H};L)=\bigoplus_{j=0}^{p }\wedge^{j}V^{\prime}\otimes K_{p-j,q}(\overline{X},\overline{B}\otimes \overline{H};\overline{L})\ \ \text{for $p,q\geq 0$}.\end{split} \tag{3.3}\] Now, put \(S:=\bigoplus_{m\geq 0}S^{m}V\) and \(\overline{S}:=\bigoplus_{m\geq 0}S^{m}\overline{V}\). Consider the following short exact sequence \[0\longrightarrow B\longrightarrow B\otimes H\longrightarrow\overline{B} \otimes\overline{H}\longrightarrow 0. \tag{3.4}\] By (3.1), we have an exact sequence of finitely generated graded \(S\)-modules \[0\longrightarrow R(X,B;L)\longrightarrow R(X,B\otimes H;L)\longrightarrow R (X,\overline{B}\otimes\overline{H};L)\longrightarrow H^{1}(X,B)\longrightarrow 0.\] Let \(\overline{R}(X,\overline{B}\otimes\overline{H};L)\) be the kernel of the map \(R(X,\overline{B}\otimes\overline{H};L)\longrightarrow H^{1}(X,B)\), which is a finitely generated graded \(\overline{S}\)-module. Then we get a short exact sequence of finitely generated graded \(S\)-modules \[0\longrightarrow R(X,B;L)\longrightarrow R(X,B\otimes H;L)\longrightarrow \overline{R}(X,\overline{B}\otimes\overline{H};L)\longrightarrow 0.\] By [16, Corollary (1.d.4)], this induces a connecting map \[\theta_{p,q}\colon\underbrace{\overline{K}_{p+1,q-1}(X,\overline{B}\otimes \overline{H};L)}_{:=K_{p+1,q-1}(\overline{R}(X,\overline{B}\otimes\overline{H };L),V)}\longrightarrow K_{p,q}(X,B;L). \tag{3.5}\] Notice that \[\overline{K}_{p+1,q-1}(X,\overline{B}\otimes\overline{H};L)=\bigoplus_{j=0}^{ p+1}\wedge^{j}V^{\prime}\otimes\underbrace{\overline{K}_{p+1-j,q-1}(\overline{X}, \overline{B};\overline{L})}_{:=K_{p+1-j,q-1}(\overline{R}(X,\overline{B} \otimes\overline{H};L),\overline{V})}. 
\tag{3.6}\] On the other hand, we also have a short exact sequence of finitely generated graded \(S\)-modules \[0\longrightarrow\overline{R}(X,\overline{B}\otimes\overline{H};L) \longrightarrow R(X,\overline{B}\otimes\overline{H};L)\longrightarrow H^{1}( X,B)\longrightarrow 0.\] Since \[K_{p,q}(H^{1}(X,B),V)=\begin{cases}\wedge^{p}V\otimes H^{1}(X,B)&\text{if }q=0 \\ 0&\text{if }q\geq 1,\end{cases}\] we get an exact sequence \[0\longrightarrow\overline{K}_{p+1,0}(X,\overline{B}\otimes \overline{H};L)\stackrel{{\psi_{0}}}{{\longrightarrow}}K_{p+1,0} (X,\overline{B}\otimes\overline{H};L)\\ \longrightarrow\wedge^{p+1}V\otimes H^{1}(X,B)\stackrel{{ \varphi}}{{\longrightarrow}}\overline{K}_{p,1}(X,\overline{B}\otimes \overline{H};L)\stackrel{{\psi_{1}}}{{\longrightarrow}}K_{p,1}(X, \overline{B}\otimes\overline{H};L)\longrightarrow 0 \tag{3.7}\] and an isomorphism \[\psi_{q-1}\colon\overline{K}_{p+1,q-1}(X,\overline{B}\otimes\overline{H};L) \longrightarrow K_{p+1,q-1}(X,\overline{B}\otimes\overline{H};L)\ \text{ for }q\geq 3. \tag{3.8}\] In view of (3.6), there is a map \[\overline{\psi}_{p+1-j,q-1}\colon\overline{K}_{p+1-j,q-1}(\overline{X}, \overline{B}\otimes\overline{H};\overline{L})\longrightarrow K_{p+1-j,q-1}( \overline{X},\overline{B}\otimes\overline{H};\overline{L})\] such that \[\psi_{q-1}=\bigoplus_{j=0}^{p+1}\operatorname{id}_{\wedge jV^{\prime}}\otimes \overline{\psi}_{p+1-j,q-1}.\] Next, consider the following short exact sequence \[0\longrightarrow B\otimes H^{-1}\longrightarrow B\longrightarrow\overline{B }\longrightarrow 0. \tag{3.9}\] By (3.1), we have a short exact sequence of finitely generated graded \(S\)-modules \[0\longrightarrow R(X,B\otimes H^{-1};L)\longrightarrow R(X,B;L) \longrightarrow R(X,\overline{B};L)\longrightarrow 0.\] By [16, Corollary (1.d.4)], this induces a restriction map \[\theta^{\prime}_{p,q}\colon K_{p,q}(X,B;L)\longrightarrow K_{p,q}(X, \overline{B};L). \tag{3.10}\] **Theorem 3.1**.: _Fix an index \(q\geq 1\). Then we have the following:_ \((1)\) _Suppose that the map \(\theta_{p,q}\) in (3.5) is a nonzero map for \(p=c\) with \(0\leq c\leq r-\overline{r}-1\). Then the map \(\theta_{p,q}\) is a nonzero map for \(c\leq p\leq r-\overline{r}-1\), and consequently,_ \[K_{p,q}(X,B;L)\neq 0\ \ \text{for }c\leq p\leq r-\overline{r}-1.\] \((2)\) _Suppose that the map \(\theta^{\prime}_{p,q}\) in (3.10) is a nonzero map for \(p=r-c^{\prime}\) with \(0\leq c^{\prime}\leq\overline{r}\). Then \(\theta^{\prime}_{p,q}\) is a nonzero map for \(\overline{r}\leq p\leq r-c^{\prime}\), and consequently,_ \[K_{p,q}(X,B;L)\neq 0\ \ \text{for }\overline{r}\leq p\leq r-c^{\prime}.\] Proof.: (1) There is \(\alpha_{c+1}\in\overline{K}_{c+1,q-1}(X,\overline{B}\otimes\overline{H};L)\) such that \(\theta_{c,q}(\alpha_{c+1})\neq 0\). We may assume that \[\alpha_{c+1}=s^{\prime}_{1}\wedge\cdots\wedge s^{\prime}_{j_{0}}\otimes \alpha^{\prime}\in\wedge^{j_{0}}V^{\prime}\otimes\overline{K}_{c+1-j_{0},q-1} (\overline{X},\overline{B};\overline{L})\subseteq\overline{K}_{c+1,q-1}(X, \overline{B}\otimes\overline{H};L)\] for some \(0\leq j_{0}\leq c+1\). We proceed by induction on \(p\). For \(c\leq p-1\leq r-\overline{r}-2\), we may assume that there is \[\alpha_{p}=s^{\prime}_{1}\wedge\cdots\wedge s^{\prime}_{j}\otimes\alpha^{ \prime}\in\wedge^{j}V^{\prime}\otimes\overline{K}_{c+1-j_{0},q-1}(\overline{X},\overline{B};\overline{L})\subseteq\overline{K}_{p,q-1}(X,\overline{B} \otimes\overline{H};L),\] where \(j=p-(c+1-j_{0})\), such that \(\theta_{p-1,q}(\alpha_{p})\neq 0\). 
Consider the commutative diagram Take any \(s^{\prime}_{j+1}\in V^{\prime}\) with \(s^{\prime}_{1}\wedge\cdots\wedge s^{\prime}_{j}\wedge s^{\prime}_{j+1}\neq 0\), and let \[\alpha_{p+1}:=s^{\prime}_{1}\wedge\cdots\wedge s^{\prime}_{j+1}\otimes\alpha^{ \prime}\in\wedge^{j+1}V^{\prime}\otimes\overline{K}_{c+1-j_{0},q-1}(\overline {X},\overline{B};\overline{L})\subseteq\overline{K}_{p+1,q-1}(X,\overline{B} \otimes\overline{H};L).\] Then \[\iota(\alpha_{p+1})=\underbrace{\sum_{i=1}^{j+1}(-1)^{i}s^{\prime}_{i}\otimes s ^{\prime}_{1}\wedge\cdots\wedge\widehat{s^{\prime}_{i}}\wedge\cdots\wedge s^{ \prime}_{j+1}\otimes\alpha^{\prime}}_{\in\ V^{\prime}\otimes\overline{K}_{p,q -1}(X,\overline{B}\otimes\overline{H};\,L)}+\underbrace{s^{\prime}_{1}\wedge \cdots\wedge s^{\prime}_{j+1}\otimes\iota(\alpha^{\prime})}_{\in\ V\otimes \overline{K}_{p,q-1}(X,\overline{B}\otimes\overline{H};\,L)}.\] Notice that \[\operatorname{id}_{V}\otimes\theta_{p-1,q}\big{(}(-1)^{j+1}s^{\prime}_{j+1} \otimes\alpha_{p}\big{)}=(-1)^{j+1}s^{\prime}_{j+1}\otimes\theta_{p-1,q}( \alpha_{p})\neq 0\] in \(\langle s^{\prime}_{j+1}\rangle\otimes K_{p-1,q}(X,B;L)\). Observe then that all other terms in \(\iota(\alpha_{p+1})\) go into complements of \(\langle s^{\prime}_{j+1}\rangle\otimes K_{p-1,q}(X,B;L)\) via the map \(\operatorname{id}_{V}\otimes\theta_{p-1,q}\). Thus \[(\iota\circ\theta_{p,q})(\alpha_{p+1})=((\operatorname{id}_{V}\otimes\theta_{ p-1,q})\circ\iota)(\alpha_{p+1})\neq 0,\] and hence, \(\theta_{p,q}(\alpha_{p+1})\neq 0\). (2) There is \(\beta_{r-c^{\prime}}\in K_{r-c^{\prime},q}(X,B;L)\) such that \(\theta^{\prime}_{r-c^{\prime},q}(\beta_{r-c^{\prime}})\) has a nonzero term \[s^{\prime}_{1}\wedge\cdots\wedge s^{\prime}_{j_{0}}\otimes\beta^{\prime}\in \wedge^{j_{0}}V^{\prime}\otimes K_{r-c^{\prime}-j_{0},q}(\overline{X}, \overline{B};\overline{L})\subseteq K_{r-c^{\prime},q}(X,\overline{B};L)\] for some \(r-\overline{r}-c^{\prime}\leq j_{0}\leq r-\overline{r}\). We proceed by reverse induction on \(p\). For \(\overline{r}+1\leq p+1\leq r-c^{\prime}\), we may assume that there is \(\beta_{p+1}\in K_{p+1,q}(X,B;L)\) such that \(\theta^{\prime}_{p+1,q}(\beta_{p+1})\) has a nonzero term \[s^{\prime}_{1}\wedge\cdots\wedge s^{\prime}_{j+1}\otimes\beta^{\prime}\in \wedge^{j+1}V^{\prime}\otimes K_{r-c^{\prime}-j_{0},q}(\overline{X},\overline {B};\overline{L})\subseteq K_{p+1,q}(X,\overline{B};L),\] where \(j+1=p+1-(r-c^{\prime}-j_{0})\). Consider the commutative diagram We have \[\iota(s^{\prime}_{1}\wedge\cdots\wedge s^{\prime}_{j+1}\otimes\beta^{\prime})= \underbrace{\sum_{i=1}^{j+1}(-1)^{i}s^{\prime}_{i}\otimes s^{\prime}_{1}\wedge \cdots\wedge\widehat{s^{\prime}_{i}}\wedge\cdots\wedge s^{\prime}_{j+1}\otimes \beta^{\prime}}_{\in\ V^{\prime}\otimes\wedge^{j}V^{\prime}\otimes K_{r-c^{ \prime}-j_{0},q}(\overline{X},\overline{B};\,\overline{L})\subseteq\ V^{ \prime}\otimes K_{p,q}(X,\overline{B};\,L)}+\underbrace{s^{\prime}_{1}\wedge \cdots\wedge s^{\prime}_{j+1}\otimes\iota(\beta^{\prime})}_{\in\ V\otimes K_{p,q}(X,\overline{B};\,L)}.\] Note that all terms of \(\theta^{\prime}_{p+1,q}(\beta_{p+1})\) not in \(\wedge^{j+1}V^{\prime}\otimes\langle\beta^{\prime}\rangle\) go into complements of \(V^{\prime}\otimes\wedge^{j}V^{\prime}\otimes\langle\beta^{\prime}\rangle\) via the map \(\iota\). 
Observe then that the term \[(-1)^{j+1}s^{\prime}_{j+1}\otimes s^{\prime}_{1}\wedge\cdots\wedge s^{\prime} _{j}\otimes\beta^{\prime}\in V^{\prime}\otimes\wedge^{j}V^{\prime}\otimes K_ {r-c^{\prime}-j_{0},q}(\overline{X},\overline{B};\overline{L})\subseteq V \otimes K_{p,q}(X,\overline{B};L)\] of \((\iota\circ\theta^{\prime}_{p+1,q})(\beta_{p+1})\) cannot be cancelled in \(V^{\prime}\otimes\wedge^{j}V^{\prime}\otimes\langle\beta^{\prime}\rangle\). Thus there is \(\beta_{p}\in K_{p,q}(X,B;L)\) such that \((\operatorname{id}_{V}\otimes\theta^{\prime}_{p,q})((-1)^{j+1}s^{\prime}_{j+ 1}\otimes\beta_{p})\) has the above term, so \(\theta^{\prime}_{p,q}(\beta_{p})\) has a nonzero term \[s^{\prime}_{1}\wedge\cdots\wedge s^{\prime}_{j}\otimes\beta^{\prime}\in\wedge ^{j}V^{\prime}\otimes K_{r-c^{\prime}-j_{0},q}(\overline{X},\overline{B}; \overline{L})\subseteq K_{p,q}(X,\overline{B};L).\] We complete the proof. ## 4. Precise Asymptotic Nonvanishing Theorem After establishing key steps as propositions, we finish the proof of Theorem 1.1 at the end of this section. We start by setting notations. Let \(X\) be a smooth projective variety of dimension \(n\geq 1\), and \(B\) be a line bundle on \(X\). For an integer \(d\geq 1\), set \[L_{d}:=\mathscr{O}_{X}(dA+P)\text{ and }\ r_{d}:=h^{0}(X,L_{d})-1,\] where \(A\) is an ample divisor and \(P\) is an arbitrary divisor on \(X\). We assume throughout that \(d\) is sufficiently large so that \(L_{d}\) is a sufficiently positive very ample line bundle and \(r_{d}=\Theta(d^{n})\). Furthermore, we have \[H^{i}(X,B\otimes L_{d}^{m})=0\ \text{ for }i>0\text{ and }m>0\text{ or }i<n\text{ and }m<0.\] For \(1\leq q\leq n\), let \(c_{q}(d)\) be the number such that \[K_{c_{q}(d),q}(X,B;L_{d})\neq 0\ \text{ and }\ K_{p,q}(X,B;L_{d})=0\ \text{ for }0\leq p\leq c_{q}(d)-1,\] and \(c^{\prime}_{q}(d)\) be the number such that \[K_{r_{d}-c^{\prime}_{q}(d),q}(X,B;L_{d})\neq 0\ \text{ and }\ K_{p,q}(X,B;L_{d})=0\ \text{ for }r_{d}-c^{\prime}_{q}(d)+1\leq p\leq r_{d}.\] If \(K_{p,q}(X,B;L_{d})=0\) for all \(p\), then we set \(c_{q}(d):=r_{d}+1\) and \(c^{\prime}_{q}(d):=r_{d}+1\). We will see in Proposition 4.4 that this cannot happen. Recall from [19, Theorem 1.1] that \(c_{q}(x)\geq\Theta(d^{q-1})\). **Lemma 4.1**.: _Assume that \(H^{i}(X,B)=0\) for \(1\leq i\leq n-1\), i.e., \(R(X,B;L_{d})\) is Cohen-Macaulay. Fix an index \(1\leq q\leq n\). Then we have the following:_ (1) _If_ \(K_{p_{0},q}(X,B;L_{d})=0\) _for some_ \(p_{0}\leq\Theta(d^{q-1})\)_, then_ \(K_{p,q}(X,B;L_{d})=0\) _for_ \(p\leq p_{0}\)_._ (2) _If_ \(K_{p_{0},q}(X,B;L_{d})=0\) _for some_ \(p_{0}\geq r_{d}-\Theta(d^{n-q})\)_, then_ \(K_{p,q}(X,B;L_{d})=0\) _for_ \(p\geq p_{0}\) Proof.: By Proposition 2.2, [8, Proposition 5.1], and [19, Theorem 1.1], if \(0\leq q^{\prime}\leq q-1\), then \[K_{p,q^{\prime}}(X,B;L_{d})=K_{r_{d}-p-n,n+1-q^{\prime}}(X,\omega_{X}\otimes B^{ -1};L_{d})^{\vee}=0\ \ \text{for}\ p\geq r_{d}-\Theta(d^{n-q^{\prime}}).\] Thus \(K_{p_{0},q^{\prime}}(X,B;L_{d})=0\) for \(0\leq q^{\prime}\leq q\), so the assertion (2) follows. Now, by Proposition 2.2, the assertion (2) implies the assertion (1). _Remark 4.2_.: If \(R(X,B;L_{d})\) is Cohen-Macaulay, then [8, Theorem 4.1], [19, Theorem 1.1], and Lemma 4.1 imply Theorem 1.1. In general, we know from [8, Theorem 4.1] and [19, Theorem 1.1] that \(c_{q}(d)=\Theta(d^{q-1})\). Using Boij-Soderberg theory [4], [13], one can show that vanishing and nonvanishing of \(K_{p,q}(X,B;L_{d})\) do not alternate for a while after \(c_{q}(d)\). 
This means that \[K_{p,q}(X,B;L_{d})\neq 0\ \ \text{for}\ c_{q}(d)\leq p\leq r_{d}-\Theta(d^{n-1}).\] However, we will not use this remark in our proof of Theorem 1.1. **Lemma 4.3**.: _Theorem 1.1 holds when \(n=1\)._ Proof.: When \(n=1\), [8, Proposition 5.1 and Corollary 5.2] imply \[K_{p,1}(X,B;L_{d})\neq 0\ \ \text{for}\ h^{0}(B)\leq p\leq r_{d}-h^{0}(X, \omega_{X}\otimes B^{-1})-1.\] This shows that \(c_{1}(d)=\Theta(1)\) and \(c_{1}^{\prime}(d)=\Theta(1)\). Then Lemma 4.1 implies the lemma. Assume henceforth that \(n=\dim X\geq 2\). Let \(H\) be a very ample line bundle (independent of \(d\)) on \(X\) such that \[H^{i}(X,B\otimes H)=H^{i}(X,B\otimes H^{-1})=0\ \ \text{for}\ 1\leq i\leq n-1.\] As \(d\) is sufficiently large, we have \[H^{i}(X,B\otimes H\otimes L_{d}^{m})=H^{i}(X,B\otimes H^{-1}\otimes L_{d}^{m} )=0\ \ \text{for}\ 1\leq i\leq n-1\ \text{and}\ m\in\mathbf{Z}. \tag{4.1}\] This means that \(R(X,B\otimes H;L_{d})\) and \(R(X,B\otimes H^{-1};L_{d})\) are Cohen-Macaulay. Clearly, \[\begin{split}& H^{0}(X,B\otimes H\otimes L_{d}^{m})=H^{0}(X,B \otimes H^{-1}\otimes L_{d}^{m})=0\ \ \text{for}\ m<0;\\ & H^{n}(X,B\otimes H\otimes L_{d}^{m})=H^{n}(X,B\otimes H^{-1} \otimes L_{d}^{m})=0\ \ \text{for}\ m>0;\end{split} \tag{4.2}\] Choose a general member \(\overline{X}\in|H|\), and put \[\overline{L}_{d}:=L_{d}|_{\overline{X}},\ \overline{B}:=B|_{ \overline{X}},\ \overline{H}:=H|_{\overline{X}};\] \[V_{d}:=H^{0}(X,L_{d}),\ V_{d}^{\prime}:=H^{0}(X,L_{d}\otimes H^ {-1}),\ \overline{V}_{d}:=H^{0}(X,\overline{L}_{d});\] \[r_{d}:=\dim V_{d}-1,\ v_{d}^{\prime}:=\dim V_{d}^{\prime},\ \overline{r}_{d}:=\dim\overline{V}_{d}-1\] so that \(r_{d}=v_{d}^{\prime}+\overline{r}_{d}\) and \(\overline{r}_{d}=\Theta(d^{n-1})\). As in the previous section, fix a splitting \(V_{d}=V_{d}^{\prime}\oplus\overline{V}_{d}\). Then (3.2) and (3.3) hold. Furthermore, the short exact sequences (3.4) and (3.9) induce the map \(\theta_{p,q}\) in (3.5) and \(\theta_{p,q}^{\prime}\) in (3.10), respectively. In view of [16, Corollary (1.d.4)], they fit into the following exact sequences \[K_{p+1,q-1}(X,B\otimes H;L_{d})\longrightarrow\overline{K}_{p+1,q-1}(X, \overline{B}\otimes\overline{H};L_{d})\xrightarrow{\theta_{p,q}}K_{p,q}(X,B;L _{d})\longrightarrow K_{p,q}(X,B\otimes H;L_{d}); \tag{4.3b}\] \[K_{p,q}(X,B\otimes H^{-1};L_{d})\longrightarrow K_{p,q}(X,B;L_{d}) \xrightarrow{\theta_{p,q}^{\prime}}K_{p,q}(X,\overline{B};L_{d}) \longrightarrow K_{p-1,q+1}(X,B\otimes H^{-1};L_{d}). \tag{4.3a}\] **Proposition 4.4**.: _For each \(1\leq q\leq n\), we have_ \[c_{q}(d)=\Theta(d^{q-1})\ \ \text{and}\ \ c_{q}^{\prime}(d)=\begin{cases} \Theta(d^{n-q})&\text{if}\ H^{q-1}(X,B)=0\ \text{or}\ q=1\\ q-1&\text{if}\ H^{q-1}(X,B)\neq 0\ \text{and}\ q\geq 2.\end{cases}\] Proof.: We proceed by induction on \(n\). As the assertion holds for \(n=1\) by Lemma 4.3, we assume that \(n\geq 2\) and the assertions of the lemma hold for \(\overline{X}\). First, we consider \(c^{\prime}_{q}(d)\). Suppose that \(H^{q-1}(X,B)\neq 0\) and \(q\geq 2\). Then \[K_{r_{d}-q+1,q}(X,B;L_{d})=H^{q-1}(X,\wedge^{r_{d}}M_{L_{d}}\otimes B\otimes L_ {d})=H^{q-1}(X,B)\neq 0.\] Since \(K_{p,q}(X,B;L_{d})=0\) for \(p\geq r_{d}-q\), it follows that \(c^{\prime}_{q}(d)=q-1\). Suppose that \(H^{q-1}(X,B)=0\) when \(q\geq 2\) or \(q=1\). By Proposition 2.2 and [19, Theorem 1.1], \[K_{p,q}(X,B;L)\subseteq K_{r_{d}-p-n,n+1-q}(X,\omega_{X}\otimes B^{-1};L_{d})^ {\vee}=0\ \ \text{for}\ 0\leq r_{d}-p-n\leq\Theta(d^{n-q}),\] so \(c^{\prime}_{q}(d)\geq\Theta(d^{n-q})\). 
For \(2\leq q\leq n\), let \(\overline{c}^{\prime}_{q-1}(d)\) be the number such that \[K_{\overline{c}^{\prime}_{q-1}(d),q-1}(\overline{X},\overline{B}\otimes \overline{H};\overline{L}_{d})\neq 0\ \ \text{and}\ \ K_{p,q-1}(\overline{X},\overline{B}\otimes\overline{H};\overline{L}_{d})=0 \ \ \text{for}\ p\geq\overline{r}_{d}-\overline{c}^{\prime}_{q-1}(d)+1.\] By induction, \(\overline{c}^{\prime}_{q-1}(d)=\Theta(d^{n-q})\). Recall that \(H^{1}(X,B)=0\) when \(q=2\). By considering (3.3), (3.7), (3.8), we get \[\overline{K}_{r_{d}-\overline{c}^{\prime}_{q-1}(d),q-1}(X,\overline{B}\otimes \overline{H};L_{d})=K_{r_{d}-\overline{c}^{\prime}_{q-1}(d),q-1}(X,\overline{B }\otimes\overline{H};L_{d})\neq 0\ \ \text{for}\ 2\leq q\leq n.\] Possibly replacing \(H\) by more positive \(H\) (still independent of \(d\)), we may assume that \[h^{0}(X,B\otimes H)>h^{0}(X,B). \tag{4.4}\] Then \(\overline{K}_{0,0}(\overline{X},\overline{B}\otimes\overline{H};\overline{L}_{ d})\neq 0\). Putting \(\overline{c}^{\prime}_{0}(d):=\overline{r}_{d}=\Theta(d^{n-1})\), we get from (3.6) that \[\overline{K}_{r_{d}-\overline{c}^{\prime}_{0}(d),0}(X,\overline{B}\otimes \overline{H};L_{d})\neq 0.\] For \(1\leq q\leq n\), thanks to (4.1), Proposition 2.2 and [19, Theorem 1.1] yield that \[K_{r_{d}-\overline{c}^{\prime}_{q-1}(d),q-1}(X,B\otimes H;L_{d})=K_{\overline{ c}^{\prime}_{q-1}(d)-n,n+2-q}(X,\omega_{X}\otimes B^{-1}\otimes H^{-1};L_{d})^{ \vee}=0,\] since \(\overline{c}^{\prime}_{q-1}(d)-n=\Theta(d^{n-q})<\Theta(d^{n+1-q})\). Then the map \[\theta_{r_{d}-\overline{c}^{\prime}_{q-1}(d)-1,q}\colon\overline{K}_{r_{d}- \overline{c}^{\prime}_{q-1}(d),q-1}(X,\overline{B}\otimes\overline{H};L_{d}) \longrightarrow K_{r_{d}-\overline{c}^{\prime}_{q-1}(d)-1,q}(X,B;L_{d})\] in (4.3a) is a nonzero injective map. Thus \(c^{\prime}_{q}(d)\leq\overline{c}^{\prime}_{q-1}(d)+1=\Theta(d^{n-q})\), so \(c^{\prime}_{q}(d)=\Theta(d^{n-q})\). Next, we consider \(c_{q}(d)\). We know from [19, Theorem 1.1] that \(c_{q}(d)\geq\Theta(d^{q-1})\). When \(q=n\), Proposition 2.2 says that there is a surjective map \[K_{p,n}(X,B;L_{d})\longrightarrow K_{r_{d}-p-n,1}(X,\omega_{X}\otimes B^{-1}; L_{d})^{\vee}.\] As we have seen in the previous paragraph that \(K_{r_{d}-p-n,1}(X,\omega_{X}\otimes B^{-1};L_{d})^{\vee}\neq 0\) for some \(p=\Theta(d^{n-1})\), we have \(c_{n}(d)\leq\Theta(d^{n-1})\). Hence \(c_{n}(d)=\Theta(d^{n-1})\). Assume that \(1\leq q\leq n-1\). Let \(\overline{c}_{q}(d)\) be the number such that \[K_{\overline{c}_{q}(d),q}(\overline{X},\overline{B};\overline{L}_{d})\neq 0\ \ \text{and}\ \ K_{p,q}(\overline{X},\overline{B};\overline{L}_{d})=0\ \ \text{for}\ p\leq\overline{c}_{q}(d)-1.\] By induction, \(\overline{c}_{q}(d)=\Theta(d^{q-1})\). Note that \[K_{\overline{c}_{q}(d),q}(X,\overline{B};L_{d})\neq 0\] thanks to (3.3). By [19, Theorem 1.1], \[K_{\overline{c}_{q}(d)-1,q+1}(X,B\otimes H^{-1};L_{d})=0\] since \(\overline{c}_{q}(d)-1=\Theta(d^{q-1})<\Theta(d^{q})\). Then the map \[\theta^{\prime}_{\overline{c}_{q}(d),q}\colon K_{\overline{c}_{q}(d),q}(X,B;L_{ d})\longrightarrow K_{\overline{c}_{q}(d),q}(X,\overline{B};L_{d})\] in (4.3b) is a nonzero surjective map. Thus \(c_{q}(d)\leq\overline{c}_{q}(d)=\Theta(d^{q-1})\), so \(c_{q}(d)=\Theta(d^{q-1})\). Next, we prove the following technical lemma. 
**Lemma 4.5**.: _Let \(B^{\prime}\) be a line bundle on \(X\) (independent of \(d\)), and \(L\) be a very ample line bundle on \(X\) (independent of \(d\)) such that_ \[H^{i}(X,L^{m})=0\ \text{ for }i>0\text{ and }m>0\text{ or }i<n\text{ and }m<0;\] \[H^{i}(X,B^{\prime}\otimes L^{m})=0\ \text{ for }i>0\text{ and }m>0\text{ or }i<n\text{ and }m<0;\] \[H^{i}(X,M_{L_{d}}\otimes L^{m})=0\ \text{ for }i>0\text{ and }m>0.\] _Put \(H:=L^{n+1}\). For \(1\leq p\leq\Theta(d^{q-1})\) and \(1\leq q\leq n+1\), we have the following:_ (1) _If \(q\geq 2\) and \(H^{q-1}(X,\wedge^{p}M_{L_{d}}\otimes B^{\prime}\otimes L_{d})=0\), then \(H^{q-1}(X,\wedge^{p+1}M_{L_{d}}\otimes B^{\prime}\otimes H\otimes L_{d})=0\)._ (2) _If \(q\leq n\) and \(H^{q}(X,\wedge^{p}M_{L_{d}}\otimes B^{\prime})=0\), then \(H^{q}(X,\wedge^{p+1}M_{L_{d}}\otimes B^{\prime}\otimes H)=0\)._ Proof.: Notice that \(\mathscr{O}_{X}\) and \(M_{L_{d}}\) are \((n+1)\)-regular with respect to \(L\). By Lemma 2.4, there are finitely dimensional vector spaces \(\ldots,W_{1},W_{0}\) over \(\mathbf{k}\) and an exact sequence \[\cdots\longrightarrow W_{1}\otimes H^{-2}\longrightarrow W_{0}\otimes H^{-1} \longrightarrow M_{L_{d}}\longrightarrow 0. \tag{4.5}\] (1) By Lemma 2.3, it is sufficient to prove that \[H^{q-1}(X,M_{L_{d}}\otimes\wedge^{p}M_{L_{d}}\otimes B^{\prime}\otimes H \otimes L_{d})=0.\] For this purpose, consider the exact sequence from (4.5): \[\cdots\longrightarrow W_{1}\otimes\wedge^{p}M_{L_{d}}\otimes B^{ \prime}\otimes H^{-1}\otimes L_{d}\longrightarrow W_{0}\otimes\wedge^{p}M_{L_ {d}}\otimes B^{\prime}\otimes L_{d}\] \[\longrightarrow M_{L_{d}}\otimes\wedge^{p}M_{L_{d}}\otimes B^{ \prime}\otimes H\otimes L_{d}\longrightarrow 0.\] In view of [17, Proposition B.1.2], it suffices to show that \[H^{q-1+i}(X,\wedge^{p}M_{L_{d}}\otimes B^{\prime}\otimes H^{-i}\otimes L_{d})= K_{p-q-i+1,q+i}(X,B^{\prime}\otimes H^{-i};L_{d}).=0\ \text{ for }i\geq 0.\] When \(i=0\), this is the given condition. When \(i\geq 1\), this follows from [19, Theorem 1.1] since \(p-q-i+1\leq\Theta(d^{q-1})<\Theta(d^{q+i-1})\). (2) By Lemma 2.3, it is sufficient to prove that \[H^{q}(X,M_{L_{d}}\otimes\wedge^{p}M_{L_{d}}\otimes B^{\prime}\otimes H)=0. \tag{4.6}\] Let \(M_{0}\) be the kernel of the map \(W_{0}\otimes H^{-1}\to M_{L_{d}}\) in (4.5). We have a short exact sequence \[0\longrightarrow M_{0}\otimes B^{\prime}\otimes H\longrightarrow W_{0} \otimes B^{\prime}\longrightarrow M_{L_{d}}\otimes B^{\prime}\otimes H \longrightarrow 0. \tag{4.7}\] As \(H^{q}(X,\wedge^{p}M_{L_{d}}\otimes B^{\prime})=0\), the claim (4.6) is implied by the injectivity of the map \[\rho\colon H^{q+1}(X,\wedge^{p}M_{L_{d}}\otimes B^{\prime}\otimes H) \longrightarrow W_{0}\otimes H^{q+1}(X,\wedge^{p}M_{L_{d}}\otimes B^{\prime}).\] This map fits into the following commutative diagram \[H^{q}(X,\wedge^{p}M_{L_{d}}\otimes M_{0}\otimes B^{\prime}\otimes H \otimes L_{d})\] \[H^{q+1}(X,\wedge^{p}M_{L_{d}}\otimes M_{0}\otimes B^{\prime} \otimes H)\] \[\wedge^{p}V_{d}\otimes H^{q+1}(X,M_{0}\otimes B^{\prime}\otimes H) \stackrel{{\varphi}}{{\longrightarrow}}W_{0}\otimes\wedge^{p}V_{d} \otimes H^{q+1}(X,B^{\prime}).\] For the injectivity of the map \(\rho\), it is enough to check that \(\psi\) and \(\varphi\) are injective. 
From (4.5), we have an exact sequence \[\cdots\longrightarrow W_{2}\otimes H^{-3}\longrightarrow W_{1}\otimes H^{-2} \longrightarrow M_{0}\longrightarrow 0.\] By [19, Theorem 1.1], \[H^{q+i}(X,\wedge^{p}M_{L_{d}}\otimes B^{\prime}\otimes H^{-i-1}\otimes L_{d})=K_{p- q-i,q+i+1}(X,B^{\prime}\otimes H^{-i-1};L_{d})=0\ \ \text{for}\ i\geq 0\] since \(p-q-i\leq\Theta(d^{q-1})<\Theta(d^{q+i})\). By [17, Proposition B.1.2], \[H^{q}(X,\wedge^{p}M_{L_{d}}\otimes M_{0}\otimes B^{\prime}\otimes H\otimes L_{ d})=0,\] so \(\psi\) is injective. On the other hand, we get from (4.7) that \[H^{n+2-q}(X,M_{0}\otimes B^{\prime}\otimes H)=W_{0}\otimes H^{n+2-q}(X,B^{ \prime}),\] so \(\varphi\) is an isomorphism. Now, take a very ample line bundle \(L\) on \(X\) (independent of \(d\)) such that \[H^{i}(X,L^{m})=0\ \ \text{for}\ i>0\ \text{and}\ m>0\ \text{or}\ i<n\ \text{and}\ m<0; \tag{4.8b}\] \[H^{i}(X,B\otimes L^{m})=0\ \ \text{for}\ i>0\ \text{and}\ m>0\ \text{or}\ i<n\ \text{and}\ m<0;\] (4.8c) \[H^{i}(X,M_{L_{d}}\otimes L^{m})=0\ \ \text{for}\ i>0\ \text{and}\ m>0;\] (4.8d) \[H^{i}(X,M_{L_{d}}\otimes\omega_{X}\otimes B^{-1}\otimes L^{m})=0 \ \ \text{for}\ i>0\ \text{and}\ m>0; \tag{4.8a}\] By Proposition 4.4, we can take an integer \(c\geq c_{1}(d)+1,c_{n}^{\prime}(d)-n+1\) independent of \(d\). Successively applying Lemma 4.5 and possibly replacing \(L\) by a higher power of \(L\) (still independent of \(d\)), we may assume that \[K_{c-1,1}(X,B\otimes L^{n+1};L_{d})=H^{1}(X,\wedge^{c}M_{L_{d}} \otimes B\otimes L^{n+1})=0;\] \[K_{c-1,1}(X,\omega_{X}\otimes B^{-1}\otimes L^{n+1};L_{d})=H^{1} (X,\wedge^{c}M_{L_{d}}\otimes\omega_{X}\otimes B^{-1}\otimes L^{n+1})=0.\] By Lemma 4.1, we have \[K_{c_{1}(d),1}(X,B\otimes L^{n+1};L_{d})=H^{1}(X,\wedge^{c_{1}( d)+1}M_{L_{d}}\otimes B\otimes L^{n+1})=0; \tag{4.9b}\] \[K_{c_{n}^{\prime}(d)-n,1}(X,\omega_{X}\otimes B^{-1}\otimes L^{n +1};L_{d})=H^{1}(X,\wedge^{c_{n}^{\prime}(d)-n+1}M_{L_{d}}\otimes\omega_{X} \otimes B^{-1}\otimes L^{n+1})=0. \tag{4.9a}\] From now on, replace \(H\) by \(H:=L^{n+1}\). Then (4.1) holds thanks to (4.8b), so \(R(X,B\otimes H;L_{d})\) and \(R(X,B\otimes H^{-1};L_{d})\) are Cohen-Macaulay. Clearly, (4.2) is satisfied. **Proposition 4.6**.: _Assume that \(n\geq 2\). For \(1\leq q\leq n\), we have the following:_ \((1)\) _If \(K_{p,q}(X,B;L_{d})=0\) for \(p<c_{q}(d)\), then \(K_{p+1,q}(X,B\otimes H;L_{d})=0\). Consequently, the map_ \[\theta_{c_{q}(d),q}\colon\overline{K}_{c_{q}(d)+1,q-1}(X,\overline{B}\otimes \overline{H};L_{d})\longrightarrow K_{c_{q}(d),q}(X,B;L_{d})\] _in (4.3a) is a nonzero surjective map._ \((2)\) _If \(K_{p,q}(X,B;L_{d})=0\) for \(p>r_{d}-c_{q}^{\prime}(d)\), then \(K_{p-1,q}(X,B\otimes H^{-1};L_{d})=0\). Consequently, the map_ \[\theta_{r_{d}-c_{q}^{\prime}(d),q}^{\prime}\colon K_{r_{d}-c_{q}^{\prime}(d),q }(X,B;L_{d})\longrightarrow K_{r_{d}-c_{q}^{\prime}(d),q}(X,\overline{B};L_{d})\] _in (4.3b) is a nonzero injective map._ Proof.: (1) Recall that \(R(X,B\otimes H;L_{d})\) is Cohen-Macaulay. Then Lemma 4.1 says that \[K_{c_{q}(d),q}(X,B\otimes H;L_{d})=0\implies K_{p+1,q}(X,B\otimes H;L_{d})=0 \ \ \text{for}\ p\leq c_{q}(d)-1.\] Thus it suffices to show that \(K_{c_{q}(d),q}(X,B\otimes H;L_{d})=0\). The case \(q=1\) is nothing but (4.9a). Assume that \(2\leq q\leq n\). 
Note that the given condition is \[H^{q-1}(X,\wedge^{c_{q}(d)+q-2}M_{L_{d}}\otimes B\otimes L_{d})=K_{c_{q}(d)-1, q}(X,B;L_{d})=0.\] Then Lemma 4.5 (1) yields \[K_{c_{q}(d),q}(X,B\otimes H;L_{d})=H^{q-1}(X,\wedge^{c_{q}(d)+q-1}M_{L_{d}}\otimes B \otimes H\otimes L_{d})=0.\] (2) Recall that \(R(X,B\otimes H^{-1};L_{d})\) is Cohen-Macaulay. Then Lemma 4.1 says that \[K_{r_{d}-c^{\prime}_{q}(d),q}(X,B\otimes H^{-1};L_{d})=0\ \implies K_{p-1,q}(X,B \otimes H^{-1};L_{d})=0\ \ \text{for}\ p\geq r_{d}-c^{\prime}_{q}(d)+1.\] Thus it suffices to show that \(K_{r_{d}-c^{\prime}_{q}(d),q}(X,B\otimes H^{-1};L_{d})=0\). By Proposition 2.2, \[K_{r_{d}-c^{\prime}_{q}(d),q}(X,B\otimes H^{-1};L_{d})=K_{c^{\prime}_{q}(d)-n, n+1-q}(X,\omega_{X}\otimes B^{-1}\otimes H;L_{d})^{\vee}.\] We need to show that \[H^{n+1-q}(X,\wedge^{c^{\prime}_{q}(d)-q+1}M_{L_{d}}\otimes\omega_{X}\otimes B ^{-1}\otimes H)=0. \tag{4.10}\] When \(q=n\), (4.10) is the same to (4.9b). Assume that \(1\leq q\leq n-1\). If \(q\geq 2\) and \(H^{q-1}(X,B)\neq 0\), then \(c^{\prime}_{q}(d)=q-1\) so that (4.10) holds by (4.8b). Assume \(H^{q-1}(X,B)=0\) when \(q\geq 2\). Then \(c^{\prime}_{q}(d)=\Theta(d^{n-q})\). The given condition and Serre duality yield \[H^{n+1-q}(X,\wedge^{c^{\prime}_{q}(d)-q}M_{L_{d}}\otimes\omega_{ X}\otimes B^{-1}) = H^{n+1-q}(X,\wedge^{r_{d}-c^{\prime}_{q}(d)+q}M_{L_{d}}^{\vee} \otimes\omega_{X}\otimes B^{-1}\otimes L_{d}^{-1})\] \[= H^{q-1}(X,\wedge^{r_{d}-c^{\prime}_{q}(d)+q}M_{L_{d}}\otimes B \otimes L_{d})^{\vee}\] \[= K_{r_{d}-c^{\prime}_{q}(d)+1,q}(X,B;L_{d})^{\vee}\ =\ 0.\] Then the claim (4.10) follows from Lemma 4.5 (2). Theorem 1.1 now follows at once from the previous propositions and Theorem 3.1. Proof of Theorem 1.1.: By Lemma 4.3, we may assume that \(n\geq 2\). Note that (1.1) is proved in Proposition 4.4. For \(1\leq q\leq n\), by Theorem 3.1 (1) and Proposition 4.6 (1), \[K_{p,q}(X,B;L_{d})\neq 0\ \ \text{for}\ c_{q}(d)\leq p\leq r_{d}-\overline{r}_{ d}-1.\] On the other hand, [8, Proposition 5.1] says \(K_{p,0}(X,B;L_{d})=0\) for \(p>\Theta(1)\). Thus we obtain (1.2) for \(q=1\). For \(2\leq q\leq n\), by Theorem 3.1 (2) and Proposition 4.6 (2), \[K_{p,q}(X,B;L_{d})\neq 0\ \ \text{for}\ \overline{r}_{d}\leq p\leq r_{d}-c^{ \prime}_{q}(d).\] As \(r_{d}-\overline{r}_{d}-1=\Theta(d^{n})>\Theta(d^{n-1})=\overline{r}_{d}\), we obtain (1.2) for \(2\leq q\leq n\). ## 5. Complements and Problems In this section, we show some additional results, and discuss some open problems. Recall that the asymptotic vanishing theorem ([19, Theorem 1.1]) holds for singular varieties with coherent sheaves. Precisely, let \(X\) be a projective variety of dimension \(n\), and \(B\) be a coherent sheaf on \(X\). For an integer \(d\geq 1\), let \(L_{d}:=\mathscr{O}_{X}(dA+P)\), where \(A\) is an ample divisor and \(P\) is an arbitrary divisor on \(X\). For each \(1\leq q\leq n+1\), if \(d\) is sufficiently large, then \[K_{p,q}(X,B;L_{d})=0\ \ \text{for}\ 0\leq p\leq\Theta(d^{q-1}).\] We expect that Theorem 1.1 also holds in this setting. **Conjecture 5.1**.: _Theorem 1.1 still holds when \(X\) is a projective variety and \(B\) is a coherent sheaf on \(X\) with \(\operatorname{Supp}B=X\)._ Note that the expected nonvanishing of \(K_{p,q}(X,B;L_{d})\) for \(q>\dim\operatorname{Supp}B+1\) may not hold. _Remark 5.2_.: In the proof of Theorem 1.1, we use the assumption that \(X\) is smooth and \(B\) is a line bundle only when we apply Serre duality. Thus Theorem 1.1 holds when \(X\) is Cohen-Macaulay and \(B\) is a vector bundle. 
From now on, we assume that \(X\) is smooth and \(B\) is a line bundle as in Theorem 1.1. In the remaining, fix an index \(1\leq q\leq n\). It is very natural to study the asymptotic growth of \(c_{q}(d)\) and \(c_{q}^{\prime}(d)\) as \(d\to\infty\). In the spirit of [23], we give an effective upper bound for each of \(c_{q}(d)\) and \(c_{q}^{\prime}(d)\). For this purpose, we introduce some notations. Choose suitably positive very ample divisors \(H_{1},\dots,H_{n-1}\) on \(X\) such that \[\overline{X}_{i}:=H_{1}\cap\dots\cap H_{i}\] is a smooth projective variety for every \(0\leq i\leq n-1\). Note that \(\overline{X}_{0}=X\). For each \(0\leq i\leq n-1\), put \[\overline{H}_{i} :=\mathscr{O}_{X}(H_{i+1})|_{\overline{X}_{i}},\ \overline{B}_{i}:=B|_{ \overline{X}_{i}},\ \overline{B}_{i}^{\prime}:=B(H_{1}+\dots+H_{i})|_{ \overline{X}_{i}};\] \[\overline{L}_{d} :=L_{d}|_{\overline{X}_{i}},\ \overline{r}_{i}(d):=h^{0}( \overline{X}_{i},\overline{L}_{d})-1=\Theta(d^{n-i}),\ \overline{r}_{n}(d):=0,\] and assume that (4.1), (4.2), (4.4) hold for \[X=\overline{X}_{i},\ B=\overline{B}_{i},\overline{B}_{i}^{\prime},\omega_{X_{ i}}\otimes\overline{B}_{i}^{-1},\ H=\overline{H}_{i},\ L_{d}=\overline{L}_{d}.\] **Proposition 5.3**.: \(c_{q}(d)\leq\overline{r}_{n+1-q}(d)-q+1\) _and \(c_{q}^{\prime}(d)\leq\overline{r}_{q}(d)+q\)._ Proof.: We proceed by induction on \(n=\dim X\). When \(n=1\), the assertion is trivial. Assume that \(n\geq 2\). For \(c_{q}^{\prime}(d)\), we may assume that \(H^{q-1}(X,B)=0\) or \(q=1\). In the proof of Proposition 4.4, we proved that \[\theta_{\tau_{d}-\overline{r}_{q}(d)-q,q}\colon\overline{K}_{\tau_{d}- \overline{r}_{q}(d)-(q-1),q-1}(X,\overline{B}\otimes\overline{H};L_{d}) \longrightarrow K_{r_{d}-\overline{r}_{q}(d)-q,q}(X,B;L_{d})\] is nonzero, so \(c_{q}^{\prime}(d)\leq\overline{r}_{q}(d)+q\). We also have \(K_{r_{d}-\overline{r}_{1}(d)-1,1}(X,\omega_{X}\otimes B^{-1};L_{d})\neq 0\). By Proposition 2.2, \(K_{r_{1}(d)-n+1,1}(X,B;L_{d})\neq 0\). Thus we obtain \(c_{n}(d)\leq\overline{r}_{1}(d)-n+1\). For \(1\leq q\leq n-1\), in the proof of Proposition 4.4, we proved that \[\theta_{\overline{r}_{n+1-q}(d)-q+1,q}^{\prime}\colon K_{\tau_{n+1-q}(d)-q+1,q }(X,B;L_{d})\longrightarrow K_{\tau_{n+1-q}(d)-q+1,q}(X,\overline{B};L_{d})\] is nonzero, so \(c_{q}(d)\leq\overline{r}_{n+1-q}(d)-q+1\). _Remark 5.4_.: In Proposition 5.3, we do not assume that \(R(X,B;L_{d})\) is Cohen-Macaulay. However, when \(R(X,B;L_{d})\) is Cohen-Macaulay, by a more careful analysis, one can improve bounds for \(c_{q}(d)\) and \(c_{q}^{\prime}(d)\) as in [23]. In particular, one can recover [8, Theorem 6.1]: If \(X=\mathbf{P}^{n},B=\mathscr{O}_{\mathbf{P}^{n}}(b),L_{d}=\mathscr{O}_{ \mathbf{P}^{n}}(d)\) and \(b\geq 0,d\gg 0\), then \[c_{q}(d)\leq\binom{d+q}{q}-\binom{d-b-1}{q}-q\ \ \text{and}\ \ c_{q}^{\prime}(d)\leq \binom{d+n-q}{n-q}-\binom{n+b}{q+b}+q. \tag{5.1}\] We leave the details to interested readers. In characteristic zero, David Yang [22, Theorem 1] confirmed that \(c_{1}(d)\) is a constant. This gives an answer to [8, Problem 7.2]. On the other hand, for Veronese syzygies, Ein-Lazarsfeld [10, Conjecture 2.3] conjectured that equalities hold in (5.1) whenever \(d\geq b+q+1\). In particular, \(c_{q}(d)\) and \(c_{q}^{\prime}(d)\) are polynomials. One may hope that the same is true in general. **Question 5.5** (cf. [6, Remark 3.2]).: (1) _Does the limit_ \[\lim_{d\to\infty}\frac{c_{q}(d)}{d^{q-1}}\] _exist? 
If so, is the function \(c_{q}(d)\) a polynomial of degree \(q-1\) for sufficiently large \(d\)? What can one say about the leading coefficient \(a_{q-1}:=\lim_{d\to\infty}c_{q}(d)/d^{q-1}\) of \(c_{q}(d)\)?_ _._ (2) _Suppose that_ \(H^{q-1}(X,B)=0\) _if_ \(q\geq 2\)_. Does the limit_ \[\lim_{d\to\infty}\frac{c_{q}^{\prime}(d)}{d^{n-q}}\] _exist? If so, is the function_ \(c_{q}^{\prime}(d)\) _a polynomial of degree_ \(n-q\) _for sufficiently large_ \(d\)_? What can one say about the leading coefficient_ \(a_{n-q}^{\prime}:=\lim_{d\to\infty}c_{q}^{\prime}(d)/d^{n-q}\) _of_ \(c_{q}^{\prime}(d)\)_?_ When \(\operatorname{char}(\mathbf{k})=0\), a geometric meaning of the constant \(c_{1}(d)\) was explored as follows. Ein-Lazarsfeld-Yang [11, Theorem A] proved that if \(B\) is \(p\)-jet very ample, then \(c_{1}(d)\geq p+1\). Agostini [1, Theorem A] proved that if \(c_{1}(d)\geq p+1\), then \(B\) is \(p\)-very ample. These results are higher dimensional generalizations of the gonality conjecture on syzygies of algebraic curves, which was established by Ein-Lazarsfeld [9] and Rathmann [21]. On the other hand, Eisenbud-Green-Hulek-Popescu [12] related the nonvanishing of \(K_{p,q}(X,L_{d})\) to the existence of special secant planes. In particular, if the property \(N_{k}\) holds for \(L_{d}\), then \(L_{d}\) is \((k+1)\)-very ample. It would be exceedingly interesting know whether the nonexistence of special secant planes implies the vanishing of certain \(K_{p,q}(X,L_{d})\). When \(X\) is a smooth projective curve and \(\operatorname{char}(\mathbf{k})=0\), Rathmann [21, Theorem 1.2] showed that if \(H^{1}(X,L_{d})=0\) and \(H^{1}(X,B^{-1}\otimes L_{d})=0\), then \(K_{p,1}(X,B;L_{d})=0\). It is natural to extend this effective result to higher dimensions. When \(B=\mathscr{O}_{X}\) and \(L_{d}=\mathscr{O}_{X}(K_{X}+dA)\), the following problem is closely related to Mukai's conjecture [7, Conjecture 4.2]. **Problem 5.6**.: (1) _Suppose that \(c_{q}(d)\) is a polynomial of degree \(q-1\) for sufficiently large \(d\). Find an effective bound for \(d_{0}\) such that \(c_{q}(d)\) becomes a polynomial for \(d\geq d_{0}\)._ (2) _Suppose that_ \(H^{q-1}(X,B)=0\) _if_ \(q\geq 2\) _and_ \(c_{q}^{\prime}(d)\) _is a polynomial of degree_ \(n-q\) _for sufficiently large_ \(d\)_. Find an effective bound for_ \(d_{0}^{\prime}\) _such that_ \(c_{q}^{\prime}(d)\) _becomes a polynomial for_ \(d\geq d_{0}\)_._ Now, we turn to the asymptotic behaviors of the _Betti numbers_ \[\kappa_{p,q}(X,B;L_{d}):=\dim K_{p,q}(X,B;L_{d}).\] Ein-Erman-Lazarsfeld conjectured that the Betti numbers \(\kappa_{p,q}(X,B;L_{d})\) are normally distributed [5, Conjecture B], and they verified the conjecture for curves [5, Proposition A]. The normal distribution conjecture suggests the following unimodality conjecture. **Conjecture 5.7**.: _The Betti numbers \(\kappa_{p,q}(X,B;L_{d})\) form a unimodal sequence._ As the cases of very small \(p\) and very large \(p\) for \(\kappa_{p,q}(X,B;L_{d})\) are negligible in the normal distribution conjecture, the unimodality conjecture is not a consequence of the normal distribution conjecture. Finally, we verify the unimodality conjecture for curves following the strategy of Erman [15] based on Boij-Soderberg theory. Eisenbud-Schreyer [13] and Boij-Soderberg [4] showed that the Betti table of a graded module over a polynomial ring is a positive rational sum of pure diagrams (see [14, Theorem 2.2] for the precise statement). 
Let \(C\) be a smooth projective curve, \(B\) be a line bundle, and \(L\) be a very ample line bundle of sufficiently large degree \(d\). Put \(c:=h^{0}(X,B)\) and \(c^{\prime}:=h^{1}(X,B)\). By [8, Proposition 5.1 and Corollary 5.2], \(\kappa_{p,0}(C,B;L)=0\) for \(p\geq c\) and \(\kappa_{p,2}(C,B;L)=0\) for \(p\leq r-1-c^{\prime}\), where \(r:=h^{0}(X,L)-1\). By Riemann-Roch theorem, \(r=d-g\approx d\) since \(d\) is sufficiently large. Let \(\pi\) be the Betti table of \(R(C,B;L)\). Then [14, Theorem 2.2] says that \[\pi=\sum_{i=0}^{c}\sum_{j=0}^{c^{\prime}}a_{i,j}\pi_{i,j}\ \ \text{for some rational numbers $a_{i,j}\geq 0$ with }\sum_{i=0}^{c}\sum_{j=0}^{c^{\prime}}a_{i,j}=d, \tag{5.2}\] where each \(\pi_{i,j}\) is the pure diagram of the form: \begin{tabular}{c|c c c c c c c c c} & 0 & \(\cdots\) & \(i-1\) & \(i\) & \(\cdots\) & \(r-j-1\) & \(r-j\) & \(\cdots\) & \(r-1\) \\ \hline 0 & \(\ast\) & \(\cdots\) & \(\ast\) & - & \(\cdots\) & - & - & \(\cdots\) & - \\ 1 & - & \(\cdots\) & - & \(\ast\) & \(\cdots\) & \(\ast\) & - & \(\cdots\) & - \\ 2 & - & \(\cdots\) & - & - & \(\cdots\) & - & \(\ast\) & \(\cdots\) & \(\ast\) \\ \end{tabular} Here "\(\ast\)" indicates a nonzero entry, and "-" indicates a zero entry. We have \[\kappa_{p,0}(\pi_{i,j}) =\frac{(r-1)!(i-p)(r-j+1-p)}{(r+1-p)!p!}\ \ \text{for}\ 0\leq p\leq i-1;\] \[\kappa_{p,1}(\pi_{i,j}) =\frac{(r-1)!(p+1-i)(r-j-p)}{(r-p)!(p+1)!}\ \ \text{for}\ i\leq p\leq r-j-1;\] \[\kappa_{p,2}(\pi_{i,j}) =\frac{(r-1)!(p+2-i)(p-r+j+1)}{(r-p-1)!(p+2)!}\ \ \text{for}\ r-j\leq p\leq r-1.\] **Proposition 5.8**.: (1) _The Betti table of \(R(C,B;L)\) is asymptotically pure:_ \[\frac{a_{i,j}}{d}\to\begin{cases}1&\text{if $i=c$ and $j=c^{\prime}$}\\ 0&\text{otherwise}\end{cases}\ \ \text{as $d\to\infty$}.\] (2)_\(\kappa_{p,0}(C,B;L),\ldots,\kappa_{c-1,0}(C,B;L)\) is increasing, and \(\kappa_{r-c^{\prime},2}(C,B;L),\ldots,\kappa_{r-1,2}(C,B;L)\) is decreasing._ (3) _The Betti numbers \(\kappa_{p,1}(C,B;L)\) form a unimodal sequence._ Proof.: (1) Let \(\overline{a}_{i,j}:=a_{i,j}/d\). Then (5.2) says \[\sum_{i=0}^{c}\sum_{j=0}^{c^{\prime}}\overline{a}_{i,j}=1.\] Notice that \(\kappa_{0,0}(\pi_{i,j})\to i/d\) and \(\kappa_{r-1,2}(\pi_{i,j})\to j/d\) as \(d\to\infty\). Since \(\kappa_{0,0}(C,B;L)=c\) and \(\kappa_{r-1,2}(C,B;L)=c^{\prime}\), it follows from (5.2) that \[\sum_{i=0}^{c}\sum_{j=0}^{c^{\prime}}i\overline{a}_{i,j}\to c\ \ \text{and}\ \ \sum_{i=0}^{c}\sum_{j=0}^{c^{\prime}}j\overline{a}_{i,j}\to c^{\prime}\ \ \text{as}\ d\to\infty.\] Then we have \[\sum_{i=0}^{c}\sum_{j=0}^{c^{\prime}}(c-i)\overline{a}_{i,j}\to 0\ \ \text{and}\ \ \sum_{i=0}^{c}\sum_{j=0}^{c^{\prime}}(c^{\prime}-j)\overline{a}_{i,j}\to 0 \ \ \text{as}\ d\to\infty.\] Thus \(\overline{a}_{i,j}\to 0\) as \(d\to\infty\) unless \(i=c\) and \(j=c^{\prime}\), and hence, \(\overline{a}_{c,c^{\prime}}\to 1\) as \(d\to\infty\). (2) As \(d\gg 0\), we have \(r\approx d\) and \[\kappa_{p,0}(\pi_{i,j})\approx\frac{i-p}{p!}d^{p-1}\ \ \text{for}\ 0\leq p\leq i-1.\] Then (1) implies that \[\kappa_{p,0}(C,B;L)\approx\frac{c-p}{p!}d^{p}\ \ \text{for}\ 0\leq p\leq c-1.\] Thus the first assertion holds, and the second assertion follows from Proposition 2.2. 
(3) For \((r-2+c)/2\leq p\leq r-j-1\), we find \[\frac{\kappa_{p+1,1}(\pi_{i,j})}{\kappa_{p,1}(\pi_{i,j})}=\frac{(r-p)(p+2-i)(r -j-p-1)}{(p+2)(p+1-i)(r-j-p)}\leq 1\] since \(r-p\leq p+2\) and \[(p+2-i)(r-j-p-1)-(p+1-i)(r-j-p)=(r-j-p-1)-(p+1-i)\leq 0.\] For \((r-2+c)/2\leq p\leq r-2\), we get \[\kappa_{p+1,1}(C,B;L)=\sum_{i=0}^{c}\sum_{j=0}^{\min\{c^{\prime},r-p-2\}}a_{i,j }\kappa_{p+1,1}(\pi_{i,j})\leq\sum_{i=0}^{c}\sum_{j=0}^{\min\{c^{\prime},r-p-1 \}}a_{i,j}\kappa_{p,1}(\pi_{i,j})=\kappa_{p,1}(C,B;L),\] so the Betti numbers \(\kappa_{p,q}(C,B;L)\) with \((r-2+c)/2\leq p\leq r-1\) form a decreasing sequence. Now, as in [5, Proof of Proposition A], we compute \[\kappa_{p,1}(C,B;L)=\chi(C,\wedge^{p}M_{L}\otimes B\otimes L)-\binom{r+1}{p+1 }c=\binom{r}{p}\left(b-\frac{pd}{r}-\frac{(r+1)c}{p+1}\right) \tag{5.3}\] for \(c-1\leq p\leq r-c^{\prime}\), where \(b:=r+\deg B+1\). For \((r-1)/2\leq p\leq(r-2+c)/2\), we have \[\frac{\kappa_{p+1,1}(C,B;L)}{\kappa_{p,1}(C,B;L)}=\frac{(r-p)(b-(p+1)d/r-(r+1 )c/(p+2))}{(p+1)(b-pd/r-(r+1)c/(p+1))}\leq 1\] since \(r-p\leq p+1\) and \[\left(b-\frac{(p+1)d}{r}-\frac{(r+1)c}{p+2}\right)-\left(b-\frac{pd}{r}-\frac {(r+1)c}{p+1}\right)=-\frac{d}{r}+\frac{(r+1)c}{(p+1)(p+2)}\approx-1+\frac{dc }{d/2\cdot d/2}<0.\] Thus \(\kappa_{p+1,1}(C,B;L)\leq\kappa_{p,1}(C,B;L)\). We have shown that the Betti numbers \(\kappa_{p,q}(C,B;L)\) with \((r-1)/2\leq p\leq r-1\) form a decreasing sequence. By Proposition 2.2, the Betti numbers \(\kappa_{p,q}(C,B;L)\) with \(0\leq p\leq(r-1)/2\) form an increasing sequence. _Remark 5.9_.: When \(B=\mathscr{O}_{C}\), Proposition 5.8 (1) is the main theorem of [15]. In view of Proposition 5.8 (2), one may expect that nonzero entries of the top row (\(q=0\)) of the Betti table form an increasing sequence and nonzero entries of the bottom row (\(q=n+1\)) of the Betti table form a decreasing sequence in higher dimensions. **Example 5.10**.: Recall that a log-concave sequence of positive terms is unimodal. It is tempting to expect that \(\kappa_{p,1}(C,B;L)\) form a log-concave sequence. Unfortunately, this may fail when \(p\) is small. For instance, let \(C\) be a general smooth projective complex curve of genus \(3\), and \(B:=\omega_{C}(x)\) for a point \(x\in C\). Note that \(B\) is not base point free, \(\deg B=5\), and \(h^{0}(C,B)=3\). If \(L\) is a very ample line bundle on \(C\) of degree \(d\gg 0\), then [9, Theorem C] says that \(\kappa_{1,1}(C,B;L)\) is a polynomial in \(d\) of degree \[\gamma_{1}(B):=\dim\{\xi\in C_{2}\mid\underbrace{H^{0}(C,B)\to H^{0}(\xi,B|_{ \xi})\text{ is not surjective}}_{\Longleftrightarrow\,h^{1}(C,B(-\xi))=h^{0}(C, \mathscr{O}_{C}(\xi-x))=1}\Longleftrightarrow\,x\in\xi}=1\] On the other hand, from (5.3), we find \[\kappa_{2,1}(C,B;L) =\binom{d-3}{2}\left(d+3-\tfrac{2d}{d-3}-\tfrac{(d-2)3}{3}\right) \approx\tfrac{3}{2}d^{2};\] \[\kappa_{3,1}(C,B;L) =\binom{d-3}{3}\left(d+3-\tfrac{3d}{d-3}-\tfrac{(d-2)3}{4}\right) \approx\tfrac{1}{24}d^{4}.\] Thus \(\kappa_{2,1}(C,B;L)^{2}<\kappa_{1,1}(C,B;L)\cdot\kappa_{3,1}(C,B;L)\).
2302.00516
Quantifying the HIV reservoir with dilution assays and deep viral sequencing
People living with HIV on antiretroviral therapy often have undetectable virus levels by standard assays, but "latent" HIV still persists in viral reservoirs. Eliminating these reservoirs is the goal of HIV cure research. The quantitative viral outgrowth assay (QVOA) is commonly used to estimate the reservoir size, i.e., the infectious units per million (IUPM) of HIV-persistent resting CD4+ T cells. A new variation of the QVOA, the Ultra Deep Sequencing Assay of the outgrowth virus (UDSA), was recently developed that further quantifies the number of viral lineages within a subset of infected wells. Performing the UDSA on a subset of wells provides additional information that can improve IUPM estimation. This paper considers statistical inference about the IUPM from combined dilution assay (QVOA) and deep viral sequencing (UDSA) data, even when some deep sequencing data are missing. Methods are proposed to accommodate assays with wells sequenced at multiple dilution levels and with imperfect sensitivity and specificity, and a novel bias-corrected estimator is included for small samples. The proposed methods are evaluated in a simulation study, applied to data from the University of North Carolina HIV Cure Center, and implemented in the open-source R package SLDeepAssay.
Sarah C. Lotspeich, Brian D. Richardson, Pedro L. Baldoni, Kimberly P. Enders, Michael G. Hudgens
2023-02-01T15:36:16Z
http://arxiv.org/abs/2302.00516v2
# Quantifying the HIV reservoir with dilution assays and deep viral sequencing

###### Abstract

People living with HIV on antiretroviral therapy often have undetectable virus levels by standard assays, but "latent" HIV still persists in viral reservoirs. Eliminating these reservoirs is the goal of HIV cure research. The quantitative viral outgrowth assay (QVOA) is commonly used to estimate the reservoir size, i.e., the infectious units per million (IUPM) of HIV-persistent resting CD4+ T cells. A new variation of the QVOA, the Ultra Deep Sequencing Assay of the outgrowth virus (UDSA), was recently developed that further quantifies the number of viral lineages within a subset of infected wells. Performing the UDSA on a subset of wells provides additional information that can improve IUPM estimation. This paper considers statistical inference about the IUPM from combined dilution assay (QVOA) and deep viral sequencing (UDSA) data, even when some deep sequencing data are missing. The proposed methods accommodate assays with multiple dilution levels and include a novel bias-corrected estimator for small samples. The proposed methods are evaluated in a simulation study, applied to data from the University of North Carolina HIV Cure Center, and implemented in the open-source R package SLDeepAssay.

_Keywords:_ Distinct viral lineages; infectious units per million; maximum likelihood estimation; missing data; Poisson distribution; serial limiting dilution assay.

## 1 Introduction

Modern antiretroviral therapy (ART) is a highly effective treatment for people living with HIV, often helping them achieve viral suppression (i.e., have a level of virus in their blood that is below the limit of detection of standard assays) and eliminating their risk of transmission to others. However, despite viral suppression, "latent" HIV-infected cells, which do not produce viral proteins and are not recognized by the immune system, will remain. These latently infected cells are commonly referred to as the HIV reservoir (Ndung'u et al., 2019). If a person living with HIV stops taking ART, these latently infected cells will result in viral rebound, sometimes in a matter of weeks (Li et al., 2021). Thus, the continued use of ART is necessary to maintain viral suppression, but there are costs and potential toxicities associated with lifelong use (Chawla et al., 2018). Furthermore, as of 2021, only an estimated 75% of the 38.4 million people living with HIV worldwide currently have access to treatment (UNAIDS, 2022), and it is unclear whether a feasible path towards 100% treatment coverage exists. For these reasons, developing a cure for HIV that eliminates the latent viral reservoir and removes the need for ART is of high scientific and public health importance (Ndung'u et al., 2019).

In HIV cure studies, a primary endpoint is the concentration of latent HIV-infected cells, often measured in infectious units per million cells (IUPM). This concentration is not directly measurable and is typically estimated through a serial limiting dilution (SLD) assay, wherein wells with known dilution levels (i.e., known numbers of cells) are tested for the presence of at least one cell with infectious virus. Repeating this process over multiple replicate wells (i.e., wells with the same dilution level) and at various dilution levels provides information for estimating the IUPM in the source population of cells (i.e., the person taking ART).
The quantitative viral outgrowth assay (QVOA) is one standard SLD assay for quantifying the HIV reservoir, as measured by the IUPM of resting CD4+ T cells. The QVOA tests wells for the presence of the HIV p24 antigen, an indicator that at least one cell within the well is HIV-infected. Various statistical methods have been proposed for drawing inference about the IUPM based on data from dilution assays like the QVOA. Myers et al. (1994) proposed a maximum likelihood estimator (MLE) of the IUPM, along with a corresponding exact confidence interval derived by inverting the likelihood ratio test. Trumble et al. (2017) proposed a bias-corrected MLE (BC-MLE), adapted from Hepworth and Watson (2009), that corrects for upward bias of the MLE in small samples. The open-source SLDAssay software package implements the methods described above. The Ultra Deep Sequencing Assay of the outgrowth virus (UDSA), a variation of the QVOA, is a newer SLD assay for measuring the latent HIV reservoir that tests for the presence of distinct viral lineages (DVLs) in each well. Whereas the QVOA tests only for the presence of HIV in a given well, the UDSA provides additional information about the number of DVLs therein. Assuming that most latently infected cells are infected with at most one DVL, knowing the number of DVLs provides an improved lower bound (relative to the QVOA) for the number of infected cells in that well. Often, the QVOA is initially performed to identify the wells that are infected with at least one DVL (i.e., are positive), and then the UDSA is performed on a subsample of positive wells; this leads to a missing data problem. Lee et al. (2017) proposed an MLE of the IUPM that incorporates partially observed additional information from the UDSA. This paper justifies and extends existing methods to quantify the HIV reservoir from dilution assay and deep viral sequencing data. The Lee et al. (2017) estimator is shown to be consistent and asymptotically normal, and a bias-corrected MLE that accounts for the additional information from the UDSA is introduced. The possibility of the UDSA not detecting all DVLs in the source population is considered. Further, the MLE is extended to accommodate assays with multiple dilution levels, fully capturing all available information. The proposed methods are compared with existing methods via simulation studies and an application to real assay data from the University of North Carolina (UNC) HIV Cure Center. The rest of the paper proceeds as follows. In Section 2, notation is defined, assumptions are given, and the proposed methods are introduced. The simulation studies are presented in Section 3, and data from the UNC HIV Cure Center are analyzed in Section 4. The paper concludes with a discussion in Section 5. ## 2 Methods For simplicity, Sections 2.1-2.5 assume that only assay data from a single dilution level are utilized. In Section 2.6, the methods are extended to the multiple dilution level setting. ### Model and data Assume that \(M\) replicate wells, each with the same number of cells, are sequenced via the QVOA, out of which \(M_{P}\) are found to be HIV-positive (i.e., to contain at least one DVL) and \(M_{N}\) are found to be HIV-negative. Assume for now that there are one million cells per well; Section 2.6 discusses the scenario with assay data from a single dilution level other than one million cells per well. Of the \(M_{P}\) positive wells, a subset of size \(m\) (\(m\leq M_{P}\)) is deep-sequenced via the UDSA to obtain DVL information. 
Following Myers et al. (1994), Trumble et al. (2017), and Lee et al. (2017), assume:

(A1) Cells are sampled randomly into the \(M\) wells from a larger source population.
(A2) For each DVL, infected cells are randomly distributed amongst the wells.
(A3) The QVOA and UDSA have perfect sensitivity and specificity.

Assay data are collected in two stages. _Stage 1 (QVOA):_ First, let \(X_{j}\) be a latent variable denoting the number of cells in well \(j\) that are infected with any DVL of HIV, \(j\in\{1,\ldots,M\}\). From the QVOA, indicator variables \(W_{j}=\text{I}(X_{j}\geq 1)\) are observed in place of \(X_{j}\), where \(W_{j}=1\) if well \(j\) is positive and \(W_{j}=0\) otherwise. _Stage 2 (UDSA):_ Denote by \(n\) the number of DVLs detected across the deep-sequenced wells, \(n\in\{0,1,2,\ldots\}\). Then, let \(X_{ij}\) be a latent variable denoting the number of cells in well \(j\) that are infected with observed DVL \(i\) of HIV, \(i\in\{1,\ldots,n\}\). In practice, the indicator variables \(Z_{ij}=\text{I}\left(X_{ij}\geq 1\right)\) are observed directly from the UDSA instead of the \(X_{ij}\). For a given well \(j\), let the vector \(\boldsymbol{Z}_{j}=(Z_{1j},\ldots,Z_{nj})^{\text{T}}\) contain indicators of whether each of the \(n\) DVLs was detected therein. The random variables from Stages 1 and 2 are related via \(W_{j}=\text{I}(\sum_{i=1}^{n}Z_{ij}\geq 1)\). Let the vector \(\boldsymbol{X}_{i}=(X_{i1},\ldots,X_{iM})^{\text{T}}\) contain the numbers of cells infected with DVL \(i\) in wells \(1\) through \(M\). Suppose that the components \(X_{ij}\) are independent (A2) and Poisson distributed with rate \(\lambda_{i}\geq 0\), where the DVL-specific rate parameter \(\lambda_{i}\) represents the mean number of cells per well infected with DVL \(i\). The counts of infected cells should be approximately Poisson distributed when the number of cells per well is large and \(\lambda_{i}\) is small (Myers et al., 1994; Trumble et al., 2017). Because so few cells are latently infected and, of those that are, most are infected by only one DVL, the indicators for each DVL (i.e., \(Z_{1j},\ldots,Z_{nj}\)) are approximately independent. Then, the indicators \(W_{j}\) and \(Z_{ij}\) follow Bernoulli distributions with \(\Pr(W_{j}=1)=1-\exp{(-\sum_{i=1}^{n}\lambda_{i})}\) and \(\Pr(Z_{ij}=1)=1-\exp{(-\lambda_{i})}\). Because it is assumed that there are one million cells per well, \(\sum_{i=1}^{n}\lambda_{i}\) is the IUPM, which will be denoted by \(\Lambda\). Often, not all of the \(M_{P}\) positive wells are sequenced with the UDSA, which introduces missingness. Let \(R_{j}\) be a complete data indicator for well \(j\), defined such that \(R_{j}=1\) if the well has complete data and \(R_{j}=0\) otherwise. Complete data are available from the \(m\) positive wells with the additional UDSA information and from the \(M_{N}\) negative wells. (No data are missing from the negative wells because, under (A3), negative QVOA results imply that there are zero DVLs in the negative wells.) Thus, the number of wells with complete data is \(\sum_{j=1}^{M}R_{j}=m+M_{N}\).

### Likelihood construction

All wells are initially tested for the presence of infectious virus using the QVOA, so the Stage 1 variables \(\boldsymbol{W}=(W_{1},\ldots,W_{M})^{\text{T}}\) are fully observed. However, since only a subset of the positive wells undergoes the UDSA in Stage 2, \(\boldsymbol{Z}=(\boldsymbol{Z}_{1},\ldots,\boldsymbol{Z}_{M})\) will have missing data for the \((M_{P}-m)\) unsequenced positive wells.
Based on this data collection scheme, illustrated in Figure 1, there are three types of well-level observations to consider:

* (T1) A negative well (\(R_{j}=1,W_{j}=0,\boldsymbol{Z}_{j}=\boldsymbol{0}\));
* (T2) A positive well that was deep sequenced (\(R_{j}=1,W_{j}=1,\boldsymbol{Z}_{j}=\boldsymbol{z}_{j}\)); and
* (T3) A positive well that was not deep sequenced (\(R_{j}=0,W_{j}=1,\boldsymbol{Z}_{j}=\boldsymbol{?}\)).

For the two positive well types, at least one element in the \(\boldsymbol{Z}_{j}\) vector equals one since the well has to be positive for at least one DVL. To incorporate all available information on all wells, the observed-data likelihood function is proportional to \[L(\boldsymbol{\lambda}|\boldsymbol{W},\boldsymbol{Z},\boldsymbol{R})=\prod_{j=1}^{M}\Pr_{\boldsymbol{\lambda}}(W_{j},\boldsymbol{Z}_{j})^{R_{j}}\Pr_{\boldsymbol{\lambda}}(W_{j})^{(1-R_{j})},\] where \(\Pr_{\boldsymbol{\lambda}}(W_{j},\boldsymbol{Z}_{j})\) is the joint probability mass function (PMF) of \((W_{j},\boldsymbol{Z}_{j})\) and \(\Pr_{\boldsymbol{\lambda}}(W_{j})\) is the marginal PMF of \(W_{j}\). Because wells are selected for deep sequencing based only on the fully observed QVOA results \(\boldsymbol{W}\), the UDSA results \(\boldsymbol{Z}\) are missing at random (MAR) for the unsequenced wells (Little and Rubin, 2002). Therefore, the distribution of \(\boldsymbol{R}\) can be omitted from the likelihood for \(\boldsymbol{\lambda}\). Under the assumption of perfect QVOA sensitivity and specificity (A3), and since \(W_{j}\) is completely determined by \(\boldsymbol{Z}_{j}\), \(\Pr_{\boldsymbol{\lambda}}(W_{j}|\boldsymbol{Z}_{j})=1\) and it follows that \(\Pr_{\boldsymbol{\lambda}}(W_{j},\boldsymbol{Z}_{j})=\Pr_{\boldsymbol{\lambda}}(W_{j}|\boldsymbol{Z}_{j})\Pr_{\boldsymbol{\lambda}}(\boldsymbol{Z}_{j})=\Pr_{\boldsymbol{\lambda}}(\boldsymbol{Z}_{j})\). Assuming independence between the DVLs, \(\Pr_{\boldsymbol{\lambda}}(\boldsymbol{Z}_{j})=\prod_{i=1}^{n}\Pr_{\lambda_{i}}(Z_{ij})\). Thus, \[L(\boldsymbol{\lambda}|\boldsymbol{W},\boldsymbol{Z},\boldsymbol{R})=\prod_{j=1}^{M}\left[\prod_{i=1}^{n}\left\{1-\exp(-\lambda_{i})\right\}^{Z_{ij}}\exp(-\lambda_{i})^{(1-Z_{ij})}\right]^{R_{j}}\left\{1-\exp{(-\Lambda)}\right\}^{(1-R_{j})},\] which simplifies to \[\left[\prod_{i=1}^{n}\left\{1-\exp(-\lambda_{i})\right\}^{\sum_{j=1}^{M}Z_{ij}R_{j}}\exp(-\lambda_{i})^{\sum_{j=1}^{M}(1-Z_{ij})R_{j}}\right]\left\{1-\exp{(-\Lambda)}\right\}^{\sum_{j=1}^{M}(1-R_{j})}.\] For ease of notation, let \(Y_{i}=\sum_{j=1}^{M}Z_{ij}R_{j}\) and \(\mathbf{Y}=(Y_{1},\ldots,Y_{n})^{\mathsf{T}}\). It follows from the two-stage data collection procedure (Section 2.1) that \(\sum_{j=1}^{M}(1-Z_{ij})R_{j}=(M_{N}+m)-Y_{i}\) and \(\sum_{j=1}^{M}(1-R_{j})=M-(M_{N}+m)\). Therefore, \((M_{N},\mathbf{Y})\) are sufficient statistics for \(\mathbf{\lambda}\) and the likelihood can be rewritten as \[L(\mathbf{\lambda}|M_{N},\mathbf{Y})=\left[\prod_{i=1}^{n}\left\{1-\exp(-\lambda_{i})\right\}^{Y_{i}}\exp(-\lambda_{i})^{M_{N}+m-Y_{i}}\right]\left\{1-\exp\left(-\Lambda\right)\right\}^{M-(M_{N}+m)}. \tag{1}\]

### Maximum likelihood estimation

The MLE of the DVL-specific rate parameters, denoted \(\widehat{\mathbf{\lambda}}=(\widehat{\lambda}_{1},\ldots,\widehat{\lambda}_{n})^{\mathsf{T}}\), is found by maximizing the observed-data log-likelihood \(\ln\{L(\mathbf{\lambda}|M_{N},\mathbf{Y})\}\) based on (1) with respect to \(\mathbf{\lambda}\), under the constraint that Poisson rates must be non-negative.
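To make this maximization concrete, here is a minimal numerical sketch in Python. It is illustrative only, not the SLDeepAssay implementation: the function names, the toy inputs, and the choice of SciPy's bound-constrained L-BFGS-B optimizer are assumptions, and one million cells per well is assumed so that \(\Lambda=\sum_{i=1}^{n}\lambda_{i}\) is the IUPM.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(lam, M_N, m, M_unseq, Y):
    # Negative log of likelihood (1); lam = (lambda_1, ..., lambda_n) for the n detected DVLs.
    lam = np.asarray(lam)
    p = 1.0 - np.exp(-lam)                            # Pr(Z_ij = 1) for each detected DVL
    ll = np.sum(Y * np.log(p) - lam * (M_N + m - Y))  # complete-data wells (sequenced positive + negative)
    ll += M_unseq * np.log(1.0 - np.exp(-lam.sum()))  # unsequenced positive wells (QVOA result only)
    return -ll

def mle_iupm(M_N, m, M_unseq, Y):
    # Maximize (1) subject to lambda_i >= 0; return the DVL-specific MLEs and their sum (the IUPM).
    n = len(Y)
    fit = minimize(neg_log_lik, x0=np.full(n, 0.1), args=(M_N, m, M_unseq, Y),
                   method="L-BFGS-B", bounds=[(1e-8, None)] * n)
    return fit.x, fit.x.sum()

# Toy assay: M = 12 wells, M_N = 6 negative, m = 4 of the 6 positive wells deep sequenced,
# M_unseq = 2 unsequenced positive wells, and n = 3 detected DVLs seen in 3, 1, and 2 wells.
lam_hat, iupm_hat = mle_iupm(M_N=6, m=4, M_unseq=2, Y=np.array([3, 1, 2]))
```

The first term collects the contributions of the wells with complete data and the second the \(\{1-\exp(-\Lambda)\}\) contributions of the unsequenced positive wells, mirroring (1).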
When a subset of the positive wells are sequenced, the MLE \(\widehat{\mathbf{\lambda}}\) does not appear to have a closed-form solution but can be obtained numerically. However, analytical solutions do exist in two special cases: (i) when no positive wells are deep-sequenced or (ii) when all positive wells are deep-sequenced. See Web Appendix A for details on finding the MLE. Assuming that the UDSA data are indeed MAR and under suitable regularity conditions, the MLE \(\widehat{\mathbf{\lambda}}\) will be consistent for the true values \(\mathbf{\lambda}\) and asymptotically normally distributed (Little and Rubin, 2002). That is, \(\sqrt{M}(\widehat{\mathbf{\lambda}}-\mathbf{\lambda})\leadsto\mathcal{N}_{n}(\mathbf{0},\mathbf{\Sigma})\), where \(\leadsto\) denotes convergence in distribution and \(\mathcal{N}_{n}(\mathbf{0},\mathbf{\Sigma})\) is an \(n\)-variate normal distribution with mean vector \(\mathbf{0}\) and covariance matrix \(\mathbf{\Sigma}\). By the invariance property of MLEs, it follows that the MLE for the IUPM \(\Lambda=\sum_{i=1}^{n}\lambda_{i}\) is \(\widehat{\Lambda}=\sum_{i=1}^{n}\widehat{\lambda}_{i}\). Moreover, by the continuous mapping theorem and the delta method, \(\widehat{\Lambda}\) is a consistent and asymptotically normal estimator of \(\Lambda\). The asymptotic covariance matrix of \(\widehat{\mathbf{\lambda}}\) is given by the inverse of the Fisher information matrix, i.e., \(\mathbf{\Sigma}=\mathcal{I}(\mathbf{\lambda})^{-1}=\mathsf{E}\left\{-\partial^{2}l(\mathbf{\lambda}|M_{N},\mathbf{Y})/\partial\mathbf{\lambda}\partial\mathbf{\lambda}^{T}\right\}^{-1}\), which can be consistently estimated by \(\widehat{\mathbf{\Sigma}}=\mathcal{I}(\mathbf{\lambda})^{-1}|_{\mathbf{\lambda}=\widehat{\mathbf{\lambda}}}\). A derivation of \(\widehat{\mathbf{\Sigma}}\) is given in Web Appendix B. Further, the standard error of the IUPM estimator \(\widehat{\Lambda}\) can be estimated by \(\widehat{\mathrm{SE}}(\widehat{\Lambda})=(\sum_{i=1}^{n}\sum_{j=1}^{n}\widehat{\Sigma}_{i,j})^{1/2}\), where \(\widehat{\Sigma}_{i,j}\) denotes the \((i,j)\)th element of \(\widehat{\mathbf{\Sigma}}\). By the delta method, a \(100(1-\alpha)\%\) Wald confidence interval for the log IUPM \(\ln(\Lambda)\) has endpoints \(\ln(\widehat{\Lambda})\pm z_{\alpha/2}\widehat{\mathrm{SE}}(\widehat{\Lambda})/\widehat{\Lambda}\), where \(z_{\alpha/2}\) denotes the \((1-\alpha/2)\)th percentile of the standard normal distribution. Exponentiating these endpoints gives a strictly positive confidence interval for \(\Lambda\).

Figure 1: Illustration of the data collection scheme from the QVOA and UDSA at a single dilution level

### Bias correction for small samples

In SLD assay settings, the MLE will be upwardly biased with a small number of replicate wells, such that \(\widehat{\Lambda}\) tends to overestimate the size of a person's latent HIV reservoir (Trumble et al., 2017). A bias-corrected MLE (BC-MLE) based on Hepworth and Watson (2009) was proposed by Trumble et al. (2017). The Trumble et al. (2017) bias correction is intended for a one-dimensional parameter estimator and therefore cannot be applied to \(\widehat{\mathbf{\lambda}}\). Instead, a bias-correction method for the multi-dimensional setting, developed by Hashemi and Schneider (2021), is adapted here. The method involves subtracting a correction term from the MLE \(\widehat{\mathbf{\lambda}}\) to reduce the order of the bias from \(\mathcal{O}(M^{-1})\) to \(\mathcal{O}(M^{-2})\).
Following Hashemi and Schneider (2021), the bias of the MLE \(\widehat{\mathbf{\lambda}}\) can be expressed as \[\mathrm{E}\left(\widehat{\mathbf{\lambda}}-\mathbf{\lambda}\right)=\mathbf{\Sigma}\ \mathbf{A}(\mathbf{\lambda})\ \mathrm{vec}(\mathbf{\Sigma})+\mathcal{O}(M^{-2}), \tag{2}\] where \(\mathbf{A}(\mathbf{\lambda})=\left[\mathbf{A}_{1}(\mathbf{\lambda})\,,\ldots,\mathbf{A}_{n}(\mathbf{ \lambda})\right]\) is the \(n\times n^{2}\) matrix with \(n\times n\) submatrices \[\mathbf{A}_{i}(\mathbf{\lambda})=\frac{\partial}{\partial\lambda_{i}}\mathcal{I}(\bm {\lambda})-\frac{1}{2}\mathrm{E}\left\{\frac{\partial^{3}}{\partial\mathbf{ \lambda}\partial\mathbf{\lambda}^{T}\partial\lambda_{i}}l(\mathbf{\lambda}|M_{N},\mathbf{Y })\right\}\] and \(\mathrm{vec}(\mathbf{\Sigma})\) denotes the \(n^{2}\times 1\) column vector obtained by stacking the columns of \(\mathbf{\Sigma}\). The components of submatrix \(\mathbf{A}_{i}(\mathbf{\lambda})\) are derived in Web Appendix C. Equation (2) motivates the BC-MLE for the DVL-specific rate parameters: \(\widehat{\mathbf{\lambda}}^{*}=(\widehat{\lambda}_{1}^{*},\ldots,\widehat{\lambda }_{n}^{*})^{\text{T}}=\widehat{\mathbf{\lambda}}-B(\widehat{\mathbf{\lambda}})\), where \(B(\widehat{\mathbf{\lambda}})=\widehat{\mathbf{\Sigma}}\mathbf{A}(\widehat{\mathbf{\lambda}}) \mathrm{vec}(\widehat{\mathbf{\Sigma}})\). Finally, the BC-MLE for the IUPM is \(\widehat{\Lambda}^{*}=\sum_{i=1}^{n}\widehat{\lambda}_{i}^{*}\) Conveniently, the MLE and the BC-MLE have the same asymptotic distribution. To see this, note that the bias correction term \(B(\widehat{\mathbf{\lambda}})=\mathcal{O}_{p}(M^{-1})\), i.e., \(MB\mathbf{\left(\widehat{\mathbf{\lambda}}\right)}\) is bounded in probability, so \(\sqrt{M}B(\widehat{\mathbf{\lambda}})\) converges in probability to zero. Then, using Slutsky's theorem, \[\sqrt{M}\mathbf{\left(\widehat{\mathbf{\lambda}}^{*}-\mathbf{\lambda}\right)}=\sqrt{M} \left[\left\{\widehat{\mathbf{\lambda}}-B\mathbf{\left(\widehat{\mathbf{\lambda}}\right)} \right\}-\mathbf{\lambda}\right]=\left\{\sqrt{M}\left(\widehat{\mathbf{\lambda}}-\mathbf{ \lambda}\right)-\sqrt{M}B\mathbf{\left(\widehat{\mathbf{\lambda}}\right)}\right\} \rightsquigarrow\mathcal{N}_{n}(\mathbf{0},\mathbf{\Sigma}),\] i.e., \(\widehat{\mathbf{\lambda}}^{*}\) is also a consistent and asymptotically normal estimator of \(\mathbf{\lambda}\). Moreover, the asymptotic covariance of \(\widehat{\mathbf{\lambda}}^{*}\) can be consistently estimated by \(\widehat{\mathbf{\Sigma}}\). However, by construction, \(\widehat{\mathbf{\lambda}}^{*}\) will tend to have have smaller bias than \(\widehat{\mathbf{\lambda}}\) for a small number of wells. ### Estimation with undetected viral lineages When a person living with HIV is tested with the UDSA, only a very small subset of their CD4+ T cells are obtained (typically by leukapheresis). The person may have additional DVLs in their population of CD4+ T cells that were not present in the subset of cells sampled, in which case the UDSA would not detect these additional DVLs, even with perfect sensitivity and specificity. Below it is shown that the proposed IUPM estimator can still be viewed as an MLE, even in the presence of undetected viral lineages. Suppose that there are \(n^{\prime}\) DVLs present in an individual living with HIV, \(n\) of which are detected by the UDSA, \(n^{\prime}\in\{n+1,n+2,n+3,\ldots\}\). 
This leaves \(n^{\prime}-n\) undetected viral lineages, with corresponding counts of infected cells \(\mathbf{Y}^{\prime}=(Y_{n+1},\ldots,Y_{n^{\prime}})^{\text{T}}\) that are independent and Poisson distributed with rates \(\lambda_{i^{\prime}}\), \(i^{\prime}\in\{n+1,\ldots,n^{\prime}\}\). With some abuse of notation, let \(Y_{0}=Y_{n+1}+\cdots+Y_{n^{\prime}}\) denote the number of wells infected with any of the undetected DVLs. Then, \(Y_{0}\) also has a Poisson distribution with rate \(\lambda_{0}=\lambda_{n+1}+\cdots+\lambda_{n^{\prime}}\), and, since no wells are infected with these lineages, \(Y_{0}=0\) is observed. Thus, the augmented likelihood function accounting for all DVLs (detected and undetected) can be written as \[L^{\prime}(\mathbf{\lambda}^{\prime}|M_{N},\mathbf{Y},Y_{0})\] \[=\left[\prod_{i=0}^{n}\left\{1-\exp(-\lambda_{i})\right\}^{Y_{i}} \exp(-\lambda_{i})^{(M_{N}+m-Y_{i})}\right]\left\{1-\exp\left(-\sum_{i=0}^{n} \lambda_{i}\right)\right\}^{(M-M_{N}-m)}, \tag{3}\] where \(\mathbf{\lambda}^{\prime}=(\lambda_{0},\mathbf{\lambda}^{\text{T}})^{\text{T}}\). Given that DVLs \(n+1,\ldots,n^{\prime}\) are undetected, a reasonable heuristic estimate for the rate of their sum \(\lambda_{0}\) is zero. In fact, it is proven in the Appendix that the MLE for \(\lambda_{0}\)_is_ zero. That is, the vector \(\widehat{\mathbf{\lambda}}^{\prime}\) that maximizes (3) necessarily satisfies \(\widehat{\lambda}_{0}=0\). When estimating \(\Lambda\), this implies that using the sum of the \(n\)-dimensional MLE \(\widehat{\mathbf{\lambda}}\) from the original likelihood in (1) is equivalent to using the sum of the \((n+1)\)-dimensional MLE \(\widehat{\mathbf{\lambda}}^{\prime}\) from the augmented likelihood in (3). In other words, summing the DVL-specific MLEs for the detected DVLs gives the MLE for the sum of _all the_ DVL-specific rate parameters, detected or not. ### Incorporating multiple dilution levels So far, it has been assumed that the assay was conducted at a single dilution level, with each replicate well containing one million (\(10^{6}\)) cells. In practice, dilution levels other than one million cells per well may be used. Moreover, multiple dilution levels are often tested with the QVOA to pinpoint one or more appropriate dilution levels for the UDSA (i.e., dilution levels with sufficient positive wells). The methods from Sections 2.1-2.5 are now adapted to handle these two cases. First, consider the setting where an assay is done at a single dilution level, but each replicate well contains \(u\times 10^{6}\) cells for some \(u>0\). Continue to let \(\lambda_{i}\) be the mean count of cells per well infected with DVL \(i\), \(i\in\{1,\ldots,n\}\). Now, let \(\tau_{i}\) be the mean count of cells _per million_ infected with DVL \(i\), and denote by \(\boldsymbol{\tau}=(\tau_{1},\ldots,\tau_{n})^{\mathsf{T}}\) the vector of DVL-specific IUPMs for all \(n\) DVLs. If \(u=1\), as assumed in previous sections, then \(\boldsymbol{\lambda}=\boldsymbol{\tau}\) and \(\Lambda=T\). More generally, for a dilution level of \(u\), the DVL-specific IUPMs and mean counts per cell are related through \(\boldsymbol{\tau}=\boldsymbol{\lambda}/u\). Using this relationship, the likelihood in (1) can be rewritten as a function of \(\boldsymbol{\tau}\) by substituting \(\boldsymbol{\lambda}=u\boldsymbol{\tau}\) and defining \(\widetilde{L}(\boldsymbol{\tau}|M_{N},\boldsymbol{Y},u)=L(u\boldsymbol{\tau}| M_{N},\boldsymbol{Y})\). 
Then, the MLE for \(\boldsymbol{\tau}\) is the value \(\widehat{\boldsymbol{\tau}}\) that maximizes the likelihood \(\widetilde{L}\) over the parameter space \([0,\infty)^{n}\), and, from it, the IUPM is estimated as \(\widetilde{T}=\sum_{i=1}^{n}\widehat{\tau}_{i}\). Now, consider the second case where assay data \((M_{N}^{(d)},\boldsymbol{Y}^{(d)},u^{(d)})\), \(d\in\{1,\ldots,D\}\), are available from replicate wells at \(D\) distinct dilution levels, \(D\in\{1,2,\ldots\}\). Let \(M_{N}^{(d)},\boldsymbol{Y}^{(d)}\), and \(u^{(d)}\) denote the number of negative wells, vector of UDSA results, and dilution level, respectively, for the \(d\)th dilution. Assume independence between replicate wells and across dilution levels; this is the natural extension of assumption (A1) to the multiple dilution level setting. Then, the joint likelihood given data from all \(D\) dilution levels is equal to the product of the individual likelihoods given data from each dilution level: \[\widetilde{L}(\boldsymbol{\tau}|\boldsymbol{M_{N}},\boldsymbol{Y},\boldsymbol {u})=\prod_{d=1}^{D}\widetilde{L}\left(\boldsymbol{\tau}|M_{N}^{(d)}, \boldsymbol{Y}^{(d)},u^{(d)}\right), \tag{4}\] where \(\boldsymbol{M_{N}}=(M_{N}^{(1)},\ldots,M_{N}^{(D)})^{\mathsf{T}}\), \(\boldsymbol{Y}=(\boldsymbol{Y}^{(1)},\ldots,\boldsymbol{Y}^{(D)})\), and \(\boldsymbol{u}=(u^{(1)},\ldots,u^{(D)})^{\mathsf{T}}\). The MLE for the vector of DVL-specific IUPMs is the value \(\widehat{\boldsymbol{\tau}}\) that maximizes the likelihood (4), and the corresponding IUPM can be calculated as their sum. The MLE \(\widehat{T}\) from this joint likelihood (4) is once again consistent for the true IUPM \(T\) and asymptotically normal with asymptotic variance \(\sum_{i=1}^{n}\sum_{j=1}^{n}\widehat{\Sigma}_{ij}\), where \(\widehat{\boldsymbol{\Sigma}}=\widetilde{\mathcal{I}}(\boldsymbol{\tau})^{-1 }=\text{E}[-\partial^{2}\ln\left\{\widetilde{L}(\boldsymbol{\tau}|\boldsymbol{ M_{N}},\boldsymbol{Y},\boldsymbol{u})\right\}/\partial\boldsymbol{\tau} \partial\boldsymbol{\tau}^{T}]^{-1}\). The same bias correction method introduced in Section 2.4 can be applied to the multiple dilution level setting. Details on estimating \(\widetilde{\Sigma}\) and computing the bias correction term in this setting are given in Web Appendix D. Note that the likelihood from Myers et al. (1994), which uses QVOA data only, is a special case of the likelihood in (4) where none of the positive wells are deep sequenced (i.e., \(m=0\)). Thus, (4) can be used even when deep sequencing is not done; see special case (ii) in Web Appendix A. ## 3 Simulation Simulation studies were performed to assess the proposed methods. Various settings were considered, inspired by real-world dilution assay studies with a single (Section 3.1) or multiple (Section 3.2) dilution levels. In addition to demonstrating the methods' validity, these simulations illustrate the notable efficiency gains from incorporating deep viral sequencing. ### Simulations with a single dilution level Data for a single dilution assay were simulated as follows. First, full results from the UDSA were generated as the DVL-specific infection indicators \(Z_{ij}\) for all wells, \(j\in\{1,\ldots,M\}\), and DVLs, \(i\in\{1,\ldots,n^{\prime}\}\), from independent Bernoulli distributions with \(\Pr(Z_{ij}=1)=1-\exp(-\lambda_{i})\). Results from the QVOA were then calculated as \(W_{j}=\text{I}(\sum_{i=1}^{n}Z_{ij}\geq 1)\) for all wells. 
The number of wells to undergo the UDSA was computed as \(m=\lfloor qM_{P}\rceil\), where \(q\) was the fixed proportion of positive wells that undergo the UDSA and \(\lfloor\cdot\rceil\) denotes the nearest integer function. Based on \(q\), a random sample of \(M_{P}-m\) positive wells was set to be missing their \(Z_{ij}\) information for all DVLs. The simulation studies utilized a fully factorial design by considering all possible combinations of: \(M=12,24\), or \(32\) replicate wells; \(n^{\prime}=6,12\), or \(18\) DVLs; proportions \(q=1,0.75\), or \(0.5\) of positive wells that underwent the UDSA; and IUPM \(T=0.5\) or \(1\). These choices of parameters, which were motivated by the real data used in Section 4, led to 54 unique simulation settings defined by (\(M,n^{\prime},q,T\)). For simplicity, the single dilution level was chosen to be \(u=1\) million cells per well, so \(T=\Lambda\). In addition, two allocations of the IUPM \(T\) across the \(n^{\prime}\) DVLs were considered: (i) _constant rate_, i.e., the same IUPM for all DVLs such that \(\tau_{i}=T/n^{\prime}\) for \(i\in\{1,\ldots,n^{\prime}\}\), and (ii) _non-constant rate_, i.e., a larger IUPM for the last \(n^{\prime}/2\) DVLs such that \(\tau_{i}=T/(2n^{\prime})\) for \(i\in\{1,\ldots,n^{\prime}/2\}\) versus \(\tau_{i}=3T/(2n^{\prime})\) for \(i\in\{n^{\prime}/2+1,\ldots,n^{\prime}\}\). Both allocations were applied for \(T=1\); for \(T=0.5\) only the constant rate scenario was considered. Two extreme results were possible: (i) all wells were negative, in which case the UDSA would not be done, or (ii) all wells were positive and a particular DVL was detected in each deep sequenced well, in which case the IUPM estimators would be infinite. While (i) never happened, (ii) occurred in 98 out of \(81{,}000\) simulations (0.1%); in these cases the simulated assay data were discarded and resimulated. Four IUPM estimators were applied to each simulated assay: (i) MLE without UDSA, (ii) BC-MLE without UDSA, (iii) MLE with UDSA, and (iv) BC-MLE with UDSA. All estimators have been implemented in R packages, with (i) and (ii) in SLDAssay [Trumble et al., 2017] and (iii) and (iv) in SLDeepAssay (newly developed to accompany this paper). Estimators (ii) and (iv) were expected to have smaller bias than (i) and (iii) in small samples, and estimators (iii) and (iv) were expected to be more precise than (i) and (ii) due to the added DVL information from the UDSA. A number of metrics are reported for comparison of the four IUPM estimators, summarizing the 1000 data sets simulated for each setting. The relative bias ("bias") was computed by dividing (i) the mean differences between the estimated and true IUPM across replications by (ii) the true IUPM. The average standard error (ASE) and empirical standard error (ESE) were computed as the empirical mean of the standard error estimator and the empirical standard deviation of the IUPM estimators, respectively. Finally, the empirical coverage probability (CP) for the 95% confidence interval was computed as the proportion of simulations where the true IUPM fell between the lower and upper bounds of the interval. Detailed results for a single dilution assay with IUPM of \(T=1\) and a constant rate of infected cells for all DVLs can be found in Table 1. As expected, the two BC-MLEs had very little bias in all settings (both \(\leq 3\%\)). 
Meanwhile, the uncorrected MLEs saw bias as large as 10%; this bias improved, though, as either (i) the number of wells \(M\) increased or (ii) the proportion \(q\) being deep sequenced increased (for the estimators with UDSA). Bias for all estimators was unchanged by an increasing number of DVLs \(n^{\prime}\). Overall, the standard error estimators approximated the empirical standard errors well. For the MLE without UDSA, the ASE overestimated the ESE, but this was resolved as \(M\) increased. Based on ASE or ESE, the variability of the estimators with UDSA decreased when there were more replicate wells (i.e., larger \(M\)) and, as expected, when the deep sequencing information was available for more wells (i.e., larger \(q\)). In fact, the MLE and BC-MLE with UDSA were as much as \(46\%\) and \(35\%\) more efficient, respectively, than their counterparts without sequencing data. Despite needing to estimate more parameters when there were more DVLs, the variability of these estimators was stable as \(n^{\prime}\) increased. The confidence intervals for the BC-MLEs were sometimes conservative with the smallest number of \(M=12\) wells, but appeared reasonable for all other settings. Also, the over-coverage was slightly less severe for the estimators with UDSA than without. Otherwise, the confidence intervals achieved the appropriate coverage. Results with a non-constant rate of infected cells were nearly identical (Web Table S1). Aside from uniformly smaller standard errors, results with the smaller a IUPM of \(T=0.5\) were comparable to those discussed already (Web Table S2). ### Simulations with multiple dilution levels Data for an assay at multiple dilution levels were simulated as in Section 3.1, with a few modifications. For each scenario, three single dilution assay datasets were simulated, one for each of the \(D=3\) dilution levels, and then combined for analysis. The following parameters were held fixed: (i) the true IUPM \(T=1\), (ii) the three dilution levels \(\boldsymbol{u}=(u_{1},u_{2},u_{3})=(0.5,1,2)\) million cells per well, and (iii) the proportions of positive wells to be deep sequenced at the three dilution levels \(\boldsymbol{q}=(q_{1},q_{2},q_{3})=(0,0.5,1)\). The simulation settings varied by the number of replicate wells per dilution level, \(\boldsymbol{M}=(M_{1},M_{2},M_{3})=(6,12,18),(9,18,27)\), or \((12,24,36)\), and the number of DVLs, \(n^{\prime}=6,12\), or \(18\). Again, the IUPM could be allocated across the \(n^{\prime}\) DVLs in a constant or non-constant way. No simulated data sets were discarded due to all wells being negative or positive (at all dilution levels). The same four estimators (MLE and BC-MLE, with and without UDSA) were applied to each simulated assay and compared with respect to bias, ASE, ESE, and CP. Detailed results for the multiple dilution level assays with a constant rate of infected cells can be found in Table 2. In comparison to the single dilution simulations, the two uncorrected MLEs had relatively small bias (\(\leq 4\%\) versus \(\leq 10\%\)). This is likely due to the fact that these estimators incorporate more information from the multiple dilutions. Still, the two bias-corrected MLEs further reduced this bias to \(\leq 1\%\) in all settings. The estimated standard errors were approximately consistent with the empirical ones, and the confidence intervals achieved near-nominal coverage, with empirical estimates between 94% and 97%. 
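For reference, the four summary measures reported in Tables 1-2 (relative bias, ASE, ESE, and CP) can be computed from the replicate-level simulation output as in the sketch below; the Wald-type interval construction is an assumption for illustration, since the exact form of the intervals is not specified in this excerpt (e.g., they could instead be built on the log scale).

```python
import numpy as np

def summarize(estimates, std_errors, true_iupm, z=1.96):
    """Relative bias, ASE, ESE, and empirical coverage across simulation replicates.

    estimates  : IUPM estimates, one per simulated assay
    std_errors : corresponding standard error estimates
    """
    est, se = np.asarray(estimates), np.asarray(std_errors)
    bias = np.mean(est - true_iupm) / true_iupm            # relative bias
    ase = np.mean(se)                                      # average standard error
    ese = np.std(est, ddof=1)                              # empirical standard error
    lower, upper = est - z * se, est + z * se              # assumed Wald-type 95% intervals
    cp = np.mean((lower <= true_iupm) & (true_iupm <= upper))
    return {"Bias": bias, "ASE": ase, "ESE": ese, "CP": cp}
```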
Across all settings, the estimators that used the UDSA had greater efficiency than those that did not, as reflected by the reductions in the ASE and ESE. As in the single dilution case, results with a non-constant rate of infected cells for the DVLs were nearly identical (Web Table S3). \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c c c c c} \hline \hline & & \multicolumn{10}{c}{**Without UDSA**} & \multicolumn{10}{c}{**With UDSA**} \\ \cline{3-14} & & \multicolumn{4}{c}{**MLE**} & \multicolumn{4}{c}{**Bias-Corrected MLE**} & \multicolumn{4}{c}{**MLE**} & \multicolumn{4}{c}{**Bias-Corrected MLE**} \\ \cline{3-14} \(\boldsymbol{n^{\prime}}\) & \(\boldsymbol{M}\) & **Bias** & **ASE** & **ESE** & **CP** & **Bias** & **ASE** & **ESE** & **CP** & **Bias** & **ASE** & **ESE** & **CP** & **Bias** & **ASE** & **ESE** & **CP** \\ \hline 6 & 12 & 0.50 & 0.10 & 0.45 & 0.39 & 0.96 & 0.00 & 0.37 & 0.39 & 1.00 & 0.09 & 0.38 & 0.36 & 0.95 & \(-\)0.02 & 0.34 & 0.36 & 0.98 \\ & & 0.75 & 0.09 & 0.45 & 0.39 & 0.96 & \(-\)0.01 & 0.37 & 0.39 & 0.99 & 0.06 & 0.34 & 0.33 & 0.94 & \(-\)0.01 & 0.32 & 0.33 & 0.97 \\ & & 1.00 & 0.09 & 0.46 & 0.39 & 0.95 & \(-\)0.01 & 0.37 & 0.39 & 0.99 & 0.05 & 0.32 & 0.31 & 0.94 & 0.00 & 0.30 & 0.31 & 0.96 \\ & 24 & 0.50 & 0.04 & 0.31 & 0.27 & 0.96 & \(-\)0.01 & 0.28 & 0.27 & 0.94 & 0.03 & 0.25 & 0.24 & 0.94 & \(-\)0.01 & 0.24 & 0.24 & 0.96 \\ & & 0.75 & 0.04 & 0.29 & 0.27 & 0.95 & 0.00 & 0.27 & 0.27 & 0.94 & 0.03 & 0.23 & 0.23 & 0.94 & 0.00 & 0.22 & 0.23 & 0.96 \\ & & 1.00 & 0.04 & 0.29 & 0.27 & 0.96 & 0.00 & 0.27 & 0.27 & 0.95 & 0.02 & 0.21 & 0.22 & 0.95 & \(-\)0.01 & 0.21 & 0.22 & 0.96 \\ & 32 & 0.50 & 0.03 & 0.24 & 0.23 & 0.97 & 0.00 & 0.22 & 0.23 & 0.97 & 0.02 & 0.20 & 0.21 & 0.96 & \(-\)0.01 & 0.20 & 0.21 & 0.97 \\ & & 0.75 & 0.03 & 0.24 & 0.23 & 0.95 & 0.00 & 0.23 & 0.23 & 0.95 & 0.02 & 0.20 & 0.20 & 0.95 & 0.00 & 0.19 & 0.20 & 0.96 \\ & & 1.00 & 0.04 & 0.25 & 0.23 & 0.96 & 0.01 & 0.24 & 0.23 & 0.96 & 0.02 & 0.18 & 0.19 & 0.96 & 0.00 & 0.18 & 0.19 & 0.96 \\ 12 & 12 & 0.50 & 0.09 & 0.45 & 0.39 & 0.95 & 0.00 & 0.37 & 0.39 & 0.99 & 0.10 & 0.38 & 0.36 & 0.93 & \(-\)0.01 & 0.35 & 0.36 & 0.97 \\ & & 0.75 & 0.07 & 0.44 & 0.39 & 0.95 & \(-\)0.02 & 0.37 & 0.39 & 0.98 & 0.04 & 0.33 & 0.32 & 0.94 & \(-\)0.02 & 0.31 & 0.32 & 0.96 \\ & & 1.00 & 0.10 & 0.44 & 0.39 & 0.96 & 0.00 & 0.36 & 0.39 & 0.99 & 0.07 & 0.31 & 0.31 & 0.94 & 0.02 & 0.30 & 0.31 & 0.96 \\ & 24 & 0.50 & 0.06 & 0.32 & 0.27 & 0.95 & 0.02 & 0.29 & 0.27 & 0.93 & 0.06 & 0.26 & 0.24 & 0.92 & 0.01 & 0.25 & 0.24 & 0.94 \\ & & 0.75 & 0.04 & 0.29 & 0.27 & 0.96 & 0.00 & 0.27 & 0.27 & 0.94 & 0.03 & 0.22 & 0.22 & 0.95 & \(-\)0.01 & 0.21 & 0.22 & 0.96 \\ & & 1.00 & 0.02 & 0.29 & 0.27 & 0.96 & \(-\)0.01 & 0.27 & 0.27 & 0.95 & 0.02 & 0.22 & 0.21 & 0.95 & 0.00 & 0.21 & 0.21 & 0.96 \\ 32 & 0.50 & 0.03 & 0.24 & 0.23 & 0.97 & 0.00 & 0.23 & 0.23 & 0.97 & 0.04 & 0.20 & 0.21 & 0.95 & 0.00 & 0.20 & 0.21 & 0.96 \\ & & 0.75 & 0.02 & 0.24 & 0.23 & 0.96 & \(-\)0.01 & 0.23 & 0.23 & 0.96 & 0.02 & 0.19 & 0.19 & 0.94 & 0.00 & 0.19 & 0.95 \\ & & 1.00 & 0.02 & 0.25 & 0.23 & 0.95 & 0.00 & 0.24 & 0.23 & 0.95 & 0.01 & 0.19 & 0.18 & 0.94 & \(-\)0.01 & 0.18 & 0.18 & 0.95 \\ 18 & 12 & 0.50 & 0.09 & 0.42 & 0.39 & 0.97 & \(-\)0.01 & 0.35 & 0.39 & 1.00 & 0.09 & 0.36 & 0.35 & 0.95 & \(-\)0.01 & 0.33 & 0.35 & 0.98 \\ & & 0.75 & 0.06 & 0.42 & 0.39 & 0.96 & \(-\)0.03 & 0.35 & 0.39 & 0.98 & 0.06 & 0.32 & 0.32 & 0.94 & \(-\)0.01 & 0.31 & 0.32 & 0.96 \\ & & 1.00 & 0.08 & 0.43 & 0.39 & 0.97 & \(-\)0.02 & 0.36 & 0.39 & 0.99 & 0.05 & 0.31 & 0.30 & 0.95 & 0.00 
& 0.29 & 0.30 & 0.96 \\ & 24 & 0.50 & 0.03 & 0.28 & 0.27 & 0.97 & \(-\)0.01 & 0.26 & 0.27 & 0.95 & 0.03 & 0.24 & 0.24 & 0.94 & \(-\)0.01 & 0.23 & 0.24 & 0.96 \\ & & 0.75 & 0.04 & 0.30 & 0.27 & 0.96 & \(-\)0.01 & 0.28 & 0.27 & 0.94 & 0.03 & 0.23 & 0.22 & 0.95 & 0.00 & 0.22 & 0.26 & 0.96 \\ & & 1.00 & 0.03 & 0.30 & 0.27 & 0.96 & \(-\)0.01 & 0.27 & 0.27 & 0.94 & 0.03 & 0.22 & 0.21 & 0.94 & 0.00 & 0.22 & 0.21 & 0.95 \\ 32 & 0.50 & 0.03 & 0.26 & 0.23 & 0.95 & 0.00 & 0.25 & 0.23 & 0.95 & 0.03 & 0.21 & 0.21 & 0.95 & 0.00 & 0.20 & 0.21 & 0.96 \\ & & 0.75 & 0.02 & 0.24 & 0 ## 4 HIV Application The proposed methods were used to analyze data for 17 people living with HIV on ART with suppressed viral load from the University of North Carolina HIV Curve Center. With multiple dilution QVOA and single dilution UDSA information, these data provide an additional opportunity to quantify the efficiency gain attributable to adding deep sequencing data over QVOA alone. For each subject (i.e., source population), an SLD assay was performed over \(D=3\)-\(4\) dilution levels and with \(M=6\)-\(36\) replicate wells per dilution level. For each subject, deep sequencing was done on 50-100% of positive wells at one dilution level (i.e., \(0.5\leq q\leq 1\)). Details summarizing the assay results are provided in Web Table S4, and the full data are accessible as described in the data availability statement. Methods applied to the UNC data included the estimators for multiple dilution QVOA with/without UDSA from Section 2.6 and those for single dilution QVOA with UDSA. Previously, Lee et al. (2017) compared the multiple dilution QVOA without UDSA to the single dilution QVOA with UDSA. However, this comparison does not isolate the benefits of using the multiple over single dilution QVOA or the addition of deep sequencing information, since the estimators used either multiple dilutions _or_ deep sequencing, but not both. Here, comparisons are made between estimators based on (i) multiple dilution QVOA with versus without UDSA and (ii) single dilution UDSA with single versus multiple dilution QVOA. Estimated log IUPM and 95% confidence intervals for the 17 people are provided in Figure 2. The log IUPM and its untransformed confidence interval were used to compare the methods' precision. More detailed analysis results for the IUPM can be found in Web Table S5. As expected, all subjects' bias-corrected IUPM estimates were smaller than their uncorrected ones; these smaller estimates are expected to be closer to the subjects' true HIV concentrations. First, comparing the confidence intervals based on the multiple dilution QVOA with versus without UDSA highlights the precision gain attributable to incorporating the additional sequencing data. For some people, the confidence intervals using the UDSA data were remarkably narrower than those using the QVOA data alone. Incorporating the UDSA data led to the greatest increase in precision for Subject C12, who had 65 observed DVLs across 32 deep sequenced wells and a 57% narrower confidence interval (for the BC-MLE) when incorporating the UDSA. Meanwhile, for other people with fewer DVLs observed (e.g., Subject C13, who had seven observed DVLs from four sequenced wells), the confidence interval widths did not decrease when using sequencing information. Heuristically, the UDSA is more informative when more DVLs are detected and when more wells are deep sequenced. 
Second, comparing the confidence intervals based on the UDSA with single versus multiple dilution QVOA illustrates the precision gain due to using data from all dilution levels. Again, large gains were seen for some subjects, while the inclusion of multiple dilutions did not change the precision much for others. Consider Subject C17, who had QVOA data available from four dilution levels. For this subject, using data from all dilution levels provided a 25% narrower confidence interval (for the BC-MLE) than only using data from the one deep sequenced dilution level. On the other hand, consider Subject C1, who had data from three dilution levels and had zero positive wells at the two unsequenced ones. For this person, the two unsequenced, all-negative dilution levels added little information. As a result, the confidence intervals for Subject C1 using UDSA and single or multiple dilution QVOA data are nearly identical. In general, the unsequenced dilutions were more informative when there were many more positive wells. In summary, the estimators based on the multiple dilution QVOA with UDSA make use of all available information and tend to have better precision than the estimators that ignore the other dilution levels or the UDSA data. The extent of the precision gain depends on the particular assay results. ## 5 Discussion In this paper, methods were developed to analyze data from SLD assays augmented with additional information provided by deep sequencing. The estimator proposed by Lee et al. (2017), which uses information from dilution assays and deep sequencing, was given a formal justification and shown to be consistent and asymptotically normal. A bias-corrected MLE was proposed, and it was shown that the MLE is unchanged by the possibility of undetected viral lineages. The Lee et al. (2017) method was extended to the case where the QVOA and deep sequencing data were collected at multiple dilution levels. Simulations for both the single and multiple dilution settings demonstrated that the BC-MLE has low bias and its corresponding confidence interval achieves nominal coverage. The reduced bias and efficiency gains of the proposed methods relative to existing ones were demonstrated in an application to data from the UNC HIV Cure Center.

Figure 2: Estimated infectious units per million (IUPM) with 95% confidence intervals for 17 people living with HIV in the University of North Carolina HIV Cure Center Study. The IUPM and confidence interval were log transformed for comparisons of precision.

There are many directions for future work to expand inference procedures for combined dilution and deep sequencing assays. For the setting with only QVOA data, Myers et al. (1994) derived an exact confidence interval for the IUPM by inverting the likelihood ratio test. Myers et al. (1994) and Trumble et al. (2017) also proposed a goodness-of-fit p-value (PGOF), which can be helpful in identifying possible technical problems with an assay. Calculating both the exact confidence interval and PGOF involves enumerating all possible assay outcomes. Without UDSA data, an assay with \(D\) dilution levels and \(M_{d}\) replicate wells per dilution level \(d\) has \(\prod_{d=1}^{D}(M_{d}+1)\) possible outcomes. With UDSA data, the number of possible assay outcomes grows much more quickly. For example, if all positive wells were deep sequenced (i.e., \(q_{d}=1\) for \(d\in\{1,\ldots,D\}\)), and \(n\) DVLs were detected, then there are \(\prod_{d=1}^{D}(M_{d}+1)^{n}\) possible outcomes. 
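As a quick illustration of this growth, using assay sizes of the same order as those in Section 3 (the specific numbers here are purely illustrative):

```python
import math

# Hypothetical assay: D = 3 dilution levels with (12, 24, 36) replicate wells and n = 12 detected DVLs.
M, n = (12, 24, 36), 12

without_udsa = math.prod(M_d + 1 for M_d in M)       # prod_d (M_d + 1)
with_udsa = math.prod((M_d + 1) ** n for M_d in M)   # prod_d (M_d + 1)^n

print(without_udsa)  # 12025 outcomes
print(with_udsa)     # roughly 9e48 outcomes -- far too many to enumerate
```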
This combinatorial explosion makes calculating an exact confidence interval and PGOF for UDSA data computationally challenging. Therefore, developing a computationally feasible exact confidence interval and PGOF for this setting would be an interesting area for future research. Other extensions of the methods considered in this paper include (i) comparing IUPMs between a pair of samples taken from an individual before and after a treatment [Li et al., 2022] and (ii) comparing the distributions of IUPMs between two treatment groups with multiple individuals per group. ## Acknowledgements This research was supported by the University of North Carolina at Chapel Hill Center for AIDS Research (CFAR), a National Institutes of Health (NIH) funded program P30AI50410, and NIH grant R37AI029168. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. ## Supporting Information Web Appendices and Tables referenced in Sections 2-4, along with the R code for Sections 3-4, can be found on figshare at [https://figshare.com/projects/SLDeepAssay/155489](https://figshare.com/projects/SLDeepAssay/155489). The R package SLDeepAssay is available on GitHub at [https://github.com/sarahlotspeich/SLDeepAssay/](https://github.com/sarahlotspeich/SLDeepAssay/). ## Data Availability Statement The data that support the findings of this study are openly available in figshare at [https://doi.org/10.6084/m9.figshare.21821229.v1](https://doi.org/10.6084/m9.figshare.21821229.v1).
2308.01653
Measurement-Induced Criticality is Tomographically Optimal
We develop a classical shadow tomography protocol utilizing the randomized measurement scheme based on hybrid quantum circuits, which consist of layers of two-qubit random unitary gates mixed with single-qubit random projective measurements. Unlike conventional protocols that perform all measurements by the end of unitary evolutions, our protocol allows measurements to occur at any spacetime position throughout the quantum evolution. We provide a universal classical post-processing strategy to approximately reconstruct the original quantum state from intermittent measurement outcomes given the corresponding random circuit realizations over repeated experiments. We investigated the sample complexity for estimating different observables at different measurement rates of the hybrid quantum circuits. Our result shows that the sample complexity has an optimal scaling at the critical measurement rate when the hybrid circuit undergoes the measurement-induced transition.
Ahmed A. Akhtar, Hong-Ye Hu, Yi-Zhuang You
2023-08-03T09:35:08Z
http://arxiv.org/abs/2308.01653v2
# Measurement-Induced Criticality is Tomographically Optimal ###### Abstract We develop a classical shadow tomography protocol utilizing the randomized measurement scheme based on hybrid quantum circuits, which consist of layers of two-qubit random unitary gates mixed with single-qubit random projective measurements. Unlike conventional protocols that perform all measurements by the end of unitary evolutions, our protocol allows measurements to occur at any spacetime position throughout the quantum evolution. We provide a universal classical post-processing strategy to approximately reconstruct the original quantum state from intermittent measurement outcomes given the corresponding random circuit realizations over repeated experiments. We investigated the sample complexity for estimating different observables at different measurement rates of the hybrid quantum circuits. Our result shows that the sample complexity has an optimal scaling at the critical measurement rate when the hybrid circuit undergoes the measurement-induced transition. Classical shadow tomography [1; 2; 3] offers an efficient randomized measurement scheme for extracting physically relevant information from a quantum state. Much research [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18] primarily concentrates on the randomized measurement protocol that entails random unitary evolution, followed by the final stage of local measurements on all qubits. This process is akin to halting the universe's time evolution to measure every qubit. A more realistic measurement scheme involves conducting local measurements intermittently while the entire quantum system continues to evolve, which more closely imitates how we observe the quantum universe surrounding us. This situation can be represented by hybrid quantum circuits [19; 20; 21; 22] formed by randomly interspersing local measurements among unitary gates in a quantum circuit. Notably, hybrid quantum circuits reveal a phase transition [23; 24; 25; 26; 27; 28; 29; 30; 31; 32] in the quantum entanglement among qubits when the measurement rate surpasses a critical threshold, known as the measurement-induced entanglement transition or the purification transition. Our focus in this work is to explore the hybrid circuit as a randomized measurement scheme for classical shadow tomography and investigate the reconstruction of the quantum state using measurement outcomes obtained from intermittent measurements during the hybrid circuit's evolution, as illustrated in Fig. 1. The primary scientific question we aim to address concerns the efficiency of extracting information about the initial quantum state from intermittent measurement outcomes collected from the hybrid quantum circuit, within the context of classical shadow tomography. To address this problem, we first expanded the existing classical shadow tomography framework to accommodate more general scenarios where measurements can occur at any spacetime position throughout the quantum evolution. In particular, we introduced a systematic classical post-processing method for reconstructing the quantum state from the classical data of random circuit realizations and measurement outcomes in repeated experiments. Numerical simulations were conducted to validate the proposed reconstruction formula. 
Subsequently, we defined the locally-scrambled shadow norm [9; 11] for the hybrid quantum circuit measurement scheme, which quantifies the typical number \(M\) of experiments required to estimate the expectation value of an observable accurately, also referred to as the _sample complexity_ in quantum state tomography. Utilizing the tensor network method [15; 16], we found that the sample complexity \(M\) scales with the operator size \(k\) of the observable as \(M\simeq\beta^{k}\text{poly}(k)\), with the base \(\beta\) depending on the measurement rate \(p\) of the hybrid quantum circuit. We noted that \(\beta\) is minimized (yielding optimal sample complexity scaling) when the measurement rate \(p=p_{c}\) is tuned to the critical point of the measurement-induced transition in the hybrid quantum circuit. The minimal value is found to be around \(\beta_{\min}\approx 2.2\). Therefore, measurement-induced criticality is tomographically optimal within the scope of the hybrid quantum circuit measurement scheme.

Figure 1: Using hybrid quantum circuit as a randomized measurement scheme for classical shadow tomography. Starting from an unknown quantum state \(\rho\), evolve the system by layers of random local Clifford gates, and measure each qubit with probability \(p\) in random Pauli basis in each layer. The final state is trashed, but the circuit realization \(\mathcal{C}\) (the gate choices and measurement observables) and the measurement outcomes \(\mathbf{b}\) are recorded as a classical shadow. Repeated randomized measurements of copies of \(\rho\) will collect a dataset of classical shadows, which can be used to predict the physical properties of the state \(\rho\) through classical post-processing.

_Generalized Classical Snapshots. --_ The theoretical framework of classical shadow tomography can be extended to accommodate more general randomized measurement schemes [33; 34] that permit intermittent and partial measurements throughout random quantum evolutions. Conceptually, the idea is as follows: irrespective of how single-qubit measurements are arranged and implemented in a single-shot experiment, the experimental result must be a string of classical bits, denoted as \(\mathbf{b}=(b_{1},b_{2},\cdots)\), which represents the measurement outcome \(b_{n}\in\{0,1\}\) for the \(n\)th measurement in the process. Given an initial quantum state \(\rho\) and a particular measurement circuit \(\mathcal{C}\) (specified by both the circuit structure and gate choices), the entire measurement protocol can be characterized by the conditional probability \(p(\mathbf{b}|\rho,\mathcal{C})\). The linearity of quantum mechanics implies that there must exist a measurement operator \(\sigma_{\mathbf{b}|\mathcal{C}}\) associated with each possible string of measurement outcomes \(\mathbf{b}\), such that: \[p(\mathbf{b}|\rho,\mathcal{C})\propto\mathrm{Tr}(\sigma_{\mathbf{b}|\mathcal{C}}\rho). \tag{1}\] We will call the operator \(\sigma_{\mathbf{b}|\mathcal{C}}\) a _classical snapshot_. In the conventional classical shadow tomography, where the randomized measurement is implemented by first applying a random unitary transformation \(U\) to the initial state \(\rho\) (as \(\rho\to U\rho U^{\dagger}\)) and then measuring every qubit separately in the \(Z\)-basis, the classical snapshot \(\sigma_{\mathbf{b}|\mathcal{C}}\) reduces to the standard form of \(\sigma_{\mathbf{b}|\mathcal{C}}=U^{\dagger}|\mathbf{b}\rangle\langle\mathbf{b}|U\). Beyond this conventional setup, Eq. 
(1) provides a more general definition of classical snapshots when the measurement protocol is more involved. The classical snapshot \(\sigma_{\mathbf{b}|\mathcal{C}}\) should be a Hermitian positive semi-definite operator to ensure the real positivity of the conditional probability \(p(\mathbf{b}|\rho,\mathcal{C})\). Given this property, it is natural to normalize \(\sigma_{\mathbf{b}|\mathcal{C}}\) such that \(\mathrm{Tr}\sigma_{\mathbf{b}|\mathcal{C}}=1\)[35], and view \(\sigma_{\mathbf{b}|\mathcal{C}}\) as another density matrix, called the classical snapshot state. _Hybrid Quantum Circuit Measurement. --_ The hybrid quantum circuit measurement scheme is depicted in Fig. 1. Starting from an \(N\)-qubit unknown quantum state \(\rho\) of interest, apply the measurement and unitary layers alternately, where: * Measurement layer: For each qubit independently, with probability \(p\), choose to measure it in one of the three Pauli bases randomly. In the \(l\)-th measurement layer, suppose \(A_{l}\) is the subset of qubits chosen to be measured. For each chosen qubit \(i\in A_{l}\), let \(P_{i}^{(l)}\in\{X_{i},Y_{i},Z_{i}\}\) be the choice of Pauli observable and \(b_{i}^{(l)}\in\{0,1\}\) be the corresponding measurement outcome. The measurement layer is described by the Kraus operator \[K_{l}^{M}=\prod_{i\in A_{l}}\frac{\mathds{1}+(-)^{b_{i}^{(l)}}P_{i}^{(l)}}{2}.\] (2) * Unitary layer: For every other nearest-two-qubit bond independently, apply a Clifford gate [36; 37] uniformly drawn from the two-qubit Clifford group. The Kraus operator for the \(l\)-th unitary layer is \[K_{l}^{U}=\left\{\begin{array}{ll}\prod_{i}U_{2i,2i+1}^{(l)}&l\in\text{even},\\ \prod_{i}U_{2i-1,2i}^{(l)}&l\in\text{odd},\end{array}\right.\] (3) which alternates between even and odd bonds with the layer index \(l\) (such that the unitary gates form a brick-wall pattern as shown in Fig. 1). Packing the choice of measurement subsets \(A_{l}\), Pauli observables \(\mathbf{P}^{(l)}\) and Clifford gates \(\mathbf{U}^{(l)}\) (for \(l=1,2,\cdots\)) altogether into the specification of a measurement circuit \(\mathcal{C}\), and gathering all the measurement outcomes \(\mathbf{b}=\{\mathbf{b}^{(l)}\}\) together as a classical bit-string, the probability to observe \(\mathbf{b}\) given \(\mathcal{C}\) is \[p(\mathbf{b}|\rho,\mathcal{C})=\mathrm{Tr}(K_{\mathbf{b}|\mathcal{C}}\,\rho\,K_{\mathbf{b}|\mathcal{C}}^{\dagger}), \tag{4}\] where \(K_{\mathbf{b}|\mathcal{C}}=\prod_{l}K_{l}^{U}K_{l}^{M}\) is the overall Kraus operator. Then following the assertion in Eq. (1), the classical snapshot associated with such a measurement outcome should be identified as \[\sigma_{\mathbf{b}|\mathcal{C}}=\frac{K_{\mathbf{b}|\mathcal{C}}^{\dagger}K_{\mathbf{b}|\mathcal{C}}}{\mathrm{Tr}(K_{\mathbf{b}|\mathcal{C}}^{\dagger}K_{\mathbf{b}|\mathcal{C}})}, \tag{5}\] where the denominator normalizes the classical snapshot as a state. Since the measurement circuit \(\mathcal{C}\) is composed of Clifford gates and Pauli measurements, every classical snapshot \(\sigma_{\mathbf{b}|\mathcal{C}}\) is a stabilizer state and can be efficiently represented and reconstructed on classical computers [36; 37]. As illustrated in Fig. 2, to reconstruct \(\sigma_{\mathbf{b}|\mathcal{C}}\), one starts with a maximally mixed state (described by the density matrix \(\mathds{1}/(\mathrm{Tr}\,\mathds{1})\)) and traces back the measurement circuit: inverting every unitary gate, replacing every measurement by projection to the measurement outcome, and normalizing the final state in the end.

Figure 2: Protocol of classical shadow tomography for hybrid quantum circuits. The quantum state \(\rho\) is efficiently encoded as classical information by randomized measurements in the data acquisition phase. A classical snapshot state \(\sigma\) is decoded by backward evolution from a maximally mixed state, given the circuit structure and measurement outcomes \(\mathbf{b}\). On the other hand, its prior Pauli weights \(w_{\mathcal{E}_{\sigma}}(P)\) are inferred following the operator spreading dynamics.

_Posterior and Prior Distributions._ -- We can interpret the hybrid quantum circuit measurement process as a measure-and-prepare quantum channel that measures the initial state \(\rho\) and prepares the classical snapshot state \(\sigma\) with the _posterior_ probability: \[p(\sigma|\rho):=\sum_{\mathbf{b},\mathcal{C}}\delta_{\sigma,\sigma_{\mathbf{b}|\mathcal{C}}}p(\mathbf{b}|\rho,\mathcal{C})p(\mathcal{C}), \tag{6}\] where \(p(\mathcal{C})\) denotes the probability of realizing a specific measurement circuit \(\mathcal{C}\). Assuming that all Pauli measurements and Clifford gates are chosen uniformly, \(p(\mathcal{C})\propto\prod_{l}p^{|A_{l}|}(1-p)^{N-|A_{l}|}\) will only be affected by the measurement rate \(p\) of the hybrid quantum circuit. By conducting the hybrid quantum circuit measurements of the target state \(\rho\) repeatedly, one can sample classical snapshot states \(\sigma\) from the _posterior_ distribution \(p(\sigma|\rho)\), forming an ensemble \(\mathcal{E}_{\sigma|\rho}=\{\sigma|\sigma\sim p(\sigma|\rho)\}\). The objective of classical shadow tomography is to predict properties of \(\rho\) based on the samples of \(\mathcal{E}_{\sigma|\rho}\) collected from experiments as classical data. We introduce the _prior_ distribution \(p(\sigma)\) of the classical snapshot [15], defined as \(p(\sigma):=p(\sigma|\rho=\mathds{1}/(\operatorname{Tr}\mathds{1}))\). This distribution describes our knowledge about classical snapshots before observing the quantum state \(\rho\) (as if \(\rho\) is maximally mixed). The prior distribution solely characterizes the statistical properties of the randomized measurement scheme, reflecting our uncertainty about the measurement circuit structures and gate choices. _Pauli weight._ -- A crucial property of the prior classical snapshot ensemble \(\mathcal{E}_{\sigma}=\{\sigma|\sigma\sim p(\sigma)\}\) is its _Pauli weight_ [16, 11] \[w_{\mathcal{E}_{\sigma}}(P):=\mathop{\mathbb{E}}_{\sigma\sim p(\sigma)}(\operatorname{Tr}P\sigma)^{2}, \tag{7}\] defined for any Pauli operator \(P=\prod_{i}P_{i}\) (where \(P_{i}\in\{I,X,Y,Z\}\) denotes the Pauli operator on the \(i\)-th qubit). The Pauli weight \(w_{\mathcal{E}_{\sigma}}(P)\) fully characterizes the second-moment statistical feature of the prior distribution \(p(\sigma)\). It represents the probability for a Pauli observable \(P\) to be transformed to the measurement basis and observed directly by the randomized measurement. It plays an important role in performing and analyzing classical shadow tomography. For hybrid quantum circuits, the Pauli weight can be computed following the operator dynamics [38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54]. 
For every step of the physical evolution of a random quantum state \(\rho\) through a random quantum channel \(\mathcal{K}\), the Pauli weight will be updated by the Markov process [55] \[w_{\mathcal{E}_{\mathcal{K}(\rho)}}(P)=\sum_{P^{\prime}}w_{\mathcal{E}_{\mathcal{K}}}(P,P^{\prime})w_{\mathcal{E}_{\rho}}(P^{\prime}), \tag{8}\] where \(w_{\mathcal{E}_{\mathcal{K}}}(P,P^{\prime}):=\mathop{\mathbb{E}}_{\mathcal{K}\in\mathcal{E}_{\mathcal{K}}}(\operatorname{Tr}(P\mathcal{K}(P^{\prime}))/\operatorname{Tr}\mathds{1})^{2}\) is the Pauli transfer matrix of the random channel ensemble \(\mathcal{E}_{\mathcal{K}}\). For every two-qubit random Clifford unitary channel \(\mathcal{U}\) and every probabilistic single-qubit random Pauli measurement channel \(\mathcal{M}\), the corresponding Pauli transfer matrices are \[w_{\mathcal{E}_{\mathcal{U}}}(P,P^{\prime})=\tilde{\delta}_{P,\mathds{1}}\tilde{\delta}_{P^{\prime},\mathds{1}}+\frac{1}{15}(1-\tilde{\delta}_{P,\mathds{1}})(1-\tilde{\delta}_{P^{\prime},\mathds{1}}),\] \[w_{\mathcal{E}_{\mathcal{M}}}(P,P^{\prime})=\frac{p}{9}(1+2\delta_{P,\mathds{1}})(1+2\delta_{P^{\prime},\mathds{1}})+(1-p)\tilde{\delta}_{P,P^{\prime}}, \tag{9}\] where \(\tilde{\delta}\) denotes the Kronecker delta symbol restricted to the support of the corresponding quantum channel. Starting from the initial Pauli weight \(w_{0}(P)=\delta_{P,\mathds{1}}\) of the maximally mixed state and applying the Pauli transfer matrix in accordance with the measurement circuit structure (see Fig. 2), the classical snapshot Pauli weight \(w_{\mathcal{E}_{\sigma}}(P)\) can be evaluated following Eq. (8) [56]. In the end, the Pauli weight should be normalized to \(w_{\mathcal{E}_{\sigma}}(\mathds{1})=1\) to be consistent with the normalization of the classical snapshot states defined in Eq. (5). _Observable Estimation._ -- We now present a key result of our study: given any Pauli observable \(P\), its expectation value on the initial quantum state \(\rho\) can be inferred from the posterior classical snapshots via [16, 11] \[\langle P\rangle:=\operatorname{Tr}(P\rho)=\mathop{\mathbb{E}}_{\sigma\sim p(\sigma|\rho)}\frac{\operatorname{Tr}(P\sigma)}{w_{\mathcal{E}_{\sigma}}(P)}. \tag{10}\] For more general observable \(O=\sum_{P}o_{P}P\), the expectation value can be similarly predicted by \(\langle O\rangle=\mathop{\mathbb{E}}_{\sigma\sim p(\sigma|\rho)}O_{\sigma}\), where \(O_{\sigma}:=\sum_{P}o_{P}\operatorname{Tr}(P\sigma)/w_{\mathcal{E}_{\sigma}}(P)\) is the _single-shot estimation_ [1] of the observable \(O\) given a particular classical snapshot \(\sigma\), defined based on Eq. (10). This allows us to decode the quantum information about the original state \(\rho\) from the classical shadows collected from the hybrid quantum circuit measurement. In practice, the expectation \(\mathop{\mathbb{E}}_{\sigma\sim p(\sigma|\rho)}\) is often estimated by the median of means over a finite number of classical snapshots collected from experiments.

Figure 3: Demonstration of hybrid quantum circuit classical shadow tomography on a 12-qubit GHZ state. (a) Predicted observable expectation values \(\langle P\rangle\) and (b) locally-scrambled shadow norm \(\|P\|^{2}_{\mathcal{E}_{\sigma}}\) as functions of the measurement rate \(p\). Colors label different Pauli observables \(P=Z^{\otimes k}\).

To demonstrate the validity of Eq. (10), we carried out numerical experiments. We take the Greenberger-Horne-Zeilinger (GHZ) [57] state of \(N=12\) qubits, described 
by \(\rho=|\psi\rangle\langle\psi|\) with \(|\psi\rangle=({|0\rangle}^{\otimes N}+{|1\rangle}^{\otimes N})/\sqrt{2}\). We consider a randomized measurement scheme implemented by shallow hybrid circuits, which contain three layers of random Clifford gates, together with random Pauli measurements inserted before each unitary layer with probability \(p\) on each qubit. We simulate the protocol numerically on a classical computer by repeatedly preparing the GHZ state, applying the hybrid circuit, and collecting the measurement outcomes. For every given measurement rate \(p\), we collect \(M=50000\) samples and estimate the Pauli observables \(P=Z^{\otimes k}\) based on the measurement outcomes using Eq. (10). Our results, shown in Fig. 3(a), demonstrate that the estimated observable expectation values are consistent with their theoretical expectation on the GHZ state throughout the full range of \(p\), i.e., \(\langle Z^{\otimes k}\rangle=\frac{1}{2}(1+(-1)^{k})\). _Sample Complexity Scaling._ -- The statistical uncertainty in the estimation, indicated by the error bar in Fig. 3(a), is due to the finite number of samples. The typical variance \(\mathrm{var}\langle O\rangle\sim\|O\|^{2}_{\mathcal{E}_{\sigma}}/M\) scales inversely with the number \(M\) of samples. The coefficient \(\|O\|^{2}_{\mathcal{E}_{\sigma}}:=\mathbb{E}_{\sigma\sim p(\sigma)}\,O^{2}_{\sigma}\) is the _locally-scrambled shadow norm_, introduced in Ref. [9]. It upper-bounds the variance of the single-shot estimation \(O_{\sigma}\) over the prior classical snapshot ensemble \(\mathcal{E}_{\sigma}\). For Pauli observable \(P\), the shadow norm has a simple expression [11; 16] \[\|P\|^{2}_{\mathcal{E}_{\sigma}}=\frac{1}{w_{\mathcal{E}_{\sigma}}(P)}. \tag{11}\] In Fig. 3(b), the second moment of the single-shot estimation \(\mathbb{E}_{\sigma\sim p(\sigma)}\,P^{2}_{\sigma}\) is compared with the inverse Pauli weight \(1/w_{\mathcal{E}_{\sigma}}(P)\) calculated from operator spreading dynamics. The results indicate a close match between the two measures. For generic observable \(O=\sum_{P}o_{P}P\), the shadow norm is given by \(\|O\|^{2}_{\mathcal{E}_{\sigma}}=\sum_{P}|o_{P}|^{2}\|P\|^{2}_{\mathcal{E}_{ \sigma}}\). The shadow norm quantifies the number \(M\) of samples needed to control the estimation variances \(\mathrm{var}\langle O\rangle\lesssim\delta^{2}\) below a desired level set by a small \(\delta\), which scales as \(M\sim\|O\|^{2}_{\mathcal{E}_{\sigma}}/\delta^{2}\). Therefore, the shadow norm measures the sample complexity for classical shadow tomography to predict the observable \(O\) based on the randomized measurement scheme characterized by \(\mathcal{E}_{\sigma}\). To study how the shadow norm scales with the operator size, we use the matrix product state (MPS) based approach developed in Ref. [15; 16] to compute the Pauli weight \(w_{\mathcal{E}_{\sigma}}(P)\) following the operator spreading dynamics and determine the shadow norm \(\|P\|^{2}_{\mathcal{E}_{\sigma}}=1/w_{\mathcal{E}_{\sigma}}(P)\) for _consecutive_ Pauli string observables \(P\) of different sizes \(k=|\operatorname{supp}P|\). The result is plotted in Fig. 4(a). The shadow norm scales with the operator size \(k\) exponentially with a base \(\beta\) at the leading level \[\|P\|^{2}_{\mathcal{E}_{\sigma}}\simeq\beta^{k}\mathrm{poly}(k), \tag{12}\] where \(\mathrm{poly}(k)\) stands for sub-leading correction that is polynomial in \(k\). 
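For small systems, the Pauli-weight recursion in Eqs. (8)-(9) can also be iterated brute-force over all \(4^{N}\) Pauli strings, without the MPS compression used in our calculations; the system size, circuit depth, and layer ordering below are illustrative assumptions rather than the circuits used for Fig. 4.

```python
import numpy as np
from itertools import product

def meas_transfer(p):
    # Single-qubit transfer matrix of the probabilistic Pauli-measurement channel, Eq. (9); index 0 = identity.
    T = np.empty((4, 4))
    for a, b in product(range(4), repeat=2):
        T[a, b] = (p / 9) * (1 + 2 * (a == 0)) * (1 + 2 * (b == 0)) + (1 - p) * (a == b)
    return T

def clifford_transfer():
    # Two-qubit random-Clifford transfer matrix, Eq. (9): identity maps to identity,
    # any non-identity Pauli spreads uniformly over the 15 non-identity Paulis.
    T = np.zeros((4, 4, 4, 4))
    for a1, a2, b1, b2 in product(range(4), repeat=4):
        ida, idb = (a1 == a2 == 0), (b1 == b2 == 0)
        T[a1, a2, b1, b2] = float(ida and idb) + (1 / 15) * (not ida) * (not idb)
    return T

def snapshot_pauli_weight(N, depth, p):
    # Iterate the Markov process of Eq. (8) from w_0(P) = delta_{P,1} (maximally mixed state).
    W = np.zeros((4,) * N)
    W[(0,) * N] = 1.0
    TM, TU = meas_transfer(p), clifford_transfer()
    for layer in range(depth):
        for i in range(N):                                   # measurement layer on every site
            W = np.moveaxis(np.tensordot(TM, W, axes=([1], [i])), 0, i)
        for i in range(layer % 2, N - 1, 2):                 # brick-wall unitary layer
            W = np.tensordot(TU, W, axes=([2, 3], [i, i + 1]))
            W = np.moveaxis(W, [0, 1], [i, i + 1])
    return W / W[(0,) * N]                                   # normalize so w(identity) = 1

# Shadow norm (Eq. 11) of consecutive Z-strings of size k on a small chain.
N, p = 8, 0.5
W = snapshot_pauli_weight(N, depth=2 * N, p=p)
for k in range(1, 5):
    P = tuple([3] * k + [0] * (N - k))                       # Z = index 3, I = index 0
    print(k, 1.0 / W[P])
```

Even this brute-force version should already exhibit \(1/w_{\mathcal{E}_{\sigma}}(P)\) growing roughly exponentially with the string size \(k\), with a \(p\)-dependent base.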
This is consistent with the intuition that longer Pauli observables will require exponentially more local measurements to determine. However, the base \(\beta\) depends on the measurement rate \(p\), as shown in Fig. 4(b). We find that \(\beta\) is minimized at \(p=p_{c}\) when the hybrid quantum circuit operates at the measurement-induced criticality, and the shadow norm scales as \[\|P\|^{2}_{\mathcal{E}_{\sigma}}|_{p=p_{c}}\simeq\beta^{k}_{\min}k^{2\Delta}, \tag{13}\] where \(\beta_{\min}=2.23\pm 0.006\) and \(\Delta=0.33\pm 0.02\) are determined by fitting. We expect the critical exponent \(\Delta\) to be universal, corresponding to the scaling dimension of a defect operator in the boundary conformal field theory (CFT) for the measurement-induced transition [58]. The minimal \(\beta_{\min}\) enters the region between \(3^{3/4}\approx 2.28\) and \(2\), which is the range of optimal scaling achievable by shallow circuit classical shadows [18]. The minimization of \(\beta\) can be understood by examining it from both sides of the phase transition. In the area-law phase (\(p>p_{c}\)), \(\beta\) should decrease with decreasing measurement rate \(p\). This is because a lower measurement rate allows for a few more local measurements to be deferred to deeper layers of the unitary circuit, enabling larger-size observables to be probed more efficiently by leveraging the scrambling power of shallow circuits. However, in the volume-law phase (\(p<p_{c}\)), if the measurement rate continues to decrease, \(\beta\) will instead increase. This is because the circuit's scrambling power becomes so strong that it begins to hide the quantum information of the initial state from local measurements deep in the circuit [24; 25; 29], which renders the measurements increasingly inefficient. As the measurement rate approaches zero (\(p\to 0\)), the shadow norm must diverge, because it becomes impossible to reconstruct the initial state in the absence of measurements. Therefore, the optimal scaling of the shadow norm (or the sample complexity) can only occur at the transition point \(p_{c}\), where observables of all scales are probed efficiently [59].

Figure 4: (a) Dependence of log shadow norm \(\log\|P\|^{2}_{\mathcal{E}_{\sigma}}\) of consecutive Pauli string observable \(P\) of size \(k\) at different measurement rates \(p\), demonstrating a leading linear behavior. (b) The base \(\beta\) minimizes at a measurement rate \(p_{c}\) that matches the measurement-induced transition of hybrid circuits. The measurement rates exemplified in (a) are highlighted as stars in (b).

_Summary and Discussions._ -- In this work, we present the classical shadow tomography approach for decoding quantum information from measurement outcomes of hybrid quantum circuits. This method involves computing classical snapshots associated with measurement outcomes and using them to infer properties of the initial quantum state. The Pauli weight of the prior classical snapshot ensemble characterizes the statistical properties of the randomized measurement scheme, and the shadow norm quantifies the sample complexity for predicting observables. The log shadow norm scales linearly with the operator size of the observable and exhibits optimal scaling at a critical measurement rate of the hybrid circuit that corresponds to the measurement-induced criticality. Hybrid quantum circuits are known for their error correction encoding in the volume-law phase [24, 25, 29]. 
To use them as a random quantum error correction code, the ability to decode quantum information from measurement outcomes is essential. Classical shadow tomography provides a systematic and universal way to decode hybrid quantum circuits, making them suitable for more exciting quantum information applications. Measurement-induced transition in hybrid quantum circuits was originally proposed as an entanglement transition. However, measuring entanglement entropy is a difficult task that requires post-selections. With classical shadow tomography, we can directly benchmark the prior classical snapshot Pauli weight \(w_{\mathcal{E}_{\sigma}}(P)\) on a known quantum states \(\rho\) (assuming \(\operatorname{Tr}(P\rho)\neq 0\)), \[w_{\mathcal{E}_{\sigma}}(P)=\frac{\mathbb{E}_{\sigma\sim p(\sigma|\rho)} \operatorname{Tr}(P\sigma)}{\operatorname{Tr}(P\rho)}. \tag{14}\] where \(p(\sigma|\rho)\) can be sampled by performing the hybrid circuit measurement on \(\rho\). Then \(\beta\) can be extracted by fitting the dependence of \(w_{\mathcal{E}_{\sigma}}(P)\) with respect to its operator size \(k\). It is supposed to exhibit a kink at the measurement-induced transition as Fig. 4, which provides another method to detect the transition without post-selections apart from the cross-entropy benchmark [60]. _Note added._ -- Up on finishing this work, we become aware that a related work [61] appeared. ###### Acknowledgements. We acknowledge the helpful discussions with Ehud Altman, Matthew Fisher, Michael Gullans, Yaodong Li, and Bryan Clark. We are especially grateful to Ehud Altman for inspiring us on the quantum statistical mechanical model understanding of our results. A.A.A. and Y.Z.Y. are supported by a startup fund from UCSD. H.Y.H. is grateful for the support by Harvard Quantum Initiative Fellowship.
2306.06490
Automated Code Editing with Search-Generate-Modify
Code editing is essential in evolving software development. Many automated code editing tools have been proposed that leverage both Information Retrieval-based techniques and Machine Learning-based code generation and code editing models. Each technique comes with its own promises and perils, and they are often used together to complement their strengths and compensate for their weaknesses. This paper proposes a hybrid approach to better synthesize code edits by leveraging the power of code search, generation, and modification. Our key observation is that a patch obtained by search and retrieval, even if imperfect, can provide helpful guidance to a code generation model. However, a retrieval-guided patch produced by a code generation model can still be a few tokens off from the intended patch. Such generated patches can be slightly modified to create the intended patches. SARGAM is a novel tool designed to mimic a real developer's code editing behavior. Given an original code version, the developer may search for related patches, generate or write the code, and then modify the generated code to adapt it to the right context. Our evaluation of SARGAM on edit generation shows superior performance with respect to current state-of-the-art techniques. SARGAM also shows great effectiveness on automated program repair tasks.
Changshu Liu, Pelin Cetin, Yogesh Patodia, Saikat Chakraborty, Yangruibo Ding, Baishakhi Ray
2023-06-10T17:11:21Z
http://arxiv.org/abs/2306.06490v2
# Automated Code Editing with Search-Generate-Modify ###### Abstract Code editing is essential in evolving software development. Many automated code editing tools have been proposed that leverage both Information Retrieval-based techniques and Machine Learning-based code generation and code editing models. Each technique comes with its own promises and perils, and they are often used together to complement their strengths and compensate for their weaknesses. This paper proposes a hybrid approach to better synthesize code edits by leveraging the power of code search, generation, and modification. Our key observation is that a patch obtained by search and retrieval, even if imperfect, can provide helpful guidance to a code generation model. However, a retrieval-guided patch produced by a code generation model can still be a few tokens off from the intended patch. Such generated patches can be slightly modified to create the intended patches. Sargam is a novel tool designed to mimic a real developer's code editing behavior. Given an original code version, the developer may _search_ for related patches, _generate_ or write the code, and then _modify_ the generated code to adapt it to the right context. Our evaluation of Sargam on edit generation shows superior performance with respect to current state-of-the-art techniques. Sargam also shows great effectiveness on automated program repair tasks. Bug fixing, Automated Program Repair, Edit-based Neural Network + Footnote †: Manuscript received June 7, 2023; revised August 16, 2021. ## I Introduction In a rapidly-evolving software development environment, developers often edit code to add features, optimize performance, or fix bugs. This process can be complex and requires a deep understanding of the underlying programming language, as well as expertise in the relevant domain. To facilitate code editing, developers often search existing codebases [1, 2, 3] or online resources [4] for relevant code. They may also leverage automated code generation tools such as GitHub Copilot1. However, the search results [5, 6] or generated code may not always be ideal, necessitating developers to customize it for the given situation [7]. Therefore, developers may have to further modify the generated code to achieve the desired outcome. Footnote 1: [https://github.com/features/copilot/](https://github.com/features/copilot/) In the past, various tools and techniques have been proposed to reduce the manual effort needed for code editing [8, 9, 10, 11]. These techniques can be broadly classified into three categories: _Search, Generate, and Modify._ _Search._ Searching is a popular information retrieval-based approach to suggest edits that were previously applied to similar code contexts [1, 2, 12, 13]. However, each retrieval-based technique relies on its perceived definition of code similarity (e.g., token, tree, or graph-based similarity) and fails to produce patches with a slight variation of that definition. As a result, these methods tend to have limited applicability to diverse editing contexts. _Generate._ In recent years, the most promising approach is perhaps Large Language Model (LLM)-based code generation models where code is generated based on developers' intent and surrounding code context. Open-source code-specific LLMs such as PLBART [14], CodeGPT-2 [15], CodeT5 [16], and NatGen [17] have shown significant potential in code generation. 
Additionally, industry-scale LLMs like GPT-3 [18] and Codex [19] have become widely popular for generating source code and are used as the backbone of commercial code generation software such as GitHub Copilot1. Footnote 1: [https://github.com/features/copilot/](https://github.com/features/copilot/) There is a subtle difference between edit generation and code generation. Developers generate edits by transforming the original code version into a new version, often by deleting and adding lines. Edit generation can thus be thought of as a conditional probability distribution, generating new lines of code by conditioning on the old lines. Existing LLM-based code generation approaches do not capture granular edit operations: which tokens will be deleted, which tokens will be inserted, and in particular, where the new tokens will be inserted. _Edit._ Many previous works [20, 21, 22, 23] designated special outputs to represent edit operations. Recently, CoditT5 [23] proposed an edit-specific LLM. Given an original code version, CoditT5 first comes up with an edit plan in terms of deletion, insertion, and substitution. It then generates the edits in an auto-regressive manner by conditioning on the edit plan. CoditT5 shows promise in generating edits over vanilla code generation. The goal of this work is to produce higher-quality code edits by harnessing the power of all three techniques. Each approach offers unique ingredients that can contribute to better edit generation. **Our Insight.** Code search can retrieve relevant patches that provide more guidance to a code generation model, leading to better patch generation. However, most of the time, the patches generated with code search are off by a few tokens from the intended patch--even random permutations and combinations of the generated tokens could lead to an intended patch [24]. A more systematic approach would involve using an edit-generation model that specifically targets the generated tokens that require further modifications (such as deletion, insertion, etc.). This approach enables more focused and precise modifications of the code generated in the previous step and finally outputs the intended patch. **Proposed Approach.** We propose a framework, SarGAM, that leverages search-augmented code generation followed by code modification to generate code edits. SarGAM emulates a developer's typical code editing practice: given an edit location and context, she might search for related code, write the retrieved code into the target location (i.e., generation), and modify it to fit the surrounding context. SarGAM contains three steps: (i) Search: An information retrieval-based technique to retrieve candidate patches from a database of previous edits that may fit the edit context, (ii) Generation: An off-the-shelf code generation model that takes input from the edit location, edit context, and the retrieved patches, and outputs a token sequence corresponding to the edited code, and (iii) Modification: A novel code editing model slightly modifying the token sequence from the previous step and outputting granular edit operations in terms of deleted and inserted tokens. Existing edit-generation models [23] aim to generate the edit operations directly from the original version. We instead let a generation model generate the token sequence first and then modify it further to produce the final patch. We observe that a granular edit model generally performs better for generating smaller edits.
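To make the three-stage flow concrete, here is a minimal sketch of how the stages could be chained; the retriever, generator, and editor interfaces (function names and signatures) are hypothetical placeholders for illustration, not SarGAM's actual API.

```python
from typing import List

def sargam_pipeline(original_code: str, edit_context: str,
                    retriever, generator, editor, top_k: int = 5) -> List[str]:
    """Hypothetical search -> generate -> modify pipeline in the spirit of SarGAM.

    retriever(query, k)                      -> patches previously applied to similar contexts
    generator(context, retrieved_patch)      -> token sequence conditioned on context + retrieved patch
    editor(context, candidate)               -> candidate after token-level deletions/insertions
    """
    # 1. Search: retrieve candidate patches from a database of previous edits.
    retrieved = retriever(original_code + "\n" + edit_context, k=top_k)

    candidates = []
    for patch in retrieved:
        # 2. Generate: condition the generation model on the edit context and the retrieved patch.
        draft = generator(context=edit_context, retrieved_patch=patch)
        # 3. Modify: apply granular token-level edits to repair the remaining few-token mistakes.
        candidates.append(editor(context=edit_context, candidate=draft))
    return candidates
```

Here the editor plays the role of the paper's modification model, which emits token-level deletion and insertion operations rather than regenerating the whole sequence.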
**Results.** We evaluate our approach on two tasks: code editing and program repair. For code editing, we examine SarGAM on two different datasets. SarGAM improves \(top1\) patch generation accuracy over state-of-the-art patch generation models (PLBART, NatGen, and CoditT5) by 4.82% to 22.42% in different settings. For program repair, we compare SarGAM with recent deep learning-based techniques on Defects4J\({}_{1.2}\), Defects4J\({}_{2.0}\), and QuixBugs datasets and report state-of-the-art performance. Additionally, we conduct extensive ablation studies to justify our different design choices. In particular, we investigate the three components, search, generate, and modify, individually and show that SarGAM benefits from each one of them. In summary, our key contributions are: * We prototype a code editing model, SarGAM, built on top of off-the-shelf pre-trained code generation models, augmenting generation with code search and code modification. * We propose a new code modification model that involves generating granular edit operations (i.e., deletion and insertion operations at token granularity as opposed to generating token sequences). * We demonstrate SarGAM's ability to generate patches for general-purpose code edits and bug fixes. Across most of the settings, SarGAM achieves state-of-the-art performances. We present a detailed ablation study to justify our different design choices. * We release our prototype tool at [https://github.com/SarGAMTEAM/SarGAM.git](https://github.com/SarGAMTEAM/SarGAM.git). ## II Background: Code Generation Models Machine Learning-based Code Generation has gained significant attention in recent years, where code is generated given a Natural Language (NL) description or code context. Different types of Sequence-to-Sequence (\(seq2seq\)) models play a significant role in achieving this success [25, 26]. The input to a \(seq2seq\) model is a sequence of tokens (\(X=x_{1},x_{2},...,x_{n}\)), and the output is a token sequence (\(Y=y_{1},y_{2},...,y_{m}\)), where the model learns the conditional probability distribution \(P(Y|X)\). Recurrent Neural Networks (RNN) and Long Short Term Memory (LSTM)-based models [27] once held a dominant role in code generation [28, 11, 29, 30]. RNNs and LSTMs take a piece of code token-by-token in a sequential manner and try to predict the next token by conditioning on the immediately preceding tokens in the sequence. Both types of models largely depend on the tokens in close vicinity and tend to suffer from not capturing long-range dependencies [31]. ### _Transformer for Code Generation_ Transformer-based models [32] have recently outperformed alternative architectures for code generation due to the introduction of self-attention mechanisms. Transformers process the entire input token sequence as a complete graph2. Footnote 2: [https://en.wikipedia.org/wiki/Complete_graph](https://en.wikipedia.org/wiki/Complete_graph) Each token corresponds to a vertex in the graph, and an edge connecting two vertices corresponds to the "attention" between connected tokens. Attention is the relative influence of a token on the representation of other tokens in the sequence. Attention weights signify the importance of a token in making the final prediction for a particular task [33, 34]. The model learns the attention weights depending on the task during the training process. The Transformer also encodes the relative position of each token in the input sequence (positional encoding). The attention mechanism and positional encoding allow Transformers to capture more long-range dependencies. The self-attention mechanism enables parallel processing of the input sequence, which leads to significant speedup during training [32].
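To make the attention computation just described concrete, below is a minimal sketch of scaled dot-product self-attention over a sequence of token embeddings. It is not taken from any of the models discussed here; the matrix names and sizes are illustrative assumptions.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Minimal scaled dot-product self-attention.

    X: (n, d) token embeddings for a sequence of n tokens.
    Wq, Wk, Wv: (d, d) projection matrices (assumed, for illustration).
    Returns the (n, d) contextualized token representations.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(X.shape[1])              # pairwise token-to-token scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax: attention weights
    return weights @ V                                  # weighted sum over all tokens

# Toy usage: 5 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
W = [rng.normal(size=(8, 8)) for _ in range(3)]
print(self_attention(X, *W).shape)  # (5, 8)
```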
Many previous works use Transformers for code generation problems (e.g., patching, code editing, and program repair) due to their success [35, 36, 37, 17]. Transformer-based models roughly fall into two categories: encoder-decoder and decoder-only. **Encoder-decoder**. As shown in Figure 1(a), an encoder-decoder model has a Transformer encoder and an auto-regressive Transformer decoder. The encoder is trained to extract features from the input sequence. The decoder generates the next token by reasoning about the feature extracted by the Transformer encoder and previously generated tokens. Thus, an encoder-decoder model can be used to understand and generate code simultaneously. PLBART [14], CodeT5 [16], and NatGen [17] are examples of encoder-decoder models trained on code corpora with denoising pre-training. CoditT5 [23] further presents a custom pre-trained model for code editing tasks using the same architecture as CodeT5. MODIT [37], on the other hand, fine-tunes PLBART for code editing tasks. **Decoder-only**. Decoder-only models only have an auto-regressive Transformer decoder (shown in Figure 1(b)). Since there is no encoder, decoder-only is a "generate-only" architecture. Such models are pre-trained in an unsupervised fashion with large corpora to build Generative Pre-trained Transformer models (GPT). Jiang _et al._[38] show the effectiveness of GPT for the task of source code patching. Other representative decoder-only code generation models include Polycoder [39], OpenAI's Codex [19], GPT-3 [18], etc. Decoder-only models are suitable for open-ended code generation, where a prompt describing the functionality is passed to the model. ### _Levenshtein Transformer_ Transformers usually generate outputs from scratch. When there is much overlap between input and output token sequences (e.g., automatic text editing where only a few tokens are changed, keeping most of the text as it is), Transformers tend to struggle [40] to preserve the unchanged tokens. Levenshtein Transformers (LevTs) [40] show promise in such cases, as they use basic edit operations such as _insertion_ and _deletion_ to implement granular sequence transformations. Levenshtein Distance [41] between the ground truth and the output token sequence is measured during training after each deletion or insertion. The predicted operation is chosen for the next iteration if the distance reduces. Figure 1(a) and Figure 1(c) show architectural differences between a vanilla Transformer and a LevT. Although both share the same encoder and decoder blocks, the vanilla Transformer uses a linear layer and softmax upon stacks of decoder layers to predict the next token, while LevT uses three additional classifiers to apply edit operations.
Fig. 1: Different Types of Transformer-based Generative Models
In LevT, the output of the last Transformer decoder block (e.g. \(h=\{h_{0},h_{1},\cdots,h_{n}\}\)) is passed to the following classifiers: 1. Deletion Classifier: for each token in the sequence, this binary classifier predicts whether it should be deleted (=1) or kept (=0): \(\pi_{\theta}^{\mathrm{del}}(h_{i})=softmax(W_{del}h_{i})\), where \(W_{del}\) is the weight matrix of the deletion classifier. 2. Placeholder Classifier: predicts how many placeholders should be inserted between any consecutive token pair in the whole sequence: \(\pi_{\theta}^{\mathrm{plh}}(\langle h_{i},h_{i+1}\rangle)=softmax(W_{plh}\cdot concat(h_{i},h_{i+1}))\), where \(W_{plh}\) is the weight matrix of the placeholder classifier. 3. Insertion Classifier: for each placeholder inserted in the last step, the insertion classifier predicts which token should be inserted at this position: \(\pi_{\theta}^{\mathrm{ins}}(h_{i})=softmax(W_{ins}h_{i})\), where \(W_{ins}\) is the weight matrix of the insertion classifier. We choose various Transformer-based code generation models to generate patches and use a novel LevT-based edit generation model to further edit the generated patches.
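The following is a minimal PyTorch sketch of the three LevT classification heads just described. The module name, dimensions, and the way adjacent decoder states are concatenated for the placeholder head are illustrative assumptions, not the reference implementation of [40].

```python
import torch
import torch.nn as nn

class LevTHeads(nn.Module):
    """Deletion, placeholder, and insertion heads on top of decoder states h."""

    def __init__(self, hidden_dim=512, vocab_size=50000, max_placeholders=64):
        super().__init__()
        self.del_head = nn.Linear(hidden_dim, 2)                      # delete (=1) / keep (=0)
        self.plh_head = nn.Linear(2 * hidden_dim, max_placeholders)   # number of [PLH] between (h_i, h_{i+1})
        self.ins_head = nn.Linear(hidden_dim, vocab_size)             # token to fill each [PLH]

    def forward(self, h):
        # h: (batch, seq_len, hidden_dim), output of the last decoder block
        pi_del = torch.softmax(self.del_head(h), dim=-1)
        pairs = torch.cat([h[:, :-1, :], h[:, 1:, :]], dim=-1)        # concatenate adjacent states
        pi_plh = torch.softmax(self.plh_head(pairs), dim=-1)
        pi_ins = torch.softmax(self.ins_head(h), dim=-1)
        return pi_del, pi_plh, pi_ins

# Toy usage on random decoder states.
heads = LevTHeads()
h = torch.randn(1, 10, 512)
d, p, i = heads(h)
print(d.shape, p.shape, i.shape)  # (1, 10, 2) (1, 9, 64) (1, 10, 50000)
```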
## III SarGaM Approach We introduce SarGaM, a tool to synthesize source code edits (i.e., patches). A patch is a set of edits to the source code used to update, fix, or improve the code. Throughout the paper, we use code edits and patches interchangeably. More formally, **Definition III.1**: _A program patch, \(p:=\Delta(v^{0},v^{1})\), is the set of syntactic differences that update a program version \(v^{0}\) to \(v^{1}\). Each change is a token that is either deleted from \(v^{0}\) or inserted to \(v^{1}\)._ We have designed SarGaM to mimic a real developer's behavior while patching source code. Given \(v^{0}\), the developer may (i) _search_ for the related patches, (ii) _generate_ or write the code, and (iii) further _modify_ the generated code to adapt it to the right context. To this end, we design SarGaM to have these three steps: Search, Generate, and Modify. An overview of SarGaM's workflow is shown in Figure 2. ### _Overview_ SarGaM takes the following as input: \(v^{0}\), the exact edit location (can be captured using the current cursor location), and optional edit intent in the form of Natural Language. SarGaM goes through the Search, Generate, and Modify steps and outputs the final code version, \(v^{1}\). * _Step 1. Search_: Given the input as a query, SarGaM searches a database of patches to find similar patches applied previously in a similar context. This step is similar to a search-based patch recommendation engine [1]. Each retrieved patch is concatenated with the original input and passed to the next step (see Figure 3). In the motivating example in Figure 2, given the buggy code as query, the search retrieves a similar patch from the code base: for (int i=begin; i<n; i++). Although the retrieved patch is not perfect, the introduction of the begin token facilitates the final result. * _Step 2. Generate._ This step takes the search augmented input and outputs a token sequence to generate the patched code. We use off-the-shelf \(seq2seq\) models [14, 37, 17], as discussed in Section II-A, to generate code. Figure 2 shows the generation step produces a token sequence for (int i=begin; i<weights.length; i++), which is close to the intended patch. * _Step 3. Modify._ However, the generated patch can still be incorrect, as shown in our running example -- often, generated patches are quite close to the intended edit (i.e., low edit distance) but nevertheless incorrect [42]. In this step, we aim to capture such small modifications by explicitly modeling them as deleted and added operations. Our key insight is, as there is a significant overlap between the source and target token sequences, learning fine-grained edit operations can be beneficial. In particular, we use LevT, as described in Section II-B, to explicitly model the edits as a sequence of deleted and added tokens. In the case of Figure 2, this step explicitly deletes weights. and adds the token sequence begin+, resulting in the correct patch. In this work, we implemented our own Search and Edit Models on top of existing generation models [14, 37, 17, 23]. The rest of this section elaborates each part in detail.
Fig. 2: Overview of the SarGaM Pipeline and a Motivating Example of a bug fixing patch taken from the Defects4J\({}_{1.2}\) dataset. Here, inside a for loop, the loop counter initialization and loop condition (int i=0; i<weights.length) are buggy and (int i=begin; i<begin+length) is the expected fix. After the _Search_ (Step 1), SarGaM retrieves a similar patch (int i=begin; i<n); the retrieval of the begin token benefits _Generation_ (Step 2). The generated patch is close to the ground truth: (int i=begin; i<weights.length), yet not correct. Finally, the _Modification_ model (Step 3) further modifies the generated patch by deleting weights. and inserting begin+.
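As a summary of the three-step flow in Figure 2, the following is a minimal orchestration sketch. The helpers `search_patches`, `generate_candidates`, and `modify_patch` are hypothetical stand-ins for the components detailed in the rest of this section, not SarGaM's actual API.

```python
def sargam_pipeline(edit_location, context, intent, patch_db,
                    search_patches, generate_candidates, modify_patch, top_k=5):
    """Search -> Generate -> Modify, as in Figure 2 (illustrative sketch)."""
    final_patches = []
    # Step 1: retrieve patches previously applied in a similar context.
    retrieved = search_patches(edit_location, patch_db, top_k=top_k)
    for patch in retrieved:
        # Step 2: generation conditioned on location, context, intent, and the retrieved patch.
        for candidate in generate_candidates(edit_location, context, intent, patch):
            # Step 3: fine-grained token deletions/insertions on the generated sequence.
            final_patches.append(modify_patch(candidate, edit_location, context))
    return final_patches
```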
### _Input Processing_ While pre-processing the inputs, following MODIT [37], we create a multi-modal input capturing (i) exact patch location, (ii) patch context, and (iii) developers' intent or guidance in the form of natural text. Figure 3 provides an example input. Following some recent code editing and program repair techniques [38, 43, 37, 44], we assume that the developer knows the exact patch location. We realize the localization with the tree-based parser GumTree [45]. Patch context is the whole function body where the patch will be applied (including both the context before and after the patch location). The third modality (intent) is optional and approximated from the patch commit message. We further augment each input with a token sequence retrieved from the search step as discussed below. Each modality is separated by a special token <s>. For code editing tasks, the input of each modality is a list of tokens. We tokenize the input with the tokenizer that is compatible with the backbone generation model.
Fig. 3: Search Augmented Input Modalities of SarGAM
### _Search_ We maintain a database \(P\) of previous patches, where each patch can be stored as a tuple (\(v^{0}\),\(v^{1}\)). In this step, given the original code version as a query, say \(v^{0}_{q}\), SarGAM retrieves the potential edits from the database that were applied before within a similar code context. In particular, SarGAM computes edit distances between \(v^{0}_{q}\) and all the instances of \(v^{0}\) in the database and creates a ranked list based on the similarity scores. SarGAM then retrieves \(top\ k\) similar \(v^{0}\)s and fetches their corresponding patches (\(v^{1}\)s). Each retrieved patch is finally augmented with the original input, as shown in Figure 3. Algorithm 1 shows the pseudo-code for our technique. As inputs, the algorithm takes an original code version that needs to be patched (\(v^{0}_{q}\)), a database of previous patches \(P\), and how many patches we want to retrieve (_top_\(k\)). For each original version of a patch \(v_{p}^{0}\) in the database, we compute its edit distance from \(v_{q}^{0}\). We compute the edit distance in the embedded space to facilitate the computation. Thus, all the original code versions in the patch database are stored in their embedded form \(\mathcal{E}(v^{0})\), and the query is also embedded. We use PLBART [14] to get such embeddings. The edit distance is computed using cosine similarity: for any two pieces of embedded code \(x\) and \(y\), we compute: \[d=1-\frac{\mathbf{x}\cdot\mathbf{y}}{\|\mathbf{x}\|\|\mathbf{y}\|}=1-\frac{\sum_{i=1}^{N}x_{i}y_{i}}{\sqrt{\sum_{i=1}^{N}x_{i}^{2}}\sqrt{\sum_{i=1}^{N}y_{i}^{2}}} \tag{1}\] For each candidate \(p\) in the database, the computed distance along with the retrieved patch (\(v_{p}^{1}\)) is stored in a list (line 4). The final list is further sorted in ascending order of distance to the query (line 6), and the algorithm returns the _top_\(k\) closest entries to the query (line 7). Such similarity measurements simulate the real-life situation where the developer looks for use cases on the internet and chooses the problem statement most similar to their scenario.
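A minimal sketch of this retrieval step (Algorithm 1 together with Equation 1): embed the query and every stored \(v^{0}\), rank by cosine distance, and return the \(k\) closest previous patches. The `embed` function is a stand-in for the PLBART-based embedding \(\mathcal{E}\) and is an assumption, not the paper's exact interface.

```python
import numpy as np

def cosine_distance(x, y):
    # Equation (1): d = 1 - (x . y) / (||x|| ||y||)
    return 1.0 - float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def retrieve_top_k(v0_query, patch_db, embed, k=5):
    """patch_db: list of (v0, v1) pairs; embed: code -> vector (e.g., a PLBART encoder)."""
    q = embed(v0_query)
    scored = []
    for v0, v1 in patch_db:
        scored.append((cosine_distance(q, embed(v0)), v1))   # cf. line 4: store (distance, patch)
    scored.sort(key=lambda t: t[0])                          # cf. line 6: closest first
    return [v1 for _, v1 in scored[:k]]                      # cf. line 7: top-k previous patches
```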
### _Generation Model_ Here we use three state-of-the-art edit generation models: PLBART [14, 37], CoditT5 [23], and NatGen [17]. The output of this step is a token sequence corresponding to the generated patch. For PLBART and NatGen, the output formats are identical to the expected patch format, and no more post-processing is needed. However, CoditT5 [23] is an edit generation model; its output sequence is of the format: _Edit Operations <s> fixed code_. Thus, we further post-process it to create a sequence of tokens corresponding to the generated patch. ### _Modification Model_ Here, the generated code from the previous step, e.g. \(v_{gen}\), is further modified. We describe two basic edit operations on \(v_{gen}\): * **delete** token \(d\) from \(v_{gen}\). * **insert** token \(i\) at location \(l\) in \(v_{gen}\). Any other code change operation, e.g., replacement, move, etc., can be expressed in terms of delete and insert [13, 12]. Multiple modifications can further be expressed as a sequence of token deletion and insertion operations, resulting in the final patched code. To capture such insertion-deletion operations, we use LevT, as discussed in Section II-B. Figure 4 illustrates this step w.r.t. our motivating example (see Figure 2).
Fig. 4: Example modification steps generated by Levenshtein Transformer corresponding to the motivating example. The encoder takes patch location, context, and optional developer's intent as input and outputs hidden state \(H=\{h_{1},h_{2},\cdots,h_{N}\}\), where \(N\) refers to the length of the input sequence. The LevT decoder takes \(H\) and the patch location and, after some Transformer decoder layers, outputs \((z_{1},z_{2},\cdots,z_{M})\), which is passed to three classifiers (deletion, placeholder, insertion) to perform the edits.
**Modeling Edits.** Given a token sequence \(T=(t_{1},t_{2},...,t_{n})\), the two edit operations, deletion and insertion, are consecutively applied to generate the final output. As discussed in Section II-B, the LevT decoder has three classification heads: Insertion, Deletion, and Placeholder. We model the code operations using these three classifiers, as follows: _Token Deletion._ LevT reads the input sequence \(T\), and for every token \(t_{i}\in T\), the deletion classifier makes the binary decision of 1 (delete the token) or 0 (keep the token). The output of the deletion head is \(T^{\prime}\). Figure 4 shows that the deletion classifier identifies the tokens weights and . for deletion (marked in **red**). _Token Insertion._ On \(T^{\prime}\), the insertion operation is realized in two phases: predicting locations to insert the token and then predicting the tokens to be inserted. First, among all the possible locations where a new token can be inserted, _i.e.,_ \((t^{\prime}_{i},t^{\prime}_{i+1})\in T^{\prime}\), the Placeholder head of LevT predicts how many placeholders can be added. Next, the insertion head of LevT replaces each placeholder with a token chosen from the vocabulary. For instance, in Figure 4, the Placeholder Classifier predicts two placeholder positions between tokens < and length, as marked by [PLH] [PLH] (_i.e.,_ i [PLH] [PLH] length). Next, the Insertion Classifier focuses only on the two placeholders and predicts begin and + respectively. Finally, we get the intended patch int i = begin; i < begin + length; i++. ## IV Experimental Design ### _Datasets_ Table I summarizes the datasets we use for our study. _Code Editing Data:_ The accuracy of the code editing task of SarGam is evaluated by utilizing the Bug2Fix dataset [46], similar to [37, 23] (including \(B2F_{s}\) and \(B2F_{m}\)). \(B2F_{s}\) contains shorter methods with a maximum length of 50 tokens, and \(B2F_{m}\) contains longer methods with up to 100 tokens. _Bug Fixing Data:_ The effectiveness of the pipeline is measured with Defects4J [47] and QuixBugs [48]. The Generate and Edit parts of the pipeline are trained with Java 2006 [43], which has over two million samples in the training set. After training, we test our pipeline on (1) 75 single-line bugs in Defects4J\({}_{1.2}\), (2) 85 single-line bugs in Defects4J\({}_{2.0}\), and (3) 40 bugs in QuixBugs. ### _Baselines_ We fine-tune three large-scale pre-trained language generation models: PLBART [14], CoditT5 [23] and NatGen [17] on each dataset and consider them as our baselines. CoditT5 is an edit-generation model that generates edits in terms of token addition, deletion, and replacement operations. In contrast, NatGen and PLBART are code-generation models that generate a sequence of tokens. Another edit generation model, MODIT [37], studied several information modalities on top of PLBART. We use MODIT's recommendation to select the input modalities and report results on the different baselines. We compare SarGam with the following deep learning-based baselines for the bug-fixing task: CocoNut [43], CURE [38], KNOD [35], and AlphaRepair [36]. ### _Training_ For training LevT, we followed [40] and trained on 4 GeForce RTX 3090 Ti GPUs with a 64,000-token batch size. We use the same dual-policy learning objective [40] during training and stopped once the performance on the validation set did not improve for 5 consecutive epochs. On the code editing task, we fine-tune three pre-trained generation models. With respect to PLBART, we followed [37] and train with a learning rate of \(5e^{-5}\) and batch size of 16. Following [23], we fine-tune CoditT5 and NatGen with a learning rate of \(5e^{-5}\) and batch size of 48. In the fine-tuning stage, we adopt the same early stopping strategy that we use for training LevT. ### _Evaluation Metric_ When a synthesized patch is exactly identical to the expected patch, _i.e.,_ it is an "exact match" of the expected patch, we declare the synthesized patch to be correct. We use accuracy (exact match) to evaluate the results on both code editing and program repair.
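A minimal sketch of this exact-match criterion and the top-\(k\) accuracy reported below; comparing whitespace-split token sequences is an illustrative normalization assumption.

```python
def is_exact_match(candidate: str, expected: str) -> bool:
    # Compare token sequences so that whitespace differences do not matter (assumed normalization).
    return candidate.split() == expected.split()

def top_k_accuracy(all_candidates, expected_patches, k=5):
    """all_candidates[i] is the ranked candidate list for example i."""
    hits = sum(
        any(is_exact_match(c, expected) for c in candidates[:k])
        for candidates, expected in zip(all_candidates, expected_patches)
    )
    return hits / len(expected_patches)
```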
For code editing tasks, we report \(top1\) and \(top5\) accuracies. Given a retrieval augmented input, we let the code generation model output up to top 5 patches; modify each of the generated patches once and produce up to 5 final candidate patches. In the case of our program repair tool, we generate and evaluate up to the top 1250 patches. We made this choice in consideration of other APR tools, which often evaluate up to the top 5000 patches. We believe that reporting accuracy at the top 1250 is a reasonable and fair choice, particularly because our APR approach includes test cases to validate the generated patches. ## V Results and Analysis In this section, we present empirical results as answers to the following research questions: 1. How effective is SarGAM for code editing? 2. What are the contributions of different design choices? 1. Importance of input modalities. 2. Effectiveness of a Levenshtein transformer over a vanilla transformer for patch modification. 3. How effective is SarGam for automated program repair? ### _Rq1._SarGAM for code editing #### V-A1 Motivation Here we investigate the core functionality of SarGAM, i.e., generating code edits. We evaluate it on popular code editing benchmarks \(B2F_{s}\) and \(B2F_{m}\). #### V-A2 Experimental Setup We compare SarGAM's performance with three state-of-the-art pre-trained code generation models that show effectiveness for code editing tasks: PLBART [14], CoditT5 [23], and NatGen [17]. We fine-tune all three pre-trained models on the same dataset (\(B2F_{s}\) or \(B2F_{m}\)). While comparing with a code generation model, we incorporate the same model in SarGAM's pipeline. In that way, it shows how much SarGAM can improve compared to the corresponding generation-only setting. \begin{table} \begin{tabular}{l|c c c} \hline \hline **Dataset** & **\#Train** & **\#Eval** & **\#Test** \\ \hline **Bug2Fix small (\(B2F_{s}\))** & 46628 & 5828 & 5831 \\ **Bug2Fix medium (\(B2F_{m}\))** & 53324 & 6542 & 6538 \\ **Java 2006** & 2593572 & 324196 & - \\ **Defects4J\({}_{1.2}\)** & - & - & 75 \\ **Defects4J\({}_{2.0}\)** & - & - & 82 \\ **QuixBugs** & - & - & 40 \\ \hline \hline \end{tabular} \end{table} TABLE I: Studied Code Editing & Bug-fixing Datasets In the search step, we search for similar edits from the training sets of \(B2F_{s}\) or \(B2F_{m}\). The retrieved patch from the training sets of \(B2F_{s}\) or \(B2F_{m}\) are added to the input. The generation and edit models are fine-tuned on the search augmented input. For a given retrieval augmented input, we take \(top1\) and \(top5\) outputs from the generation step and further modify them to produce the final patches. The reported numbers in Table II present the accuracy of the final patches. #### Iv-A3 Results We find that SarGAM can always outperform its pure generation counterpart by a considerable margin. SarGAM improves upon baseline PLBART, NatGen, and CoditT5 by 15.61%, 3.80% and 5.85% in terms of \(top1\) exact match, respectively. When we consider \(top5\) accuracy, SarGAM improves upon three baseline models by 7.46%, 7.40% and 4.47%, respectively. Figure 4(a) shows the progress each step makes towards the synthesis of the correct patch. Given the previous code as input, we retrieve a patch that is very similar to the ground truth from the code base. The Levenshtein distance between the retrieved patch and the ground truth is 2 while that between previous code and ground truth is 14. 
The generation model (NatGen) utilizes the retrieved patch and generates a patch based on the code context. This step brings the generated patch one step closer to the correct patch, which is only one step away from our goal. Finally, the modification model completes the last step by deleting isSoftStopCondition and inserting true. Figure 4(b) shows another example which can prove the robustness of SarGAM. The presence of "by using infinite reconnection loop" in the commit message suggests that stop() should be removed from the previous code. Although the retrieved patch is not even close to the ground truth, the generation model (CoditT5 in this case) still recognizes part of the developer's intent and removes stop(). Based on the output of generation model, the editing model further deletes another statement m_loadFailure = true; and finally returns reportNode (t.getMessage());, which proves to be the correct patch. Figure 6 further shows the effectiveness of each step (search, generate, modify): for all the three off-the-shelf code generation models, introducing search can improve the patch generation, and modifying the generated patch can further improve the performance. On \(B2F_{s}\) retrieved edits can improve the \(top1\) exact match of PLBART by \(3.97\%\), and the modifying step further improves it with another \(1.81\%\). Such an improvement can also be found on \(B2F_{m}\). **Result 1**: SarGAM can generate better patches than generation-only or edit-only models. On average SarGAM can improve upon three baseline models by 8.42% and 6.44%, in terms of \(top1\) accuracy and \(top5\) accuracy, respectively. ### _RQ2. Analyzing Design Choices_ #### Iv-B1 Motivation The success of the search and modification steps depends on different design choices. In particular, we investigate: 1. During search, which input modalities (edit location, context, and user intent) and their combinations matter for a successful patch generation. 2. How LevT outperforms a vanilla Transformer-based model for patch modification. \begin{table} \begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{Tool} & \(B2F_{s}\) & \(B2F_{m}\) & \(B2F_{s}\) & \(B2F_{m}\) \\ \hline \multicolumn{5}{c|}{Top1} & \multicolumn{2}{c}{Top5} \\ \hline PLBART & 29.99 & 23.03 & 47.08 & 36.51 \\ SarGAM (PLBART) & 35.77 & 27.58 & 52.43 & 37.81 \\ \hline NatGen & 36.55 & 28.53 & 52.39 & 42.99 \\ SarGAM (NatGen) & 38.31 & 29.32 & 57.31 & **45.31** \\ \hline CoditT5 & 37.52 & 28.33 & 54.99 & 42.32 \\ SarGAM (CoditT5) & **39.54** & **30.12** & **57.46** & 44.20 \\ \hline \hline \end{tabular} \end{table} TABLE II: Experiment results of SarGAM for code editing. Models in () are the off-the-shelf generative models used by SarGAM. Fig. 5: Example correct patches generated by SarGAM. Inputs are presented in light brown boxes, and synthesised patches are presented in light green boxes. Fig. 6: Ablation Study of the steps of SarGAM on \(B2F_{s}\) #### RQ2.1: Impact of Input Modalities on Search #### Iv-B2 Experimental Setup. Here, the search query is formed with different combinations of three types of inputs: patch location, patch context, and developer intent. Each modality is matched with a similar modality during retrieval. We report the results both for search+generate and search+generate+modification as shown in Table III. Note that we do not compare the impact of different modalities for the generation step, as Chakraborty _et al._[37] on the same dataset has conclusively shown that all three modalities do matter. 
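The ablation grid in Table III enumerates subsets of the three input modalities. A minimal sketch of forming the corresponding search queries is shown below; the field names and the <s>-joined format are illustrative assumptions.

```python
from itertools import combinations

MODALITIES = ("patch_location", "context", "commit_message")

def modality_combinations():
    # All non-empty subsets of the three input modalities, as enumerated in Table III.
    for r in range(1, len(MODALITIES) + 1):
        for subset in combinations(MODALITIES, r):
            yield subset

def build_query(example: dict, subset) -> str:
    # Concatenate only the selected modalities, separated by <s> (assumed format).
    return " <s> ".join(example[m] for m in subset)
```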
#### Iv-B3 Results Table III shows the results of SarGAM on \(B2F_{s}\) and \(B2F_{m}\) with different combinations of retrieved patches as an additional modality. The results show that the retrieved patches improve the performance of the generation model (PLBART) across all combinations. On \(B2F_{s}\), the best result is achieved when we use both the patches retrieved with context and those retrieved with commit messages. In this case, we improve the performance of PLBART by 13.24%. However, on \(B2F_{m}\), PLBART, achieves its best performance when patches retrieved with patch location and patches retrieved with commit message are passed to the input and it finally improves baseline PLBART by 9.77%. The improvement that the retrieved patches bring to the generation model still holds after further modification. On \(B2F_{s}\), using patches retrieved with all the three types of queries achieves the highest accuracy, which is actually the second best combination in the "Search+Generation" setting. On \(B2F_{m}\), patch location & commit message is still the best combination. #### Iv-B4 Experimental Setup In this section, we follow the experimental setup in Section V-A and use LevT and the vanilla Transformer to modify the output of code-generation models, which have been augmented with search results. For fairness, both LevT and the vanilla Transformer are trained on the same dataset (\(B2F_{s}\) and \(B2F_{m}\)). #### Iv-B5 Results In Table IV we report the performance of using vanilla Transformer and LevT for editing. Across all the different settings, LevT always achieves a higher exact match (accuracy) of the generated edit. In addition, we present the exact numbers of overlapped and unique correct edits produced by Transformer and LevT in Figure 7. On PLBART \(B2F_{m}\) and PLBART \(B2F_{s}\), LevT complements Transformer by producing 110 and 59 more correct patches, respectively. Similarly, by modifying NatGen's output, LevT can produce 29 and 67 more unique patches over vanilla Transformer for \(B2F_{s}\) and \(B2F_{m}\) respectively. Even when we consider CoditT5, which is an edit generation model, LevT produces 37 and 3 more unique patches over a vanilla Transformer for \(B2F_{s}\) and \(B2F_{m}\) respectively. These results show LevT is a better design choice for patch modification over a vanilla Transformer. **Result 2** Experiments reveal that the combinations of edit location, context, and developer's intent during patch retrieval can improve PLBART by up to 13.24%. Meanwhile, LevT-based patch modification model outperforms the vanilla Transformer by modeling fine-grained edit operations explicitly. ### _Rq3._ SarGAM for Bug Fixing #### Iv-C1 Motivation We want to check SarGaM's applicability for program repair, which is a special class of code editing task. For bug fixing, the plausibility of the generated edits can be estimated by the passing and failing of test cases. #### Iv-C2 Experimental Setup Following Jiang el al. [49]'s findings that LLMs tend to outperform all the other DL-based repair-only models, here we choose OpenAI's Codex [19] (at zero-shot setting), one of the largest code generation models at the time of writing the paper. In particular, for bug fixing, we use Codex as our off-the-shelf code generation model and build the rest of the steps on top of it. Our motivation is to investigate, whether the search and modification steps can provide any additional benefits when using Codex. 
To provide input to Codex, we design a prompt that combines code and natural language. Our prompt is inspired by several previous works [50, 51, 52]. The prompt consists of the following components (see Figure 8): * _Describing Task._ The comment at the beginning of the prompt ("fix the bug in the following method") describes the task to be carried out by Codex. \begin{table} \begin{tabular}{l|c c c} \hline \hline **Tool** & **Defect4j1.2** & **Defect4j2.0** & **QuixBugs** \\ \hline **CocoNut** & & - & - & 13 \\ **CURE** & & - & - & 26 \\ **KNDO** & & 48 & 34 & 25 \\ **AlphaRepair** & 45 & 36 & 28 \\ **Codex** & & 33 & 38 & 31 \\ \hline **SarGaM** & 40 & **42** & **34** \\ (Search+Codex+Modify) & & & & \\ \hline \hline \end{tabular} \end{table} TABLE V: Experiment Results of SarGaM for Bug Fixing. \begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{\begin{tabular}{c} Patch \\ Location \\ \end{tabular} } & \multirow{2}{*}{Context} & \multirow{2}{*}{\begin{tabular}{c} Commit \\ Message \\ \end{tabular} } & \multicolumn{2}{c|}{Search+Generate } & \multicolumn{2}{c}{Search+Generate+Modify} \\ & & & \(B2F_{s}\) & \(B2F_{m}\) & \(B2F_{s}\) & \(B2F_{m}\) \\ \hline - & - & - & 29.99 & 23.02 & 31.03 & 24.33 \\ \hline - & - & & 31.97 & 23.81 & 32.02 & 24.26 \\ \hline - & & - & 31.92 & 24.49 & 32.43 & 25.92 \\ \hline - & & & **33.96** & 24.43 & 35.60 & 26.49 \\ \hline \multirow{2}{*}{\begin{tabular}{c} ✓ \\ \end{tabular} } & - & - & 31.50 & 24.72 & 32.77 & **27.58** \\ \hline \multirow{2}{*}{\begin{tabular}{c} ✓ \\ \end{tabular} } & - & - & 32.82 & **25.27** & 34.01 & 25.67 \\ \hline \multirow{2}{*}{\begin{tabular}{c} ✓ \\ \end{tabular} } & - & - & 31.85 & 25.01 & 33.29 & 26.11 \\ \hline \multirow{2}{*}{\begin{tabular}{c} ✓ \\ \end{tabular} } & \multirow{2}{*}{ \begin{tabular}{c} ✓ \\ \end{tabular} } & 33.63 & 24.33 & **35.77** & 26.39 \\ \hline \hline \end{tabular} \end{table} TABLE III: Impact of Different Input Modalities in Search Query * _Specifying Bug Location_. The buggy code snippet that needs to be repaired is marked with a comment "buggy line is here". * _Retrieved Patch_. The retrieved patch is augmented with comment "A possible patch for buggy line". * _Context_. The context before the buggy line is further highlighted with a comment: "change the buggy line to fix the bug:". To obtain the patch, we take the first line generated by Codex. Here, we perform the search step in a larger training set: Java 2006. In the search step, we retrieve up to 25 similar patches, and in the generation step, we generate top50 possible patches. Hence at the inference stage, we obtain up to (\(50*25=1250\)) candidate patches for every single bug. This number of candidate patches is still relatively small compared to the settings in some previous works [38, 28, 36], which can generate up to 5,000 patches. Here, we use the Defects4J test suite to validate patches after each step. Following previous work [43], we call patches synthesized by SarGAM "candidate patches". After validating these candidate patches with the corresponding test suite, we report only the correct patches that are exactly the same as the patches provided by the developers. #### Iv-B3 Results Table V shows the results of SarGAM and other APR baselines on three benchmarks under the condition of perfect bug localization. 
SarGAM can fix more bugs than Codex in all the settings, showing that even if a very large, high-capacity code generation model is used, the search and modification steps still add significant value on top of it. Overall, SarGAM fixes 42 single-line bugs, outperforming all the other baselines on Defects4J\({}_{2.0}\), and produces 6 and 8 more correct patches than the latest APR tools AlphaRepair and KNOD, respectively. On Defects4J\({}_{1.2}\), SarGAM outperforms most of the deep learning baselines, but it is worse than KNOD and AlphaRepair. Note that we report accuracies based on the top 1250 patches, whereas KNOD and AlphaRepair use 5000 patches. We believe that given similar resources, we will perform comparably in this setting. Table V also presents the effectiveness of the proposed method on QuixBugs, where it outperforms all the other baselines. Figure 9 presents the relationship of bugs that can be fixed by SarGAM, AlphaRepair, and KNOD. Figure 9(a) indicates that SarGAM complements AlphaRepair and KNOD by fixing 10 and 9 more bugs respectively on Defects4J\({}_{1.2}\). Similarly, on Defects4J\({}_{2.0}\), SarGAM can also improve AlphaRepair and KNOD by fixing 17 and 20 additional bugs, respectively. Figure 10 shows three examples of bugs only fixed by SarGAM. Math-96 (Figure 10(a)) is a hard bug because all the Double.doubleToRawLongBits calls need to be deleted from the original sequence. Csv-12 (Figure 10(b)) is also not an easy one because a new api.withAllowMissingColumnNames(true) is called in the correct fix, which does not appear in the context. However, SarGAM is still able to fix both examples with the help of patch search and patch editing. The third example, JSoup-26 (Figure 10(c)), indicates that SarGAM is able to insert a new line into the buggy code.
Fig. 7: Union diagrams of the numbers of correct modifications made by LevT and Transformer
Fig. 8: An example prompt (Codec-17) including the buggy code (green lines), buggy line (red line), retrieved patch (yellow line), and the context before the buggy line (pink lines).
Fig. 9: Unique fixes of SarGAM, AlphaRepair and KNOD.
## VI Related Work **Code Repair Models.** Seq2Seq models have been studied by many researchers to repair code. Tufano _et al._[53] introduced the encoder-decoder architecture to fix bugs. SequenceR [28] further combined a copy mechanism with the \(seq2seq\) model to overcome the unlimited vocabulary problem. CocoNut [43] used ensemble learning for this purpose, while some other works use tree/graph-based models [11, 35, 54]. SimFix [55] claimed that both existing patches and similar code could be helpful in APR. Their findings motivated us to explore if patches retrieved from the existing code base can provide more information to \(seq2seq\) models and further improve their performance. LLMs like CodeBERT [56], PLBART [14, 37], CodeT5 [16], and NatGen [17] have shown very promising results on APR. Fu _et al._[57] proposed VulRepair, a T5-based vulnerability repair approach, to address the limitations of previous work, VRepair [58]. A new trend in recent research works is the zero-shot use of LLMs for program repair, which means that no extra training or fine-tuning is involved. Xia _et al._[36] directly leverage pre-trained CodeBERT to build a cloze-style APR tool. Jiang _et al._[49] systematically evaluated 10 LLMs for APR with and without fine-tuning. [50] and [51] designed prompts for Codex to convert code repair into code generation.
Our proposed approach complements this line of work as we show the potential for taking any off-the-shelf APR tool and composing it with retrieval and modification steps, improving the overall patch generation performance. **Code Editing Models.** Recent studies explore whether DL models can learn explicit edit operations. Ding _et al._[26] confirmed this claim. Chen _et al._[21] proposed to pair a graph encoder with a Transformer decoder to generate a Tocopo sequence [20], which is used to represent a code edit. Similarly, Hephaestus [22] used an LSTM \(seq2seq\) model to learn edit operations via the introduction of Levenshtein edit operations and their condensed forms: basic, strict, and loose compound operations. Zhang _et al._[23] designed a pre-trained model, CoditT5 [23], for editing tasks. Unlike these previous code editing models, we do not output an edit sequence directly; rather, our approach slowly and carefully guides the tool towards generating the intended edit in multiple steps. Many previous works [20, 21, 22, 23] designed special outputs to represent edit operations. For example, in the inference stage CoditT5 [23] first generates an edit plan that explicitly lists edit operations, then it auto-regressively generates the edited target sequence. In their open source implementation, since the output is composed of two parts, extra post-processing (decoding) is needed. However, LevT does not focus on output design and directly applies explicit edit operations within the decoder. ## VII Threats To Validity _External Validity._ Our evaluation of edit generation is based on two datasets: \(B2F_{s}\) and \(B2F_{m}\). These datasets focus on small to medium-sized edits and, therefore, may not capture diverse edit characteristics. Additionally, the commit messages that we use as edit intents may contain noise. However, these datasets are based on real developers' commits and, thus, provide a valuable reflection of actual development scenarios. To minimize this issue, we further assess the performance of SarGAM on fixing bugs from three other datasets widely studied in the APR literature. This additional evaluation ensures that the results are robust and generalize well to diverse types of edit generation scenarios. _Construct Validity._ SarGAM requires a precise edit location as input. While developers' cursors can simulate the edit location during edit generation, determining a bug's exact location can be more challenging. To address this issue, we delegate bug-finding to other tools and assume that once the buggy line is identified, SarGAM can be used to generate a fix for the bug. By doing so, we focus on the strengths of SarGAM in generating high-quality patches and avoid potential issues that may arise from inaccurate bug identification. _Internal Validity._ It is worth noting that our reported results may be influenced by hyperparameter choices (_e.g.,_ learning rate, batch size, etc.). To address this concern, we plan to release our tool as an open-source project. By doing so, other researchers and practitioners could evaluate our approach in a wider range of settings, which will help to validate our findings further and minimize this potential threat.
Fig. 10: Unique bugs only fixed by SarGAM
## VIII Conclusion We propose SarGAM, a novel approach to improve pre-trained code generation models by incorporating patch search, generation, and modification.
Our goal is to mimic the behavior of a developer while editing, who first searches for a related patch, sketches out a rough draft of a patch, and then modifies it to produce a final patch for integration with the rest of the code. To this end, we propose a novel patch modification model based on Levenshtein Transformer, which generates fine-grained edit operations to achieve patch modification. We evaluate our approach on two tasks: code editing and automated program repair. Our results demonstrate that SarGaM is highly effective for both tasks and outperforms state-of-the-art methods in the majority of settings. ## IX Acknowledgements We would like to thank Matthew H. Retchin for his extensive feedback on this paper.
2303.04459
Subfactors and Mathematical Physics
This paper surveys the long-standing connections and impact between Vaughan Jones's theory of subfactors and various topics in mathematical physics, namely statistical mechanics,quantum field theory,quantum information and two-dimensional conformal field theory.
David E. Evans, Yasuyuki Kawahigashi
2023-03-08T09:16:40Z
http://arxiv.org/abs/2303.04459v1
# Subfactors and mathematical physics ###### Abstract. This paper surveys the long-standing connections and impact between Vaughan Jones's theory of subfactors and various topics in mathematical physics, namely statistical mechanics, quantum field theory, quantum information and two-dimensional conformal field theory. Key words and phrases: Braiding, conformal field theory, fusion category, statistical mechanics, subfactors, Temperley-Lieb algebra, vertex operator algebra 2020 Mathematics Subject Classification: Primary 46L37; Secondary 17B69, 18D10, 81R10, 81T05, 81T40, 82B20, 82B23 The second author was partially supported by JST CREST program JPMJCR18T6 and Grants-in-Aid for Scientific Research 19H00640 and 19K21832. ## 1. Introduction
An early development in the theory was the problem of finding the entropy of the shift on the Jones projections and the calculation of the Connes-Stormer entropy [26], \(H(M|N)=\ln([M:N])\), for irreducible subfactors. For a subfactor \(N\subset M\) with finite Jones index, we have the Jones tower construction \[N\subset M\subset M_{1}\subset M_{2}\subset\cdots,\] where \(M_{k}\) is generated by \(M\) and \(e_{1},e_{2},\ldots,e_{k}\). The basic construction from \(N\subset M\) to \(M\subset M_{1}\) and its iteration to give the Jones tower of \(\mathrm{II}_{1}\) factors has a fundamental role in subfactor theory and applications in mathematical physics.
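The Jones projections \(e_{j}\) appearing here satisfy the Temperley-Lieb relations, presumably the relations cited as (2.1) below; they are recorded here for convenience:
\[
e_{j}^{2}=e_{j}=e_{j}^{*},\qquad e_{j}e_{j\pm 1}e_{j}=[M:N]^{-1}e_{j},\qquad e_{j}e_{k}=e_{k}e_{j}\ \ \text{for }|j-k|\geq 2.
\]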
The higher relative commutants \(M^{\prime}_{j}\cap M_{k}\), \(j\leq k\), give a system of _commuting squares of inclusions_ of finite dimensional \(C^{*}\)-algebras with a trace, an object denoted by \(\mathcal{G}_{N\subset M}\) and called the standard invariant of \(N\subset M\). This exceptionally rich mathematical structure encodes algebraic and combinatorial information about the subfactor, a key component of which is a connected, possibly infinite bipartite graph \(\Gamma_{N\subset M}\), of Cayley type, called the principal graph of \(N\subset M\), with a canonical weight vector \(\vec{v}\), whose entries are square roots of indices of irreducible inclusions in the Jones tower. The weighted graph \((\Gamma,\vec{v})\) satisfies the Perron-Frobenius type condition \(\Gamma^{\sharp}\Gamma(\vec{v})=[M:N]\vec{v}\), and also \(\|\Gamma\|^{2}\leq[M:N]\). Of particular relevance to mathematical physics is when \(N\subset M\) has finite depth, corresponding to the graph \(\Gamma\) being finite, in which case the weights \(\vec{v}\) give the (unique) Perron-Frobenius eigenvector, entailing \(\|\Gamma\|^{2}=[M:N]\). Finite depth is automatic when the index \([M:N]\) is less than \(4\), where indeed all bipartite graphs are finite and have norms of the form \(2\cos(\pi/n)\), \(n\geq 3\). The objects \(\mathcal{G}_{N\subset M}\) have been axiomatised in a number of ways, by Ocneanu with paragroups and connections [97] in the finite depth case, then in the general case by Popa with \(\lambda\)-lattices [107] and by Vaughan with planar algebras [80]. By Connes' fundamental result in [22], the hyperfinite \(\mathrm{II}_{1}\) factor \(R\), obtained as an inductive limit of finite dimensional algebras, is the unique amenable \(\mathrm{II}_{1}\) factor, so in particular all its finite index subfactors are isomorphic to \(R\). In a series of papers [105, 106, 108, 109], Popa identified the appropriate notion of amenability for inclusions of \(\mathrm{II}_{1}\) factors \(N\subset M\) and for the objects \(\mathcal{G}_{N\subset M}\), in several equivalent ways, one of which is the Kesten-type condition \(\|\Gamma_{N\subset M}\|^{2}=[M:N]\). He proved the important result that for hyperfinite subfactors \(N\subset M\) satisfying this amenability condition, \(\mathcal{G}_{N\subset M}\) is a complete invariant. In other words, whenever \(M\simeq R\) and \(\|\Gamma_{N\subset M}\|^{2}=[M:N]\) (in particular if \(N\subset M\) has finite depth), \(N\subset M\) can be recovered from the data encoded by the sequence of commuting squares in the Jones tower. Constructions of interesting commuting squares are related to statistical mechanics through the Yang-Baxter equation and an IRF, vertex or spin model [72]. (See the monograph of Baxter [6] for this type of statistical mechanical model. Also see [75] for a general overview by Vaughan of relations of this type.) We choose one edge from each of the four diagrams describing the four inclusions so that they make a closed square. Then we have an assignment of a complex number to each such square. Ocneanu [97] gave a combinatorial characterisation of this assignment of complex numbers under the name of a paragroup and a flat connection. We also assign a complex number, called a Boltzmann weight, to each square arising from a finite graph in the theory of IRF or vertex models, and there is much similarity between the two notions. 
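As a quick numerical illustration of the Perron-Frobenius data just described, the following sketch (a minimal check using NumPy, with the diagram \(A_{n}\) and the value \(n=5\) chosen purely for illustration; the same eigenvector \(\sin(j\pi/(n+1))\) reappears in the flat-connection example below) verifies that the graph norm is \(2\cos(\pi/(n+1))\), so that \(\|\Gamma\|^{2}=4\cos^{2}(\pi/(n+1))<4\), consistent with the index bound.

```python
import numpy as np

def a_n_adjacency(n):
    """Adjacency matrix of the Coxeter-Dynkin diagram A_n (a path on n vertices)."""
    A = np.zeros((n, n))
    for j in range(n - 1):
        A[j, j + 1] = A[j + 1, j] = 1.0
    return A

n = 5                       # illustrative choice; Coxeter number h = n + 1
h = n + 1
Gamma = a_n_adjacency(n)

# Claimed Perron-Frobenius eigenvector mu_j = sin(j*pi/(n+1)), j = 1..n,
# with eigenvalue 2cos(pi/(n+1)), so ||Gamma||^2 = 4cos^2(pi/(n+1)) < 4.
mu = np.array([np.sin(j * np.pi / h) for j in range(1, n + 1)])
norm = 2 * np.cos(np.pi / h)
index = norm ** 2

assert np.allclose(Gamma @ mu, norm * mu)          # eigenvector equation
assert np.isclose(np.linalg.norm(Gamma, 2), norm)  # spectral norm of the graph
assert index < 4
print(f"A_{n}: ||Gamma|| = {norm:.6f}, [M:N] = ||Gamma||^2 = {index:.6f}")
```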
The simplest example corresponds to the Ising model built on the Coxeter-Dynkin diagram \(A_{3}\) and a more general case corresponds to the Andrews-Baxter-Forrester model [1] related to the quantum groups \(U_{q}(sl_{2})\) for \(q=\exp(2\pi i/l)\) a root of unity. These fundamental examples correspond to the subfactors generated by the Jones projections alone and the graphs for these cases are the Coxeter-Dynkin diagrams of type \(A_{n}\). Others related to the quantum groups \(U_{q}(sl_{n})\) have been studied in [67, 28]. We give a typical example of a flat connection as follows. Fix one of the Coxeter-Dynkin diagrams of type \(A_{n}\), \(D_{2n}\), \(E_{6}\) or \(E_{8}\) and use it for the four diagrams. Let \(h\) be its Coxeter number and set \(\varepsilon=\sqrt{-1}\exp(\pi\sqrt{-1}/(2h))\). We write \(\mu_{j}\) for the Perron-Frobenius eigenvector entry for a vertex \(j\) for the adjacency of the diagram. Then the flat connection is given as in Fig.1 and is essentially a normalisation of the braid element (2.2). (Figure 1. A flat connection on the Coxeter-Dynkin diagram.) The index value given by this construction is \(4\cos^{2}(\pi/h)\). If the graph is \(A_{n}\), then the vertices are labeled with \(j=1,2,\dots,n\) and the Perron-Frobenius eigenvector entry for the vertex \(j\) is given by \(\sin(j\pi/(n+1))\). The value in Fig.1 in this case is essentially the same as what the Andrews-Baxter-Forrester model gives at a limiting value and it also arises from a specialisation of the quantum \(6j\)-symbols for \(U_{q}(sl_{2})\) at a root of unity in the sense that two of the "\(6j\)"s are chosen to be the fundamental representation of \(U_{q}(sl_{2})\). These are also related to IRF models by Roche in [113]. These subfactors for the Dynkin diagrams \(A_{n}\) are the ones constructed by Vaughan [69] as \(N=\langle e_{2},e_{3},\dots\rangle\) and \(M=\langle e_{1},e_{2},e_{3},\dots\rangle\) with the above relations (2.1) with \([M:N]=4\cos^{2}(\pi/(n+1))\). The same formula as in Fig.1 for the Coxeter-Dynkin diagrams \(D_{2n+1}\) and \(E_{7}\) almost gives a flat connection, but the flatness axiom fails. There are corresponding subfactors but they have principal graphs \(A_{4n-1}\) and \(D_{10}\) respectively. Nevertheless, the diagrams \(D_{2n+1}\) and \(E_{7}\) have interesting interpretations in connection with non-local extensions of conformal nets \(SU(2)_{k}\), as explained below. The relations (2.1) of the Jones projections \(e_{j}\) are reminiscent of the defining relations of the Hecke algebra \(H_{n}(q)\) of type \(A\) with complex parameter \(q\), which is the free complex algebra generated by \(1,g_{1},g_{2},\dots,g_{n-1}\) satisfying \[\left\{\begin{aligned} g_{j}g_{j+1}g_{j}&=g_{j+1}g_{j}g_{j+1},\\ g_{j}g_{k}&=g_{k}g_{j},\quad\text{for $|j-k|\geq 2$},\\ g_{j}^{2}&=(q-1)g_{j}+q.\end{aligned}\right.\] This similarity was exploited to construct more examples of subfactors with index values \(\dfrac{\sin^{2}(k\pi/l)}{\sin^{2}(\pi/l)}\) with \(1\leq k\leq l-1\) in the early days of subfactor theory by Wenzl in a University of Pennsylvania thesis supervised by Vaughan [127]. He constructed representations \(\rho\) of \(H_{\infty}(q)=\bigcup_{n=1}^{\infty}H_{n}(q)\) with roots of unity \(q=\exp(2\pi i/l)\) and \(l=4,5,\dots\) such that \(\rho(H_{n}(q))\) is always semi-simple and gave a subfactor as \(\rho(\langle g_{2},g_{3},\dots\rangle)^{\prime\prime}\subset\rho(\langle g_{1},g_{2},\dots\rangle)^{\prime\prime}\) using a suitable trace. 
The index values converge to \(k^{2}\) as \(l\to\infty\). When \(k=2\), these subfactors are the ones constructed by Vaughan for the Coxeter-Dynkin diagram \(A_{l-1}\). This construction is also understood in the context of IRF models [28, 67] related to \(SU(k)\). The relation between the Hecke algebras and the quantum groups \(U_{q}(sl_{n})\) is a "quantum" version of the classical Weyl duality. This duality also connects this Jones-Wenzl approach based on statistical mechanics and type II\({}_{1}\) factors with the Jones-Wassermann approach based on quantum field theory and type III\({}_{1}\) factors, which is explained below. It is important to have a spectral parameter for the Boltzmann weights satisfying the Yang-Baxter equation in solvable lattice models, but we do not have such a parameter for a flat connection initially in subfactor theory. We usually obtain a flat connection by a certain specialisation of a spectral parameter for a Boltzmann weight. Vaughan proposed "Baxterization" in [73] for the converse direction in the sense of introducing a parameter for analogues of the Boltzmann weights in subfactor theory. This is an idea to obtain a physical counterpart from a subfactor, and we discuss a similar approach to construct a conformal field theory from a given subfactor at the end of this article. It should be noted that to rigorously construct a conformal field theory at criticality is a notoriously difficult problem, even for the Ising model; see e.g. [114]. The finite depth condition means that we have a finite graph in this analogy to solvable lattice models. Even from a set of algebraic or combinatorial data similar to integrable lattice models involving infinite graphs, one sometimes constructs a corresponding subfactor. A major breakthrough of Popa [104] was to show that the Temperley-Lieb-Jones lattice is indeed a standard invariant, showing for the first time that for any index greater than 4 there exist subfactors with just the Jones projections as the higher relative commutants. Then, introducing tracial amalgamated free products, Popa [107] could show existence in full generality. These papers [104, 107] led to important links with free probability theory: more sophisticated free random models were used to prove that certain amalgamated free products are free group factors, and were adapted by Ueda [122] to prove similar existence/reconstruction statements for actions of quantum groups. Popa and Shlyakhtenko [110] showed that any \(\lambda\)-lattice acts on the free group factor \(L(\mathbb{F}_{\infty})\). This involved a new construction of subfactors from \(\lambda\)-lattices, starting from a commuting square of semifinite von Neumann algebras, each one a direct sum of type I\({}_{\infty}\) factors with a semifinite trace, and with free probability techniques showing that the factors resulting from this construction are \(\infty\)-amplifications of \(L(\mathbb{F}_{\infty})\). The von Neumann algebras resulting in these constructions are not hyperfinite. A new proof using graphical tools, probabilistic methods and planar algebras was later found by Guionnet-Jones-Shlyakhtenko [59]. Moreover, they and Zinn-Justin [61] use matrix model computations in loop models of statistical mechanics and graph planar algebras to construct novel matrix models for Potts models on random graphs. This is based on the planar algebra machinery developed by Vaughan [80] for understanding higher relative commutants of subfactors. 
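The Wenzl index values \(\sin^{2}(k\pi/l)/\sin^{2}(\pi/l)\) quoted above are easy to examine numerically; the short sketch below (plain Python, with the ranges of \(k\) and \(l\) chosen arbitrarily for illustration) checks that \(k=2\) recovers the Jones values \(4\cos^{2}(\pi/l)\) of the \(A_{l-1}\) subfactors and that the values approach \(k^{2}\) as \(l\to\infty\).

```python
import math

def wenzl_index(k, l):
    """Index value sin^2(k*pi/l) / sin^2(pi/l) of the Hecke-algebra subfactors."""
    return math.sin(k * math.pi / l) ** 2 / math.sin(math.pi / l) ** 2

# k = 2 recovers the Jones index values 4cos^2(pi/l) of the A_{l-1} subfactors.
for l in range(4, 20):
    assert math.isclose(wenzl_index(2, l), 4 * math.cos(math.pi / l) ** 2)

# For fixed k the values approach k^2 as l grows.
for k in (2, 3, 4):
    print(k, [round(wenzl_index(k, l), 4) for l in (10, 100, 1000)], "->", k ** 2)
```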
In [60] Guionnet-Jones-Shlyakhtenko explicitly show that it is the same construction as in the Popa-Shlyakhtenko [110] paper. The paper [80] has been published only very recently in the Vaughan Jones memorial special issue after his passing away, but its preprint version appeared in 1999 and has been highly influential. Note also that Kauffman [82, 83] had found a diagrammatic construction of the Jones polynomial directly related to the Potts model based on a diagrammatic presentation of the Temperley-Lieb algebra which then has a natural home in the planar algebra formalism. The polynomial was understood by Reshetikhin-Turaev in [112] in the context of representations of the quantum groups \(U_{q}(sl_{2})\)[33, 66]. ## 3. Subfactors and quantum field theory Witten [128] gave a new interpretation of the Jones polynomial based on quantum field theory, the Chern-Simons gauge field theory, and generalised it to an invariant of a link in a compact 3-manifold. However, it was not clear why we should have a polynomial invariant in this way. Taking an empty link, yields an invariant of a compact 3-manifold. Witten used a path integral formulation and was not mathematically rigorous. A mathematically well-defined version based on combinatorial arguments using Dehn surgery and the Kirby calculus has been given by Reshetikhin and Turaev [112]. In the case of an empty link, we realise a 3-manifold from a framed link with the Dehn surgery, make a weighted sum of invariants of this link using representations of a certain quantum group at a root of unity and prove that this weighted sum is invariant under the Kirby moves. Two framed links give homeomorphic manifolds if and only if they are related with a series of Kirby moves. For the quantum group \(U_{q}(sl_{2})\), the link invariant is the colored Jones polynomial. A color is a representation of the quantum group and labels a connected component of a link. This actually gives a \((2+1)\)-dimensional topological quantum field theory in the sense of Atiyah [5], which is a certain mathematical axiomatisation of a quantum field theory based on topological invariance. Roughly speaking, we assign a finite dimensional Hilbert space to each closed 2-dimensional manifold, and also assign a linear map from one such Hilbert space to another to a cobordism so that this assignment is functorial. It is also easy to extend this construction from quantum groups to general modular tensor categories as we explain below. A closely related, but different, \((2+1)\)-dimensional topological quantum field theory has been given by Turaev and Viro [121]. In this formulation, one triangulates a 3-manifold, considers a weighted sum of quantum \(6j\)-symbols arising from a quantum group depending on the triangulation, and proves that this sum is invariant under the Pachner moves. Two triangulated manifolds are homeomorphic to each other if and only if we obtain one from the other with a series of Pachner moves. This has been generalised to another \((2+1)\)-dimensional topological quantum field theory using quantum \(6j\)-symbols arising from a subfactor by Ocneanu. (See [42, Chapter 12].) Here we only need a fusion category structure which we explain below, and no braiding. This is different from the above Reshetikhin-Turaev case. For a given fusion category, we apply the Drinfel\({}^{\prime}\)d center construction, a kind of "quantum double" construction, to get a modular tensor category with a non-degenerate braiding. 
This construction was developed in subfactor theory by Ocneanu [97] through an asymptotic inclusion, by Popa [106] through a symmetric enveloping algebra, through the Longo-Rehren subfactor [94] and Izumi [64, 65] and in a categorical setting by Muger [96]. We then apply the Reshetikhin-Turaev construction to the double. We can also apply the Turaev-Viro-Ocneanu construction to the original fusion category, and these two procedures give the same topological quantum field theory [87]. In particular, if we start with \(U_{q}(sl_{2})\) at a root of unity, the Turaev-Viro invariant of a closed 3-manifold is the square of the absolute value of the Reshetikhin-Turaev invariant of the same 3-manifold. Another connection of subfactors to quantum field theory is through algebraic quantum field theory, which is a bounded operator algebraic formulation of quantum field theory. The usual ingredients for describing a quantum field theory are as follows. 1. A spacetime, such as the 4-dimensional Minkowski space. 2. A spacetime symmetry group, such as the Poincare group. 3. A Hilbert space of states, including the vacuum. 4. A projective unitary representation of the spacetime symmetry group on the Hilbert space of states. 5. A set of quantum fields, that is, operator-valued distributions defined on the spacetime acting on the Hilbert space of states. An ordinary distribution assigns a number to each test function. An operator-valued distribution assigns a (possibly unbounded) operator to each test function. The Wightman axioms give a direct axiomatisation using these and they have a long history of research, but it is technically difficult to handle operator-valued distributions, so we have a different approach based on bounded linear operators giving observables. Let \(O\) be a region within the spacetime. Take a quantum field \(\varphi\) and a test function \(f\) supported on \(O\). The self-adjoint part of \(\langle\varphi,f\rangle\) is an observable in \(O\) which could be unbounded. Let \(A(O)\) denote the von Neumann algebra generated by spectral projections of such self-adjoint operators. This passage from operator-valued distributions to von Neumann algebras is also used in the construction of a conformal net from a vertex operator algebra by Carpi-Kawahigashi-Longo-Weiner [20] which we explain below. Note that a von Neumann algebra contains only bounded operators. Locality is an important axiom arising from the Einstein causality which says that if two regions are spacelike separated, observables in these regions have no interactions, hence the corresponding operators commute. In terms of the von Neumann algebras \(A(O)\), we require that \([A(O_{1}),A(O_{2})]=0\), if \(O_{1}\) and \(O_{2}\) are spacelike separated, where the Lie bracket means the commutator. This family of von Neumann algebras parameterised by spacetime regions is called a net of operator algebras. Algebraic quantum field theory gives an axiomatisation of a net of operator algebras, together with a projective unitary representation of a spacetime symmetry group on the Hilbert space of states including the vacuum. A main idea is that it is not each von Neumann algebra but the relative relations among these von Neumann algebras that contains the physical contents of a quantum field theory. 
In the case of two-dimensional conformal field theory, which is a particular example of a quantum field theory, each von Neumann algebra \(A(O)\) is always a hyperfinite type III\({}_{1}\) factor, which is unique up to isomorphism and is the Araki-Woods factor of type III\({}_{1}\). Thus the isomorphism class of a single von Neumann algebra contains no physical information. Each local algebra of a conformal net is a factor of type III\({}_{1}\) by [58, Proposition 1.2]. It is also hyperfinite because it has a dense subalgebra given as an increasing union of type I algebras, which follows from the split property shown in [95, Theorem 5.4]. Fix a net \(\{A(O)\}\) of von Neumann algebras. It has a natural notion of a representation on another Hilbert space without the vacuum vector. The action of these von Neumann algebras on the original Hilbert space itself is a representation and it is called the vacuum representation. We also have natural notions of unitary equivalence and irreducibility of representations. The unitary equivalence class of an irreducible representation of the net \(\{A(O)\}\) is called a superselection sector. We also have a direct sum and irreducible decomposition for representations. If we have two representations of a group, it is very easy to define their tensor product representation, but it is not clear at all how to define a tensor product representation of two representations of a single net of operator algebras. Doplicher-Haag-Roberts gave a proper definition of the tensor product of two representations [30, 31]. Under a certain natural assumption, each representation has a representative given by an endomorphism of a single algebra \(A(O)\) acting on the vacuum Hilbert space for some fixed \(O\). This endomorphism contains complete information about the original representation. For two such endomorphisms \(\rho\) and \(\sigma\), the composed endomorphism \(\rho\sigma\) also corresponds to a representation of the net \(\{A(O)\}\). This gives a correct notion of the tensor product of two representations. Furthermore, it turns out that the two compositions \(\rho\sigma\) and \(\sigma\rho\) of endomorphisms give unitarily equivalent representations. If the spacetime dimension is higher than 2, this commutativity of the tensor product is similar to unitary equivalence of \(\pi_{1}\otimes\pi_{2}\) and \(\pi_{2}\otimes\pi_{1}\) for two representations \(\pi_{1}\) and \(\pi_{2}\) of the same group. The representations now give a symmetric monoidal \(C^{*}\)-category, where a representation gives an object, an intertwiner gives a morphism, and the above composition of endomorphisms gives the tensor product structure. This category produces a compact group from the new duality of Doplicher-Roberts [32]. Here an object of the category is an endomorphism and a morphism in \(\operatorname{Hom}(\rho,\sigma)\) is an intertwiner, that is, an element in \[\{T\in A(O)\mid T\rho(x)=\sigma(x)T\text{ for all }x\in A(O)\}.\] In other words, the Doplicher-Roberts duality gives an abstract characterisation of the representation category of a compact group among general tensor categories. The vacuum representation plays the role of the trivial representation of a group, and the dual representation of a net of operator algebras corresponds to the dual representation of a compact group. This duality is related to the classical Tannaka duality, but gives a duality more generally for abstract tensor categories. 
Using the structure of a symmetric monoidal \(C^{*}\)-category, we define a statistical dimension of each representation, which turns out to be a positive integer or infinity [30, 31]. That the Jones index value takes on only discrete values below 4 is reminiscent of this fact that a statistical dimension can take only integer values. Longo [91, 92] showed that the statistical dimension of the representation corresponding to an endomorphism \(\rho\) of \(A(O)\) is equal to the square root of the Jones index \([A(O):\rho(A(O))]\). This opened up a wide range of new interactions between subfactor theory and algebraic quantum field theory. Generalizing the notion of a superselection sector, Longo [91, 92] introduced the notion of a sector, the unitary equivalence class of an endomorphism of a factor of type III, inspired by Connes theory of correspondences, based on the equivalences between Hilbert bimodules, endomorphisms and positive definite functions on doubles [23][24, VB], [25] and see e.g. Popa [103] for developments. He defined a dual sector using the canonical endomorphism which he had introduced based on the modular conjugation in Tomita-Takesaki theory. Note that in a typical situation of a subfactor \(N\subset M\), these von Neumann algebras are isomorphic, so we have an endomorphism \(\rho\) of \(M\) onto \(N\). Then we have the dual endomorphism \(\bar{\rho}\), and the irreducible decompositions of \(\rho\bar{\rho}\rho\bar{\rho}\cdots\bar{\rho}\) give objects of a tensor category, where the morphisms are the intertwiners of endomorphisms and the tensor product operation is composition of endomorphisms. If we have finitely many irreducible endomorphisms arising in this way, which is equivalent to the finite depth condition, our tensor category is a fusion category, where we have the dual object for each object and we have only finitely many irreducible objects up to isomorphisms. The higher relative commutants \(M^{\prime}\cap M_{k}\) are described as intertwiner spaces like \(\operatorname{End}(\rho\bar{\rho}\rho\bar{\rho}\cdots\bar{\rho})\) or \(\operatorname{End}(\rho\bar{\rho}\rho\bar{\rho}\cdots\rho)\). In our setting, for a factor \(M\), we have the standard representation of \(M\) on the Hilbert space \(L^{2}(M)\), the completion of \(M\) with respect to a certain inner product, and this \(L^{2}(M)\) also has a right multiplication by \(M\) based on Tomita-Takesaki theory. For an endomorphism \(\rho\) of \(M\), we have a new \(M\)-\(M\) bimodule structure on \(L^{2}(M)\) by twisting the right action of \(M\) by \(\rho\). In this setting, all \(M\)-\(M\) bimodules arise in this way, and we have a description of the above tensor category in terms of bimodules. Here the tensor product operation is given by a relative tensor product of bimodules over \(M\). For type II\({}_{1}\) factors, we need to use this bimodule description to obtain the correct tensor category structures. It is more natural to use type II\({}_{1}\) factors in statistical mechanics, and it is more natural to use type III\({}_{1}\) factors in quantum field theory, but they give rise to equivalent tensor categories, so if we are interested in tensor category structure, including braiding, this difference between type II\({}_{1}\) and type III\({}_{1}\) is not important. ## 4. 
Subfactors and conformal field theory A two-dimensional conformal field theory is a particular example of a quantum field theory, but it is a rich source of deep interactions with subfactor theory, so we treat this in an independent section. We start with the \((1+1)\)-dimensional Minkowski space and consider quantum field theory with conformal symmetry. We restrict a quantum field theory onto two light rays \(x=\pm t\) and compactify a light ray by adding a point at infinity. The resulting \(S^{1}\) is our "spacetime" now, though space and time are mixed into one dimension, and our symmetry group for \(S^{1}\) is now \(\operatorname{Diff}(S^{1})\), the orientation preserving diffeomorphism group of \(S^{1}\). Our spacetime region is now an interval \(I\), a non-empty, non-dense open connected subset of \(S^{1}\). For each such an interval \(I\), we have a corresponding von Neumann algebra \(A(I)\) acting on a Hilbert space \(H\) of states containing the vacuum vector. Isotony means that we have \(A(I_{1})\subset A(I_{2})\) if we have \(I_{1}\subset I_{2}\). Locality now means that \([A(I_{1}),A(I_{2})]=0\), if \(I_{1}\cap I_{2}=\O\). Note that spacelike separation gives this very simple disjointness. Our spacetime symmetry group now is \(\operatorname{Diff}(S^{1})\), and we have a projective unitary representation \(U\) on \(H\). Conformal covariance asks for \(U_{g}A(I)U_{g}^{*}=A(gI)\) for \(g\in\operatorname{Diff}(S^{1})\). Positivity of the energy means that the restriction of \(U\) to the subgroup of rotations of \(S^{1}\) gives a one-parameter unitary group and its generator is positive. In this setting, a family \(\{A(I)\}\) of von Neumann algebras satisfying these axioms is called a conformal net. A representation theory of a conformal net in the style of Doplicher-Haag-Roberts now gives a braiding due to the low-dimensionality of the "spacetime" \(S^{1}\). This is a certain form of the non-trivial commutativity of endomorphisms up to inner automorphisms. That is, two representations give two endomorphisms \(\lambda,\mu\) of a single von Neumann algebra \(A(I_{0})\) for some fixed interval \(I_{0}\), and we have a unitary \(\varepsilon(\lambda,\mu)\in A(I)\) satisfying \(\operatorname{Ad}(\varepsilon(\lambda,\mu))\lambda\mu=\mu\lambda\). This unitary \(\varepsilon(\lambda,\mu)\), sometimes called a statistics operator, arises from the monodromy of moving an interval in to a disjoint one and back, and satisfies various compatibility conditions such as braiding-fusion equations for intertwiners as in [45, 52, 92]. Switching two tensor components corresponds to switching two wires of a braid. For two wires, we have an overcrossing and an undercrossing. They correspond to \(\varepsilon(\lambda,\mu)\) and \(\varepsilon(\mu,\lambda)^{*}\). In particular, if we fix an irreducible endomorphism and use it for both \(\lambda\) and \(\mu\), we have a unitary representation of the braid group \(B_{n}\) for every \(n\). In the case of a higher-dimensional Minkowski space, \(\varepsilon(\lambda,\mu)\) gives a so-called degenerate braiding, like the case of a group representation where we easily have unitary equivalence of \(\pi\otimes\sigma\) and \(\sigma\otimes\pi\) for two representations \(\pi\) and \(\sigma\), but we now have a braiding in a more non-trivial way on \(S^{1}\). 
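As a minimal finite-dimensional illustration of how Jones projections yield unitary braid group representations (this is my own toy check, not the conformal-net statistics operator, and the normalisation is a standard Kauffman-style choice rather than the one in the article's equation (2.2)): with two projections \(e_{1},e_{2}\) satisfying \(e_{i}e_{j}e_{i}=\tau e_{i}\) for \(|i-j|=1\) and \(\tau=1/\delta^{2}\), the elements \(g_{i}=A+A^{-1}\delta e_{i}\), where \(\delta=-A^{2}-A^{-2}\), satisfy the braid relation and are unitary. The explicit \(2\times 2\) matrices below are one convenient representation, here with \(\delta=2\cos(\pi/5)\).

```python
import numpy as np

n = 5                                  # Coxeter number; delta = 2cos(pi/n)
delta = 2 * np.cos(np.pi / n)
tau = 1 / delta ** 2
s = np.sqrt(tau * (1 - tau))

# Two Jones/Temperley-Lieb projections in a 2-dimensional representation:
# e_i = e_i^2 = e_i^*,  e_1 e_2 e_1 = tau e_1,  e_2 e_1 e_2 = tau e_2.
e1 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
e2 = np.array([[tau, s], [s, 1 - tau]], dtype=complex)
assert np.allclose(e1 @ e2 @ e1, tau * e1)
assert np.allclose(e2 @ e1 @ e2, tau * e2)

# Kauffman-style braid generators g_i = A*1 + (delta/A)*e_i,
# with A chosen so that -A^2 - A^{-2} = delta.
A = np.exp(1j * (np.pi - np.pi / n) / 2)
assert np.isclose(-A ** 2 - A ** (-2), delta)
I2 = np.eye(2, dtype=complex)
g1 = A * I2 + (delta / A) * e1
g2 = A * I2 + (delta / A) * e2

assert np.allclose(g1 @ g2 @ g1, g2 @ g1 @ g2)    # braid relation in B_3
assert np.allclose(g1 @ g1.conj().T, I2)          # unitarity
assert np.allclose(g2 @ g2.conj().T, I2)
print("braid relation and unitarity hold for delta =", round(float(delta), 6))
```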
It was proved by Kawahigashi-Longo-Muger in [86] that if we have a certain finiteness of the representation theory of a conformal net, called complete rationality, then the braiding of its representation category is non-degenerate, and hence it gives rise to a modular tensor category by definition. A modular tensor category is also expected to be useful for topological quantum computations as in the work of Freedman-Kitaev-Larsen-Wang [48]. This is a hot topic in quantum information theory and many researchers work on topological quantum information using the Jones polynomial and its various generalisations. It is a highly non-trivial task to construct examples of conformal nets. The first such attempt started in a joint project of Vaughan and Wassermann trying to construct a subfactor from a positive energy representation of a loop group. Wassermann [125] then constructed conformal nets arising from positive energy representations of the loop groups of \(SU(N)\) corresponding to the Wess-Zumino-Witten models \(SU(N)_{k}\), where \(k\) is a positive integer called a level. These examples satisfy complete rationality as shown by Xu in [132]. The conformal nets corresponding to \(SU(2)_{k}\) give unitary representations of the braid groups \(B_{n}\) which are the same as the one given by Vaughan from the Jones projections \(e_{j}\). Wassermann's construction has been generalised to other Lie groups by Loke, Toledano Laredo and Verrill in dissertations supervised by him, [90, 120, 124], see also [126]. Loke worked with projective unitary representations of \(\operatorname{Diff}(S^{1})\) and obtained the Virasoro nets. A relative version \(\{A(I)\subset B(I)\}\) of a conformal net for intervals \(I\subset S^{1}\) called a net of subfactors has been given in [94]. Suppose that \(\{A(I)\}\) is completely rational. Assuming that we know the representation category of \(\{A(I)\}\), we would like to know that of \(\{B(I)\}\). The situation is similar to a group inclusion \(H\subset G\) where we know representation theory of \(H\) and would like to know that of \(G\). In the group representation case for \(H\subset G\), we have a restriction of a representation of \(G\) to \(H\) and an induction of a representation of \(H\) to \(G\). In the case of a net of subfactors, the restriction of a representation of \(\{B(I)\}\) to \(\{A(I)\}\) is easy to define, but the induction procedure is more subtle. Our induction procedure is now called \(\alpha\)-induction, first defined by Longo-Rehren in [94] and studied by Xu [129], Bockenhauer-Evans [8, 9, 10, 11], and Bockenhauer-Evans-Kawahigashi [12, 13], also in connection to Ocneanu's graphical calculus on Coxeter-Dynkin diagrams in the last two papers. (In these two papers, this \(\alpha\)-induction is studied in the more general context of abstract modular tensor categories of endomorphisms rather than conformal field theory. For an \(A\)-\(A\) bimodule \(X\), then the tensor product \(X\otimes B\) can be regarded as a \(B\)-\(B\) module if one uses the braiding to let \(B\) act on the left.) Take a representation of \(\lambda\) of \(\{A(I)\}\) which is given as an endomorphism of \(A(I_{0})\) for some fixed interval \(I_{0}\). 
Then using the braiding on the representation category of \(\{A(I)\}\), we define an endomorphism \(\alpha_{\lambda}^{\pm}\) of \(B(I_{0})\) where \(\pm\) represents a choice of a positive or negative braiding, \(\varepsilon^{\pm}(\lambda,\theta)\), where \(\theta\) represents the dual canonical endomorphism of the subfactor \(A(I)\subset B(I)\). This nearly gives a representation of \(\{B(I)\}\), but not exactly. It turns out that the irreducible endomorphisms arising both from a positive induction and a negative one exactly correspond to those arising from irreducible representations of \(\{B(I)\}\). The braiding of the representation category of \(\{A(I)\}\) gives a finite dimensional unitary representation of \(SL(2,\mathbb{Z})\) through the so-called \(S\)- and \(T\)-matrices. Bockenhauer-Evans-Kawahigashi [12] showed that the matrix \(Z_{\lambda,\mu}=\langle\alpha_{\lambda}^{+},\alpha_{\mu}^{-}\rangle\), where \(\lambda,\mu\) label irreducible representations of \(\{A(I)\}\) and the symbol \(\langle\cdot,\cdot\rangle\) counts the number of common irreducible endomorphisms including multiplicities, satisfies the following properties: 1. We have \(Z_{\lambda,\mu}\in\{0,1,2,\dots\}\). 2. We have \(Z_{0,0}=1\), where the label \(0\) denotes the vacuum representation. 3. The matrix \(Z\) commutes with the image of the representation of \(SL(2,\mathbb{Z})\). Such a matrix \(Z\) is called a modular invariant, because \(PSL(2,\mathbb{Z})\) is called the modular group. For a given completely rational conformal net (or more generally, a given modular tensor category), we have only finitely many modular invariants. Modular invariants naturally appear as partition functions in \(2\)-dimensional conformal field theory and they have been classified for several concrete examples since Cappelli-Itzykson-Zuber [17] for the \(SU(2)_{k}\) models and the Virasoro nets with \(c<1\), where \(c\) is a numerical invariant called the central charge. It takes a positive real value, and if \(c<1\), then it is of the form \(1-6/m(m+1)\), \(m=3,4,5,\dots\) by Friedan-Qiu-Shenker [51] and Goddard-Kent-Olive [55]. This number arises from a projective unitary representation of \(\operatorname{Diff}(S^{1})\) and its corresponding unitary representation of the Virasoro algebra, a central extension of the complexification of the Lie algebra arising from \(\operatorname{Diff}(S^{1})\). Note that some modular invariants defined by the above three properties do not necessarily correspond to physical ones arising as partition functions in conformal field theory. Modular invariants arising from \(\alpha\)-induction are physical in this sense. The action of the \(A\)-\(A\) system on the \(A\)-\(B\) sectors (obtained by decomposing \(\{\iota\lambda=\alpha_{\lambda}^{\pm}\iota:\lambda\in A\text{-}A\}\) into irreducibles where \(\iota:A\to B\) is the inclusion) gives naturally a representation of the fusion rules of the Verlinde ring: \(G_{\lambda}G_{\mu}=\sum N_{\lambda\mu}^{\nu}G_{\nu}\,\), with matrices \(G_{\lambda}=[G_{\lambda a}^{b}:a,b\in A\text{-}B\) sectors]. Consequently, the matrices \(G_{\lambda}\) will be described by the same eigenvalues but with possibly different multiplicities. 
Bockenhauer-Evans-Kawahigashi [13] showed that these multiplicities are given exactly by the diagonal part of the modular invariant: \(\operatorname{spectrum}(G_{\lambda})\,=\,\{S_{\lambda\kappa}/S_{0\kappa}\,:\, \operatorname{with\,multiplicity}\,Z_{\kappa\kappa}\}\,.\) This is called a _nimrep_ - a non-negative integer matrix representation. Thus a physical modular invariant is automatically equipped with a compatible nimrep whose spectrum is described by the diagonal part of the modular invariant. The case of \(SU(2)\) is just the \(A\)-\(D\)-\(E\) classification of Cappelli-Itzykson-Zuber [17] with the \(A\)-\(B\) system yielding the associated (unextended) Coxeter-Dynkin graph. Since there is an \(A\)-\(D\)-\(E\) classification of matrices of norm less than \(2\), we can recover independently of Cappelli-Itzykson-Zuber [17] that there are unique modular invariants corresponding to the three exceptional \(E\) graphs. If we use only positive \(\alpha\)-inductions for a given modular tensor category, we still have a fusion category of endomorphisms, but no braiding in general. This is an example of a module category. For the tensor category \(Rep(G)\) of representations of a finite group \(G\), all module categories are of the form \(Rep(H,\chi)\) for the projective representations with 2-cocycle \(\chi\) for a subgroup \(H\)[98]. For this reason, module categories have also been called _quantum subgroups_. Such categories have been studied in a more general categorical context by Ostrik in [100]. However, Carpi, Gaudio, Giorgetti and Hillier [19], have shown that for unitary fusion categories, such as those that occur in subfactor theory or arise from loop groups, that all module categories are equivalent to unitary ones. For the conformal nets corresponding to \(SU(2)_{k}\), the module categories or quantum subgroups are labeled with all the Coxeter-Dynkin diagrams \(A_{n}\), \(D_{n}\) and \(E_{6,7,8}\). Here there is a coincidence with the affine \(A\)-\(D\)-\(E\) classification of finite subgroups of \(SU(2)\). Di Francesco and Zuber [28] were motivated to try to relate \(SU(3)\) modular invariants with subgroups of \(SU(3)\). There is a partial match but this is not helpful. In general whilst the number of finite subgroups of \(SU(n)\) grows with \(n\), the number of exceptional modular invariants, beyond the obvious infinite series, does not. If we have a net of subfactors \(\{A(I)\subset B(I)\}\) with \(\{A(I)\}\) being a completely rational conformal net, then the restriction of the vacuum representation of \(\{B(I)\}\) to \(\{A(I)\}\) gives a local \(Q\)-system in the sense of Longo [93]. This notion is essentially the same as a commutative Frobenius algebra, a special case of an algebra in a tensor category, in the algebraic or categorical literature. This \(Q\)-system is a triple consisting of an object and two intertwiners. Roughly speaking, the object gives \(B(I)\) as an \(A(I)\)-\(A(I)\) bimodule and the intertwiners give the multiplicative structure on \(B(I)\). Our general theory of \(\alpha\)-induction shows that the corresponding modular invariant \(Z\) for the modular tensor category of representations of \(\{A(I)\}\) recovers this object. Since we have only finitely many modular invariants for a given modular tensor category, we have only finitely many objects for a local \(Q\)-system. 
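To make the \(S\)- and \(T\)-matrix machinery concrete, here is a small NumPy check (my own illustration using the standard \(SU(2)_{k}\) formula \(S_{ab}=\sqrt{2/(k+2)}\,\sin(\pi(a+1)(b+1)/(k+2))\), not code from any of the cited papers): it verifies that \(S\) is unitary, that the Verlinde formula produces non-negative integer fusion rules, and that the fusion matrix of the spin-\(1/2\) field is the adjacency matrix of \(A_{k+1}\) with spectrum \(\{S_{1\kappa}/S_{0\kappa}\}\), as in the nimrep statement above.

```python
import numpy as np

k = 4                      # level; primary fields labelled 0, 1, ..., k
n = k + 1
S = np.array([[np.sqrt(2.0 / (k + 2)) * np.sin(np.pi * (a + 1) * (b + 1) / (k + 2))
               for b in range(n)] for a in range(n)])

assert np.allclose(S @ S.conj().T, np.eye(n))      # S is unitary (real symmetric here)

def fusion(a, b, c):
    """Verlinde formula N_{ab}^c = sum_m S_{am} S_{bm} conj(S_{cm}) / S_{0m}."""
    return np.sum(S[a] * S[b] * S[c].conj() / S[0]).real

N = np.array([[[fusion(a, b, c) for c in range(n)] for b in range(n)] for a in range(n)])
assert np.allclose(N, np.round(N)) and (np.round(N) >= 0).all()   # non-negative integers

# The fusion matrix of the spin-1/2 field is the adjacency matrix of A_{k+1},
# with spectrum {S_{1,kappa}/S_{0,kappa}} as in the nimrep statement above.
N1 = np.round(N[1])
assert np.allclose(np.sort(np.linalg.eigvalsh(N1)), np.sort(S[1] / S[0]))
print(N1.astype(int))
```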
It is known that each object has only finitely many local \(Q\)-system structures, and we thus have only finitely many local \(Q\)-systems, which means that we have only finitely many possibilities for extensions \(\{B(I)\}\) for a given \(\{A(I)\}\). For some concrete examples of \(\{A(I)\}\), we can classify all possible extensions. In the case of the \(SU(2)_{k}\) nets, such extensions were studied in the context of \(\alpha\)-induction in [12] by Bockenhauer-Evans-Kawahigashi and it was shown in [84] by Kawahigashi-Longo that they exhaust all possible extensions. (A similar classification based on quantum groups was first given in [88].) They correspond to the Coxeter-Dynkin diagrams \(A_{n}\), \(D_{2n}\), \(E_{6}\) and \(E_{8}\). The \(A_{n}\) cases are the \(SU(2)_{k}\) nets themselves, the \(D_{2n}\) cases are given by simple current extensions of order 2, and the \(E_{6}\) and \(E_{8}\) cases are given by conformal embeddings \(SU(2)_{10}\subset SO(5)_{1}\) and \(SU(2)_{28}\subset(G_{2})_{1}\), respectively. These correspond to type I extensions in Itzykson-Zuber [17], Bockenhauer-Evans [11]. Type II extensions corresponding to \(D_{2n+1}\) and \(E_{7}\) arise from extensions of the \(SU(2)_{k}\) nets without locality. In general [11] for a physical modular invariant \(Z\) there are by Bockenhauer-Evans local chiral extensions \(N(I)\subset M_{+}(I)\) and \(N(I)\subset M_{-}(I)\) with local \(Q\)-systems naturally associated to the vacuum column \(\{Z_{\lambda,0}\}\) and vacuum row \(\{Z_{0,\lambda}\}\) respectively. These extensions are indeed maximal and should be regarded as the subfactor version of left- and right maximal extensions of the chiral algebra. The representation theories or modular tensor categories of \(M_{\pm}\) are then identified. For example, the \(E_{7}\) conformal net or module category is a then a twist or auto-equivalence on the left and right local \(D_{10}\) extensions which form the type I parents. This reduces the analysis to understanding first local extensions and then classifying auto-equivalences to identify the two left and right local extensions. For \(SU(2)\) there are only three exceptional modular invariants \(E_{6,7,8}\), and in general one expects, e.g. [99], for a WZW model that there are only a finite number of exceptionals beyond the infinite series of the trivial, orbifolds and their conjugates. Schopieray [115] using \(\alpha\)-induction found bounds for levels of exceptional invariants for rank \(2\) Lie groups, and Gannon [57] extended this for higher rank with improved lower bounds using Galois transformations as a further tool. Edie-Michell has undertaken extensive studies of auto-equivalences [34]. The realisation by Evans-Pugh [43] of \(SU(3)\)-modular invariants as full CFT's, announced in [97], is based on the classification of Gannon [53] of \(SU(3)\) modular invariants, and the classification by Evans-Pugh of full \(SO(3)\) theories or \(SO(3)\) module categories is in [44]. For a general conformal net, we always have a subnet generated by the projective unitary representation of \(\operatorname{Diff}(S^{1})\), which is called the Virasoro net, so a conformal net is always an extension of the Virasoro net. Through a unitary representation of the Virasoro algebra, a conformal net has a numerical invariant \(c\), the central charge. 
The Virasoro net is completely rational if \(c<1\), so the above classification scheme applies to this case, and we have a complete classification of conformal nets with \(c<1\) by Kawahigashi-Longo in [84], where they are shown to be in a bijective correspondence with the type I modular invariants of Cappelli-Itzykson-Zuber in [17]. Four of exceptional modular invariants involving the Dynkin diagrams \(E_{6}\) and \(E_{8}\) give exceptional conformal nets. Three of them are given by the coset construction, but the other one gives a new example. Similarity between discreteness of the Jones index values below \(4\) and discreteness of the central charge value below \(1\) has been pointed out since the early days of subfactor theory [74], and we have an \(A\)-\(D\)-\(E\) classification of subfactors with index below \(4\) as in Popa [105] (also see [97, 42]) and an \(A\)-\(D\)-\(E\) classification of the modular invariants of the Virasoro minimal models of Capelli-Itzykson-Zuber [17]. We then have natural understanding of classification of conformal nets with \(c<1\) in this context. \(K\)-theory has had a role in relating subfactor theory with statistical mechanics and conformal field theory. The phase transition in the two dimensional Ising model is analysed through an analysis of the ground states of the one dimensional quantum system arising from the transfer matrices. This is manifested by Araki-Evans through a jump in the Atiyah-Singer mod-\(2\) index of Fredholm operators [3]. Here Kramers-Wannier high-temperature duality is effected by the shift endomorphism \(\rho\) on the corresponding Jones projections \(e_{j}\to e_{j+1}\) which leads, Evans [35], to the Ising fusion rules \(\rho^{2}=1+\sigma\), where \(\sigma\) is the symmetry formed from interchanging \(+\) and \(-\) states, see also Evans-Gannon [41]. The tensor category of the Verlinde ring of compact Lie groups, or doubles of finite groups has been described by Freed-Hopkins-Teleman [47] through the twisted equivariant \(K\)-theory of the group acting on itself by conjugation. This has allowed the interchange of ideas between the subfactor approach and a \(K\)-theory approach to conformal field theory, employing \(\alpha\)-induction and modular invariants as bi-variant Kasparov \(KK\)-elements by Evans-Gannon [37, 38, 41]. In a similar spirit, regarding \(K\)-theory in terms of projective modules, a finitely generated modular tensor category can be realised by Aaserud-Evans [2] as \(C^{*}\)-Hilbert modules. This applies to the modular tensor categories of Temperley-Lieb-Jones associated to quantum \(SU(2)\), or more generally those of loop groups - as well as quantum doubles such as that of the Haagerup subfactor which we will focus on in the final section. This also gives a framework for braided tensor categories acting on some \(C^{*}\)-algebras as a quantum symmetry. ## 5. Vertex operator algebras We have another, more algebraic, mathematical axiomatisation for a chiral conformal field theory, namely, a vertex operator algebra. Since a conformal net and a vertex operator algebra are both mathematical formulations of the same physical theory, they naturally have close relations. We now explain those here. A quantum field on the "spacetime" \(S^{1}\) is an operator-valued distribution on \(S^{1}\), so it has a Fourier expansion with operator coefficients. 
In this axiomatisation, we have a \(\mathbb{C}\)-vector space \(V\) which is a space of finite energy vectors and is supposed to give the Hilbert space of states after completion. For each vector \(u\in V\), we have a formal series \(Y(u,z)=\sum_{n\in\mathbb{Z}}u_{n}z^{-n-1}\) with a formal variable \(z\) and linear operators \(u_{n}\) on \(V\), which corresponds to the Fourier expansion of a quantum field acting on the completion of \(V\). This correspondence from a vector to a formal series is called the state-field correspondence. We have two distinguished vectors, the vacuum vector and the Virasoro vector. The Fourier coefficients of the latter give the Virasoro algebra. The locality axiom in this setting says that for \(u,v\in V\), we have a sufficiently large positive integer \(N\) satisfying \((z-w)^{N}[Y(u,z),Y(v,w)]=0\). Roughly speaking, this means \(Y(u,z)Y(v,w)=Y(v,w)Y(u,z)\) for \(z\neq w\). The origin of this notion of a vertex operator algebra is as follows. The classical elliptic modular function \[j(\tau)=1728\frac{g_{2}(\tau)^{3}}{g_{2}(\tau)^{3}-27g_{3}(\tau)^{2}},\] where \(\mathrm{Im}\ \tau>0\) and \(g_{2}(\tau)\) and \(g_{3}(\tau)\) are defined by the Eisenstein series, has the following Fourier expansion with \(q=\exp(2\pi i\tau)\). \[j(\tau)=q^{-1}+744+196884q+21493760q^{2}+\cdots.\] McKay noticed that the coefficient \(196884\) is very close to \(196883\), which is the dimension of the lowest-dimensional non-trivial irreducible representation of the Monster group. Recall that the Monster group is the largest among the \(26\) sporadic finite simple groups in terms of its order, which is around \(8\times 10^{53}\). It turns out that we have a similar relation \(21493760=1+196883+21296876\), where \(21296876\) is the dimension of the next lowest-dimensional irreducible representation of the Monster group. Based on this and many other pieces of information on modular functions, Conway-Norton [27] made the following Moonshine conjecture. 1. There is a graded \(\mathbb{C}\)-vector space \(V=\bigoplus_{n=0}^{\infty}V_{n}\) with some algebraic structure whose automorphism group is isomorphic to the Monster group. 2. For any element \(g\) in the Monster group, \(\sum_{n=0}^{\infty}\mathrm{Tr}(g|_{V_{n}})q^{n-1}\) is the Hauptmodul for a genus \(0\) subgroup of \(SL(2,\mathbb{R})\), where \(g|_{V_{n}}\) is the linear action of an automorphism \(g\) on \(V_{n}\). Frenkel-Lepowsky-Meurman [49] gave a precise definition of a certain algebraic structure as a vertex operator algebra, constructed the Moonshine vertex operator algebra \(V^{\natural}\), and proved that its automorphism group is exactly the Monster group. They first constructed a vertex operator algebra from the Leech lattice, an exceptional \(24\)-dimensional lattice giving the densest sphere packing in dimension \(24\), and applied the twisted orbifold construction for the order two automorphism of the vertex operator algebra arising from the multiplication by \(-1\) on the Leech lattice to obtain \(V^{\natural}\). Borcherds [14] next proved the remaining part of the Moonshine conjecture. The construction of a vertex operator algebra from an even lattice has an operator algebraic counterpart for a conformal net given in Dong-Xu [29]. The operator algebraic counterpart of the Moonshine vertex operator algebra has been constructed as the Moonshine net in Kawahigashi-Longo [85]. 
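The numerology behind McKay's observation is easy to check. The sketch below (plain Python, truncating all \(q\)-expansions at a fixed order; the normalisations \(E_{4}=1+240\sum\sigma_{3}(n)q^{n}\), \(E_{6}=1-504\sum\sigma_{5}(n)q^{n}\), \(\Delta=(E_{4}^{3}-E_{6}^{2})/1728\), \(j=E_{4}^{3}/\Delta\) are the standard ones) reproduces the coefficients \(744\), \(196884\) and \(21493760\) and checks the two decompositions into dimensions of Monster representations.

```python
N = 6  # truncation order in q

def sigma(n, k):
    """Divisor power sum sigma_k(n)."""
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def mul(a, b):
    """Product of two truncated power series (lists of coefficients of q^0..q^N)."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

E4 = [1] + [240 * sigma(n, 3) for n in range(1, N + 1)]
E6 = [1] + [-504 * sigma(n, 5) for n in range(1, N + 1)]
E4cubed = mul(mul(E4, E4), E4)
Delta = [(a - b) // 1728 for a, b in zip(E4cubed, mul(E6, E6))]   # q - 24q^2 + 252q^3 - ...
assert Delta[0] == 0 and Delta[1] == 1 and Delta[2] == -24

# j = E4^3 / Delta = q^{-1} + 744 + 196884 q + 21493760 q^2 + ...
D = Delta[1:] + [0]               # Delta / q, constant term 1
inv = [1] + [0] * N               # power series inverse of D
for m in range(1, N + 1):
    inv[m] = -sum(D[k] * inv[m - k] for k in range(1, m + 1))
j = mul(E4cubed, inv)             # coefficients of q^{-1}, q^0, q^1, ...

assert j[0] == 1 and j[1] == 744 and j[2] == 196884 and j[3] == 21493760
assert 196884 == 1 + 196883                 # McKay: dim of trivial + smallest Monster irrep
assert 21493760 == 1 + 196883 + 21296876    # next coefficient, next irrep
print("first coefficients of j:", j[:5])
```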
Frenkel-Zhu gave a construction of vertex operator algebras from affine Kac-Moody and Virasoro algebras in [50], and this corresponds to the construction of conformal nets of Wassermann [125], Loke, Toledano Laredo and Verrill. We have constructions of new examples of vertex operator algebras or conformal nets from known ones as follows. 1. A tensor product 2. Coset construction 3. Orbifold construction 4. An extension using a \(Q\)-system In the operator algebraic setting, the coset construction gives a relative commutant \(A(I)^{\prime}\cap B(I)\) for an inclusion \(\{A(I)\subset B(I)\}\) of conformal nets of infinite index. The orbifold construction gives a fixed point conformal subnet given by an automorphic action of a finite group. These constructions for conformal nets have been studied by Xu in [130] and [131], respectively. The extension of a local conformal net using a \(Q\)-system was first studied by Kawahigashi-Longo in [84] for constructing exceptional conformal nets, and this was extended by Xu in [133]. The vertex operator algebra counterpart has been studied by Huang-Kirillov-Lepowsky in [63]. Xu has shown that various subfactor techniques are quite powerful even for purely algebraic problems in vertex operator algebras. From the above results, it is clear that we have close connections between conformal nets and vertex operator algebras, as expected, but it is more desirable to have a direct construction of one from the other. The relation between the two should be like the one between Lie groups and Lie algebras, and the former should be given by "exponentiating" the latter. Such a construction was first given in Carpi-Kawahigashi-Longo-Weiner [20]. That is, we have a construction of a conformal net from a vertex operator algebra with strong locality and we also recover the original vertex operator algebra from this conformal net. (Note that we obviously need unitarity for a vertex operator algebra for such construction, since we need a nice positive definite inner product on \(V\). This unitarity is a part of the strong locality assumption. There are many vertex operator algebras without unitarity, and they may be related to operator algebras through different routes such as planar algebras.) In addition to an abstract definition of strong locality, concrete sufficient conditions for this have been also given in [20]. This correspondence between vertex operator algebras and conformal nets has been vastly generalised recently in Gui [57], Raymond-Tanimoto-Tener [111] and Tener [117, 118, 119] including identification of their representation categories, and this is a highly active area of research today. Some of them started from dissertations supervised by Vaughan. ## 6. Other directions in conformal field theory The classification of subfactors with index less than 4 has an \(A\)-\(D\)-\(E\) pattern. That is, the flat connections given in Fig.1 give a complete list of hyperfinite II\({}_{1}\) subfactors with index less than 4; see the review [81]. It naturally has connections to many other topics in mathematics and physics where \(A\)-\(D\)-\(E\) patterns appear. At the index value equal to \(4\), we still have a similar \(A\)-\(D\)-\(E\) classification based on extended Dynkin diagrams due to Popa [105]. They correspond to subgroups of \(SU(2)\), and the extended Coxeter-Dynkin diagrams appear through the McKay correspondence. 
These subfactors arise as simultaneous fixed point algebras of actions of a subgroup of \(SU(2)\) on \[\mathbb{C}\otimes M_{2}(\mathbb{C})\otimes M_{2}(\mathbb{C})\otimes\cdots \subset M_{2}(\mathbb{C})\otimes M_{2}(\mathbb{C})\otimes M_{2}(\mathbb{C})\otimes\cdots\] with infinite tensor products of the adjoint actions, possibly with extra cohomological twists as in the classification of periodic actions by Connes [21] and for finite group actions by Vaughan in his thesis [68] on the hyperfinite \(\mathrm{II}_{1}\) factor. The subfactors with index less than \(4\) can be regarded as "quantum" versions of this construction. We have a quite different story above the index value \(4\). Haagerup searched for subfactors of finite depth above index value \(4\) and found several candidates of the principal graphs. The smallest index value among them is \((5+\sqrt{13})/2\) and he proved this index value is indeed attained by a subfactor, which is now called the Haagerup subfactor [4]. A similar method also produced the Asaeda-Haagerup subfactor in [4]. New constructions of the Haagerup subfactor were given in Izumi [65] and in Peters [101]. The latter is based on the planar algebra machinery, and has been extended to the construction of the extended Haagerup subfactor. Today we have a complete classification of subfactors of finite depth with index value between \(4\) and \(5\) as reviewed in [81], and we have five such subfactors (after identifying \(N\subset M\) and \(M\subset M_{1}\)): the Haagerup subfactor, the Asaeda-Haagerup subfactor, the extended Haagerup subfactor, the Goodman-de la Harpe-Jones [56] subfactor and the Izumi-Xu subfactor. The latter two are now understood as arising from conformal embeddings \(SU(2)\to E_{6}\) and \(G_{2}\to E_{6}\). If a subfactor arises from a connection on a finite graph \(\Gamma\), it may not have principal graph or standard invariant based on \(\Gamma\) as happens with \(D_{2n+1}\) or \(E_{7}\). Any graph whose norm squared is in the range \((4,5)\) but is not one of the five allowed values can only have \(A_{\infty}\) as principal graph just like what happens with \(E_{10}\) by unpublished work of Ocneanu, Haagerup and Schou. The fusion categories arising from the Haagerup subfactor do not have a braiding, but their Drinfel\({}^{\prime}\)d center always gives a modular tensor category. Izumi gave a new construction of the Haagerup subfactor and computed the \(S\)- and \(T\)-matrices of its Drinfel\({}^{\prime}\)d center in [64, 65], using endomorphisms of the Cuntz algebra. It is an important problem whether an arbitrary modular tensor category is realised as the representation category of a conformal net or not, and this particular case of the Drinfel\({}^{\prime}\)d center of the fusion category of the Haagerup has caught much attention. Note that all the known constructions [4, 65, 101] of the Haagerup subfactor are based on algebraic or combinatorial computations. There is little conceptual understanding of this subfactor and its double and it is not clear at all whether they are related to statistical mechanics or conformal field theory. Evidence in the positive direction has been given by Evans-Gannon in [36, 41]. 
They found characters for the representation of the modular group \(SL(2,\mathbb{Z})\) arising from the braiding and showing that this modular data, their \(S\) and \(T\) matrices and fusion rules have a simple expression in terms of a grafting of the double of the dihedral group \(S_{3}\) and \(SO(13)_{2}\), or indeed the orbifolds of two Potts models or quadratic (Tambara-Yamagami) systems based on \(\mathbb{Z}_{3}\times\mathbb{Z}_{3}\) and \(\mathbb{Z}_{13}\) respectively. Information about a conformal field theory from the scaling limit of a statistical mechanical model may be detected from the underlying statistical mechanical system. Cardy [18] argued from conformal invariance for a critical statistical system, that the central charge \(c\) may be computed from the asymptotics of the partition function and transfer matrices on a periodic rectangular lattice. This has been well studied for the ABF, Q-state Potts models for Q=2,3,4 and certain ice-type models; see [42, pages 453-454]. In this spirit, numerical computations have been made using transfer matrices built from associator or certain \(6j\) symbols for a Haagerup system, though not the double. These give a value of \(c=2\) (or around 2) [62, 123, 89]. However the results shown there and these methods do not show that if there is a CFT at \(c=2\) (or around 2) that it is not a known one and that if there is a CFT that its representation theory is related to the representation theory of the double of the Haagerup. Recall what we described in a preceding paragraph, that a subfactor constructed from a graph may not reproduce the graph through its invariants. The first non-trivial reconstructions of conformal field theories were achieved by Evans-Gannon for the twisted doubles of finite groups and the orbifolds of Potts models [40, 41]. Whilst von Neumann algebras and subfactors are inherently unitary, non unitary theories have been analysed by Evans-Gannon from ideas derived from subfactors. This includes the Leavitt path algebras to replace Cuntz algebras in constructing non-unitary tensor categories of algebra endomorphisms which do not necessarily preserve the \(*\)-operation [39]. These and non-unitary planar algebras could also be a vehicle to understand non-semisimple and logarithmic conformal field theories. In attempting to construct a conformal net realizing a given modular tensor category, a natural idea is to construct algebras as certain limit through finite dimensional approximations. We then use lattice approximation of the circle \(S^{1}\), but diffeomorphism symmetry is lost in this finite dimensional approximation, so it is a major problem how to recover diffeomorphism symmetry. Vaughan studied this problem, used Thompson's groups as approximations of \(\mathrm{Diff}(S^{1})\), and obtained various interesting representations of Richard Thompson's groups [15, 76, 77]. Though he proved in [77] that translation operators arising as a limit of translations for the \(n\)-chains do not extend to a translation group that is strongly continuous at the origin, these representations are interesting in their own right. The clarity of his formalism and analysis, led to concise and elegant proofs of the previously difficult facts that the Thompson groups did not have Kazhdan's property T and with his Berkeley student Arnaud Brothier [16] that the Thompson's group T did not have the Haagerup property. 
New results also followed: certain wreath products of groups have the Haagerup property, shown by taking the group of fractions of group-labelled forests. Taking a functor from binary forests to Conway tangles, replacing a fork by an elementary tangle, Vaughan could show that every link arises in this way from the fraction of a pair of forests, just as braids yield all links through taking their closures -- providing another unexpected bridge with knots and links [79]. He further studied related problems on scale invariance of transfer matrices on quantum spin chains, introduced two notions, scale invariance and weak scale invariance, and gave conditions for transfer matrices and nearest neighbour Hamiltonians to be scale invariant or weakly scale invariant [78]. _Acknowledgement_. We wish to thank Sorin Popa for discussions and comments on the manuscript and are grateful to George Elliott and Andrew Schopieray for extremely careful proofreading.
2310.04561
DragD3D: Realistic Mesh Editing with Rigidity Control Driven by 2D Diffusion Priors
Direct mesh editing and deformation are key components in the geometric modeling and animation pipeline. Mesh editing methods are typically framed as optimization problems combining user-specified vertex constraints with a regularizer that determines the position of the rest of the vertices. The choice of the regularizer is key to the realism and authenticity of the final result. Physics and geometry-based regularizers are not aware of the global context and semantics of the object, and the more recent deep learning priors are limited to a specific class of 3D object deformations. Our main contribution is a vertex-based mesh editing method called DragD3D based on (1) a novel optimization formulation that decouples the rotation and stretch components of the deformation and combines a 3D geometric regularizer with (2) the recently introduced DDS loss which scores the faithfulness of the rendered 2D image to one from a diffusion model. Thus, our deformation method achieves globally realistic shape deformation which is not restricted to any class of objects. Our new formulation optimizes directly the transformation of the neural Jacobian field explicitly separating the rotational and stretching components. The objective function of the optimization combines the approximate gradients of DDS and the gradients from the geometric loss to satisfy the vertex constraints. Additional user control over desired global shape deformation is made possible by allowing explicit per-triangle deformation control as well as explicit separation of rotational and stretching components of the deformation. We show that our deformations can be controlled to yield realistic shape deformations that are aware of the global context of the objects, and provide better results than just using geometric regularizers.
Tianhao Xie, Eugene Belilovsky, Sudhir Mudur, Tiberiu Popa
2023-10-06T19:55:40Z
http://arxiv.org/abs/2310.04561v2
# DragD3D: Vertex-based Editing for Realistic Mesh Deformations using 2D Diffusion Priors ###### Abstract Direct mesh editing and deformation are key components in the geometric modeling and animation pipeline. Direct mesh editing methods are typically framed as optimization problems combining user-specified vertex constraints with a regularizer that determines the position of the rest of the vertices. The choice of the regularizer is key to the realism and authenticity of the final result. Physics and geometry-based regularizers are not aware of the global context and semantics of the object, and the more recent deep learning priors are limited to a specific class of 3D object deformations. In this work, our main contribution is a local mesh editing method called DragD3D for global context-aware realistic deformation through direct manipulation of a few vertices. DragD3D is not restricted to any class of objects. It achieves this by combining the classic geometric ARAP (as rigid as possible) regularizer with 2D priors obtained from a large-scale diffusion model. Specifically, we render the objects from multiple viewpoints through a differentiable renderer and use the recently introduced DDS loss which scores the faithfulness of the rendered image to one from a diffusion model. DragD3D combines the approximate gradients of the DDS with gradients from the ARAP loss to modify the mesh vertices via neural Jacobian field, while also satisfying vertex constraints. We show that our deformations are realistic and aware of the global context of the objects, and provide better results than just using geometric regularizers. **CCS Concepts** \(\bullet\)**Computing methodologies \(\rightarrow\) Mesh geometry models; Machine learning;** ## 1 Introduction Geometric deformation and editing are fundamental operations in the geometric modeling pipeline that have received a lot of attention over the years. Among the many varieties of geometric representations and editing modalities, vertex-based mesh editing through direct manipulation of mesh vertices is particularly appealing for many applications and this constitutes the main focus of this work. Classical direct mesh editing methods [26, 19] employ an optimization framework where the user vertex constraints are complemented by a regularizer whose main goal is to keep the rest of the shape realistic. To accomplish this, regularizers either use elasticity or geometric priors. These regularizers try to minimize locally computed energies and they often fail for large deformations because: (1) they assume the deformation behavior is homogeneous across the object, which is not true in practice [18, 20] and (2), especially for CAD models, there are global semantic relationships between different parts that are lost when only local regularizers are considered. Methods such as iWires [17] attempted to address this issue by first analyzing the shape to extract global relationships between parts using a network of curves, but this is not a general solution. One way to address these shortcomings is to use data-driven methods. Data-driven methods use three main strategies: (1) learn the shape space of certain classes of objects and use it to guide the deformations [19, 18, 19, 20, 21, 22, 23] (2) learn the deformation behavior from a set of example deformations in a supervised manner [18, 20, 21]. The biggest challenge in these first two strategies is the reliance on real 3D data. 
While the collection of 3D datasets available for research has greatly improved in the last few years, it is minuscule compared to the richness of 2D images available; it can be argued that a large gap between these two will always exist due to the inherent challenges in 3D acquisition compared to 2D acquisition. (3) Another strategy is to rely on optimizing the shape using 2D priors obtained from large pre-trained 2D models such as CLIP [21, 22] or stable diffusion [23, 24, 25], and guide the deformation through differential rendering. This strategy has only been relatively recently explored in the context of 3D shape synthesis from a text prompt, but presently none of these can accommodate user-specified geometric constraints, and it is not obvious how to extend these methods to do so. As a result, in this category, there is very little work in the context of surface editing. In this work, we propose a mesh editing method that allows for direct editing of vertices and combines a local geometric regularizer with global guidance based on a large-scale image generative model. This novel formulation supports realistic shape deformation of dense meshes through vertex level editing of a small number of vertices. More specifically, given a number of vertex constraints, a region of influence in the model, and a very brief text description of the object (could just be the object name, like chair, car, etc.), we optimize deformation for the 3D mesh using the neural Jacobian field for mesh representation and a loss which combines user supplied vertex constraints and ARAP [10] for geometric rigidity, and a DDS score obtained from multiple renderings of the object scored against large scale 2D priors to provide the global context. Geometry changes are restricted to the region of influence. We show that our method produces realistic and meaningful deformations with just a few user constraints yielding better results than traditional methods. Further, it is not restricted to any specific class or classes of geometric shapes. Our main contribution is a direct 3D mesh editing algorithm that yields a global context-aware realistically deformed mesh by dragging just a few mesh vertices. More specific contributions are: * We are the first to integrate the requirement of satisfying user-specified geometric constraints while utilizing the global context obtained through the use of 2D priors from a large-scale generative model. * A novel loss function defined as a weighted combination of user-specified vertex constraints, ARAP, and a DDS score using the 2D prior, collectively optimizes for geometry and realism in the deformed shape. * We introduce a dynamic weighting strategy for the optimization that gradually accommodates the user constraint in a global context-aware manner. ## 2 Related Work In basic terms, geometric editing techniques can be divided into data-driven and non-data-driven methods. Non-data-driven shape deformation techniques typically frame the editing operation as an optimization problem [19] where user constraints are coupled with a regularizer that guides the rest of the shape. Shape regularizers come in two flavors: physically-based regularizers that try to minimize physically-based energy functions, often based on elastic energies [16] and geometric regularizers that try to preserve the original shape of the object and are often based on shape differential operators [26, 10, 27]. With the introduction of neural representations of geometry Yang et al. 
[18] proposed a neural implicit level set representation that supports editing operations, but the edited shapes are obtained using an optimization framework with physically-based losses and not data-driven constraints. Yuan et al. [19, 20] proposed a geometric editing method based on the NERF representation. However, the editing is performed using the classical as-rigid-as-possible (ARAP) method applied on a mesh obtained from the NERF, which is then converted back to the NERF representation afterwards. Regardless of representation, one of the shortcomings of these methods is the lack of a high-level semantic understanding of the object, so they often result in unrealistic deformations. To address such shortcomings, some papers rely on additional heuristics [17, 18], such as maintaining the proportion of geometric features, taking inspiration from CAD design, or editing the object using a set of salient curves on the object. However, a more general solution is to use a data-driven mechanism to guide the deformation. ### Data-driven Geometric Editing Data-driven geometric editing methods are tailored to the representation of the underlying geometry. Implicit neural representations such as DeepSDF [14] or NERF [15] are popular due to their generative prowess. Yumar et al. [16] proposed a method that uses an implicit occupancy grid to learn the space of shapes from a set of 3D data. Deng et al. [16] propose a method that uses a neural signed distance function representation that also learns a neural shape model from a collection of 3D objects. Both of these methods are limited to a small and specific set of objects. Liu et al. [15] introduced a method that learns a conditional radiance field over an entire object class to guide the deformation behavior. However, all the above methods are subject to the same limitation of being reliant on the availability of relevant 3D data, which is much less accessible as compared to 2D images. One way to overcome this limitation is to leverage large-scale image-based models such as CLIP [17] or stable diffusion [20]. Since models like CLIP and stable diffusion use text prompts as a controlling mechanism, they have opened the door to text-based 3D generation methods [20, 16]. For purposes of shape editing, Hyung et al. [18] propose a purely text-based editing framework using a NERF representation and Mikaieili et al. [19] propose a sketch-based editing method using a NERF representation. The latter method is close to ours in spirit in that it uses the SDS loss [20] to guide the deformation using a sketch-based interface. However, this method does not allow for direct control over the geometry, an operation that is very challenging when using an implicit representation and would also necessitate user interaction in several views. Despite the popularity of the neural implicit representations mentioned above, 3D triangular meshes are still very much in use in many real-life applications due to their simplicity, efficiency, and downstream hardware processing support through GPUs. Wang et al. [16] propose a method that trains an end-to-end network that deforms the source model to resemble the target. Because the method infers per-vertex offset displacements, it is not suited for vertex-based mesh editing, say, by specifying only a few vertex constraints. Wang et al. [16] propose a neural cage network that infers cage coordinates of the points inside. 
Both these networks are trained by combining shape-based Laplacian losses and other heuristics tailored towards generic man-made objects. Therefore, despite being neural methods, the deformation behaviour learned is not driven directly by 3D or deformation data. Early data-driven methods [19, 20] focus on learning a deformation behavior from a set of sample poses in terms of deformation gradients. More recently, Tang et al. [16] used supervision to learn the prior deformation of a specific class of objects, mostly animals, from an existing database. All of the above approaches rely not only on the availability of 3D data, but also on the availability of sample deformations of the same mesh for supervised learning. Jakab et al. [16] discover key feature points in a dataset of objects and cast the problem as transforming a source 3D object to a target 3D object from the same object category. The feature points used for deformation are not user selected. Furthermore, these mesh-based methods have similar limitations to their neural counterparts in that they rely on a 3D database of objects of a certain class. To overcome this limitation, Gao et al. [17] edit a triangular mesh using only a text prompt, via CLIP embeddings. One major challenge with the CLIP methods is that CLIP embeddings do not encode a viewing direction, resulting in ambiguities known as the Janus effect [18]. The other major challenge is that it seems difficult to accommodate the satisfaction of user-specified geometric constraints. The use of large-scale pre-trained models for 2D image editing has been explored in several works. Pan et al. [15] propose a method for point-based image editing by optimizing in the latent space of the StyleGAN2 generator [19]. Shi et al. [21] and Mou et al. [22] achieve something similar by optimizing the diffusion latent space. In [20], 3D textured meshes are generated from the learned latent space of images. Unfortunately, these methods cannot be extended to direct vertex editing of a given 3D mesh, as they are completely reliant on locating the deformed representation in the rich latent space of 2D images. Figure 2: Overview of our method. Input: The user specifies constraints (red vertices moved to new positions in blue). Our approach combines these constraints with a shape regularizer and DDS score (natural image prior) applied on multiple views to perform the mesh deformation. Output: resulting deformation. As already mentioned, we do not have the luxury of such a rich latent space for 3D shapes. In our work, we also use the latent space of images, but we use it to regularize the 3D geometry by enforcing it to result in a natural image when rendered from an arbitrary view. In summary, our main goal in this work is to provide a general mesh editing method with user-specified vertex constraints that does not involve supervision, hence does not rely on 3D data for training, and produces realistic results without being restricted to a specific class of objects. We achieve this by harnessing the rich and vast knowledge about natural and human-made objects that is represented in today's pre-trained large-scale models in the image format. Our method is described in the following section. ## 3 Method ### Overview The overview of our proposed method is shown in Figure 2. 
The user specifies: a set of mesh vertices (red) as handles to drag, paired with a corresponding set of target 3D positions (blue), an optional mask to specify the part of the mesh that is allowed to be modified, and a short (typically one word) prompt generally describing the object (e.g., a car, a chair, etc.). We optimize the modifiable part of the mesh in the gradient domain [1] guided by three losses: (1) an \(l^{2}\) distance loss on the user constraints, (2) the Delta Denoising Score (DDS) loss, which makes the deformed mesh have realistic appearances when rendered from random viewpoints and provides global guidance to our 3D model, and (3) the ARAP loss to control the local geometric behavior. The remainder of the paper is organized as follows. Sections 3.2-3.4 review the neural Jacobian fields, the space in which we do the optimization, the Delta Denoising Score, and the As-Rigid-As-Possible regularizer, respectively. Section 3.5 explains how everything fits together and provides additional implementation details. Section 4 presents our experiments and analysis of the method, including ablation studies related to our design decisions, and finally, Section 5 has conclusions, limitations, and future work. ### Neural Jacobian field The neural Jacobian field [1] operates in the intrinsic gradient domain of triangular meshes to learn highly-accurate mappings between meshes. For triangular mesh vertices \(\Phi\), the per-face Jacobian \(J_{i}\in\Re^{3\times 3}\) is defined as \[J_{i}=\Phi\nabla_{i}^{T}, \tag{1}\] where \(\nabla_{i}^{T}\) is the gradient operator of triangle \(t_{i}\). Given matrices \(M_{i}\in\Re^{3\times 3}\) for every triangle, we can compute vertex positions \(\Phi^{*}\) whose Jacobians \(J_{i}=\Phi^{*}\nabla_{i}^{T}\) are least-squares closest to \(M_{i}\) by solving the Poisson equation. The solution is obtained by solving the following linear system \[\Phi^{*}=\mathcal{L}^{-1}\mathcal{A}\nabla^{T}M, \tag{2}\] where \(\mathcal{A}\) is the mesh's mass matrix, \(\mathcal{L}\) is the mesh's Laplacian, and \(M\) is the stacking of the input matrices \(M_{i}\). (For a detailed mathematical definition of the Jacobian field, please refer to [1].) In another result of interest to this work [1], Gao et al. found that when deforming a triangular mesh with guidance from text-to-image models, such as CLIP [2], instead of working directly with vertex positions, optimizing the Jacobian field of the mesh can produce a more smoothly-deforming mesh and avoid entanglement. ### Delta Denoising Score Score Distillation Sampling (SDS) was introduced in [15] to use a \(2D\) prior to guide the synthesis of \(3D\) shapes. Delta Denoising Score (DDS) [1] extended the _SDS_ mechanism for guidance in image editing. Given an input image \(x\), a text embedding \(z\), a denoising model \(\epsilon_{\phi}\) with parameters \(\phi\), a randomly sampled time step \(t\sim\mathcal{U}(0,1)\) drawn from the uniform distribution, and noise \(\epsilon\sim\mathcal{N}(0,I)\) following a normal distribution, the weighted denoising score can be expressed as \[\mathcal{L}_{Diff}(\phi,x,z,\epsilon,t)=w(t)\|\epsilon_{\phi}(x_{t},z,t)-\epsilon\|_{2}^{2} \tag{3}\] where \(w(t)\) is a weighting function that depends on the time step \(t\), and \(x_{t}\) is \(x\) with the noise added at time step \(t\). 
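To make the Poisson solve of equation 2 concrete, the following is a minimal sketch (not the authors' released code) of recovering vertex positions from target per-face matrices. It assumes precomputed sparse operators: a stacked per-face gradient operator `grad` (shape 3F x V), diagonal per-face weights `mass` (3F x 3F), and a Laplacian `laplacian` (V x V) that has been made invertible, e.g. by pinning one vertex; all of these names are illustrative placeholders.

```python
import numpy as np
import scipy.sparse.linalg as spla

def vertices_from_jacobians(grad, mass, laplacian, M):
    """Least-squares vertex positions whose per-face Jacobians are closest to
    the target matrices M (stacked as a (3F, 3) array), following equation 2."""
    rhs = grad.T @ (mass @ M)                      # right-hand side of the Poisson system
    solve = spla.factorized(laplacian.tocsc())     # prefactorize the Laplacian for repeated solves
    phi = np.column_stack([solve(np.asarray(rhs[:, k]).ravel()) for k in range(3)])
    return phi                                     # (V, 3) vertex positions
```

In the actual pipeline, \(M\) comes from the per-face Jacobians being optimized and the solve is differentiated through, so that gradients of the losses reach the Jacobian field.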
For text-conditioned diffusion models that use classifier-free guidance [1], the guided noise prediction is expressed as \[\hat{\epsilon}_{\phi}(x_{t},z,t)=(1+w)\epsilon_{\phi}(x_{t},z,t)-w\epsilon_{\phi}(x_{t},t), \tag{4}\] which is a weighted sum of conditioned and unconditioned denoising. Thus, as shown in [15], given an arbitrary differentiable parametric function \(g_{\theta}\) that renders images, the gradient of the _SDS_ guidance is given by \[\nabla_{\theta}\mathcal{L}_{SDS}(x,z,\epsilon,t)=(\hat{\epsilon}_{\phi}(x_{t},z,t)-\epsilon)\frac{\partial x_{t}}{\partial\theta}. \tag{5}\] Using that gradient to optimize \(g_{\theta}\) can produce images that look natural. However, for image editing, [1] has shown that _SDS_ can produce non-detailed and blurry outputs due to the noisy gradient. To overcome this problem, _DDS_ was introduced; it runs two _SDS_ processes, one for the edited image and one for the reference image, and obtains the gradient as the difference of the two, \[\nabla_{\theta}\mathcal{L}_{DDS}=\nabla_{\theta}\mathcal{L}_{SDS}(x_{edit},z_{edit})-\nabla_{\theta}\mathcal{L}_{SDS}(x_{ref},z_{ref}), \tag{6}\] where \(x_{edit}\) and \(x_{ref}\) are the edited and reference images, and \(z_{edit}\) and \(z_{ref}\) are their respective text prompts. _DDS_ thus has a less noisy gradient and can produce higher-fidelity images in image editing tasks. ### As-Rigid-As-Possible Regularizer The ARAP regularizer [1] does not penalize isometric deformations, allowing local rotations, but it penalizes local stretches. More specifically: \[\mathcal{L}_{reg}=\sum_{i}\sum_{j\in N(i)}w_{ij}\|(V_{i}^{\prime}-V_{j}^{\prime})-R_{i}(V_{i}-V_{j})\|^{2}, \tag{7}\] where \(V_{i}\) is the initial vertex position, \(V_{i}^{\prime}\) is the deformed vertex position, \(N(i)\) is the set of vertices adjacent to \(V_{i}\), and \(R_{i}\) is the estimated local rotation matrix for the one ring of vertices around vertex \(i\). \(w_{ij}\) is the standard cotangent Laplacian weight [1]. At every iteration, \(R_{i}\) can be computed analytically using an SVD of the local covariance matrix [10]. ### Putting it all together: DragD3D Suppose a user constraint has start point \(c_{i}\) and target point \(c_{i}^{\prime}\); then the user constraint loss is given by \[\mathcal{L}_{user}=\sum_{i=1}^{N}\|c_{i}^{\prime}-c_{i}\|_{2}^{2}, \tag{8}\] where \(N\) is the number of user-specified handles. The \(DDS\) gradient is computed as per equation 6. In our case, suppose the undeformed mesh is \(\Phi\), the deformed mesh is \(\hat{\Phi}\), the random viewpoint is \(vp\), the differentiable renderer is \(g(\cdot)\), and the text prompt is \(z\); then the \(DDS\) score of the deformed mesh can be expressed as \[\nabla_{\hat{\Phi}}\mathcal{L}_{DDS}=\nabla_{\hat{\Phi}}\mathcal{L}_{SDS}(g(\hat{\Phi},vp),z)-\nabla_{\Phi}\mathcal{L}_{SDS}(g(\Phi,vp),z). \tag{9}\] For the denoising model \(\epsilon_{\phi}\), we use Stable Diffusion [13]. We also apply the As-Rigid-As-Possible energy on the deformed mesh as a regularizer by evaluating equation 7, to preserve the local structure of the mesh and improve the smoothness, especially when a mask is specified. 
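To make equations 4-6 and 9 concrete, the following is a minimal PyTorch-style sketch of one stochastic DDS gradient sample. It assumes an externally provided `denoise(x_t, z, t)` wrapper around the diffusion U-Net that returns predicted noise (with `z=None` giving the unconditional branch) and a precomputed cumulative noise schedule `alpha_bar`; these names are illustrative, not the paper's code.

```python
import torch

def dds_grad_step(denoise, x_edit, x_ref, z_edit, z_ref, alpha_bar, w=100.0):
    """One DDS gradient sample: the SDS residual of the edited render minus that
    of the reference render, sharing the sampled timestep and noise."""
    t = torch.randint(50, 950, (1,), device=x_edit.device)
    a = alpha_bar[t].view(1, 1, 1, 1)                    # cumulative schedule value at t
    eps = torch.randn_like(x_edit)

    def guided_residual(x, z):
        x_t = a.sqrt() * x + (1.0 - a).sqrt() * eps      # forward-diffuse the render to step t
        e_cond = denoise(x_t, z, t)
        e_uncond = denoise(x_t, None, t)
        e_hat = (1.0 + w) * e_cond - w * e_uncond        # classifier-free guidance (eq. 4)
        return e_hat - eps                               # SDS residual (eq. 5, before dx/dtheta)

    with torch.no_grad():
        grad = guided_residual(x_edit, z_edit) - guided_residual(x_ref, z_ref)
    # Chain rule through the differentiable renderer: pushing `grad` into the rendered
    # image lets autograd carry it back to the mesh / Jacobian-field parameters (eq. 9).
    x_edit.backward(gradient=grad, retain_graph=True)
```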
Overall, we optimize for \[\mathcal{L}_{total}=\lambda_{user}\mathcal{L}_{user}+\lambda_{DDS}\mathcal{L}_{DDS}+\lambda_{reg}\mathcal{L}_{reg}, \tag{10}\] where the weight of the user constraint \(\lambda_{user}\) was set to dynamically increase in a linear fashion from 1 to 50 over the whole optimization process. For DDS guidance, we use Stable Diffusion 2.1 [13] as the backbone along with the \(Perp-Neg\) algorithm [1]. The guidance scale is 100, and the gradient scale is 0.00002. The weight of the _ARAP_ regularizer can be set in the range of 0.04 to 0.2, depending on the deformation and mesh quality desired. For all examples in this paper except the \(car\), we optimize for 2000 iterations. With our prototype implementation, it takes around 40 minutes on an Nvidia RTX 3090. However, with code optimization and hyperparameter tuning, we can expect significant improvements in execution times. Since the deformation of the car is larger, we optimized for 4000 iterations. We use Adan [10] as the optimizer with a learning rate set to 0.005. As noted in [12], such a large-scale optimization problem can easily get stuck in an undesired local minimum, so in order to minimize this possibility we do the following to help the optimization process. In every iteration, four camera positions were sampled randomly with azimuth \(\in[-180^{\circ},180^{\circ}]\), elevation angle \(\in[0^{\circ},90^{\circ}]\), and camera distance \(\in[d_{0},d_{0}+2]\), where \(d_{0}\) is the camera distance used when specifying the handles. Figure 3: Gallery of results 1. ## 4 Experiments and Discussion ### Results We demonstrate the effectiveness of our method on a variety of meshes belonging to different object classes, natural and human-made. This clearly demonstrates that our editing method is not limited to any class or classes of objects. A gallery of results is shown in Figures 1, 3 and 4. In Figures 3 and 4, the left three columns are the input to the pipeline, and the right two columns are the output of the pipeline. A 360-degree view of our models as well as a time-lapse video of the optimization is provided in the accompanying video. All the meshes were rendered with and without texture to show the geometry clearly. The meshes were obtained from TurboSquid, Thingi10K [2], and TEXTure [2]. For the meshes without texture (car, cat, chair, spaceship, castle, and snow monster), we generated the texture using [2]. As can be seen from all result figures, the deformation results are realistic while satisfying the user constraints and, as shown in Figure 5, clearly outperform the results from pure geometric deformations that are not aware of the global structure of the object. ### Comparison with optimization-based methods Our main goal is to provide a general mesh editing method that provides realistic results by leveraging the large-scale 2D models developed with millions of images of vast classes of objects from multiple views, unlike many other methods, which can do this only for a specific class of objects or rely on a given set of deformations to provide guidance. Therefore, our comparison is with a method that does not use any data-driven guidance for the deformations. There are a multitude of such methods; we refer to two state-of-the-art reviews [1, 2]. For practical reasons (i.e., non-availability of code, no clear winner in terms of deformation quality), from this multitude of methods we pick as-rigid-as-possible (ARAP) as a representative of the optimization-based class of methods [2]. 
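As a summary of the optimization procedure of Section 3.5, the sketch below wires equation 10 together with the linearly increasing user-constraint weight and the random camera sampling described above; `render`, `user_loss`, `arap_loss`, and `dds_backward` are assumed helper functions standing in for the real components, so this is an illustration of the schedule rather than the released implementation.

```python
import random

def optimization_step(step, n_steps, jacobians, optimizer, d0,
                      render, user_loss, arap_loss, dds_backward,
                      lambda_reg=0.1):
    lambda_user = 1.0 + (50.0 - 1.0) * step / max(n_steps - 1, 1)   # linear ramp 1 -> 50
    optimizer.zero_grad()

    # Geometric terms of equation 10 (the DDS term is accumulated below).
    loss = lambda_user * user_loss(jacobians) + lambda_reg * arap_loss(jacobians)
    loss.backward(retain_graph=True)

    for _ in range(4):                                   # four random views per iteration
        azimuth = random.uniform(-180.0, 180.0)
        elevation = random.uniform(0.0, 90.0)
        distance = random.uniform(d0, d0 + 2.0)
        image = render(jacobians, azimuth, elevation, distance)
        dds_backward(image)                              # accumulates the DDS gradients (eq. 9)

    optimizer.step()                                     # Adan update on the Jacobian field
```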
In Figure 5, we compare our method with the improved version of As-Rigid-As-Possible (ARAP) [3], as implemented in libigl [1]. As noted in [2], the ARAP deformation model is representative of the optimization-based methods. Although the ARAP energy is efficient to compute, it introduces some artifacts, e.g. bending, and is not able to deform the mesh symmetrically when the handle is not symmetric, which makes the deformed shape look unnatural. For instance, in Figure 5, \((a)\), the car roof was only lifted on the handle side non-smoothly; \((c)\), the nose of the cat was bent and the mouth was stretched; \((e)\), only the areas around the handles were lifted, and the chair top and the armrest were bent; and \((g)\), the trunk of the elephant was bent and twisted unnaturally. On the other hand, our method can mitigate problems, such as bending and tilting, produced by the _ARAP_ energy, and, as a result, yields far more realistic deformations. For example, in Figure 5, \((b)\), the roof of the car was lifted on both sides, and all the other parts of the car were also deformed to make it look natural; \((d)\), the cat's nose and mouth are deformed without bending and stretching, and the left ear isn't notched for the silhouette; \((f)\), the top of the chair was deformed to a headrest which has a small arc and the armrest was kept horizontal; \((h)\), rather than only deforming the trunk as in \((g)\), the head of the elephant was also lifted, making the deformation look more realistic. Figure 4: Gallery of results 2. ### Design decisions **Weighting of ARAP vs. DDS**. One way to understand the effects of these two losses is to look at them as complementing each other. The DDS loss is very noisy and affects the low-frequency part of the shape, while the ARAP loss better regularizes the high-frequency part of the shape. Therefore the ARAP weight balances the local shape behaviour against the global semantics of the object, and the choice depends on the magnitude of the deformation. Large deformations have the benefit of a larger influence of the global DDS. Therefore a smaller ARAP loss works better. Figure 6 illustrates the influence of the weight. The top result is better with a higher ARAP weight (Figure 6-top-a) to keep the geometry locally consistent, as small dents like those shown in Figure 6-top-b are not captured by the noisy DDS loss. Figure 6-bottom shows a result that works better if we allow a larger influence of the DDS loss (Figure 6-bottom-b), which translates into a low ARAP weight. A high ARAP loss will result in unwanted distortions on the face (Figure 6-bottom-a). **Dynamic User Loss** Given that the user-specified constraint is the more important one to be satisfied in local mesh editing, the user constraint needs to be appropriately weighted in our loss function. If a very large weight value is set for this from the very beginning, then \(\mathcal{L}_{user}\) will dominate the gradient and reduce the effectiveness of the other losses (i.e., the _ARAP_ regularizer and DDS guidance). This, in turn, will lead to drastic deformation from the beginning. As shown in Figure 8\((a)\), in the early stages of the optimization, this drastic deformation can produce artifacts and a non-smooth mesh which are then difficult to remedy by the _ARAP_ regularizer and DDS guidance in later optimization stages. The dynamic weighting approach we have introduced helps mitigate this problem, as can be seen in Figure 8\((b)\). 
**Ablation Studies.** We perform various experiments to show the effectiveness of the Jacobian fields, the _ARAP_ regularizer, random camera views, guidance, texture, and the dynamic weight strategy. As can be seen in Figure 7, if we optimize for vertex displacement directly, the mesh structure cannot be kept. Without the _ARAP_ regularizer, the local structure of the connection between the masked and unmasked areas cannot be maintained, and the mesh is not as smooth. If the camera was fixed on four canonical views (front, back, and sides), without any change in the elevation angle, the deformation overfitted to these four views, which produced a non-smooth deformation. As shown in Figure 7, if we replace the _DDS_ guidance with the _SDS_ guidance in our pipeline, due to the noisy gradient of _SDS_, the edited mesh is less smooth and has more distortion. Without texture, the guidance of the 2D diffusion priors is not as efficient as before, which makes for lower-quality editing. What's more, without dynamic weighting for the user constraint, we observed that the user constraint loss decreased so fast that the mesh was deformed in a very distorted way, and this made the DDS guidance incapable of remedying the distortion. Figure 5: Comparison with ARAP shows the more natural results of DragD3D compared to ARAP: symmetric deformation of the car even when only one anchor is used, detail preservation of the cat's face, a more natural extension of the back of the chair and a more natural deformation of the elephant's trunk. Figure 6: Effectiveness of the ARAP regularizer's weight \(\lambda_{\text{reg}}\). (a) \(\lambda_{\text{reg}}=0.2\), (b) \(\lambda_{\text{reg}}=0.04\). Lastly, we test dependence on the precision of the text prompt given by the user. As Figure 9 shows, the description does not need to be very precise, as long as it matches the object. This is clear as the prompt is used only in the DDS loss and primarily serves to provide the global context for this object's shape. ## 5 Conclusions, Limitations and Future work Dragging 3D mesh vertices in space has been used in practice to provide the fine control that designers seek in shape design. For dense meshes this poses the problem of having to automatically determine how all other vertices should change. Pre-trained large-scale image models incorporate vast knowledge about the appearance of shapes in the real world. Recovering very specific shapes from this generalized knowledge is very challenging. Tapping into this knowledge for fine-level mesh editing is the challenge we have successfully addressed in this work. Our method has a few limitations. As with most mesh-based methods, the quality of the triangulation can affect the result. The run-time of the algorithm is large compared to traditional mesh deformation methods, but is still aligned with other optimizations based on similar kinds of losses. In future work, we will look at ways to speed up the optimization. Our deformation method requires a simple prompt to accompany the rest of the constraints. This is easy to provide, and in the future we could use automatic prompting [12]. Our method still has some issues dealing with very large deformations. The DDS loss is very noisy and is not sensitive to small geometric artifacts. These artifacts are usually corrected by the ARAP loss, but large deformations rely more on the DDS loss and as a consequence may experience geometric artifacts.
2302.12112
Symmetries of structures that fail to interpret something finite
We investigate structural implications arising from the condition that a given directed graph does not interpret, in the sense of primitive positive interpretation with parameters or orbits, every finite structure. Our results generalize several theorems from the literature and yield further algebraic invariance properties that must be satisfied in every such graph. Algebraic properties of this kind are tightly connected to the tractability of constraint satisfaction problems, and we obtain new such properties even for infinite countably categorical graphs. We balance these positive results by showing the existence of a countably categorical hypergraph that fails to interpret some finite structure, while still lacking some of the most essential algebraic invariance properties known to hold for finite structures.
Libor Barto, Bertalan Bodor, Marcin Kozik, Antoine Mottet, Michael Pinsker
2023-02-23T15:51:07Z
http://arxiv.org/abs/2302.12112v1
# Symmetries of structures ###### Abstract We investigate structural implications arising from the condition that a given directed graph does not interpret, in the sense of primitive positive interpretation with parameters or orbits, every finite structure. Our results generalize several theorems from the literature and yield further algebraic invariance properties that must be satisfied in every such graph. Algebraic properties of this kind are tightly connected to the tractability of constraint satisfaction problems, and we obtain new such properties even for infinite countably categorical graphs. We balance these positive results by showing the existence of a countably categorical hypergraph that fails to interpret some finite structure, while still lacking some of the most essential algebraic invariance properties known to hold for finite structures. ## I Introduction ### _The story_ A major milestone in the theory of Constraint Satisfaction Problems (CSPs) was a theorem due to Hell and Nesetril [1] which established a P/NP-complete complexity dichotomy for the computational problem of \(H\)-coloring of undirected graphs. Almost two decades later, Barto, Kozik and Niven [2] extended the dichotomy result to finite directed graphs with no sources and no sinks. Both results were subsumed by the CSP dichotomy theorem proven by Bulatov and Zhuk [3, 4], which showed the same dichotomy holds for arbitrary finite directed graphs, or equivalently, arbitrary finite structures. Both of the earlier results yield not only a complexity dichotomy, but a structural dichotomy for the appropriate class of graphs, and derive the computational result as a direct consequence. In the case of the theorem of Hell and Nesetril, the structural result can be stated as follows: any finite, undirected, loopless graph is either bipartite, in which case the associated \(H\)-coloring problem is essentially just the \(2\)-coloring problem, or it _pp-constructs_ (in the sense of [5]) the \(3\)-element clique \(K_{3}\), and the \(H\)-coloring problem is as hard as \(3\)-coloring. If the graph is a core, i.e., the smallest template for its \(H\)-coloring problem, then it is either a single edge and defines precisely the \(2\)-coloring problem, or it _pp-interprets_\(K_{3}\), and consequently any finite structure, with parameters, providing a tangible witness of hardness. By an observation of Siggers [6], the latter structural dichotomy for graphs also implies an algebraic invariance property for any finite core structure that fails to pp-interpret \(K_{3}\) with parameters; similarly, the result of Barto, Kozik and Niven and its various extensions provide further such invariance properties, of which in particular the so-called weak near-unanimity (WNU) polymorphisms play a crucial role in Zhuk's proof of the CSP dichotomy theorem. While the proof of Hell and Nesetril is purely combinatorial in nature, the generalization of Barto, Kozik and Niven relies on the machinery developed in the context of the algebraic approach to CSP. The elegance of this approach comes, so it seems, at the cost of difficulties to generalize it to infinite countably categorical graphs. 
In fact, in the long line of research devoted to extending the CSP dichotomy theorem to the class of CSPs defined by first-order reducts of finitely bounded homogeneous structures (an important natural subclass of the class of \(\omega\)-categorical structures, see [7, 8]), the only successful attempt so far to obtain structural dichotomies for \(\omega\)-categorical graphs similar to the ones mentioned above is due to Barto and Pinsker [9]. Their approach builds on a streamlined version, due to Bulatov [10], of the proof of Hell and Nesetril, and shows that any \(\omega\)-categorical graph which contains \(K_{3}\) and which has no edge within an orbit of its automorphism group (a _pseudoloop_) pp-interprets, together with the orbits of this group and parameters, the clique \(K_{3}\). This result yields a non-trivial algebraic invariance property for any \(\omega\)-categorical structure which is a model-complete core and which fails to pp-interpret \(K_{3}\) with parameters, and this property separates precisely tractable from intractable CSPs in the realm of first-order reducts of finitely bounded homogeneous structures according to a conjecture of Bodirsky and Pinsker [11]. However, all attempts at a generalization of the more general, algebraic proof due to Barto, Kozik and Niven, not to mention the more advanced algebraic results available for general finite structures, have failed. The reason for that seems to be very elementary: it is much easier to extend even very complicated combinatorial constructions than to lift the algebraic notions tailored for finite algebras. As a result, no algebraic invariance properties except for the one obtained by Barto and Pinsker are known for the templates conjectured to have tractable CSPs. In order to overcome this obstacle, the following two-step approach is natural: 1. provide combinatorial proofs of the known finite structural dichotomies; 2. lift the proofs to the countably categorical setting. Upon closer inspection of step (ii), the following obvious difference between finite and infinite structures comes into focus. In the former, one can use _all_ elements of the structure as parameters in a single pp-interpretation, a standard trick applied throughout the entire theory. On the other hand, countably categorical structures have finitely many tuples of fixed finite length only _up to automorphisms_ (i.e., the number of orbits of their automorphism group acting on tuples is finite). Therefore generalizing results using orbits, rather than elements, as parameters seems more natural. However, no results in this direction are known even for finite structures, adding another necessary step to the plan: 3. provide stronger finite dichotomies which use orbits of a permutation group rather than elements as parameters in an interpretation. ### _Our contributions_ We achieve combinatorial results in all three directions, and moreover contrast these with strong evidence that the algebraic methods from the finite do not lift to general countably categorical structures. #### Ii-B1 Barto, Kozik and Niven, revisited Our first contribution is a new, purely combinatorial proof of the dichotomy for finite digraphs with no sources and no sinks, thus reproving the result of Barto, Kozik and Niven in the spirit of (i) above. In fact we provide two generalizations of the result, both for finite directed graphs. 
In our first generalization, we obtain a pp-interpretation with the orbits of a subgroup of the automorphism group of the digraph, achieving (iii) for the theorem of Barto, Kozik and Niven. **Theorem 1**.: _Let \(\mathfrak{A}\) be a finite smooth digraph, and let \(\mathcal{G}\) be a subgroup of its automorphism group \(\mathrm{Aut}(\mathfrak{A})\). If \(\mathfrak{A}\) is linked and without a loop, then \(\mathfrak{A}\) expanded by orbits of \(\mathcal{G}\) pp-interprets \(K_{3}\) and hence EVERYTHING, i.e., every finite structure._ As an immediate consequence of this theorem, we obtain for the first time a specific algebraic invariance property for any finite core structure which satisfies _any_ non-trivial algebraic invariance property, see Theorem 9. This result is a generalization of Siggers' result from [6], which requires the structure to contain all singleton relations. In the second generalization of Barto, Kozik, and Niven's result we are able to replace some of the assumptions on the digraph by "pseudo-assumptions", i.e., assumptions on the graph after factorization by orbits of a subgroup of its automorphism group, making the proof amenable to the infinite setting. **Theorem 2**.: _Let \(\mathfrak{A}\) be a finite smooth digraph, and let \(\mathcal{G}\) be a subgroup of its automorphism group \(\mathrm{Aut}(\mathfrak{A})\). If the factor graph \(\mathfrak{A}/\mathcal{G}\) has algebraic length 1 and no loop, then \(\mathfrak{A}\) pp-interprets with parameters EVERYTHING._ We remark that these two theorems are both consequences of a more general theorem we prove, Theorem 7. #### Ii-B2 Hell and Nesetril, revisited Our second main contribution is lifting the result of Hell and Nesetril to the countably categorical case: We show that if a graph without sources and sinks is such that the factor graph on the orbits of a subgroup of its automorphism group satisfies the assumptions in the theorem of Hell and Nesetril, then the graph pp-interprets, using parameters and the orbits of the group, all finite structures. This achieves (ii) above for this theorem. **Theorem 3**.: _Let \(\mathfrak{A}\) be a smooth digraph, and let \(\mathcal{G}\) be a subgroup of its automorphism group \(\mathrm{Aut}(\mathfrak{A})\). Assume that \(\mathcal{G}\) has only finitely many orbits in its action on pairs. If \(\mathfrak{A}/\mathcal{G}\) is symmetric, loopless, and not bipartite, then \(\mathfrak{A}\) expanded by the orbits of \(\mathcal{G}\) pp-interprets with parameters EVERYTHING._ This result, applied to a finite graph and the trivial group \(\mathcal{G}\), provides a new and purely relational proof of the theorem of Hell and Nesetril. It also implies a variety of algebraic invariance properties for any \(\omega\)-categorical structure that is a model-complete core and fails to pp-interpret \(K_{3}\) with parameters, thereby answering an open problem in [12, Section 5.3], vastly generalizing the result of Barto and Pinsker, and ending the long and dark period of the uniqueness of their result - see Theorem 25. #### Ii-B3 Maroti and McKenzie, revisited In our third main contribution, we go beyond the realm of graphs and consider hypergraphs. The above-mentioned algebraic invariance properties take the form of _polymorphisms_, i.e., multivariate functions on the domain of the structure which leave the structure invariant, satisfying non-trivial _identities_. 
The identities which have proven to be the most directly applicable to questions of computational complexity of the corresponding CSP generally stem from hypergraphs rather than graphs; in particular, this is true for the weak near-unanimity (WNU) polymorphisms used in Zhuk's proof of the CSP dichotomy theorem. The latter were shown to exist for any finite core structure not pp-interpreting \(K_{3}\) with parameters, using the machinery of finite algebras, by Maroti and McKenzie [13]; further algebraic proofs were given by Barto and Kozik [14] as well as Zhuk [15]. We show that similar polymorphisms need not exist for \(\omega\)-categorical structures under the same conditions. In fact, we obtain a much more general result, Theorem 40: any algebraic invariance property which is a countably infinite disjunction of statements asserting the satisfaction of identities by polymorphisms, and with the property that no single member of the disjunction is implied for finite core structures which do not pp-interpret \(K_{3}\) with parameters, can be avoided by an \(\omega\)-categorical hypergraph which does not pp-interpret \(K_{3}\) with parameters. The existence of a weak near-unanimity (WNU) polymorphism is such a condition since, although any finite core structure not pp-interpreting \(K_{3}\) with parameters has a WNU polymorphism, the arity of the WNU polymorphism varies for different finite structures. Our theorem solves, in particular, [7, Problem 14.2.6 (21)]. It provides evidence that the full algebraic machinery available for finite structures does not lift to the \(\omega\)-categorical setting, and indicates that finer methods, as presented e.g. in [16], have to be developed for the narrower context of first-order reducts of finitely bounded homogeneous structures in order to obtain the same algebraic invariance properties as in the finite. **Theorem 4**.: _There exists a hypergraph \(\mathfrak{A}\) with the following properties:_ * \(\mathfrak{A}\) _is_ \(\omega\)_-categorical;_ * \(\mathfrak{A}\) _has no pseudo-WNU polymorphisms;_ * \(\mathfrak{A}\) _expanded by the orbits of its automorphism group does NOT pp-interpret with parameters EVERYTHING._ ## II Preliminaries ### _Relations_ Our terminology for relational structures is fairly standard and we often call them just structures. We abuse notation by using the same name for a relational symbol and its interpretation in a structure; \(R(a_{1},\ldots,a_{n})\) typically means that \((a_{1},\ldots,a_{n})\) is in the interpretation of \(R\) in the given structure (which should always be clear from the context), while \(R(x_{1},\ldots,x_{n})\) is an atomic formula with free variables \(x_{1},\ldots,x_{n}\). A _digraph_ is a structure with a single binary relation that we usually denote by \(\rightarrow\); its inverse is denoted by \(\leftarrow\). A _graph_ is a digraph that is symmetric, i.e., \(\rightarrow=\leftarrow\). A _hypergraph_ is a structure with a single relation. The induced substructure of \(\mathfrak{A}\) on a subset \(B\) is denoted \(\mathfrak{A}|_{B}\); the quotient structure modulo an equivalence \(\sim\) on the domain is denoted \(\mathfrak{A}/\!\!\sim\). For instance, the quotient of a digraph \((A;\rightarrow)\) is the digraph \((A/\!\!\sim,\{([a]_{\sim},[b]_{\sim}):a\to b\})\). A first-order formula \(\varphi\) is _primitive positive_ (pp, for short) if it consists of existential quantifiers, conjunctions, and atomic formulas only. 
We say that \(\mathfrak{A}\) pp-defines \(\mathfrak{B}\) (or \(\mathfrak{B}\) is pp-definable in \(\mathfrak{A}\)) if every relation in \(\mathfrak{B}\) can be defined by a primitive positive formula over \(\mathfrak{A}\). The same terminology is used for sets of relations. A _pp-interpretation_ of a structure \(\mathfrak{B}\) in \(\mathfrak{A}\) consists of a partial surjective map \(h\colon A^{d}\to B\) for some \(d\geq 1\) such that for every \(R\subseteq B^{n}\) that is either \(B\), or the equality relation on \(B\), or a relation of \(\mathfrak{B}\), the preimage \(h^{-1}(R)\) seen as a relation of arity \(nd\) on \(A\) has a pp-definition in \(\mathfrak{A}\). The integer \(d\) is called the _dimension_ of the interpretation. We say that \(\mathfrak{A}\) pp-interprets/pp-defines \(\mathfrak{B}\) _with parameters_ if \(\mathfrak{A}\) expanded by unary relations \(\{a\}\) (where \(a\) is in the domain of \(\mathfrak{A}\)) pp-interprets/pp-defines \(\mathfrak{B}\). It is a classical fact that the 3-element clique \(K_{3}\) pp-interprets every finite structure with parameters. An \(n\)-ary relation \(R\subseteq A^{n}\) is _subdirect_ in \(A\) if its projection to any coordinate is equal to \(A\). For two binary relations \(R\) and \(S\) on \(A\), we write \(R+S\) for the composition of \(R\) and \(S\) pp-defined by \[(R+S)(x,z)\equiv\exists y\ R(x,y)\wedge S(y,z).\] Accordingly, we write \(nR\) for the \(n\)-fold composition of \(R\) with itself. For a binary \(R\) and unary \(B\) on \(A\) we also use \(B+R\) pp-defined by \[(B+R)(y)\equiv\exists x\ B(x)\wedge R(x,y).\] For a binary relation \(\rightarrow\) of a digraph, we use \(B^{\rightarrow}\) instead of \(B+\rightarrow\), for better readability. For two relations \(R,S\) on \(A\) with arities \(n,m\) respectively we define an \((n+m)\)-ary relation \(\mathrm{OR}(R,S)\) by \(\mathrm{OR}(R,S)(a_{1},\ldots,a_{n},b_{1},\ldots,b_{m})\) if \(R(a_{1},\ldots,a_{n})\) or \(S(b_{1},\ldots,b_{m})\). Note that this _is not_ a pp-definition from \(R\) and \(S\). In our proofs of Theorem 1 and Theorem 2, we will produce a pp-definition of \(\mathrm{OR}(\alpha,\alpha)\) for a nontrivial equivalence relation \(\alpha\). The following folklore observation (see Appendix A) will finish the proofs. **Proposition 5**.: _Let \(\mathfrak{A}\) be a finite core structure containing \(\mathrm{OR}(\alpha,\alpha)\) for a proper equivalence relation \(\alpha\) on some \(B\subseteq A\). Then \(\mathfrak{A}\) pp-interprets every finite structure._ ### _Connectivity notions for digraphs_ Let \(\mathfrak{A}=(A;\rightarrow)\) be a digraph. We say that \(\mathfrak{A}\) is _smooth_ if \(\rightarrow\) is a subdirect relation on \(A\), i.e., \(A^{\rightarrow}=A^{\leftarrow}=A\). A _walk_ in \(\mathfrak{A}\) is a sequence \(a_{1}\ \epsilon_{1}\ a_{2}\ \epsilon_{2}\ldots\epsilon_{n-1}\ a_{n}\), where \(a_{i}\in A\) and each \(\epsilon_{i}\) is either \(\rightarrow\) or \(\leftarrow\). The _algebraic length_ of such a walk is the number of forward arrows minus the number of backward arrows. We say that \(\mathfrak{A}\) has _algebraic length 1_ if there exists a closed walk (i.e., \(a_{1}=a_{n}\)) of algebraic length 1. A digraph is _weakly connected_ if there exists a walk between any two vertices. Weak components are defined accordingly as maximal induced subdigraphs that are weakly connected (or, abusing notation, the corresponding subsets of \(A\)). 
The \(k\)-fold composition of \(\rightarrow\) with itself is pp-defined by \[x(k\rightarrow)y\equiv\exists z_{1},\ldots,z_{k-1}\ x\to z_{1}\rightarrow\cdots\to z_{k-1}\to y.\] Note that if \(\rightarrow\) is subdirect in \(A\), then so is \(k\rightarrow\) for any \(k\). The _link relation_ for \(\mathfrak{A}\) (or \(\rightarrow\)) is defined as \(L_{\rightarrow}(x,y)\equiv\exists z(x\to z\wedge y\to z)\). Note that \(L_{\rightarrow}\) is always symmetric, and if \(\rightarrow\) is subdirect, then it is also reflexive. The transitive closure of \(L_{\rightarrow}\) is called the _linkness equivalence_ associated with \(\rightarrow\). We call \(\mathfrak{A}\) _linked_ if its linkness equivalence equals \(A^{2}\). The _\(k\)-link relation_ for \(\mathfrak{A}\) is \(L_{(k\rightarrow)}\), its transitive closure is the _\(k\)-linkness equivalence_, and \(\mathfrak{A}\) is _\(k\)-linked_ if \(k\rightarrow\) is linked. Since the link relation is reflexive for smooth \(\mathfrak{A}\), its transitive closure can be pp-defined if \(A\) is finite, namely by the formula \[\exists z_{1},\ldots,z_{|A|}\ L_{\rightarrow}(x,z_{1})\wedge L_{\rightarrow}(z_{1},z_{2})\wedge\ldots\wedge L_{\rightarrow}(z_{|A|},y).\] The same formula then also works for \((k\rightarrow)\) in place of \(\rightarrow\). Note that a finite digraph \(\mathfrak{A}\) is \(k\)-linked for some \(k\) iff it is weakly connected and has algebraic length 1 (see, e.g., [14, Claim 3.8]). It also follows that a weak component of algebraic length 1 is pp-definable with parameters from \(\rightarrow\). A graph is \(k\)-linked for some \(k\) iff it is (weakly) connected and non-bipartite. ### _Groups, orbits, \(\omega\)-categoricity, model-complete cores_ Let \(\mathcal{G}\) be a permutation group acting on a set \(A\). For \(n\geq 1\), an equivalence relation \(\sim_{\mathcal{G}}\) on \(A^{n}\) is defined by letting two \(n\)-tuples be equivalent if they lie in the same orbit of the componentwise action of \(\mathcal{G}\) on \(A^{n}\); the classes of \(\sim_{\mathcal{G}}\) are the orbits of this action. A countable structure is \(\omega\)-categorical if and only if its automorphism group is oligomorphic, i.e., has only finitely many orbits in its action on \(A^{n}\) for every \(n\). For a digraph \(\mathfrak{A}=(A,\rightarrow)\) and a group \(\mathcal{G}\) acting on \(A\), we write \(\mathfrak{A}/\mathcal{G}\) instead of \(\mathfrak{A}/\!\sim_{\mathcal{G}}\) (where the equivalence is for \(n=1\)). An \(\omega\)-categorical structure \(\mathfrak{A}\) is a _model-complete core_ if for every endomorphism \(e\) of \(\mathfrak{A}\) and finite subset \(F\) of \(A\), there exists an automorphism \(\alpha\) of \(\mathfrak{A}\) such that \(e\) and \(\alpha\) coincide on \(F\). A finite model-complete core is simply called a _core_. Every \(\omega\)-categorical structure \(\mathfrak{A}\) has an induced substructure which is a model-complete core and which admits a homomorphism from \(\mathfrak{A}\) [17]. This structure is unique up to isomorphism, and its isomorphism type is called the _model-complete core of \(\mathfrak{A}\)_. ### _Polymorphisms and identities_ A relation \(R\subseteq A^{n}\) on \(A\) is _invariant_ under an operation \(f\colon A^{m}\to A\) if for all \(r_{1},\ldots,r_{m}\in R\), the \(n\)-tuple \(f(r_{1},\ldots,r_{m})\) obtained by applying \(f\) componentwise on \(r_{1},\ldots,r_{m}\) is also in \(R\). We say that an operation \(f\) on \(A\) is a _polymorphism_ of a structure \(\mathfrak{A}\) with domain \(A\) if each relation of \(\mathfrak{A}\) is invariant under \(f\). The set of all polymorphisms of \(\mathfrak{A}\) is denoted \(\operatorname{Pol}(\mathfrak{A})\). Sets of polymorphisms are so-called _clones_, and there is a tight connection between polymorphism clones of structures and their pp-definability and pp-interpretability strength; however, we do not expand on these in this paper and refer to [18, 19, 20, 9]. 
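For instance, on the two-element domain \(\{0,1\}\), the binary operation \(\min\) is a polymorphism of the order relation \(\{(0,0),(0,1),(1,1)\}\): if \(a_{1}\leq b_{1}\) and \(a_{2}\leq b_{2}\), then \(\min(a_{1},a_{2})\leq\min(b_{1},b_{2})\). It is not a polymorphism of the relation \(\{(0,1),(1,0)\}\), since applying \(\min\) componentwise to \((0,1)\) and \((1,0)\) yields \((0,0)\), which does not belong to the relation.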
An _equational condition_ is a system of identities - formal expressions of the form \(s\approx t\), where \(s\) and \(t\) are terms over a common set of function symbols. We say that \(\mathfrak{A}\) or \(\operatorname{Pol}(\mathfrak{A})\) (or some set of operations on a common domain) satisfies an equational condition \(\Sigma\) if the function symbols can be interpreted as members of \(\operatorname{Pol}(\mathfrak{A})\) so that, for each \(s\approx t\) in \(\Sigma\), the equality \(s=t\) holds for any evaluation of variables. An example is the _weak near-unanimity (WNU) condition of arity \(n\)_ given by \(w(x,\ldots,x,y)\approx w(x,\ldots,x,y,x)\approx\cdots\approx w(y,x,\ldots,x)\), for a symbol \(w\) of arity \(n\). It is satisfied in \(\operatorname{Pol}(\mathfrak{A})\) if \(\mathfrak{A}\) has an \(n\)-ary polymorphism \(w\) such that \(w(x,\ldots,x,y)=\ldots=w(y,x,\ldots,x)\) for all \(x,y\) in the domain; such a polymorphism is then called a WNU polymorphism. Similarly, the _pseudo-WNU condition_ is given by \(u_{1}(w(x,\ldots,x,y))\approx u_{2}(w(x,\ldots,x,y,x))\approx\cdots\approx u_{n}(w(y,x,\ldots,x))\), where the \(u_{i}\) are unary symbols. Another example is the _Siggers condition_ \(s(x,y,x,z,y,z)\approx s(y,x,z,x,z,y)\) and its pseudo-version \(u(s(x,y,x,z,y,z))\approx v(s(y,x,z,x,z,y))\). An equational condition \(\Sigma\) is called a _minor condition_ if for each identity \(s\approx t\) in it, both \(s\) and \(t\) contain exactly one occurrence of a function symbol. For example, the WNU condition is minor, while pseudo-WNU is not. An equational condition is _balanced_ if in every \(s\approx t\), the same variables appear on the left- and right-hand side of the identity; the examples above are such. Finally, an equational condition is _idempotent_ if it entails \(t(x,x,\ldots,x)\approx x\) for every function symbol \(t\). E.g., the _idempotent WNU_ condition \(w(x,\ldots,x,y)=\ldots=w(y,x,\ldots,x),w(x,x,\ldots,x)\approx x\) is idempotent but not minor. An equational condition is _trivial_ if it is satisfied in every polymorphism clone. It is a folklore fact that if a structure pp-interprets \(K_{3}\), then any equational condition it satisfies is trivial, and if it pp-interprets \(K_{3}\) with parameters, then any idempotent condition it satisfies is trivial. In our proofs we consider _ranked_ relations, in which every coordinate of a relation is assigned an integer rank; in particular, \(\rightarrow_{01}\) denotes \(\rightarrow\) with its two coordinates ranked \(0\) and \(1\), respectively. A _ranked homomorphism_ from \(\mathfrak{A}\) to \(\mathfrak{B}\) is a family \((f_{i})_{i\in\mathbb{Z}}\) of maps such that whenever \(R(a_{1},\ldots,a_{n})\) holds in \(\mathfrak{A}\) for a ranked relation \(R\) whose coordinates carry ranks \(r(1),\ldots,r(n)\), we have \(R(f_{r(1)}(a_{1}),\ldots,f_{r(n)}(a_{n}))\) in \(\mathfrak{B}\). Note that this property extends to rpp-definable relations; this is essentially the reason why the following lemma works. The proof is in Appendix A. **Lemma 6**.: _Let \(\mathfrak{A}=(A;\rightarrow)\) be a digraph, \(g\in\operatorname{Aut}(\mathfrak{A})\), and \(S\) a relation on \(A\). Let \(\rightarrow^{\prime}=\rightarrow+g\). 
If \(\rightarrow^{\prime}_{01}\) rpp-defines with parameters 0-ranked \(S\), then \(\rightarrow\) pp-defines with parameters \(S\)._ Note that the set of ranked homomorphisms is closed under shifts, i.e., if \((f_{i})_{i\in\mathbb{Z}}\) is a ranked homomorphism from \(\mathfrak{A}\) to \(\mathfrak{B}\), then so is \((f_{i+k})_{i\in\mathbb{Z}}\), for any \(k\in\mathbb{Z}\). Moreover, ranked homomorphisms are closed under composition, i.e., if \((g_{i})_{i\in\mathbb{Z}}\) is a ranked homomorphism from \(\mathfrak{B}\) to \(\mathfrak{C}\), then \((g_{i}\circ f_{i})_{i\in\mathbb{Z}}\) is a ranked homomorphism from \(\mathfrak{A}\) to \(\mathfrak{C}\). In particular, the set of _ranked automorphisms_ of \(\mathfrak{A}\) (i.e., invertible ranked homomorphisms from \(\mathfrak{A}\) to \(\mathfrak{A}\)) forms a group, the _ranked automorphism group of \(\mathfrak{A}\)_. By the _projection_ of a ranked automorphism group \(\mathcal{H}\), we mean the group \(\{f_{0}:(f_{i})_{i\in\mathbb{Z}}\in\mathcal{H}\}\). It is equal to \(\{f_{j}:(f_{i})_{i\in\mathbb{Z}}\in\mathcal{H}\}\) for any \(j\). We are ready to state the main result for finite digraphs. **Theorem 7**.: _Let \(\mathfrak{A}=(A;\rightarrow)\) be a finite smooth digraph, let \(\mathcal{H}\) be a subgroup of the ranked automorphism group of \((A;\rightarrow_{01})\), let \(\mathcal{G}\) be the projection of \(\mathcal{H}\), and let \(k\geq 1\). If \(\rightarrow\) is \(k\)-linked and \((k\rightarrow)\neq A^{2}\), then \(\rightarrow_{01}\) together with the \(0\)-ranked orbits of \(\mathcal{G}\) rpp-define_ 1. \(B\subsetneq A\) _such that_ \(\mathfrak{A}|_{B}\) _is smooth and_ \(k\)_-linked, or_ 2. _0-ranked_ \(\operatorname{OR}(\alpha,\alpha)\) _for some proper equivalence relation_ \(\alpha\) _on some subset_ \(C\subseteq A\)_._ ### _Loops without parameters_ The following refined version of Theorem 1 is a simple consequence of Theorem 7. For this result, the rankings are not needed. **Theorem 8**.: _Let \(\mathfrak{A}=(A;\rightarrow)\) be a finite smooth digraph, and let \(\mathcal{G}\) be a subgroup of \(\operatorname{Aut}(\mathfrak{A})\). If \(\mathfrak{A}\) is linked, then \(\rightarrow\) and the orbits of \(\mathcal{G}\) pp-define_ * _some nonempty_ \(B\subseteq A\) _such that_ \(B^{2}\subseteq\rightarrow\)_, or_ * \(\operatorname{OR}(\alpha,\alpha)\) _for some proper equivalence relation_ \(\alpha\) _on some subset_ \(C\subseteq A\)_._ Proof.: We apply Theorem 7 with \(k=1\), the same \(\mathfrak{A}\), and \(\mathcal{H}=\{(g)_{i\in\mathbb{Z}}:g\in\mathcal{G}\}\). Both the failure of the assumption \((k\rightarrow)\neq A^{2}\) and item 2 give the desired conclusion; if item 1 holds, then we restrict \(\mathfrak{A}\) and \(\mathcal{G}\) to \(B\) and apply Theorem 7 again. In the end we either get \(\operatorname{OR}(\alpha,\alpha)\) for a proper equivalence relation \(\alpha\) on some \(C\subseteq A\), or we obtain a full subdigraph of \(\mathfrak{A}\), as required. Note that Theorem 1 is an immediate consequence of the last theorem and Proposition 5. A standard procedure for obtaining identities from structural results such as Theorem 8 gives us the following corollary. The proof is given in Appendix B-A and the result further discussed in Section VII. **Theorem 9**.: _Let \(\mathfrak{A}\) be a finite core structure and let \(m\) be greater or equal to the number of elements of \(\operatorname{Aut}(\mathfrak{A})\). 
Then the following are equivalent:_ * \(\mathfrak{A}\) _does not pp-interpret all finite structures; in other words,_ \(\mathfrak{A}\) _has polymorphisms that satisfy_ some _nontrivial system of identities,_ * \(\operatorname{Pol}(\mathfrak{A})\) _satisfies_ \[h(\alpha_{1}(x),\ldots, \alpha_{m}(x),x,y,x,z,y,z)\approx\] \[h(\quad y\quad,\ldots,\quad y\quad,y,x,z,x,z,y).\] ### _Pseudoloops with pseudo assumptions_ A refined version of Theorem 2 follows from Theorem 7 by employing the trick mentioned in Section III-A. We state a "slightly infinite" version that will be required in Section V. We give a proof-sketch; the full proof is in Appendix B-B. **Theorem 10**.: _Let \(\mathfrak{A}=(A;\rightarrow)\) be a smooth digraph and \(\mathcal{G}\) a subgroup of \(\operatorname{Aut}(\mathfrak{A})\). If all weak components of \(\mathfrak{A}\) are finite, and \(\mathfrak{A}/\mathcal{G}\) has algebraic length 1, then_ * \(\mathfrak{A}/\mathcal{G}\) _has a loop, or_ * \(\rightarrow\) _pp-defines with parameters_ \(\operatorname{OR}(\alpha,\alpha)\) _for some proper equivalence_ \(\alpha\) _on a finite_ \(C\subseteq A\)_._ _Sketch of proof._ Representatives of a closed walk in \(\mathfrak{A}/\mathcal{G}\) of algebraic length 1 can be shifted using automorphisms from \(\mathcal{G}\) to a walk of algebraic length 1 from some \(a\) to \(b\) in the same orbit. We take \(g\in\mathcal{G}\) so that \(g(b)=a\), define \(\rightarrow^{\prime}=\rightarrow+g\), and observe that the component of \(a\) wrt. \(\rightarrow^{\prime}\) has algebraic length 1 and is finite. Such components are known to be pp-definable with parameters. We are in the position to keep applying Theorem 10 as in the proof of Theorem 24 with the caveat that the case \((k\rightarrow)=A^{2}\) needs to be dealt with (but this is possible by a result from [14]). Application of Lemma 6 finishes the proof. We provide two examples to illustrate the main theorems. The first one is an observation that has been proved and reproved repeatedly in the literature: the undirected 6-cycle is not invariant under any idempotent weak near-unanimity operation. Now this result follows from our general theorem: Take the automorphism \(g\) of the 6-cycle according to the red arrow in Figure 1 and apply Theorem 2 to the graph and \(\mathcal{G}=\{\operatorname{id},g\}\). We get that the graph with parameters pp-interprets every finite structure; the polymorphisms thus do not satisfy any nontrivial idempotent equational condition. The second example shows that some of the assumptions in our results cannot be removed. We consider the graph \(\mathfrak{T}_{3,3}\) in Figure 2 and \(\mathcal{G}=\{\operatorname{id},g\}\), where \(g\) is in red. On the one hand, Theorem 2 still shows that polymorphisms do not satisfy any nontrivial _idempotent_ identities. On the other hand, \(\mathfrak{T}_{3,3}/\mathcal{G}\) is linked and has no loop, but \(\mathfrak{T}_{3,3}\) satisfies some non-trivial identities, see [21, Example 6.3]. This shows, e.g., that one cannot switch in Theorem 1 "\(\mathfrak{A}\) linked" to "\(\mathfrak{A}/\mathcal{G}\) linked" or "\(\mathfrak{A}\) symmetric non-bipartite", and that parameters are necessary in Theorem 2. ## IV Proof of Theorem 7 The entire section is devoted to the proof of Theorem 7. We fix a finite smooth digraph \(\mathfrak{A}=(A;\rightarrow)\) and a subgroup \(\mathcal{H}\) of the ranked automorphism group of \((A;\rightarrow_{01})\) whose projection is \(\mathcal{G}\). 
We assume that \(\mathfrak{A}\) is \(k\)-linked but \((k\rightarrow)\neq A^{2}\). As all the relations we work with in this section are on \(A\), we do not usually explicitly specify it, e.g., "\(R\) is subdirect" means \(R\) is subdirect in \(A\). For convenience, we sometimes assume \(A=\{1,2,\ldots,|A|\}\). Let \(O\) be the 0-ranked \(\mathcal{G}\)-orbit of the tuple \((1,2,\ldots,|A|)\). Note that for any \(g:A\to A\), the tuple \((g(1),g(2),\ldots,g(|A|))\) is in \(O\) iff \(g\in\mathcal{G}\). We will, in three steps, show that \(\rightarrow_{01}\) and \(O\) rpp-define a proper unary \(B\) such that \(\mathfrak{A}|_{B}\) is smooth and \(k\)-linked (item 1) or 0-ranked \(\operatorname{OR}(\alpha,\alpha)\) for a proper equivalence \(\alpha\) on \(C\subseteq A\) (item 2). ### _Constructing central or Q-central relations_ We start with definitions, move on to a few auxiliary facts, and conclude the subsection with a first episode of the proof of Theorem 7. The concepts we now introduce play a significant role in Rosenberg's classification of maximal clones [22]; we take them from Pinsker's presentation in [23] of Quackenbush's proof of the classification [24]. A relation \(R\) is _totally symmetric_ if for every \((a_{1},\ldots,a_{n})\in R\) and every permutation \(\sigma\) on \(\{1,2,\ldots,n\}\), the tuple \((a_{\sigma(1)},\ldots,a_{\sigma(n)})\) is in \(R\). Similarly, a relation \(R\) is _totally reflexive_ if any tuple from \(A^{n}\) with at least two repeating entries belongs to \(R\). A relation which is totally symmetric and totally reflexive will be called a _TSR-relation_. The next concept, a center, is additionally very useful in the theory of CSPs, e.g., in Zhuk's dichotomy proof [4] or in absorption theory [25] (cf. [26]). We remark that in some of the mentioned literature, the terminology slightly differs. **Definition 11** (center).: _Let \(n\geq 2\). We call a relation \(R\subseteq A^{n}\) P-central if it is subdirect and the set_ \[\{a\in A:\forall a_{2},\ldots,a_{n}\ (a,a_{2},\ldots,a_{n})\in R\}\] _is nonempty. In such a case, the above set is called a P-center. The P stands for "power"; for \(n=2\) we call a P-central relation central and P-center a center._ **Definition 12** (central equivalence).: _A relation \(R\subseteq A^{n}\) (\(n>2\)) is PQ-central (equivalence-central) if its projection to any two coordinates is full and the binary relation_ \[\alpha=\{(a,a^{\prime})\in A^{2}:\forall a_{3},\ldots,a_{n}\ (a,a^{\prime},a_{3}, \ldots,a_{n})\in R\}\] _is an equivalence relation. The above equivalence relation is then called P-central. For \(n=3\) we talk about Q-central relations and central equivalence relations._ The first fact is an easy observation whose proof is postponed to Appendix C-A. **Lemma 13**.: _Every subdirect and linked but not central ranked relation rpp-defines a proper \(0\)-ranked TSR relation. Moreover, if the new relation is binary, then it is additionally linked._ We will use the following fact stated in [23]; we provide the proof in Appendix C-A. In this lemma we need not worry about the ranking of the relations, as all the applications will use \(0\)-ranked relations exclusively (so pp-definitions will automatically give rise to \(0\)-ranked rpp-definitions). **Lemma 14**.: _Each proper TSR relation of arity at least 3, and each proper linked TSR relation of arity 2 pp-defines a proper TSR relation which is P-central or PQ-central._ The next order of business is to get rid of the powers. 
This can be achieved by using any relation containing only "surjective" tuples, such as \(O\). **Lemma 15**.: _A ranked TSR P-central relation \(R\) and \(O\) rpp-define the center of \(R\) and a TSR central (binary) \(R^{\prime}\) with the same center (the ranking of \(R^{\prime}\) is inherited from the first two coordinates of \(R\))._ Proof.: Let \(n\) be the arity of \(R\). The relation \[R^{n-1}(x_{1},\ldots,x_{n-1})=\exists y_{1},\ldots,y_{|A|}\\ O(y_{1},\ldots,y_{|A|})\wedge\bigwedge_{i}R(x_{1},\ldots,x_{n-1},y_{i})\] is a ranked TSR P-central relation of arity \(n-1\) with the same center as \(R\) whenever \(n\geq 3\). Repeating the construction (substituting \(R^{n-1}\) for \(R\)) we can get down to \(R^{2}\), which can be taken for \(R^{\prime}\), and to \(R^{1}\), which is an rpp-definition of the center. A similar result holds for PQ-central relations. In this case we again need not worry about the ranking of the relations. The proof is almost identical to the proof of Lemma 15 and we skip it. **Lemma 16**.: _A TSR PQ-central relation \(R\) and \(O\) pp-define the central equivalence of \(R\) and a TSR Q-central (ternary) relation \(R^{\prime}\) with the same central equivalence._ We are ready to proceed with the first step of the proof of Theorem 7. **Lemma 17** (First step).: _The ranked relations \(\rightarrow_{01}\) and \(O\) rpp-define a proper ranked central or a proper 0-ranked Q-central relation._ Fig. 1: First example Fig. 2. Second example Proof.: If \(k\to\) is central, we've already accomplished our goal. Otherwise we apply Lemma13 to obtain a TSR relation, then Lemma14 to obtain a \(0\)-ranked TSR relation which is P-central or PQ-central. We finish by applying Lemma15 or Lemma16 (depending on the case we are in) to end up with the required central or 0-ranked Q-central relation. ### _Constructing an OR relation_ The plan for this subsection is to either rpp-define 0-ranked \(\operatorname{OR}(T,T)\) for a proper TSR \(T\), or \(B\) as in item1 of Theorem7. The proof is split in two parts depending on the relation obtained from the first step. The case of 0-ranked Q-central relation is dealt with in the following lemma (where, again, we need not care about rankings). **Lemma 18** (Second step, Q-central case).: _A proper Q-central relation \(R\) invariant under \(\mathcal{G}\) and \(O\) pp-define \(\operatorname{OR}(T,T)\) for a proper TSR \(T\)._ Proof.: We assume that \(A=\{1,\ldots,n\}\), let \(R\) and \(O\) are as in the statement. Denote the central equivalence of \(R\) by \(\alpha\) and note that by Lemma16\(\alpha\) is pp-definable in \(R\) and \(O\). We choose \(a,b\in A\) such that * \((a,b)\notin\alpha\) and * the set \(I=\{i:(a,b,i)\in R\}\) is maximal (under inclusion) among similar sets defined for other \((a,b)\notin\alpha\). We assume that \(a=1\) and \(b=2\) (this can be obtained by renaming the elements of \(A\)). Next, we find a minimal number \(k\) such that * every \((k-1)\)-element subset of \(A\) is included in \(g(I)\) for some \(g\in\mathcal{G}\) and * some \(k\)-element subset of \(A\) is _not included_ in \(g(I)\) for any \(g\in\mathcal{G}\). We can always find such a \(k\) and it will satisfy \(1\leq k\leq|I|+1\leq n\); this is a consequence of \((a,b)\notin\alpha\). Our \(T\) will consists of tuples \((c_{1},\ldots,c_{k})\) such that \(\{c_{1},\ldots,c_{k}\}\subseteq g(I)\) for some \(g\) from \(\mathcal{G}\). Note that \(T\) is totally symmetric by definition and totally reflexive by the choice of \(k\). 
The choice of \(k\) ensures, at the same time, that \(T\neq A^{k}\). Our formula will have free variables \(x_{1}^{1},\ldots,x_{k}^{1}\) and \(x_{1}^{2},\ldots,x_{k}^{2}\). The formula is \[\exists y_{0},y_{1},y_{2},z_{1},\ldots,z_{n}\] \[y_{0}=z_{1}\wedge y_{2}=z_{2}\wedge O(z_{1},\ldots,z_{n})\wedge\] \[\bigwedge_{j=1}^{2}\left(\bigwedge_{i=1}^{k}R(y_{j-1},y_{j},x_{i}^ {j})\wedge\bigwedge_{i\in I}R(y_{j-1},y_{j},z_{i})\right)\.\] It remains to verify that the formula works. First take \((a_{1},\ldots,a_{k},b_{1},\ldots,b_{k})\in\operatorname{OR}(T,T)\). Say that \(\{a_{1},\ldots,a_{k}\}\subseteq g(I)\) for some \(g\in\mathcal{G}\) (the case \(\{b_{1},\ldots,b_{k}\}\subseteq g(I)\) is symmetric). We choose a witnessing evaluation of quantified variables as follows: \(y_{0}\mapsto g(1)\), \(y_{1},y_{2}\mapsto g(2)\), and \(z_{i}\) to \(g(i)\). The first three (simple) conjuncts hold by construction. Let's focus on the complex conjunct. For \(j=2\), the first two arguments of \(R\) are identical, thus in \(\alpha\), which is central and therefore \(x_{i}^{2}\)'s can be arbitrary. For \(j=1\), we recall that \(R(1,2,i)\) for every \(i\in I\) and, since \(g\) is an automorphism, we get \(R(g(1),g(2),g(i))\) as required. For the opposite direction, let \(\operatorname{val}\) be an evaluation of variables \(x_{i}^{j},y_{j},z_{i}\) making the quantifier-free part true. The third conjunct ensures that \(g:i\mapsto\operatorname{val}(z_{i})\) is from \(\mathcal{G}\). We can therefore define a new valid evaluation of variables by \(\operatorname{val}^{\prime}(x)=g^{-1}(\operatorname{val}(x))\). The new evaluation satisfies \(\operatorname{val}^{\prime}(z_{i})=i\), \(\operatorname{val}^{\prime}(y_{0})=1\) and \(\operatorname{val}^{\prime}(y_{2})=2\). If, for \(j=1\) or \(j=2\), \(\{\operatorname{val}^{\prime}(x_{1}^{j}),\ldots,\operatorname{val}^{\prime}(x_{ i}^{j})\}\subseteq I\) we achieved our goal. Let \(j\) be such that \((\operatorname{val}^{\prime}(y_{j-1}),\operatorname{val}^{\prime}(y_{j}))\notin\alpha\) and note that \(R(\operatorname{val}^{\prime}(y_{j-1}),\operatorname{val}^{\prime}(y_{j}), \operatorname{val}^{\prime}(z_{i})=i)\) holds for every \(i\in I\). But then the existence of \(i\) with \(\operatorname{val}^{\prime}(x_{j}^{i})\notin I\) would contradict the maximality of \(I\) (as the formula ensures that \(R(\operatorname{val}^{\prime}(y_{j-1}),\operatorname{val}^{\prime}(y_{j}), \operatorname{val}^{\prime}(x_{i}^{j})\)). Therefore no such \(i\) exists and the proof is concluded. The case of ranked central relation is more complex. We start with a ranked central relation, and use it, together with \(\neg_{01}\), to construct another a subset \(B\) with \(\mathfrak{A}|_{B}\) smooth; such a construction appears in, e.g., [14]. A proof is provided in AppendixC-B. **Lemma 19**.: _A nonempty \(C\varsubsetneq A\) and \(\neg_{01}\) rpp-define a nonempty \(B\varsubsetneq A\) such that \(\mathfrak{A}|_{B}\) is smooth._ We remark that for any rpp-formula \(\phi\) in a unary \(C\) and \(\neg_{01}\), the formula obtained by removing all conjuncts \(C(x)\) defines \(A\). This is because \(\to\) is subdirect: witnesses for quantified variables can be obtained from infinite walks to and from a given vertex (evaluate all variables of rank \(r\) to the \(r\)th vertex in this bi-infinite walk). **Lemma 20** (Second step, central case).: _Let \(R\) be a ranked central relation invariant under \(\mathcal{H}\). 
Then \(R\), \(O\), and \(\neg_{01}\) rpp-define_ * \(\emptyset\neq B\varsubsetneq A\) _such that_ \(\mathfrak{A}|_{B}\) _is smooth and_ \(k\)_-linked, or_ * _0-ranked_ \(\operatorname{OR}(T,T)\) _for some proper TSR_ \(T\)_._ Proof beginning.: Let \(C\) be the center of \(R\) and \(\psi\) be the rpp-formula defining \(B\) from Lemma19, i.e., \(B\) is a proper nonempty subset of \(A\) and \(\mathfrak{A}|_{B}\) is smooth. We assume that \(\mathfrak{A}|_{B}\) is not \(k\)-linked; let \(\alpha\) be the \(k\)-linkness equivalence relation on \(B\). Our aim is to rpp-define \(\operatorname{OR}(T,T)\) for a proper TSR \(T\). Take a 0-ranked rpp-formula \(\varphi\) with two free variables defining the \(k\)-linkness relation (with \(|A|\)-many links). Since \(\mathfrak{A}\) is linked, it defines \(A^{2}\). Let \(\varphi^{\prime}\) be obtained from \(\varphi\) by adding a conjunct \(B(x)\) for every variable (i.e., both the quantified and the free variables). Now \(\varphi^{\prime}\) means "being \(k\)-linked in \(\to\) restricted to \(B\)", so it is a \(0\)-ranked pp-definition of \(\alpha\). Next, we define \(\varphi^{\prime}\) by replacing in \(\varphi^{\prime}\) each conjunct \(B(x)\) on a quantified variable \(x\) by the formula \(\psi(x)\). Clearly, the formula \(\varphi^{\prime\prime}\) still defines \(\alpha\). Also note that if we remove all the conjuncts \(C(x)\) (they all come from quantified variables), then all the restrictions on the original quantified variables of \(\varphi^{\prime}\) are dropped (see the remark before the lemma), so the obtained formula defines \(B^{2}\). In \(\varphi^{\prime\prime}\) we remove, one by one, the conjuncts \(C(x)\). At some point we arrive to a formula with a selected, quantified variable \(x\) that defines a subset of \(B^{2}\) strictly larger than \(\alpha\), but if we added back the conjunct \(C(x)\), it would define \(\alpha\). By making \(x\) free, we get an rpp-definition (using \(B\), \(C\), \(\neg_{01}\)) of a ternary relation \(S\) such that * \(\exists x\ S(y,y^{\prime},x)\) is a subset of \(B^{2}\) larger than \(\alpha\), and * \(\exists x\ S(y,y^{\prime},x)\wedge C(x)\) is \(\alpha\). With this ternary relation in hand, the remaining reasoning is somewhat similar to what was done in Lemma18. We provide the full proof in AppendixC-B ### _Third step_ A plan for this section is as follows. We start with \(0\)-ranked \(\operatorname{OR}(T,T)\) for a TSR relation \(T\), next we improve \(T\) to a P-central or PQ-central TSR, and then get \(\operatorname{OR}(C,C)\) for \(C\subseteq A\) or \(\operatorname{OR}(\alpha,\alpha)\) for an equivalence on \(A\). The second case is already item2 of Theorem7. In the first case we need to still work with \(\neg_{01}\) to end up in item1 or in item2. Until the last lemma, all the relations and formulas are 0-ranked. The first lemma is proved in AppendixC-C. **Lemma 21**.: _Let \(T\) be a proper TSR relation. The relation \(\operatorname{OR}(T,T)\) pp-defines \(\operatorname{OR}(S,S)\) with a nonempty proper \(S\) such that \(S\) is unary, or \(S\) is an equivalence on \(A\), or \(S\) is TSR and P-central, or \(S\) is TSR and PQ-central._ **Lemma 22**.: _Let \(R\) be PQ-central with P-central equivalence \(\alpha\). Then \(\operatorname{OR}(R,R)\) and \(O\) pp-define \(\operatorname{OR}(\alpha,\alpha)\)._ _Let \(R\) be P-central with P-center \(C\). 
Then \(\operatorname{OR}(R,R)\) and \(O\) pp-define \(\operatorname{OR}(C,C)\)._ Proof.: We deal with the PQ-central case only; the proof for the P-central case is analogous. Let \(n>2\) be the arity of \(R\). We define \[\operatorname{OR}(\alpha,R)(x_{1},x_{2},y_{1},\ldots,y_{n})\equiv\exists z_{1},\ldots,z_{|A|}\ O(z_{1},\ldots,z_{|A|})\ \wedge\ \cdots\] ### _Proof of Proposition 26_ We fix a smooth digraph \(\mathfrak{A}=(A;\rightarrow)\) and a subgroup \(\mathcal{G}\) of \(\operatorname{Aut}(\mathfrak{A})\) satisfying the assumptions of Proposition 26. Most of the work takes place on the 1-orbit graph \(\mathfrak{U}=\mathfrak{A}/\mathcal{G}\). It is, by the assumptions, indeed a graph, which is finite, non-bipartite and without a loop. The domain of \(\mathfrak{U}\), denoted \(U\), is the set of 1-orbits. We use \(\leftrightarrow\) to denote its relation, and vertices \(u\leftrightarrow v\) are called _adjacent_ or _neighbors_. The goal is to prove that \(\rightarrow\) and 1-orbits of \(\mathcal{G}\) pp-define a triangle configuration for \((A,k\rightarrow)\) and \(\mathcal{G}\) for some \(k\). The proof is by contradiction; we assume that \(\mathfrak{A}\) and \(\mathcal{G}\) form a counterexample with the smallest possible \(|U|\). The strategy of the proof is to pp-define a triangle configuration in the orbit graph \(\mathfrak{U}\). In order to be able to lift pp-definability back, we need to restrict the allowed pp-definitions as follows. **Definition 27**.: _A subset \(S\subseteq U\) is tree-definable1 if \(S\) is contained in the smallest family \(\mathcal{S}\) of subsets of \(U\) such that_ Footnote 1: The term stems from the fact that tree-definability is the same as definability with parameters by a tree formula, but this is irrelevant for this paper. 1. \(\mathcal{S}\) contains \(U\) and all singletons, and 2. if \(B_{1},B_{2}\in\mathcal{S}\), then \(B_{1}^{\leftrightarrow}\in\mathcal{S}\) and \(B_{1}\cap B_{2}\in\mathcal{S}\). It is easy to prove by induction that if \(S\subseteq U\) is tree-definable, then \(\bigcup S\subseteq A\) is pp-definable from \(\rightarrow\) and 1-orbits of \(\mathcal{G}\). As the first step, we replace \(\rightarrow\) by \((k-2)\rightarrow\), where \(k\) is the length of the shortest odd cycle in \(\mathfrak{U}\). This new digraph \(\mathfrak{A}\) is still a counterexample to the proposition, since the new \(\mathfrak{U}\) is non-bipartite and without a loop. Additionally, it contains a triangle. As the next step, we derive some consequences of the minimality of our counterexample. **Lemma 28**.: _If \(S\varsubsetneq U\) is tree-definable, then \(\mathfrak{U}|_{S}\) is bipartite._ Proof.: Suppose \(\mathfrak{U}|_{S}\) is not bipartite.
Since \(S\) is tree-definable, the set \(B=\bigcup S\varsubsetneq A\) is pp-definable from \(\rightarrow\) and 1-orbits of \(\mathcal{G}\), in particular, it is a union of 1-orbits of \(\mathcal{G}\). Moreover, the digraph \(\mathfrak{A}|_{S}\) together with \(\mathcal{G}|_{S}\) satisfies the assumptions of Proposition 26 and it has strictly smaller number of orbits. Since our counterexample is minimal, we obtain a pp-definable triangle configuration for this restricted digraph, which is a triangle configuration for \(\mathfrak{A}\), a contradiction. **Lemma 29**.: _Any two vertices of \(U\) have a common neighbor._ Proof.: Take arbitrary vertices \(u,u^{\prime}\in U\) and a vertex \(u_{1}\) in a triangle. Since \(\{u_{1}\}^{\leftrightarrow\leftrightarrow}\) contains that triangle, we get \(\{u_{1}\}^{\leftrightarrow\leftrightarrow}=U\) by Lemma 28. In particular, \(u_{1}\leftrightarrow u_{2}\leftrightarrow u\) for some \(u_{2}\), and then also \(u_{1}\leftrightarrow u_{3}\leftrightarrow u_{2}\) for some \(u_{3}\). Now \(u_{2}\) is in a triangle (namely \(u_{1}\), \(u_{2}\), \(u_{3}\)) and we can use the same reasoning to show that \(u\) is in a triangle. Applying the argument once more, we get \(u_{4}\) such that \(u\leftrightarrow u_{4}\leftrightarrow u^{\prime}\) - the required common neighbor. We call a triple \((U_{0},U_{1},U_{2})\) of subsets of \(U\) a _strong configuration_ if 1. each \(U_{i}\) is tree-definable, 2. each \(U_{i}\) is independent (i.e., \(U_{i}^{\leftrightarrow}\cap U_{i}=\emptyset\)), 3. \(U_{i}^{\leftrightarrow}\supseteq U_{j}\) for all \(i,j\) with \(i\neq j\), and 4. \(\mathfrak{U}|_{U_{0}\cup U_{1}}\) and \(\mathfrak{U}|_{U_{0}\cup U_{2}}\) are both (weakly) connected. We start with a strong configuration \((U_{0},U_{1},U_{2})\) = \((\{u_{0}\},\{u_{1}\},\{u_{2}\})\), where \(u_{0}\), \(u_{1}\), and \(u_{2}\) form a triangle. The strategy now is to gradually enlarge the sets \(U_{i}\) (preserving item (a)-item (d)) so that, eventually, \(U_{0}\cup U_{1}\cup U_{2}=U\). If this is achieved, then \((\bigcup U_{0},\bigcup U_{1},\bigcup U_{2},\bigcup U=A)\) clearly forms a triangle configuration for \(\rightarrow\) (note that the \(U_{i}\) are disjoint by the other conditions) and all the four sets are pp-definable from \(\rightarrow\) and 1-orbits of \(\mathcal{G}\) -- a contradiction would be reached. We grow the sets by applying Lemma 30 or Lemma 36; more precisely we apply Lemma 30 as long as it enlarges a set. If Lemma 30 fails to enlarge the configuration, we apply Lemma 36. If neither operation enlarges the configuration \(\bigcup U=A\) and we obtained our goal. We assume that * \((U_{0},U_{1},U_{2})\) is a strong configuration, * some \(u_{i}\in U_{i}\) form a triangle, and * \(U_{0}\cup U_{1}\cup U_{2}\) is a proper subset of \(U\), and present the first lemma: **Lemma 30**.: _Let \(i\in\{0,1,2\}\), and let \(U_{i}^{\prime}=U_{i-1}^{\leftrightarrow}\cap U_{i+1}^{\leftrightarrow}\) and \(U_{j}^{\prime}\coloneqq U_{j}\) for \(j\neq i\) (indices are computed modulo 3). Then \((U_{0}^{\prime},U_{1}^{\prime},U_{2}^{\prime})\) is a strong configuration and \(U_{j}\subseteq U_{j}^{\prime}\) for each \(j\)._ Proof.: Let \(j,k\) be such that \(\{i,j,k\}=\{0,1,2\}\) and \(k\neq 0\). Thus \(U_{i}^{\prime}=U_{j}^{\prime\leftrightarrow}\cap U_{k}^{\leftrightarrow}\), \(U_{j}^{\prime}=U_{j}\), \(U_{k}^{\prime}=U_{k}\), and \(\mathfrak{U}_{U_{i}\cup U_{j}}\) is connected. 
The sets \(U_{j}^{\leftrightarrow}\) and \(U_{k}^{\leftrightarrow}\) both contain \(U_{i}\), therefore so does \(U_{i}^{\prime}\). Moreover, \((U_{i}^{\prime})^{\leftrightarrow}\supseteq U_{i}^{\leftrightarrow}\supseteq U_{j},U_ {k}\) and \(U_{j}^{\leftrightarrow},U_{k}^{\leftrightarrow}\supseteq U_{j}^{\leftrightarrow} \cap U_{k}^{\leftrightarrow}=U_{i}^{\prime}\). Since \(U_{i}^{\prime}\) is tree-definable, conditions (a), (c) and the inclusions in the statement are verified. Since \(\mathfrak{U}_{U_{i}\cup U_{j}}\) is connected and \(U_{i}\), \(U_{j}\) are independent, any two vertices in \(U_{j}\) are connected by a walk in \(U_{i}\cup U_{j}\) of even length. Observe that every vertex of \(U_{i}^{\prime}\subseteq U_{j}^{\leftrightarrow}\) is adjacent to a vertex in \(U_{j}\). It follows that \(\mathfrak{U}_{U_{i}^{\prime}\cup U_{j}}\) is connected but also that \(U_{i}^{\prime}\) is independent. Indeed, otherwise we obtain a walk of odd length in \(U_{k}^{\leftrightarrow}\supseteq U_{i}^{\prime}\cup U_{j}\), which is a proper subset of \(U\) (as \(U_{k}\) is independent), a contradiction with Lemma 28. We have verified condition (b) and a half of (d), the other half of (d) readily follows. If one of the inclusions in Lemma 30 is proper, we succeeded in expanding our configuration. Assume, therefore, that \[U_{i}=U_{i-1}^{\leftrightarrow}\cap U_{i+1}^{\leftrightarrow}\ \text{ for each }i\enspace.\] Set \[V_{i}\coloneqq U_{i}^{\leftrightarrow}\ \setminus\ (U_{i-1}\cup U_{i+1})\ \text{ for }i\in\{0,1,2\}\] and note that we do not claim \(V_{i}\) is tree-(or pp-)definable. Now the graph has the following structure. All the \(U_{i}\) and \(V_{i}\) are pairwise disjoint, each \(u\in U_{i}\) has an edge to \(U_{i-1}\) and to \(U_{i+1}\), and has no edge to \(V_{i-1}\) or \(V_{i+1}\). Every has an edge to \(U_{i}\) (and no edge to \(U_{i-1}\) or \(U_{i+1}\)). The next lemma shows that \(v\in V_{i}\) has an edge to \(V_{i-1}\) and an edge to \(V_{i+1}\). Its proof, given in Appendix D-B, is exceptional in that it works with the original digraph \(\mathfrak{A}\), unlike all the other lemmata. This is, in a way, necessary, because a triangle in the bowtie graph provably cannot be properly expanded by means of tree definitions. **Lemma 31**.: _If \(i\neq j\), then \(V_{j}^{\leftrightarrow}\supseteq V_{i}\)._ **Lemma 32**.: _Each \(V_{i}\) is nonempty._ Proof.: Some \(V_{i}\) must be nonempty, since otherwise \(U_{i}^{\leftrightarrow}=U_{i-1}\cup U_{i+1}\) for each \(i\), and then \(U_{1}^{\leftrightarrow}\,{\leftrightarrow}=U_{1}\cup U_{2}\cup U_{3}\) is a proper tree-definable subset of \(U\) that contains a triangle, a contradiction to Lemma 28. By Lemma 31, each vertex in \(V_{i}\) has a neighbor in \(V_{j}\) for any \(j\neq i\); in particular, \(V_{j}\) is nonempty. While each \(U_{i}\) is independent, we now observe that the \(V_{i}\) are quite different. **Lemma 33**.: _Each common neighbor of \(v\in V_{i}\) and \(u\in U_{i}\) is in \(V_{i}\). In particular, each \(v\in V_{i}\) has a neighbor in \(V_{i}\)._ Proof.: The common neighbor is in \(U_{i}^{\leftrightarrow}=U_{i-1}\cup U_{i+1}\cup V_{i}\). But there are no edges between \(V_{i}\) and \(U_{i-1}\cup U_{i+1}\). The second part follows from Lemma 29. We have all the necessary structural information to expand our configuration. By Lemma 32, there exists a vertex \(v_{1}\in V_{1}\). 
We fix such a vertex and inductively define \[W=\{v_{1}\}^{\leftrightarrow}\cap U_{1}^{\leftrightarrow},\ S_{0}:=W^{ \leftrightarrow}\cap U_{0}^{\leftrightarrow},\ S_{n+1}:=S_{n}^{\leftrightarrow }\cap U_{0}^{\leftrightarrow}\] for \(n=1,2,\dots\). We will show that \((U_{0},S_{n},S_{n+1})\) is a strong configuration properly extending \((U_{0},U_{1},U_{2})\) for a sufficiently large even \(n\). Observe first that each \(S_{n}\) is tree-definable and let's move on to more interesting facts. **Lemma 34**.: _The following inclusions hold._ \[U_{1}\varsubsetneq S_{0}\subseteq S_{2}\subseteq\dots\subseteq S_{2n} \subseteq\dots\subseteq U_{1}\cup V_{0}\] \[U_{2}\varsubsetneq S_{1}\subseteq S_{3}\subseteq\dots\subseteq S _{2n+1}\subseteq\dots\subseteq U_{2}\cup V_{0}\] Proof.: We begin by proving \(U_{1}\varsubsetneq S_{0}\subseteq U_{1}\cup V_{0}\): Each vertex in \(W\) is adjacent to \(v_{1}\in V_{1}\) and a vertex in \(U_{1}\), so it must belong to \(V_{1}\) by Lemma 33. Since \(U_{0}^{\leftrightarrow}=U_{1}\cup U_{2}\cup V_{0}\) and there are no edges between \(U_{2}\) and \(V_{1}\), we get \(S_{0}\subseteq U_{1}\cup V_{0}\). By Lemma 29, \(v_{1}\) and each \(u_{1}\in U_{1}\) have a common neighbor, which belongs to \(W\), therefore \(u_{1}\in W^{\leftrightarrow}\). Since \(u_{1}\in U_{0}^{\leftrightarrow}\), we have shown that \(U_{1}\subseteq S_{0}\). Moreover, the inclusion is proper as each vertex in \(W\subseteq V_{1}\) has a neighbor in \(V_{0}\) by Lemma 31; this neighbor belongs to \(S_{0}\). The proof is finished by induction: e.g., for even \(n\), \(S_{n+1}\subseteq S_{n}^{\leftrightarrow}\cap U_{0}^{\leftrightarrow}\subseteq (U_{1}\cup V_{0})^{\leftrightarrow}\cap U_{0}^{\leftrightarrow}\subseteq U_{2 }\cup V_{0}\), and every vertex in \(S_{n}\) has a neighbor in \(S_{n+1}\) (use Lemma 33 for vertices in \(V_{0}\)), so \(S_{n}\subseteq S_{n+2}\). **Lemma 35**.: _Every \(S_{n}\) is independent._ Proof.: We know that \(S_{n}\subseteq U_{i}\cup V\) (where \(i\in\{1,2\}\)) by Lemma 34, that \(U_{i}\) is independent, and that there are no edges between \(V_{0}\) and \(U_{i}\). It is therefore enough to verify that there are no edges in \(S_{n}\cap V_{0}\). Assume to the contrary that \(v,v^{\prime}\) are adjacent and \(\{v,v^{\prime}\}\subseteq S_{n}\cap V_{0}\); clearly \(\{v,v^{\prime}\}\subseteq S_{n+1}\cap V_{0}\). On the one hand, \((S_{n}\cap S_{n+1})^{\leftrightarrow}\) contains a triangle by Lemma 29. On the other hand, \(S_{n}\cap S_{n+1}\subseteq V_{0}\) and therefore \((S_{n}\cap S_{n+1})^{\leftrightarrow}\cap U_{1}=\emptyset\), which contradicts Lemma 28. Lemma 34 shows that the triple \((U_{0},S_{n},S_{n+1})\) properly extends \((U_{0},U_{1},U_{2})\) for any even \(n\). The following lemma thus finishes the proof. **Lemma 36**.: _The triple \((U_{0},S_{n},S_{n+1})\) is a strong configuration for every sufficiently large even \(n\)._ Proof.: We have already observed that each \(S_{n}\) is tree-definable, so condition (a) holds. It follows from the inclusions in Lemma 34 that \(S_{n}=S_{n+2}\) for every sufficiently large \(n\). Pick such an even \(n\). By Lemma 35, \(S_{n}\) and \(S_{n+1}\) are independent, proving condition (b). 
As for the inclusions in condition (c), we have that \(U_{0}^{\leftrightarrow}\) contains both \(S_{n}\) and \(S_{n+1}\) by definitions, that \(S_{n}^{\leftrightarrow}\) contains \(U_{0}\) (as it contains \(U_{1}^{\leftrightarrow}\) by Lemma 34) and \(S_{n+1}\) (by definition of \(S_{n+1}\)), and that \(S_{n+1}^{\leftrightarrow}\) contains \(U_{0}\) and \(S_{n+2}\), which is equal to \(S_{n}\). The remaining, connectivity condition (d) is also simple: \(\mathfrak{U}_{0}\cup_{S_{n}}\) is connected, because every vertex of \(S_{n}\) is adjacent to a vertex of \(U_{0}\) (as \(U_{0}^{\leftrightarrow}\supseteq S_{n}\) by definition), \(S_{n}\) contains \(U_{1}\), and \(\mathfrak{U}_{U_{0}\cup U_{1}}\) is connected. For a similar reason, \(\mathfrak{U}_{U_{0}\cup S_{n+1}}\) is connected as well, and the proof is concluded. ### _Weakest pseudolooop conditions_ ## VI Countably categorical structures without pseudo-WNU polymorphisms In this section we construct an \(\omega\)-categorical model-complete core structure \(\mathfrak{A}\) that does not pp-interpret \(K_{3}\) with parameters, and whose polymorphism clone does not contain any pseudo-WNU operation of any arity. This provides a counterexample to [7, Problem 14.2.6 (21)]. In fact, we show the following stronger result: let \(\Sigma\) be a balanced minor condition, and let \(\bigvee_{i\in\omega}\Delta_{i}\) be a _weak equational condition_, i.e., a disjunction of equational conditions \(\Delta_{i}\). We prove that if for each fixed \(i\in\omega\), the satisfaction of \(\Sigma\) does not imply the satisfaction of \(\Delta_{i}\) over finite idempotent polymorphism clones, then there exists an \(\omega\)-categorical model-complete core structure \(\mathfrak{A}\) such that \(\mathrm{Pol}(\mathfrak{A})\) satisfies \(\overline{\Sigma}\) while omitting every \(\Delta_{i}\); that is, \(\mathrm{Pol}(\mathfrak{A})\) does not satisfy the disjunction of the \(\Delta_{i}\) even if this disjunction might well be implied by \(\Sigma\) over finite idempotent polymorphism clones. Moreover, the _orbit growth_, i.e., the growth of the number of \(n\)-orbits as \(n\) increases, can be taken to be smaller than doubly exponential. It was shown in [27, 28] that if \(\mathfrak{A}\) is an \(\omega\)-categorical model-complete core whose orbit growth is smaller than \(2^{2^{n}}\), then \(\mathfrak{A}\) pp-interprets \(K_{3}\) with parameters if, and only if, there exists a finite subset of \(A\) on which \(\mathrm{Pol}(\mathfrak{A})\) does not satisfy any non-trivial minor condition. Thus, our structure in Theorem 4 locally admits polymorphisms satisfying non-trivial minor conditions while still avoiding pseudo-WNU polymorphisms. We now define some basic notions that we borrow from model theory. A structure \(\mathfrak{A}\) is _homogeneous_ if for every finite set \(B\subseteq A\) and every embedding \(f\colon\mathfrak{B}\to\mathfrak{A}\) of the structure \(\mathfrak{B}\) induced by \(\mathfrak{A}\) on \(B\), there exists an automorphism \(\alpha\) of \(\mathfrak{A}\) such that \(\alpha|_{B}=f\). Homogeneous structures are uniquely identified by the class of their finite substructures, which is called their _age_. 
Moreover, a classical result by Fraisse's states that a countable class of finite relational structures \(\mathcal{C}\) is the age of a homogeneous structure \(\mathfrak{C}\) iff \(\mathcal{C}\) is closed under taking substructures and satisfies the so-called _amalgamation property_: for all structures \(\mathfrak{B},\mathcal{C}_{1},\mathcal{C}_{2}\in\mathcal{C}\), and all embeddings \(f_{i}\colon\mathfrak{B}\to\mathcal{C}_{i}\), there exists a structure \(\mathfrak{D}\in\mathcal{C}\) and embeddings \(e_{i}\colon\mathcal{C}_{i}\to\mathfrak{D}\) such that \(e_{1}\circ f_{1}=e_{2}\circ f_{2}\). The structure \(\mathfrak{C}\) is called the _Fraisse limit_ of \(\mathcal{C}\). In the case that the embeddings \(e_{1},e_{2}\) can always be chosen so that \(e_{1}(C_{1})\cap e_{2}(C_{2})=e_{1}(f_{1}(B))\) holds, then we say that \(\mathcal{C}\) has the _strong amalgamation property (SAP)_. Fix a finite relational structure \(\mathfrak{A}\) with domain \(\{1,\ldots,n\}\), and let \(k\geq 2\). For an arbitrary set \(B\), let \([B]^{k}\) be the set of tuples \((b_{1},\ldots,b_{k})\in B^{k}\) with pairwise distinct entries. Let \(\sigma\) be a relational signature that contains a symbol \(R^{k}\) of arity \(kr\) for every relation \(R\) of arity \(r\) of \(\mathfrak{A}\), together with a \(2k\)-ary symbol \(\sim\), and \(k\)-ary symbols \(P_{1},\ldots,P_{n}\). Let \(\mathcal{C}(\mathfrak{A},k)\) be the class of all finite substructures of \(\sigma\)-structures \(\mathfrak{B}\) satisfying the following conditions: * \(\sim\) is an equivalence relation on \([B]^{k}\) with \(n\) classes \(P_{1},\ldots,P_{n}\); * identifying \(P_{i}\) with \(i\), and denoting by \([x]_{\sim}\) the equivalence class of \(x\) for every \(x\in[B]^{k}\) we have: for every relation \(R\) of \(\mathfrak{A}\), say of arity \(r\), and for every \(b^{1},\ldots,b^{r}\in B^{k}\), one has \((b^{1},\ldots,b^{r})\in R^{k}\) if, and only if, \(b^{1},\ldots,b^{r}\in[B]^{k}\) and \(([b^{1}]_{\sim},\ldots,[b^{r}]_{\sim})\in R\). Note that any structure in \(\mathcal{C}(\mathfrak{A},k)\) is uniquely determined by \(P_{1},\ldots,P_{n}\). It can be seen that \(\mathcal{C}(\mathfrak{A},k)\) is nonempty and has the SAP, and therefore its Fraisse limit \(\mathfrak{A}^{\otimes k}\) is a homogeneous structure without algebraicity. Observe that the factor map corresponding to \(\sim\) is a pp-interpretation of \(\mathfrak{A}\) in \(\mathfrak{A}^{\otimes k}\) (again identifying \(P_{i}\) with \(i\), as we shall often do in the following); here, we use that \(\neq\) is pp-definable in \(\mathfrak{A}^{\otimes k}\) as a projection of \(\sim\), and therefore the domain of this factor map is pp-definable in \(\mathfrak{A}^{\otimes k}\). Moreover, if two \(m\)-tuples \(a,b\) from \(\mathfrak{A}^{\otimes k}\) are such that they satisfy the same equalities among their components and are such that \([a^{\prime}]_{\sim}=[b^{\prime}]_{\sim}\) for all projections \(a^{\prime},b^{\prime}\) of \(a,b\) onto the same \(k\) coordinates which are injective, then \(a\) and \(b\) belong to the same orbit under the action of \(\operatorname{Aut}(\mathfrak{A}^{\otimes k})\) on \(k\)-tuples. In particular, \(\mathfrak{A}^{\otimes k}\) is \(\omega\)-categorical. The proof of the following is deferred to Appendix E. 
**Proposition 37**.: _If \(\mathfrak{A}\) is a core, then \(\mathfrak{A}^{\otimes k}\) is a model-complete core._ **Proposition 38**.: _Let \(\Sigma\) be a balanced minor condition that is satisfiable in \(\operatorname{Pol}(\mathfrak{A})\) by idempotent operations. Then \(\operatorname{Pol}(\mathfrak{A}^{\otimes k})\) contains injective functions satisfying \(\overline{\Sigma}\)._ Proof.: As before, let \(\{1,\ldots,n\}\) be the domain of \(\mathfrak{A}\), let \(B\) be the domain of \(\mathfrak{A}^{\otimes k}\), and identify each equivalence class \(P_{i}\) of \(\sim\) with \(i\), for all \(i\in\{1,\ldots,n\}\). For every symbol \(s\) appearing in \(\Sigma\), we set \(C_{s}:=B^{r}\), where \(r\) is the arity of \(s\). We also use \(s\) to denote an idempotent operation in \(\operatorname{Pol}(\mathfrak{A})\) witnessing the fact that \(\operatorname{Pol}(\mathfrak{A})\) satisfies \(\Sigma\). Our goal is to assign a value in \(B\) to each element of \(C_{s}\), thus obtaining a function \(s^{\prime}\colon B^{r}\to B\); the functions thus obtained will, together with embeddings for the new unary symbols, witness the satisfaction of \(\overline{\Sigma}\). In order to do that, we first define a partial mapping \(q_{s}\colon[C_{s}]^{k}\to\{1,\ldots,n\}\) by setting, for any pairwise distinct tuples \(c^{1},\ldots,c^{k}\in C_{s}=B^{r}\) with the property that the tuples \((c^{1}_{i},\ldots,c^{k}_{i})\) are injective for all \(i\in\{1,\ldots,r\}\), \[q_{s}(c^{1},\ldots,c^{k}):=s\big([(c^{1}_{1},\ldots,c^{k}_{1})]_{\sim},\ldots,[(c^{1}_{r},\ldots,c^{k}_{r})]_{\sim}\big).\] We then extend \(q_{s}\) to a total function on \([C_{s}]^{k}\) by setting its value to be \(1\) elsewhere. Identifying the classes of its kernel with elements of the set \(\{1,\ldots,n\}\), we see that \(q_{s}\) induces a structure \(\mathfrak{C}_{s}\) on \(C_{s}\) whose finite substructures belong to \(\mathcal{C}(\mathfrak{A},k)\). Since \(\mathfrak{A}^{\otimes k}\) is \(\omega\)-categorical, we obtain that there exists an embedding \(s^{\prime}\colon\mathfrak{C}_{s}\to\mathfrak{A}^{\otimes k}\). Since \(s\) is an idempotent polymorphism of \(\mathfrak{A}\), the identity map on \(B^{r}=C_{s}\) is a homomorphism from \((\mathfrak{A}^{\otimes k})^{r}\) to \(\mathfrak{C}_{s}\). Hence, \(s^{\prime}\) is, viewed as the composition of an embedding with that identity map, a polymorphism of \(\mathfrak{A}^{\otimes k}\); moreover, it is injective. Let \(s,t\) be symbols of arities \(r_{s},r_{t}\) which appear in \(\Sigma\), and let \(\sigma\colon[r_{s}]\to[r]\) and \(\tau\colon[r_{t}]\to[r]\), where \(r\geq 1\). We prove that if \(s^{\sigma}\approx t^{\tau}\) is an identity in \(\Sigma\), then \(u\circ(s^{\prime})^{\sigma}=v\circ(t^{\prime})^{\tau}\) holds for some embeddings \(u,v\) of \(\mathfrak{A}^{\otimes k}\). Let \(m\geq k\), and let \(b_{1},\ldots,b_{r}\) be \(m\)-tuples of elements of \(\mathfrak{A}^{\otimes k}\). Observe that the \(m\)-tuples \((s^{\prime})^{\sigma}(b_{1},\ldots,b_{r})\) and \((t^{\prime})^{\tau}(b_{1},\ldots,b_{r})\) satisfy the same equalities since \(\Sigma\) is balanced and since both \(s^{\prime}\) and \(t^{\prime}\) are injective. Let \(i_{1},\ldots,i_{k}\in\{1,\ldots,m\}\) be distinct, and let \(c_{j}\) be the \(k\)-tuple obtained by projecting \(b_{j}\) onto \(i_{1},\ldots,i_{k}\), for all \(j\in\{1,\ldots,r\}\). We claim that if \((s^{\prime})^{\sigma}(c_{1},\ldots,c_{r})\) is injective, then \((s^{\prime})^{\sigma}(c_{1},\ldots,c_{r})\) and \((t^{\prime})^{\tau}(c_{1},\ldots,c_{r})\) belong to the same \(\sim\)-class.
If that is the case, then \((s^{\prime})^{\sigma}(b_{1},\ldots,b_{r})\) and \((t^{\prime})^{\tau}(b_{1},\ldots,b_{r})\) are in the same orbit under \(\operatorname{Aut}(\mathfrak{A}^{\otimes k})\), by the definition of \(\mathfrak{A}^{\otimes k}\) and its homogeneity. With this, and since \(m\geq k\) was arbitrary, a standard compactness argument (see e.g. [29, Lemma 3]) yields the existence of embeddings \(u,v\) of \(\mathfrak{A}^{\otimes k}\) satisfying \(u\circ(s^{\prime})^{\sigma}=v\circ(t^{\prime})^{\tau}\). This proves that \(\overline{\Sigma}\) is satisfiable in \(\operatorname{Pol}(\mathfrak{A}^{\otimes k})\). For the claim that remains to be proven, note that, by the definition of \(q_{s}\) and \(q_{t}\) and since the identity \(s^{\sigma}\approx t^{\tau}\) holds in \(\operatorname{Pol}(\mathfrak{A})\), the \(\sim\)-classes of \(s^{\prime}(c_{\sigma(1)},\ldots,c_{\sigma(r_{s})})\) and \(t^{\prime}(c_{\tau(1)},\ldots,c_{\tau(r_{t})})\) agree. Since \((s^{\prime})^{\sigma}(c_{1},\ldots,c_{r})=s^{\prime}(c_{\sigma(1)},\ldots,c_{\sigma(r_{s})})\), and since the analogous statement holds for \(t\) and \(\tau\), our claim follows. Let \((\mathfrak{A}_{i})_{i\in\omega}\) be a sequence of structures, and \((k_{i})_{i\in\omega}\) be a sequence of positive natural numbers. Let \(\sigma_{i}\) be the signature used in the construction of \(\mathcal{C}(\mathfrak{A}_{i},k_{i})\), and assume without loss of generality that the signatures \((\sigma_{i})_{i\in\omega}\) are all disjoint. The _superposition_ of the classes \(\mathcal{C}(\mathfrak{A}_{i},k_{i})\) is the class of structures \(\mathfrak{B}\) in the signature \(\bigcup\sigma_{i}\) whose \(\sigma_{i}\)-reduct belongs to \(\mathcal{C}(\mathfrak{A}_{i},k_{i})\) for all \(i\in\omega\). It is a standard fact that the superposition of classes that have SAP has itself SAP. A straightforward modification of the proof of Proposition 38 shows that the Fraisse limit of the superposition of classes \(\mathcal{C}(\mathfrak{A}_{i},k_{i})\) also admits injective polymorphisms satisfying \(\overline{\Sigma}\). The proof can be found in Appendix E for the convenience of the reader. **Proposition 39**.: _Let \((\mathfrak{A}_{i})_{i\in\omega}\) be a sequence of structures, each having idempotent polymorphisms satisfying \(\Sigma\). Let \((k_{i})_{i\in\omega}\) be a sequence of positive natural numbers. The Fraisse limit of the superposition of all the classes \(\mathcal{C}(\mathfrak{A}_{i},k_{i})\) has injective polymorphisms satisfying \(\overline{\Sigma}\)._ **Theorem 40**.: _Let \(\Sigma\) be a balanced minor condition, and let \(\bigvee_{i\in\omega}\Delta_{i}\) be a weak equational condition such that for every \(i\in\omega\), there exists a finite idempotent polymorphism clone satisfying \(\Sigma\) and not satisfying \(\Delta_{i}\). Then there exists an \(\omega\)-categorical homogeneous model-complete core structure \(\mathfrak{A}\) with small orbit growth such that \(\operatorname{Pol}(\mathfrak{A})\) satisfies \(\overline{\Sigma}\) and does not satisfy \(\Delta_{i}\) for any \(i\in\omega\)._ Proof.: Let \((\mathfrak{A}_{i})_{i\in\omega}\) be a sequence of finite structures such that \(\operatorname{Pol}(\mathfrak{A}_{i})\) is a finite idempotent clone that satisfies \(\Sigma\) and that does not satisfy \(\Delta_{i}\). Let \(k\colon\mathbb{N}\to\mathbb{N}\) be a function increasing sufficiently fast. Let \(\mathfrak{A}\) be the Fraisse limit of the superposition of all the classes \(\mathcal{C}(\mathfrak{A}_{i},k(i))\). Then \(\mathfrak{A}\) is \(\omega\)-categorical and even has small orbit growth (see [21], or Lemma 5.6 in [30]).
By definition, it is homogeneous, and as in Proposition37, one sees that it is a model-complete core since the Fraisse limit of each of the superposed classes is. By Proposition38 and Proposition39, \(\mathfrak{A}\) satisfies \(\overline{\Sigma}\). Now for any \(i\in\omega\), since \(\operatorname{Pol}(\mathfrak{A})\) pp-interprets \(\mathfrak{A}_{i}\), we can refer to [31] to conclude that \(\operatorname{Pol}(\mathfrak{A})\) does not satisfy \(\Delta_{i}\). Proof of Theorem4.: For any fixed \(n\geq 3\), the set of identities stipulating the existence of an \(n\)-ary pseudo-WNU function is not implied by the minor condition of containing a Siggers function over finite idempotent polymorphism clones. The structure \(\mathfrak{A}\) obtained by applying Theorem40 cannot pp-interpret \(K_{3}\) with parameters, since the pseudo-Siggers identity satisfied by its polymorphisms prevents this [9]; here we use the fact that it is a model-complete core. The example can be made to have a finite signature by using the Hrushovski-encoding from [32, 33]: the encoding is a structure \(\mathfrak{B}\) with a finite relational signature such that \(\operatorname{Pol}(\mathfrak{B})\) satisfies every pseudo-variant of any minor condition that is satisfied in \(\operatorname{Pol}(\mathfrak{A})\) by injections [32, Proposition 3.16], in particular the pseudo-Siggers condition. On the other hand, there exists a pp-interpretation of \(\mathfrak{A}\) in \(\mathfrak{B}\) by [32, Proposition 3.13] combined with [31], which yields the absence of pseudo-WNU operations in \(\operatorname{Pol}(\mathfrak{B})\). The structure \(\mathfrak{B}\) still has slow orbit growth [32, Proposition 3.12]. Since the structure \(\mathfrak{A}\) is a homogeneous model-complete core, its encoding \(\mathfrak{B}\) can be made so that it is a model-complete core as well: being homogeneous itself \(\mathfrak{A}\) need not be homogenized for the encoding, and the new relations used in the encoding can be enriched by relations for their complement (the proof of SAP in [32, Lemma 3.5] still works). Then all endomorphisms of \(\mathfrak{B}\) are embeddings, and the _decoding blow up_ of \(\mathfrak{B}\) is a homogeneous expansion by pp-definable relations; thus, \(\mathfrak{B}\) is a model-complete core. Hence, as was the case before the encoding, it cannot pp-interpret \(K_{3}\) with parameters by [9]. This situation would not change for an expansion of \(\mathfrak{B}\) by orbits, since \(\mathfrak{B}\) is a model-complete core and all orbits are pp-definable anyway. The signature of \(\mathfrak{B}\) can further be reduced to a single relation by replacing its relations, say \(R_{1},\ldots,R_{k}\), by \(R_{1}\times\cdots\times R_{k}\), giving the hypergraph from Theorem4. ## VII Conclusion We conclude with open problems related to our goal to further develop a structural theory amenable to infinite structures. #### Vii-1 Directed graphs Theorem1 and Theorem2 apply to digraphs, but only finite ones. On the other hand, Theorem3 applies to infinite graphs, but the factor graph has to be symmetric. A tantalizing direction is to generalize the latter theorem to non-symmetric factor digraphs of algebraic length \(1\). Our results suggest three approaches toward this goal: to improve our novel relational approach to the finite theorems, to further exploit our infinite-to-finite reduction technique applied in Proposition26, and, finally, to develop from our proofs an alternative, algebraic approach. 
The last outcome would be the most desired one, since the algebraic techniques, so powerful in the finite (and substantially influenced by the non-symmetric generalization [14] of [1]), remain relatively weak in the infinite. #### Vii-2 A uniform identity Theorem9 gives us, for every finite domain, a single identity satisfied in the polymorphism clone of any core structure on this domain which does not pp-interpret all finite structures. It would be desirable to obtain a single identity for all finite domains, independently of the size. This could pave the way for a positive answer to the open problem stated in [9] and [11] whether the failure of any non-trivial algebraic invariant for \(\mathfrak{A}\) leads to the existence of a pp-interpretation of \(K_{3}\)[31]. In fact, this could yield a single algebraic invariant for \(\omega\)-categorical model-complete cores witnessing the failure of the pp-interpretation (without parameters!) of EVERYTHING. #### Vii-3 Pseudo-WNUs While the negative result in Theorem4 might disappoint hopes sparked by the result of Barto and Pinsker [9] for an algebraic theory of polymorphism clones of \(\omega\)-categorical structures, not all is lost for the smaller subclass of first-order reducts of finitely bounded homogeneous structures for which a CSP complexity dichotomy has been conjectured. All complexity classifications within that class have shown the existence of pseudo-WNUs in the tractable cases (see e.g. [7, 8] for a recent account of results), and recently more general algebraic methods have been developed for these classifications [16]. One of the main open problems is whether this situation generalizes to the entire class.
2301.04220
Correlative mapping of local hysteresis properties in VO$_2$
We have developed a new optical microscopy technique able to track micron-sized surface clusters as temperature is varied. Potential candidates for study include phase separated metal-insulator materials, ferroelectrics, and porous structures. Several key techniques (including autofocus, step motor/cross correlation alignments, single-pixel thresholding, pair connectivity correlation length and image convolution) were implemented in order to obtain a time series of thresholded images. Here, we apply this new method to probe the archetypal phase separated insulator-metal transition in VO$_2$. A precise time and temperature series of the insulator-metal transition was achieved, allowing us to construct for the first time in this material spatial maps of the transition temperature T$_c$. These maps reveal multiple interesting features such as fractal electronic patterns on micron scales, regions of the sample with an extremely large or nearly absent local hysteresis, a positive correlation between the T$_c$ value and the hysteresis width $\Delta$T$_c$, and high cycle-to-cycle reproducibility of the transition. These maps also allow for the identification of individual pixels with unique transition characteristics. This unprecedented knowledge of the local properties of each spot along with the behavior of the entire network paves the way to novel electronics applications enabled by, {\em e.g.}, addressing specific regions with desired memory and/or switching characteristics, as well as detailed explorations of open questions in the theory of hysteresis.
Melissa Alzate Banguero, Sayan Basak, Nicolas Raymond, Forrest Simmons, Pavel Salev, Ivan K. Schuller, Lionel Aigouy, Erica W. Carlson, Alexandre Zimmers
2023-01-10T21:57:05Z
http://arxiv.org/abs/2301.04220v1
# Correlative mapping of local hysteresis properties in VO\({}_{2}\) ###### Abstract We have developed a new optical microscopy technique able to track micron-sized surface clusters as temperature is varied. Potential candidates for study include phase separated metal-insulator materials, ferroelectrics, and porous structures. Several key techniques (including autotocus, step motor/cross correlation alignments, single-pixel thresholding, pair connectivity correlation length and image convolution) were implemented in order to obtain a time series of thresholded images. Here, we apply this new method to probe the archetypal phase separated insulator-metal transition in VO\({}_{2}\). A precise time and temperature series of the insulator-metal transition was achieved, allowing us to construct for the first time in this material spatial maps of the transition temperature T\({}_{c}\). These maps reveal the formation of micron-sized patterns that are reproducible through multiple temperature sweeps within \(\sim\)0.6\({}^{\circ}\)C, although a few isolated patches showed T\({}_{c}\) deviations up to \(\pm\)2\({}^{\circ}\)C. We also derive maps of the local hysteresis widths \(\Delta\)T\({}_{c}\) and local transition widths \(\delta\)T\({}_{c}\). The hysteresis width maps show an average width of \(\Delta\)T\({}_{c}=\)4.3\({}^{\circ}\)C, consistent with macroscopic transport measurements, with, however, small regions as low as \(\Delta\)T\({}_{c}\)\(\sim\)[0\({}^{\circ}\)C-1\({}^{\circ}\)C], and as high as 8\({}^{\circ}\)C. The transition width \(\delta\)T\({}_{c}\) maps shows an average of 2.8\({}^{\circ}\)C and vary greatly (from 0\({}^{\circ}\)C to 8\({}^{\circ}\)C), confirming the strong inhomogeneities of T\({}_{c}\) in the subpixel structure. A positive correlation between T\({}_{c}\) value and hysteresis width \(\Delta\)T\({}_{c}\) is observed by comparing the spatial distributions of each map. Finally, individual pixels with unique transition characteristics are identified and put forward. This unprecedented knowledge of the local properties of each spot along with the behavior of the entire network paves the way to novel electronics applications enabled by, _e.g._, addressing specific regions with desired memory and/or switching characteristics, as well as detailed explorations of open questions in the theory of hysteresis. ## I Introduction Electronic phase separation commonly emerges in a wide variety of quantum materials such as high-T\({}_{c}\) superconductors [1], colossal magnetoresistance manganites [2], insulator-metal transition (IMT) materials [3], multilayer rhombohedral graphene [4],_etc_. An archetypal example of a phase-separated material is vanadium dioxide, VO\({}_{2}\), which undergoes a 1\({}^{st}\) order IMT at T\({}_{c}\)\(\sim\)68\({}^{\circ}\)C [5] (_i.e._, just above room temperature) accompanied by an abrupt several-order-of-magnitude resistivity decrease and monoclinic-to-tetragonal structural change. The exact nature of the transition, whether it is a Peierls transition driven by electron-phonon interactions or a Mott-Hubbard transition driven by electron-electron interactions, is still under debate [6]. In the vicinity of the transition, VO\({}_{2}\) exhibits a spatial coexistence of metal and insulator domains that form intricate patterns [7]. Analyzing the shape, characteristic size and scaling properties of those patterns can yield valuable information about the fundamental interactions that drive the transition [8]. 
Therefore, understanding and controlling the phase-separated state in quantum materials has become a major research field in recent years [9]. Currently, phase separation imaging in quantum materials reported in the literature mostly comes from scanning probe techniques such as STM [1; 2] and s-SNIM [7; 8]. While these methods have a very high spatial resolution, fine temporal resolution remains hard to implement since scanning probes are very time-consuming. Moreover, STM lacks resolution at room temperature and loses registry as the temperature is changed [10]. To solve this, we have developed a new microscopy method to map out clear and stabilized images of the IMT. This optical method allows the precise filming of the transition with hundreds or even thousands of images taken in quick succession (\(\sim\)10 seconds per final image). This allows us to not only follow fine details in the time evolution of the metal-insulating patches but also to filter out thermal noise if needed. We first describe the sample preparation and optical response. We then describe the experimental steps necessary to achieve this mapping. While most steps are straightforward, four new crucial steps were key to this study: "Height z focusing", "Single pixel time traces", "Pair connectivity correlation length" and "Time domain convolution". These technical developments allowed us to acquire accurate spatial maps of the transition temperature distribution, from which the phase separation patterns can be easily obtained at any given temperature. The T\({}_{c}\) maps reveal multiple interesting features including the presence of spots with an extremely large or nearly absent hysteresis of the IMT, a positive correlation between the T\({}_{c}\) value and the hysteresis width, and high cycle-to-cycle reproducibility of the transition. The detailed knowledge of local properties is the necessary ingredient to develop and test basic phase separation and hysteresis theories, as well as to gain microscopic understanding of the device performance for practical applications of quantum materials.

## II Methods

### VO\({}_{2}\) thin film epitaxy, resistivity, and reflectivity

Vanadium dioxide thin films were prepared by reactive RF magnetron sputtering of a V\({}_{2}\)O\({}_{3}\) target (\(>\)99.7%, ACI Alloys, Inc.) on an r-cut sapphire substrate. Sample A is \(130nm\) thick and sample B is \(300nm\) thick. A mixture of ultrahigh purity (UHP) argon and UHP oxygen was used for sputtering. The total pressure during deposition was 4mTorr, and the oxygen partial pressure was optimized to 0.1mTorr (2.5% of the total pressure). The substrate temperature during deposition was \(600^{\circ}\)C while the RF magnetron power was kept at 100W. Grain size in these films is typically found to be 40-\(130nm\) in 100-\(150nm\) films [11]. Grain size is typically expected to be slightly larger in the \(300nm\) film. The sample is found to have a relative 27% optical change in the visible range when passing the IMT (see SI Sec. S1 for details). Gold electrodes were deposited on top of the film, separated by \(10\mu m\) (sample A) and \(30\mu m\) (sample B). Both samples showed a clear IMT (see Fig. S1) above \(68^{\circ}\)C as evidenced by a drop in resistivity of 4 orders of magnitude [12].
### Image/temperature recording The optical experimental setup consists of a VO\({}_{2}\) thin film sample placed on a Peltier heater or a Linkam Thms350V temperature controller inside a Nikon optical microscope in epi configuration (both the illumination and reflection of light travel through the same objective). Illumination in the visible range was used (halogen lamp, no filters) [13]. Two surface sample images (sample A \(10\mu m\)\(\times\)\(50\mu m\) and sample B \(30\mu m\)\(\times\)\(35\mu m\)) were measured around the focal point of 1mm in the visible range using a \(\times\)150 magnification dry Olympus objective lens with an optical aperture of NA = 0.9. The theoretical lateral resolution is estimated to be \(\delta\)r= \(1.22\lambda\)/(2 NA) = \(370nm\) in the visible range using the Rayleigh criterion [14]. Temperature was measured using a Pt100 glued next to the sample. Temperature sweeps (35\({}^{o}\)C\(\ll\)T\({}_{c}\) to 82\({}^{o}\)C\(\gg\)T\({}_{c}\) and back) spanning the entire IMT were performed multiple times at a rate of 1\({}^{\circ}\)C/min, temperature swept linearly, with temperature and images recorded every \(\sim\)0.17\({}^{\circ}\)C. ### Height z focusing and x-y drift correction Inevitable temperature dilation of the experimental system during temperature sweeps brings the sample out of focus during temperature sweeps. In order to compensate for this z drift, we employ a "fuzzy focusing" technique as follows. During the experiment, the sample was continually moved up and down \(10\mu m\) every 10 seconds by a piezoelectric crystal placed under it, in order to bring the sample in and out of focus. A stack of 120 images was recorded this way for each temperature. Over the years, various metrics have been evaluated for selecting the sharpest image in such a stack [16; 17; 18]. Some studies focus explicitly on images that don't have sharp contrast [19], like the raw images acquired here (see Fig. 2(m)). Most metrics reported perform well in selecting the focused image. We have first chosen one using the compression rate of the recorded images [20]. This one is based on the intuitive idea that, when very out of focus, the sample surface will look homogeneously gray due to blurring. In this case, the raw recorded Bitmap (BMP) image can be highly compressed in lossless Tiff format using a standard Lempel-Ziv-Welch (LZW) compression protocol [21; 22], since nearly every pixel is the same. On the contrary, when the sample is in focus, the image contains much more information (since most pixels are different from their neighbors), and the raw BMP image cannot be compressed as much. Using this method, one can determine the most sharply focused image in the stack by selecting the one with the largest Tiff file size [23; 24]. Among the 62,000 images of sample A acquired during the 14 hour experiment (consisting of 3 major temperature loops and 10 subloops [25]), we retain the 894 images that are in focus within \(80nm\). A recent update of the microscope has allowed us to select the best focused image of sample B \(during\) the experiment. In the live selection process we have used a computationally faster method based on image gradient using the Tenengrad function [19]. Both metrics cited above were vetted using micron-sized gold disks lithographed on a glass substrate where the sharpest image can be defined as the image with the sharpest step function (gold to substrate). 
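Both focus metrics lend themselves to a compact implementation. The following sketch is illustrative only (it is not the code used in this work): it assumes the z-stack is available as an 8-bit grayscale NumPy array of shape (n_images, H, W), uses zlib compression as a stand-in for the lossless TIFF/LZW compression described above, and computes a Tenengrad score from Sobel gradients.

```python
# Minimal sketch of the two focus metrics: lossless compression size and Tenengrad.
import zlib
import numpy as np
from scipy import ndimage

def compression_score(img: np.ndarray) -> int:
    """Size of the losslessly compressed image: larger = more detail = sharper."""
    return len(zlib.compress(np.ascontiguousarray(img, dtype=np.uint8).tobytes()))

def tenengrad_score(img: np.ndarray) -> float:
    """Tenengrad focus measure: mean squared Sobel gradient magnitude."""
    gx = ndimage.sobel(img.astype(float), axis=0)
    gy = ndimage.sobel(img.astype(float), axis=1)
    return float(np.mean(gx**2 + gy**2))

def sharpest_index(stack: np.ndarray, metric=compression_score) -> int:
    """Return the index of the best-focused image in a z-stack."""
    return int(np.argmax([metric(img) for img in stack]))

# Toy usage: pick the sharpest of 120 frames recorded at one temperature.
stack = np.random.randint(0, 256, size=(120, 64, 64), dtype=np.uint8)
best = sharpest_index(stack, metric=tenengrad_score)
```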
Using the focusing stack technique, we have also compared the image height on the sample four corners. This allowed us to correct the tilt of the sample (due to sample positioning using thermal paste). The updated setup also uses a piezoelectric PI Pifoc PD72Z1x to move the objective up and down rather than moving the sample placed inside the Linkam stage. The current setup can thus output an image every 10s in focus on the full field of view as a function of temperature. As the temperature is cycled repeatedly, in addition to drifts along z-axis (perpendicular to the film), there are also drifts in the \(xy\) plane (the plane of the film). These thermal drifts were compensated: (\(i\)) live within \(1\mu m\) using step \(xy\) motors below the sample and (\(ii\)) post experiment using cross correlation to track and realign part of the gold leads which contain imperfections (spots) and rough edges with VO\({}_{2}\) (see Fig. 5 (a)). Although the lateral image resolution is limited by diffraction and is estimated to be 370nm, the drift compensation tracks each pixel (\(\approx 37nm\) wide) on the sample throughout the whole experiment. The remaining spatial variations we observe in reflected intensity from the VO\({}_{2}\) region are primarily due to changes in local reflectivity due to the IMT. However, there can be other contributions to this spatial variation, including effects such as surface height variations from sample warping, variations in film thickness, minor surface defects, and even shadows cast from the \(150nm\) thick gold leads. There can even be differences in pixel sensitivity in the camera itself. Because each of these contributions is independent of temperature (_i.e._ constant in time), their effects can be distinguished from that of the temperature driven IMT, as described in the next section. ### Single pixel scaled and binary thresholded images In order to isolate the changes in local reflectivity which are due to the IMT, we introduce two novel image processing techniques. We use single pixel time traces to generate _single pixel scaled images_ (panel (n) of Fig. 2), as well as _binary thresholded images_ (panel (o) of Fig. 2, discussed in the following subsections). Both types of images begin by considering a full warming or cooling sweep (_i.e._ from fully insulating to fully metallic, or _vice versa_) to follow the intensity and analyze each pixel individually. As an example, Fig. 2 (a-l) shows the raw optical intensity time/frame traces of 12 different pixels during a cooling sweep. See S6 for the time traces of 1600 pixels from the center of the sample. In order to construct Figure 1: Schematics of the microscope and image analysis created specifically to measure spatial maps of clusters in VO\({}_{2}\) during the IMT while recording resistivity R(T) simultaneously. The sample was positioned on a Peltier heater or Linkam Thms350V temperature controller to apply temperature ramps (bottom left). The sample height was varied by steps of \(80nm\) via a piezoelectric actuator placed under it. The best-focused images were chosen post-experiment using an image compression method and Tenengrad function (described in Sec. II.3). The height focus of the sample was thus controlled within \(80nm\) throughout the experiment. Fine \(xy\) plane drift correction within a single pixel was performed post-experiment (described in Sec.II.3). Camera sensitivity was normalized throughout the recording (described in Sec.S3 of the SI). 
Using this fully stabilized image series, black and white thresholds were applied for each pixel individually, accurately determining if it is in the metallic or insulating state (described in Sec. II.4). We use this information to construct spatial maps of the local transition temperature T\({}_{c}\), hysteresis width \(\Delta\)T\({}_{c}\) and transition width \(\delta\)T\({}_{c}\). a _single pixel scaled image_, we normalize each individual pixel's 8-bit grayscale intensity time trace with respect to itself, such that its maximum intensity is scaled to 1, and its minimum intensity is scaled to 0. The resulting single pixel scaled image is shown in Fig. 2(n). This type of image is a relatively quick way to study the temperature dependent IMT, as it eliminates temperature-independent spatial variations that are not due to the IMT. In order to construct a _binary thresholded image_ which clearly delineates metal and insulator domains, we must define a criterion for when each pixel changes from metal to insulator or _vice versa_. The orange curve in each of the panels (a-l) in Fig. 2 is a Gaussian-smoothed version of the raw time trace, using an 11-point Gaussian convolution (\(\sigma\)=2.5). We use this smoothed time trace of the intensity in order to determine the midway point intensity for each individual pixel (shown by the red horizontal dotted lines). We use the pair connectivity correlation length to justify setting the threshold at midway, as described in the following subsections (Secs. II.4.1 and II.4.2). This allows us to construct _binary_ black and white images of the metal and insulator domains at each measured temperature, as shown in Fig. 2(o). Different pixels go through the midway point at different _frame numbers_, and therefore at different _temperatures_. We use this information to construct spatial maps of the local transition temperature T\({}_{c}\) recorded at each pixel revealing the highly spatially-textured nature of the IMT in VO\({}_{2}\)[7; 8]. These T\({}_{c}\) maps, as well as hysteresis width \(\Delta\)T\({}_{c}\) maps and transition width \(\delta\)T\({}_{c}\) maps, are presented in the experimental results Sec. III. #### ii.2.1 Pair Connectivity Correlation Length As can be seen in the single pixel time traces shown in Fig. 2 (see SI Figures. S6 for many more examples), each pixel experiences a definite switch from metal to insulator or _vice versa_, consistent with the Ising-type model we have previously developed to describe the IMT in VO\({}_{2}\) thin films [8; 26]. While the Ising model was originally developed to describe magnetic domains of orientation "up" or "down", here we map "up" and "down" to metal and insulator domains. While the metal-insulator transition is first order, this transition ends in a critical point as a function of quenched disorder. The influence of that critical point is felt throughout a critical region, which includes part of the first order line in the vicinity of the critical end point.[8] We use the correlation length of the pair connectivity correlation function to determine the threshold between metal and insulator domains. During the IMT, VO\({}_{2}\) metal-insulator domains form intri Figure 2: Single pixel intensity normalization and thresholding process. (a-l) Representative single-pixel turn-on functions in sample A during cooling. Blue traces are the raw intensity in 8-bit grayscale where 0 is black and 255 is white. 
The orange traces are smoothed versions of the blue traces, in which we have applied an 11-point Gaussian convolution (\(\sigma\)=2.5). Purple curves are the difference between the raw (blue) curve and the smoothed version (orange curve). The green curve is a numerical derivative of the blue curve (discussed and used in SI Sec. S4), taken via a finite difference with a 10-point stencil [15]. (m) Raw optical image (frame 847) partway through cooling for VO\({}_{2}\) sample A. (n) The same image after the intensity is scaled, pixel-by-pixel, such that light pixels are in the insulating phase and dark pixels are in the metallic phase. (o) The same image, with metal and insulator domains, clearly delineated as black and white. Images are 7.3\(\mu m\) wide. cate patterns, often becoming fractal due to proximity to a critical point [8]. At criticality, correlation lengths diverge. Away from criticality, the divergence is muted, although the correlation length still displays a maximum at the point of closest approach to criticality. For example, changing the interaction strength between metal and insulator domains to be farther away from criticality, or changing the strength of various types of disorder farther from criticality causes the correlation length to go down. Similarly, changing the intensity threshold by which we identify metal and insulator domains also changes this correlation length. In disordered systems, setting an unphysical threshold will not move the system toward criticality, but only away. Therefore, one way to set the proper threshold between metal and insulator domains is to maximize the correlation length. The pair connectivity correlation function is familiar from percolation models, where the corresponding pair connectivity correlation length diverges at the critical point [27]. Coniglio and coworkers showed that the pair connectivity correlation length also diverges at the critical temperature in the two-dimensional Ising model [28]. We have recently shown that the pair connectivity correlation length also diverges at other Ising critical points, including that of the two-dimensional random field Ising model [29], as well as on slices of three dimensional models at criticality, including the clean Ising model [30] and the random field Ising model [29]. Near a critical point, the correlation function is power law at distances less than the correlation length, in this case \(\xi_{\text{pair}}\). This pair correlation length can be calculated directly from an image via [31]: \[\xi_{\text{pair}}^{2}=\frac{\sum_{i,j}r_{i,j}^{2}p_{i,j}^{f}}{\sum_{i,j}p_{i, j}^{f}} \tag{1}\] where \(p_{i,j}^{f}\) is the likelihood that i and j are in the same finite cluster. Another way to view this is as: \[\xi_{\text{pair}}=\sqrt{\langle R_{G}^{2}\rangle_{f}} \tag{2}\] where \(R_{G}\) is the radius of gyration of each connected cluster, and the average is taken over the finite clusters. This quantity diverges at the percolation threshold as: \[\xi_{\text{pair}}\propto\frac{1}{|p-p_{c}|^{\nu_{\text{pair}}}}. \tag{3}\] It diverges at clean Ising transitions as: \[\xi_{\text{pair}}\propto\frac{1}{|T-T_{c}|^{\nu_{\text{pair}}}}\, \tag{4}\] and it diverges at random field Ising transitions as: \[\xi_{\text{pair}}\propto\frac{1}{|R-R_{c}|^{\nu_{\text{pair}}}}. 
\tag{5}\] Figure 3: Pair connectivity correlation length \(\xi_{\text{pair}}\)_vs._ temperature during the warming branch of an external hysteresis loop, as a function of different threshold values for determining metal and insulator domains in sample A. The correlation length diverges when the system is closest to criticality. Figure 4: (a) Single pixel time trace of intensity. The blue curve is the raw time trace of the measured optical intensity of pixel (127,734) in sample B. The orange curve is a Gaussian convolution (\(\sigma\)=2.5) of the same time trace over 3 frames. The double crossing at the midway is eliminated in the smoothed data set. (b) Binary black and white image (frame 260) of the sample generated by thresholding at midway the single pixel time traces as presented in (a). (c) Smoothed out binary black and white image (frame 260) of the sample generated by thresholding at midway the 3 frame convoluted single pixel time traces as presented in (a). Setting Thresholds of Metal and Insulator Signal in Optical Data In order to know at what intensity to set the threshold between metal and insulator in each pixel, we calculate the pair connectivity correlation length in a series of images, as a function of different intensity thresholds. For this we use the single pixel scaled images as described in the previous subsection. In Fig. 3, we plot the evolution of the pair connectivity correlation length (Eqn. 1) during the warming branch of a hysteresis loop. The blue circles in Fig. 3 have each pixel's threshold set at the midway point of that particular pixel's intensity. The black circles have each pixel's threshold set higher by an amount that is +10% of the difference between the saturated metal and saturated insulator values of intensity. The pink circles have each pixel's threshold set higher by only +7.5%, and similarly for other colors as denoted in the figure legend. Similar to the way the theoretical threshold was set in Ref. [8], we set the threshold according to the longest correlation lengths. Since in Fig. 3 the longest correlation length happens for a threshold equal to the average between metal and insulator intensity (the blue circles in Fig. 3) we use this midway threshold throughout the paper. ### Time domain convolution One of the strong points of obtaining a series of 100-1000 images via this autofocus optical microscope is the possibility of filtering out high frequency noise. A similar technique is used in resistivity experiments that probe samples thousands of times per second. Fig. 4 (a) compares a raw single pixel time trace to a smoothed version in which a 3-point Gaussian convolution (\(\sigma\)=2.5) has been applied in the time domain. In this example, the raw single pixel time trace crosses the midway point twice, whereas the 3-point convolved curve passes the midway point only once. Notice that this procedure of filtering high frequency noise in the time domain greatly suppresses the white noise evident in the spatial domain near the metal-insulator boundaries derived from the raw time traces (see Fig. 4 (b) and (c) for comparison). This smoothing is useful for studying spatial correlations from frame to frame. However, if filtering is not necessary, raw data is used throughout the analysis. This is the case for T\({}_{c}\) maps in the section below and ramp reversal memory maps presented elsewhere [25]. High frequency noise was filtered in the temperature data taken using the Pt100 by fitting a linear slope through the large temperature sweeps. 
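The threshold-selection procedure can be summarized in a short sketch, given here for illustration only and under several assumptions: a single-pixel-scaled stack of shape (frames, H, W) with values in [0, 1], nearest-neighbor cluster labeling, exclusion of clusters spanning the field of view, and the common percolation convention of weighting each finite cluster by its squared size when evaluating Eq. (2).

```python
import numpy as np
from scipy import ndimage

def scale_per_pixel(stack):
    """Single-pixel scaling: map each pixel's time trace to [0, 1]."""
    lo = stack.min(axis=0, keepdims=True).astype(float)
    hi = stack.max(axis=0, keepdims=True).astype(float)
    return (stack - lo) / np.maximum(hi - lo, 1e-9)

def pair_connectivity_length(binary_img):
    """xi_pair (Eq. 2): size^2-weighted RMS radius of gyration of finite clusters."""
    labels, n = ndimage.label(binary_img)
    H, W = binary_img.shape
    num = den = 0.0
    for c in range(1, n + 1):
        ys, xs = np.nonzero(labels == c)
        # Skip clusters spanning the field of view (treated as the "infinite" cluster).
        if (ys.min() == 0 and ys.max() == H - 1) or (xs.min() == 0 and xs.max() == W - 1):
            continue
        s = ys.size
        rg2 = np.var(ys) + np.var(xs)          # squared radius of gyration
        num += s**2 * rg2
        den += s**2
    return float(np.sqrt(num / den)) if den > 0 else 0.0

def xi_vs_threshold(scaled, frame, offsets=(-0.10, -0.075, -0.05, 0.0, 0.05, 0.075, 0.10)):
    """xi_pair of one frame for thresholds at midway + offset (cf. Fig. 3)."""
    # In the single-pixel-scaled images, dark (low intensity) pixels are metallic.
    return {off: pair_connectivity_length(scaled[frame] < 0.5 + off) for off in offsets}
```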
This linear fit matched the internal temperature sensor slope of the Linkam Thms350V temperature controller.

## III Results

Having described the various key steps in the previous sections (including autofocusing, step motor/cross correlation aligning, single pixel scaling and thresholding, pair connectivity correlation length analysis, and time domain convolution), we now present the detailed spatially-resolved study of the IMT in VO\({}_{2}\) films using our new optical mapping method.

### Maps

**Transition Temperature T\({}_{c}\) maps**: Fig. 5 (c) reports the local critical temperature T\({}_{c}\) map in VO\({}_{2}\) sample B. These maps show a large spatial variation in T\({}_{c}\), with rich pattern formation over tens of microns, similar to s-SNIM sub-micron measurements [7], but acquired with a much faster procedure that allows for much finer time and temperature resolution. This large scale spatial variation, along with detailed spatial knowledge of the location of these variations, can potentially be exploited to optimize memory elements by addressing specific regions of the sample.

_Reproducibility of T\({}_{c}\) maps_: Previous reports on avalanches in this material showed jumps in resistivity randomly appearing during the transition in macroscopic transport measurements [33]. This suggested that the metal-insulator patterns could be appearing randomly during each temperature sweep. At first glance, this appears to be at odds with the optical data reported in this study, where we find that the metal and insulator patterns are highly repeatable globally (occurring at the same location and with the same shape) during successive temperature sweeps (see Fig. 6). The repeatability suggests that the patterns are strongly influenced by an underlying random field present in the thin film or its substrate [8; 26; 34]. The observed stochasticity of resistance jumps in transport measurements [33] could arise from small variations in the exact time at which avalanches are triggered. In addition, small changes in optical maps can potentially create large changes in resistance, when tiny "shorts" connect pre-existing larger metallic clusters.

**Transition Width \(\delta\)T\({}_{c}\) maps**: The transition width \(\delta\)T\({}_{c}\) of each pixel can be accessed by fitting single pixel scaled intensity time traces to a hyperbolic tangent: \(-\frac{1}{2}\big{(}\tanh\big{(}\frac{T-{\rm T}_{c}}{\delta{\rm T}_{c}}\big{)}-1\big{)}\). Because T\({}_{c}\) is known from our time trace analysis, there is only one fitting parameter. The map of the \(\delta\)T\({}_{c}\) distribution is shown in Fig. 5 (e). The average transition width of the pixels as measured in optics is \(2.8\pm 1.1^{\circ}\)C with extremes from 0\({}^{\circ}\)C to 8\({}^{\circ}\)C. Moreover, a small number of pixels show more than one step during a transition (see for example pixel (305,300), the first pixel in Fig. S6). These cases could arise from an overlap between multiple metal or insulator domains affecting a single pixel. This could be due to information from surrounding pixels affecting the signal at one pixel, since the pixel size is \(\sim\)10 times smaller than the resolution. Or, it could arise from structures that are smaller than the pixel size. Indeed, s-SNIM has clearly observed inhomogeneities on smaller length scales than the optical maps presented here [7; 8].
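For illustration, a minimal version of the single-parameter fit described above could look as follows; the temperature grid, noise level, and bounds are assumptions, and T\({}_{c}\) is taken as known from the midway-threshold analysis.

```python
# Sketch of the per-pixel transition-width fit, assuming a temperature array T and a
# single-pixel-scaled intensity trace y in [0, 1] for one pixel on the warming branch.
import numpy as np
from scipy.optimize import curve_fit

def transition_width(T: np.ndarray, y: np.ndarray, Tc: float) -> float:
    """Fit y(T) = -(1/2)*(tanh((T - Tc)/dTc) - 1) and return dTc in deg C."""
    model = lambda T, dTc: -0.5 * (np.tanh((T - Tc) / dTc) - 1.0)
    (dTc,), _ = curve_fit(model, T, y, p0=[1.0], bounds=(1e-3, 20.0))
    return float(dTc)

# Example with synthetic data for a pixel with Tc = 66 C and dTc = 2 C.
T = np.linspace(60, 75, 90)                       # ~0.17 C temperature steps
y = -0.5 * (np.tanh((T - 66.0) / 2.0) - 1.0) + 0.02 * np.random.randn(T.size)
print(transition_width(T, y, Tc=66.0))
```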
Interestingly, the standard deviation of local T\({}_{c}\)'s across the sample is \(\sigma_{\text{T}_{c}}=1.2^{\circ}\)C. [...] from avalanches, further analysis is needed to extract the full dynamics occurring. In the three correlation maps, no trend is seen in the last two, but T\({}_{c}\) vs. \(\Delta\)T\({}_{c}\) shows a slight positive correlation. This means that pixels with low T\({}_{c}\) tend to have low \(\Delta\)T\({}_{c}\) (_i.e._ close to zero) and _vice versa_. The positive correlation in Fig. 8(a) is not to be confused with the few diagonal lines present in this panel explained just above.

### Hand picking specific hysteretic properties

The wide range of behaviors contained in the three maps presented in the section above (Fig. 5 c, d and e) gives us the unprecedented opportunity to find individual pixels with desired properties. Fig. 9 shows the T\({}_{c}\) map of the sample with six different types of pixels selected. The pixel labeled "std" for standard has a rounded transition with values of T\({}_{c}\), \(\Delta\)T\({}_{c}\) and \(\delta\)T\({}_{c}\) which are close to the average values found in the distribution of these three quantities (see Fig. 7 a, b and c). Pixels A and B show the most common type of local characteristics found in the maps: when T\({}_{c}\) is high, \(\Delta\)T\({}_{c}\) is high; when T\({}_{c}\) is low, \(\Delta\)T\({}_{c}\) is low. This positive correlation is evident at a global level in Fig. 8 (a). However, on a local level, individual pixels can have a large deviation from the global average behavior. Indeed, pixel E shows that it is possible to find a very low \(\Delta\)T\({}_{c}\) (0.3\({}^{\circ}\)C) with a T\({}_{c}\) (66.3\({}^{\circ}\)C) that is low but closer to the mean value of the map. Pixels C and D illustrate the cases where the width \(\delta\)T\({}_{c}\) of the transition is very sharp (0.5\({}^{\circ}\)C) or very wide (5\({}^{\circ}\)C). Pixel C shows a representative sharp pixel, where within the temperature steps of 0.17\({}^{\circ}\)C, the transition occurs in a sharp, avalanche mode. Further analysis to see where and how these avalanches occur will be pursued in future work. Finally, pixel E shows a case where \(\Delta\)T\({}_{c}\) is within the lower values [0\({}^{\circ}\)C-1\({}^{\circ}\)C]. As mentioned previously, small hysteresis could be useful in opto-electronic devices or neuromorphic devices. In the first case, small hysteresis avoids optical detectors getting stuck in subloops [35]; in the second case, small hysteresis allows lowering the voltage threshold needed for spiking [36]. _General remarks_ on the pixel selection procedure: (_i_) as mentioned previously in the \(\delta\)T\({}_{c}\) section above, some pixels in the map clearly present two steps during the IMT. These two-step pixels can potentially be detected in an automated way from their anomalously high error on the fit to the hyperbolic tangent function; (_ii_) the features put forward in these 6 pixels above are not unique to the 37\(nm\) square pixel location. These features usually also hold for many pixels around the \(xy\) coordinates reported.

Figure 6: a) Three T\({}_{c}\) maps while cycling through the IMT (warming) at 1\({}^{\circ}\)C/min. b) Difference maps between cycles. Global patterns are generally reproducible (\(\sigma_{T_{c}}/T_{c}=0.6\)°C/68°C = 1%).
However, some small regions present deviations up to \(\pm\)2°C. Full histograms (with mean and standard deviation) of the maps in b) are shown in Fig. 7. The difference map between T\({}_{c3}\) and T\({}_{c1}\) (the most separated, time-wise, temperature sweeps in this study) and the corresponding histogram are presented in SI Fig. S4. Images are 33.6\(\mu m\) x 27.6\(\mu m\).

## IV Conclusions

We have reported the first T\({}_{c}\) maps derived from single pixel optical imaging on VO\({}_{2}\). Multiple new experimental steps were needed to align, focus and calibrate the raw grayscale images recorded. These experimental achievements allowed us to accurately track the spatial distribution of metal and insulator clusters. Binary black and white images, time traces, T\({}_{c}\) maps, \(\Delta\)T\({}_{c}\) maps, and \(\delta\)T\({}_{c}\) maps were plotted and discussed. The sample shows micron-sized patterns that are found to be mostly reproducible through multiple temperature sweeps. The \(\Delta\)T\({}_{c}\) hysteresis width map exhibits, on average, the same average hysteresis width of 4.3\({}^{\circ}\)C as macroscopic resistivity hysteresis, but exhibits strong variation on a local scale, down to \(\sim\)[0\({}^{\circ}\)C-1\({}^{\circ}\)C] in certain small regions and as large as \(\sim\) 8\({}^{\circ}\)C in other regions. These findings open an exciting opportunity to access local properties of VO\({}_{2}\) by, _e.g._, contacting specific parts of the sample electrically in order to select unique parameter combinations for specific applications in electrical and optoelectronic devices. The observation of a positive correlation between T\({}_{c}\) value and hysteresis width could enable a new approach for tailoring the material's response to external drives, in addition to providing a new perspective in studying open questions in the theory of hysteresis.

Figure 8: Correlations between T\({}_{c}\) (upon warming), \(\Delta\)T\({}_{c}\) and \(\delta\)T\({}_{c}\). Each of the 666,000 pixels (900x740) is represented. Only T\({}_{c}\) vs. \(\Delta\)T\({}_{c}\) (panel (a)) shows a slight diagonal trend, meaning that pixels with low T\({}_{c}\) tend to have low \(\Delta\)T\({}_{c}\) (_i.e._ close to zero) and _vice versa_.

Figure 7: Histograms of the maps presented in Fig. 5 and 6. (a) T\({}_{c}\) maps (upon warming); (b) \(\Delta\)T\({}_{c}\) map; (c) \(\delta\)T\({}_{c}\) map and (d) and (e) two difference maps T\({}_{c2}\)-T\({}_{c1}\) and T\({}_{c3}\)-T\({}_{c2}\).

Figure 9: T\({}_{c}\) map with six pixels chosen to illustrate specific characteristics in the hysteresis loops. The table shows the numerical values of T\({}_{c}\), \(\Delta\)T\({}_{c}\) and \(\delta\)T\({}_{c}\) for each pixel. The numbers in bold highlight the unique characteristic of each pixel.

## Acknowledgements

We thank M. J. Carlson for technical assistance with image stabilization, and acknowledge helpful conversations with K. A. Dahmen. S.B., F.S., and E.W.C. acknowledge support from NSF Grant No. DMR-2006192 and the Research Corporation for Science Advancement Cottrell SEED Award. S.B. acknowledges support from a Bilsand Dissertation Fellowship. E.W.C. acknowledges support from a Fulbright Fellowship, and thanks the Laboratoire de Physique et d'Etude des Materiaux (LPEM) at Ecole Superieure de Physique et de Chimie Industrielles de la Ville de Paris (ESPCI) for hospitality. This research was supported in part through computational resources provided by Research Computing at Purdue, West Lafayette, Indiana [37]. The work at UCSD (PS, IKS) was supported by the Air Force Office of Scientific Research under award number FA9550-20-1-0242. The work at ESPCI (M.A.B., L.A., and A.Z.) was supported by Cofund AI4theSciences hosted by PSL University, through the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant No. 945304.
2307.08794
Non-Stationary Policy Learning for Multi-Timescale Multi-Agent Reinforcement Learning
In multi-timescale multi-agent reinforcement learning (MARL), agents interact across different timescales. In general, policies for time-dependent behaviors, such as those induced by multiple timescales, are non-stationary. Learning non-stationary policies is challenging and typically requires sophisticated or inefficient algorithms. Motivated by the prevalence of this control problem in real-world complex systems, we introduce a simple framework for learning non-stationary policies for multi-timescale MARL. Our approach uses available information about agent timescales to define a periodic time encoding. In detail, we theoretically demonstrate that the effects of non-stationarity introduced by multiple timescales can be learned by a periodic multi-agent policy. To learn such policies, we propose a policy gradient algorithm that parameterizes the actor and critic with phase-functioned neural networks, which provide an inductive bias for periodicity. The framework's ability to effectively learn multi-timescale policies is validated on a gridworld and building energy management environment.
Patrick Emami, Xiangyu Zhang, David Biagioni, Ahmed S. Zamzam
2023-07-17T19:25:46Z
http://arxiv.org/abs/2307.08794v1
# Non-Stationary Policy Learning for Multi-Timescale Multi-Agent Reinforcement Learning ###### Abstract In multi-timescale multi-agent reinforcement learning (MARL), agents interact across different timescales. In general, policies for time-dependent behaviors, such as those induced by multiple timescales, are non-stationary. Learning non-stationary policies is challenging and typically requires sophisticated or inefficient algorithms. Motivated by the prevalence of this control problem in real-world complex systems, we introduce a simple framework for learning non-stationary policies for multi-timescale MARL. Our approach uses available information about agent timescales to define a periodic time encoding. In detail, we theoretically demonstrate that the effects of non-stationarity introduced by multiple timescales can be learned by a periodic multi-agent policy. To learn such policies, we propose a policy gradient algorithm that parameterizes the actor and critic with phase-functioned neural networks, which provide an inductive bias for periodicity. The framework's ability to effectively learn multi-timescale policies is validated on a gridworld and building energy management environment. ## I Introduction The ability to control multiple interacting components is essential to efficiently manage complex systems. For instance, in power systems applications, flexible loads and distributed generation operate within this complex system with coupling in the dynamical models, constraints, and objectives. Similar multi-component control challenges appear in transportation systems, robotics, etc. Thus, interest in data-driven approaches that try to learn distributed control policies from experience has grown recently. Multi-agent reinforcement learning (MARL) is a promising agent-based sequential decision-making framework for learning complex coordination strategies. Crucially, it does not depend on having access to a model of the (often stochastic and nonlinear) environment dynamics. However, applying MARL to real-world problems is challenging because agents typically only receive noisy, partial observations of the environment state and have limited communication with each other. Moreover, from the perspective of each agent, the environment dynamics appear to shift over time as other agents learn to adapt their behaviors. All of these factors contribute to the extreme _non-stationarity_ present in MARL, which makes learning good agent policies notoriously challenging [1, 2]. Despite the dedication of significant effort to discover practical MARL algorithms capable of learning policies in the face of non-stationarity [3, 4], there remains a gap between the synthetic environments used for algorithm development and the real world. This work studies an under-explored MARL setting inspired by real world applications where agents need to coordinate _time-dependent_ actions across different timescales. This type of time-dependent coordination, arises, for example, in: * Power systems applications where we wish to learn a coordination strategy between electrical devices that can be controlled at different timescales (e.g., energy storage units, solar panels, and thermostatically controlled loads) connected to the same micro-grid (Fig. 1), * Robotic control tasks where multiple heterogeneous robots try to collaborate to execute a task (e.g., moving an object) while their actuators are controlled at different frequencies. 
Introducing a time dependency via multiple timescales to MARL adds _additional_ sources of non-stationarity, which makes learning policies particularly challenging. In this work, we formally define the multi-timescale Fig. 1: In **Multi-timescale MARL.**, agents that act on different timescales try to learn to coordinate to achieve a goal, e.g., controlling a heterogeneous set of electronic devices for building energy optimization. Learning a _non-stationary_ multi-agent policy allows multi-timescale agents to perform complex time-dependent behaviors. decentralized partially observable Markov decision process (Dec-POMDP) setting and propose a framework for learning multi-agent time-dependent (i.e., non-stationary) policies to solve such environments. We show that multiple timescales induces the optimal multi-timescale policy to be periodic in nature. A practical policy gradient method for learning periodic multi-agent policies based on phase-functioned neural networks [5] is provided. Our framework's ability to learn effective multi-timescale policies with fewer environment interactions than key baselines is validated on a gridworld and a building energy management environment. ## II Multi-timescale Dec-POMDPs First, we define a Dec-POMDP [6], which is a multi-agent stochastic game defined by a tuple: \[\texttt{DEC-POMDP}:=(N,S,\mathbf{A},\mathbf{O},r,T,dt,\gamma,\rho,p,P). \tag{1}\] The number of agents is \(N\), \(S\) is the state space of the environment, \(\mathbf{A}:=\times_{i=1}^{N}A^{i}\) is a joint action space, \(\mathbf{O}:=\times_{i=1}^{N}O^{i}\) is a joint observation space, \(r(s,a):S\times\mathbf{A}\rightarrow\mathbb{R}\) is a global reward function, \(T\) is the (possibly infinite) horizon, \(dt\) is the time discretization of the environment, \(\gamma\in[0,1]\) is a discount factor, and \(\rho:S\rightarrow[0,1]\) is the initial state distribution. The joint action \(\mathbf{a}\in\mathbf{A}\) induces a transition from state \(s\) to state \(s^{\prime}\) according to the transition function \(p(s,\mathbf{a}):S\times\mathbf{A}\to S\). We assume that each agent is only given a noisy, partial observation of the state governed by an observation function \(P(s,i):S\times\mathbb{N}\to O^{i}\), where \(i\) is the agent index, and that there is limited or no communication between agents. Each agent learns a stochastic policy function \(\pi^{i}(a^{i}\mid o^{i})\) that maps an observation to a distribution over \(A^{i}\), and the joint policy is \(\mathbf{\pi}=\{\pi^{i},\dots,\pi^{N}\}\). In this work, we introduce multi-timescale Dec-POMDPs as an extension of Dec-POMDPs: \[\texttt{MT-DEC-POMDP}:=[\texttt{DEC-POMDP};k,C], \tag{2}\] where \([;]\) is concatenation. The agents are defined with action frequencies \(k:=\{k_{1},\dots,k_{N}\}\) and the environment is defined to be periodic with period \(C\geq 1\), i.e., the reward function \(r_{t}(s,\mathbf{a})\) and transition functions \(p_{t}(s,\mathbf{a})\) are \(C\)-periodic. When \(C=1\), the environment is aperiodic, as is typically assumed in a DEC-POMDP. Each agent's timescale is defined by the action frequency \(k_{i}\) times the base timescale discretization \(dt\). For example, agent 1 with \(k_{1}=2\) acts every two steps (i.e., when \(t\bmod 2dt=0\)) and agent 2 with \(k_{2}=3\) acts every three steps (i.e., when \(t\bmod 3dt=0\)). We assume environments are defined with \(dt=1\) for simplicity in the remainder of the paper. 
Between actions, "slow" agents take a null action \(a_{\texttt{null}}^{i}\) or repeat the most recently taken action (e.g., when an action represents the setpoint for a device). Our setting differs from the multi-timescale MARL setting considered in Wu et al. [7] as they assume agents can communicate locally with their neighbors and that the environment is aperiodic. One can observe that the sequencing of actions across agent timescales repeats periodically every \(\tilde{K}=\text{LCM}(k_{1},\dots,k_{N})\) time steps, where LCM is the least common multiple. Taking into account the periodicity of the environment \(C\), we see that the pattern of action and state transition sequencing repeats every \(K=\text{LCM}(\tilde{K},C)\) steps. When \(K>1\), solving a MT-DEC-POMDP is generally more difficult than solving a DEC-POMDP, as the periodicity introduces a time dependency that compounds on top of the non-stationarity caused by partial observability and limited communication between agents [8, 3]. To help motivate our framework, we now briefly describe a class of MARL environments where the optimal stationary joint policy is sub-optimal (i.e., when the agents ignore time dependencies). In detail, certain MARL environments cause agents to suffer from what we call the _observation aliasing problem_. In Sec. V-A, we provide an example multi-timescale environment exhibiting observation aliasing. This aliasing problem, which is closely related to the state aliasing problem that can arise in single agent RL [9], occurs when agent \(i\) receives a specific partial observation \(o^{i}\) that has a different action under non-stationary policy \(\pi^{i}\) depending on the time step: \[\exists t,t^{\prime}\text{ with }t<t^{\prime}\text{ s.t. }\pi_{t}^{i}(o_{t}^{i})\neq\pi_{t^{\prime}}^{i}(o_{t^{\prime}}^{i}).\] A _time-unaware_ agent is not able to learn a stationary policy that distinguishes between \(o_{t}^{i}\) and \(o_{t^{\prime}}^{i}\) due to the impossibility of proper credit assignment. Intuitively, time-unaware agents perceive time-dependent rewards or dynamics as stochasticity in the environment (e.g., from an exogenous source). Consequentially, these agents learn suboptimal policies that try to explain the aliased observation as perceived randomness. A common heuristic is to make each agent time aware by appending the current time step \(t\) to the observation. Alternatively, recurrent neural networks can be used to infer a _belief state_ for each agent based on its history [10]. In this work, we explore how information about agent timescales and environment periodicity can be used to more effectively learn non-stationary policies in MT-DEC-POMDPs. Recent work has introduced custom multi-timescale solutions for power systems problems that simply attempt to learn a stationary policy [11] or use recurrent neural networks to model temporal information for a fast agent and a slow agent [12]. The single agent RL setting with multiple action frequencies is also a closely related setting. Multi-timescale MDPs (MMDPs) [13, 14] involve an agent formulated as a hierarchical MDP with fast and slow actions. MMDPs aim to learn a (top-down) hierarchy over action timescales, in the sense that actions taken on slower timescales influence the actions taken on faster timescales, but not vice versa. Also related is a setting where a factored-action MDP agent can choose actions that persist for various lengths of time [15]. 
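As a concrete illustration of the timing structure defined above, the short sketch below (function and variable names are our own, not the paper's) computes the period \(K\) after which the action sequencing repeats and applies the repeat-or-null projection for slow agents, using the action frequencies of the "hard" example from Sec. V-A.

```python
from math import lcm

def period(k, C):
    """Steps after which action/transition sequencing repeats: K = LCM(LCM(k_1,...,k_N), C)."""
    return lcm(lcm(*k), C)

def project(t, proposed, prev, k, null=None):
    """Agent i commits a new action only when t mod k_i == 0;
    otherwise it repeats its previous action (or takes the null action)."""
    out = []
    for i, a in enumerate(proposed):
        if t % k[i] == 0 or prev is None:
            out.append(a)
        else:
            out.append(null[i] if null is not None else prev[i])
    return out

k, C = [2, 3], 1                  # "hard" Move Box timescales from Sec. V-A
K = period(k, C)                  # K = 6
prev = None
for t in range(K):
    proposed = [f"a{i}(t={t})" for i in range(len(k))]   # stand-in for the joint policy output
    prev = project(t, proposed, prev, k)
    print(t, prev)
```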
## III \(K\)-periodic Non-stationary Joint Policies We now demonstrate that, under simplifying assumptions, policy iteration in the space of \(K\)_-periodic_ non-stationary joint policies converges to the optimal multi-timescale policy and value function. **Definition 1**: _For any \(K\geq 1\), a \(K\)-periodic non-stationary joint policy satisfies_ \[\bar{\mathbf{\pi}}(\mathbf{o}_{t})=\bar{\mathbf{\pi}}(\mathbf{o}_{t+K}). \tag{3}\] Let \(\bar{\mathbf{\Pi}}_{K}=\big{\{}\bar{\mathbf{\pi}}:\bar{\mathbf{\pi}}(\mathbf{o}_{t})=\bar{\bm {\pi}}(\mathbf{o}_{t+K})\big{\}}\) be the set of all such policies. We use this to define a _multi-timescale non-stationary joint policy_\(\mathbf{\pi}_{t}\in\mathbf{\Pi}\) with timescale action frequencies \(k\) as the policy _induced_ by a \(K\)-periodic non-stationary policy: \[\forall t,\mathbf{\pi}_{t}=\{\pi_{t}^{1},\ldots,\pi_{t}^{N}\}\text{ s.t.} \tag{4}\] \[\pi_{t}^{i}:=\begin{cases}\bar{\pi}_{t}^{i}&\text{ if }t\bmod k_{i}=0\\ \delta_{t-(t\bmod k_{i})}(a^{i})\text{ or }\delta_{t}(a_{\texttt{null}}^{i})&\text{ otherwise.}\end{cases}\] Here, \(\delta_{t}\) is the Dirac delta function. In general, the inducing policy for a multi-timescale policy \(\mathbf{\pi}_{t}\) does not need to be \(K\)-periodic. However, later in this section we prove that policy iteration in the space of \(K\)-periodic policies converges to the optimal multi-timescale policy assuming cooperative agents and full observability. **Definition 2**: _The projection operator for joint actions \(\Gamma_{t}^{k}(\mathbf{a}_{t})\) is defined as_ \[\Gamma_{t}^{k}(\mathbf{a}_{t})=\{\bar{a}_{t}^{1},\ldots,\bar{a}_{t}^{N}\}\text {, where} \tag{5}\] \[\bar{a}_{t}^{i}:=\begin{cases}a_{t}^{i}&\text{ if }t\bmod k_{i}=0\\ \bar{a}_{t-1}^{i}\text{ or }a_{\texttt{null}}^{i}&\text{ otherwise.}\end{cases} \tag{6}\] _Notice that \(\Gamma_{t}^{k}\) is \(\tilde{K}\)-periodic. That is, \(\Gamma_{t}^{k}(\mathbf{a})=\Gamma_{t+\tilde{K}}^{k}(\mathbf{a})\) for any \(t\) and joint action \(\mathbf{a}\)._ **Definition 3**: _The projection operator for the reward and transition function is defined as_ \[\Theta_{t}^{C}(f_{t})=\begin{cases}f_{t}^{0}&\text{ if }t\bmod C=0\\ \vdots&\\ f_{t}^{C-1}&\text{ if }t\bmod C=C-1.\end{cases} \tag{7}\] Notice that \(\Theta_{t}^{C}\) is \(C\)-periodic, i.e., \(\Theta_{t}^{C}(r(s,\mathbf{a}))=\Theta_{t+C}^{C}(r(s,\mathbf{a}))\) for any \(t\) and \((s,\mathbf{a})\). The agents aim to learn an optimal multi-timescale policy \[\mathbf{\pi}^{*}=\operatorname*{arg\,max}_{\mathbf{\pi}\in\Pi}\mathbb{E}_{\mathbf{\pi}, \Theta_{t}^{C}(p_{t}(s,a))}\bigg{[}\sum_{t=0}^{T}\gamma^{t}\Theta_{t}^{C} \big{(}r_{t}(s_{t},\Gamma_{t}^{k}(\mathbf{a}_{t}))\big{)}\bigg{]}, \tag{8}\] and the optimal multi-timescale value function is \[Q_{t}^{\mathbf{\pi}^{*}}(a,s)=\mathbb{E}_{\mathbf{a}_{\mathbf{t+1}}:\mathbf{a} \mathbf{r},s_{t+1}:s\mathbf{\pi}}\big{[}R_{t}|\Gamma_{t}^{k}(\mathbf{a}_{\mathbf{ t}}),s_{t},\mathbf{\pi}^{*}\big{]}, \tag{9}\] where \(R_{t}=\sum_{\tau=\tau}^{T}\gamma^{\tau-t}\Theta_{\tau}^{C}r_{\tau}(s_{\tau}, \mathbf{a}_{\tau})\). The following theorem establishes convergence to the optimal multi-timescale value function via _policy iteration in the space of \(K\)-periodic non-stationary policies_ assuming agent cooperation and full observability. 
**Assumption 1**: _The agents are cooperative and have full observability of the environment._ **Theorem 1**: _Under assumption 1, for any \(\mathbf{\pi}^{0}\in\Pi\) induced by \(K\)-periodic non-stationary policy \(\bar{\mathbf{\pi}}^{0}\in\bar{\Pi}_{K}\), \(K=\text{LCM}(K,C)\), the sequence of value functions \(Q^{\mathbf{\pi}^{n}}\) and improved policies \(\mathbf{\pi}^{n+1}\) due to policy iteration converges to the optimal multi-timescale value function and optimal multi-timescale policy \(\mathbf{\pi}^{*}\), i.e., \(Q^{\mathbf{\pi}^{*}}(s,a)=\lim_{n\to\infty}Q^{\mathbf{\pi}^{n}}(s,a)\geq Q^{\mathbf{\pi} }(s,a)\). Moreover, the optimal multi-timescale value function always exists and is induced by a \(K\)-periodic non-stationary policy._ **Lemma 1**: _Under assumption 1, MT-DEC-POMDP reduces to a multi-timescale multi-agent MDP [16]. Furthermore, a multi-timescale multi-agent MDP is equivalent to an action-persistent factored action (FA) MDP [15]._ **Proof** A multi-agent MDP can be described by the tuple \(\{S,N,\mathbf{A},p,r\}\) with elements defined as in Sec. II [16]. A multi-timescale multi-agent MDP is defined similarly to MT-DEC-POMDP (Eq. (2)), i.e., by expanding a multi-agent MDP with \(k\) and \(C\). Assuming agent cooperation, MT-DEC-POMDP has a single reward shared by all agents. Assuming full observability, the observation function \(P(s,i)\) in a MT-DEC-POMDP is the identity mapping. Therefore, each agent's policy in MT-DEC-POMDP becomes \(\pi^{i}(a^{i}|s)\), which completes the reduction. A factored action (FA) MDP with a fully factorized policy over \(N\)-dimensional actions \(\pi(a^{1},\ldots,a^{N}|s)=\prod_{n=1}^{N}\pi^{n}(a^{n}|s)\) is a single agent MDP that can be thought of as equivalently having \(N\) agents, i.e., a multi-agent MDP. A \(k\)-persistent FA MDP assumes that action \(a^{i}\) is decided every \(k_{i}\) steps and repeated otherwise [15], and can similarly be extended to periodic environments with period \(C\). By modifying the persistence property to allow action \(a^{i}\) to be repeated \(k_{i}\) times _or_ for a null action \(a_{\texttt{null}}^{i}\) to be subsequently taken \(k_{i}-1\) times, the \(k\)-persistent FA MDP is equivalent to the multi-timescale multi-agent MDP setting, completing the proof. Given Lemma 1, our proof for Theorem 1 follows the same proof technique used to prove Theorem 3 in Lee et al. [15]. Differently, our proof handles \(C\)-periodic environments. To that end, we define here the one-step multi-timescale Bellman _optimality_ operator \(\bar{\mathcal{T}}_{t}^{*}\) induced by \(\bar{\mathbf{\pi}}\) for \(t\in\{0,\ldots,K-1\}\): \[(\bar{\mathcal{T}}_{t}^{*}Q)(s,\mathbf{a}):= \tag{10}\] \[\Theta_{t}^{C}(r_{t}(s_{t},\mathbf{a}_{t}))+\gamma\mathbb{E}_{s_{t +1}\sim\Theta_{t}^{C}(p_{t})}\big{[}\max_{\mathbf{a}_{t+1}}Q(s_{t+1},\Gamma_{t} ^{k}(\mathbf{a}_{t+1})\big{]}.\] Notice that \(\bar{\mathcal{T}}_{t}^{*}\) is \(K\)-periodic due to the \(\tilde{K}\)-periodic action projection \(\Gamma_{t}^{k}\) and the \(C\)-periodic projection operator \(\Theta_{t}^{C}\). Thus, \(\bar{\mathcal{T}}_{t}^{*}Q=\bar{\mathcal{T}}_{t+K}^{*}Q\) for any \(t\) and \(Q\). 
Now, we define the \(K\)-step multi-timescale Bellman optimality operator \(\bar{H}_{t}^{*}\) by composing one-step Bellman optimality operators as follows: \[(\bar{H}_{0}^{*}Q)(s,a) :=(\bar{\mathcal{T}}_{0}^{*}\bar{\mathcal{T}}_{1}^{*}\cdots\bar{ \mathcal{T}}_{K-2}^{*}\bar{\mathcal{T}}_{K-1}^{*}Q)(s,a) \tag{11}\] \[(\bar{H}_{1}^{*}Q)(s,a) :=(\bar{\mathcal{T}}_{1}^{*}\bar{\mathcal{T}}_{2}^{*}\cdots\bar{ \mathcal{T}}_{K-1}^{*}\bar{\mathcal{T}}_{K}^{*}Q)(s,a)\] \[\vdots\] \[(\bar{H}_{K-1}^{*}Q)(s,a) :=(\bar{\mathcal{T}}_{K-1}^{*}\bar{\mathcal{T}}_{K}^{*}\cdots\bar{ \mathcal{T}}_{K-3}^{*}\bar{\mathcal{T}}_{K-2}^{*}Q)(s,a).\] The next lemma establishes that \(K\)-step multi-timescale Bellman optimality operators are a contraction mapping. **Lemma 2**: _For all \(t\in\{0,\ldots,K-1\}\), the \(K\)-step multi-timescale Bellman optimality operator \(\bar{H}_{t}^{*}\) is a \(\gamma^{K}\)-contraction with respect to infinity norm with \(\bar{H}_{t}^{*}Q_{t}^{*}=Q_{t}^{*}\) as the unique fixed point solution. That is, for any \(Q_{t}^{0}\), define \(Q_{t}^{n+1}=\bar{H}_{t}^{*}Q_{t}^{n}\). Then, the sequence \(Q_{t}^{n}\) converges to the \(t^{\text{th}}\) multi-timescale optimal value function as \(n\to\infty\). **Proof** Without loss of generality, it is sufficient to prove the case \(t=0\). For any \(Q_{1},Q_{2}\) and \(s_{0}\in\mathcal{S},\mathbf{a}_{0}\in\mathbf{A}\), \[|(\bar{H}_{0}^{*}Q_{1})(s_{0},\mathbf{a}_{0})-(\bar{H}_{0}^{*}Q_ {2}(s_{0},\mathbf{a}_{0})|\] \[=|(\bar{\mathcal{T}}_{0}^{*}\cdots\bar{\mathcal{T}}_{K-1}^{*}Q_{1 })(s_{0},\mathbf{a}_{0})-(\bar{\mathcal{T}}_{1}^{*}\cdots\bar{\mathcal{T}}_{K -1}^{*}Q_{2})(s_{0},\mathbf{a}_{0})|\] \[=\Bigg{|}\mathbb{E}_{\mathbf{s}_{1}\sim\Theta_{0}^{C}(p_{0}(s_{0},\mathbf{a}_{0}))}\Big{[}\Theta_{0}^{C}(r_{0}(s_{0},\mathbf{a}_{0}))\] \[\quad+\gamma\max_{\mathbf{a}_{1}}(\bar{\mathcal{T}}_{1}^{*}\cdots \bar{\mathcal{T}}_{K-1}^{*}Q_{1})(s_{1},\Gamma_{1,\mathbf{a}_{0}}^{k}( \mathbf{a}_{1}))\Big{]}\] \[\quad-\mathbb{E}_{\mathbf{s}_{1}\sim\Theta_{0}^{C}(p_{0}(s_{0}, \mathbf{a}_{0}))}\Big{[}\Theta_{0}^{C}(r_{0}(s_{0},\mathbf{a}_{0}))\] \[\quad+\gamma\max_{\mathbf{a}_{1}}(\bar{\mathcal{T}}_{1}^{*}\cdots \bar{\mathcal{T}}_{K-1}^{*}Q_{2})(s_{1},\Gamma_{1,\mathbf{a}_{0}}^{k}( \mathbf{a}_{1}))\Big{]}\Bigg{|}\] \[=\gamma\Bigg{|}\mathbb{E}_{\Theta_{0}^{C}(p_{0})}\Big{[}\max_{ \mathbf{a}_{1}}(\bar{\mathcal{T}}_{1}^{*}\cdots\bar{\mathcal{T}}_{K-1}^{*}Q_{ 1})(s_{1},\Gamma_{1,\mathbf{a}_{0}}^{k}(\mathbf{a}_{1}))\] \[\quad-\max_{\mathbf{a}_{1}}(\bar{\mathcal{T}}_{1}^{*}\cdots\bar{ \mathcal{T}}_{K-1}^{*}Q_{2})(s_{1},\Gamma_{1,\mathbf{a}_{0}}^{k}(\mathbf{a}_{ 1}))\Big{]}\Bigg{|}\] \[\leq\gamma\Bigg{|}\mathbb{E}_{\Theta_{0}^{C}(p_{0})}\Big{[}( \bar{\mathcal{T}}_{1}^{*}\cdots\bar{\mathcal{T}}_{K-1}^{*}Q_{1})(s_{1}, \mathbf{a}_{1}^{*})\] \[\quad-(\bar{\mathcal{T}}_{1}^{*}\cdots\bar{\mathcal{T}}_{K-1}^{*} Q_{2})(s_{1},\mathbf{a}_{1}^{*})\Big{]}\Bigg{|}\] \[\quad\text{where }\mathbf{a}_{1}^{*}=\operatorname*{arg\,max}_{ \mathbf{a}}\Big{[}(\bar{\mathcal{T}}_{1}^{*}\cdots\bar{\mathcal{T}}_{K-1}^{*} Q_{1})(s_{1},\Gamma_{1,\mathbf{a}_{0}}^{k}(\mathbf{a}_{1}))\] \[\quad-(\bar{\mathcal{T}}_{1}^{*}\cdots\bar{\mathcal{T}}_{K-1}^{*} Q_{2})(s_{1},\Gamma_{1,\mathbf{a}_{0}}^{k}(\mathbf{a}_{1}))\Big{]}\] \[\leq\gamma\max_{s,\mathbf{a}}\Bigg{|}(\bar{\mathcal{T}}_{1}^{*} \cdots\bar{\mathcal{T}}_{K-1}^{*}Q_{1})(s,\mathbf{a})\] \[\quad-(\bar{\mathcal{T}}_{1}^{*}\cdots\bar{\mathcal{T}}_{K-1}^{*} Q_{2})(s,\mathbf{a})\Bigg{|}.\] We can continue to expand the inequality in a similar manner: 
\[\forall s_{0},\mathbf{a}_{0},\] \[|(\bar{H}_{0}^{*}Q_{1})(s_{0},\mathbf{a}_{0})-(\bar{H}_{0}^{*}Q_{ 2})(s_{0},\mathbf{a}_{0})|\] \[\leq\gamma\max_{s,\mathbf{a}}\big{|}(\bar{\mathcal{T}}_{1}^{*} \cdots\bar{\mathcal{T}}_{K-1}^{*}Q_{1})(s,\mathbf{a})-(\bar{\mathcal{T}}_{1}^{* }\cdots\bar{\mathcal{T}}_{K-1}^{*}Q_{2})(s,\mathbf{a})\big{|}\] \[\leq\gamma^{2}\max_{s,\mathbf{a}}\lfloor(\bar{\mathcal{T}}_{2}^{* }\cdots\bar{\mathcal{T}}_{K-1}^{*}Q_{1})(s,\mathbf{a})-(\bar{\mathcal{T}}_{2}^ {*}\cdots\bar{\mathcal{T}}_{K-1}^{*}Q_{2})(s,\mathbf{a})\big{|}\] \[\quad\vdots\] \[\leq\gamma^{K}\max_{s,\mathbf{a}}\big{|}Q_{1}(s,\mathbf{a})-Q_{2} (s,\mathbf{a})\big{|},\] which implies \(||\bar{H}_{0}^{*}Q_{1}-\bar{H}_{0}^{*}Q_{2}||_{\infty}\leq\gamma^{K}||Q_{1}-Q_ {2}||_{\infty}\). Therefore \(\bar{H}_{t}^{*}\) is a \(\gamma^{K}\)-contraction with respect to infinity norm, and by the Banach fixed-point theorem, \(\bar{H}_{t}^{*}Q_{t}^{*}=Q_{t}^{*}\) is the unique fixed point solution for all \(t\). It follows that the fixed points of \(\bar{H}_{0}^{*},\ldots,\bar{H}_{K-1}^{*}\) together make up the optimal multi-timescale value function, which is represented by the \(K\) values \(Q_{0}^{\pi^{*}},\ldots,Q_{K-1}^{\pi^{*}}\). Next, we show that these fixed points have the largest value compared to any other multi-timescale value function for any history-dependent policy \(\bar{\pi}\in\Pi\). **Lemma 3**: _Let \(\bar{H}_{t\bmod K}^{*}=\bar{\mathcal{T}}_{t\bmod K}^{*}\cdots\bar{\mathcal{T}}_ {(t+K-1)\bmod K}^{*}\) be the \(K\)-step multi-timescale Bellman optimality operator and \(Q_{t}^{\pi^{*}}\bmod K\) be its fixed point. Then, for any history-dependent policy \(\bar{\pi}\in\bar{\Pi}\), \(Q_{t}^{\pi^{*}}\bmod K(s,\mathbf{a})\geq Q_{t}^{\pi}(s,\mathbf{a})\)._ **Proof** For any \(\bar{\pi}\in\bar{\Pi}\), \(t,s,\mathbf{a}\) and \(Q\), the following inequality holds: \[(\bar{\mathcal{T}}_{t}^{\bar{\pi}}Q)(s_{t},\mathbf{a}_{t})\] \[:=\Theta_{t}^{C}(r_{t}(s_{t},\mathbf{a}_{t}))\] \[\quad+\gamma\mathbb{E}_{s_{t+1}\sim\Theta_{t}^{C}(p_{t}),\mathbf{ a}_{t+1}=\bar{\pi}}\big{[}Q(s_{t+1},\Gamma_{t+1,\mathbf{a}_{t}}^{k}(\mathbf{a}_{t+1}) \big{]}\] \[\quad+\gamma\max_{\mathbf{a}_{t+1}}\mathbb{E}_{s_{t+1}\sim\Theta_{t}^ {C}(p_{t})}\big{[}Q(s_{t+1},\Gamma_{t+1}^{k}(\mathbf{a}_{t+1})\big{]}\] \[\quad=(\bar{\mathcal{T}}_{t\bmod K}^{*}Q)(s_{t},\mathbf{a}_{t}).\] This implies \[(\bar{\mathcal{T}}_{t}^{\bar{\pi}}\bar{\mathcal{T}}_{t+1}^{\bar{ \pi}}\dots\bar{\mathcal{T}}_{t+K-1}^{\bar{\pi}}Q)(s,\mathbf{a})\] \[\leq(\bar{\mathcal{T}}_{t\bmod K}^{*}\bar{\mathcal{T}}_{(t+1) \bmod K}^{*}\cdots\bar{\mathcal{T}}_{(t+K-1)\bmod K}^{*}Q)(s,\mathbf{a})\] \[=(\bar{H}_{t\bmod K}^{*}Q)(s,\mathbf{a}).\] Therefore, \[Q_{t}^{\pi}(s,\mathbf{a})\] \[=\lim_{n\to\infty}(\bar{\mathcal{T}}_{t}^{\bar{\pi}}\bar{\mathcal{T}}_ {t+1}^{\bar{\pi}}\dots\bar{\mathcal{T}}_{t+Kn-1}^{\bar{\pi}}Q)(s,\mathbf{a})\] \[\leq\lim_{n\to\infty}((\bar{H}_{t\bmod K}^{*})^{n}Q)(s,\mathbf{a})=Q_{t }^{\pi^{*}}(s,\mathbf{a})\] holds, which concludes the proof. Finally, to prove the main claim in Theorem 1, by Lemma 3 it is sufficient to show \( function obtained by policy iteration under Assumption 1 may overestimate the true optimal value function. See Theorem 5.1 in Oliehoek et al. [8] for proof details. ## IV Phase Policy Gradient Method The theory from the previous section suggests encoding \(K\)-periodicity into learning algorithms for MT-DEC-POMDP to encourage learning the optimal policy when \(k\) and \(C\) are known. 
Let the integer \(\triangle_{t}\in\{0,\ldots,K-1\}\) indicate the current _phase_, i.e., \(t\bmod K\). A straightforward approach is to encode this phase as a one-hot vector of size \(K\), O.H.(\(\triangle\)), which can be concatenated to each agent's observation \([o_{t}^{i};\texttt{O.H.}(\triangle_{t})]\). However, when the mapping between the phase-augmented observation and the optimal action is complex, encoding the phase as a one-hot vector may not be sufficient to provide a good inductive bias for \(K\)-periodicity. Alternatively, we can parameterize each agent with phase-functioned neural networks [5] (PFNNs). PFNNs are spline-based neural architectures whose weights smoothly vary as a function of the current phase. This provides an inductive bias of using similar weights for adjacent phases and reusing the same network weights at time steps separated by a specified period. Favorably, the number of parameters in PFNNs scales proportionally with the number of spline control points (a constant) and _not_ with the period \(K\). The use of PFNNs in RL is under-explored, with only one known previous use for training single agents in cyclic environments [17]. Each layer \(l\) of a PFNN has a weight matrix \(\alpha\) computed by a _phase function_ \(\alpha_{l}=\Theta(\beta_{l};2\pi\triangle_{t}/K)\) conditioned on learnable weight matrices \(\beta_{l}\) and phase \(2\pi\triangle_{t}/K\in[0,2\pi]\). Following [5], we use a Catmull-Rom spline for \(\Theta\), which is a cubic spline with **four** learnable spline control points \(\beta_{l}=[\beta_{l}^{0},\beta_{l}^{1},\beta_{l}^{2},\beta_{l}^{3}]\). The weight for layer \(l\) is \[\alpha_{l} =\beta_{l}^{x_{1}}+w(\frac{1}{2}\beta_{l}^{x_{2}}-\frac{1}{2} \beta_{l}^{x_{0}})\] \[+w^{2}(\beta_{l}^{x_{0}}-\frac{5}{2}\beta_{l}^{x_{1}}+2\beta_{l}^ {x_{2}}-\frac{1}{2}\beta_{l}^{x_{3}})\] \[+w^{3}(\frac{3}{2}\beta_{l}^{x_{1}}-\frac{3}{2}\beta_{l}^{x_{2}}+ \frac{1}{2}\beta_{l}^{x_{3}}-\frac{1}{2}\beta_{l}^{x_{0}}),\] where \(w=4\triangle_{t}/K\)\((\texttt{mod}\ 1)\) and \(x_{n}=\lfloor 4\triangle_{t}/K\rfloor+n-1\)\((\texttt{mod}\ 4)\). The bias for layer \(l\) is computed in a similar fashion. The start and end control points for each layer are the same, making each PFNN layer cyclic. In this work, we adapt the actor-critic policy gradient method COMA [4] by using PFNNs with period \(K=\texttt{LCM}(\tilde{K},C)\) for the actor and critic networks.

## V Experiments

### _The Move Box Problem_

**Setup:** We adapted a gridworld environment called Move Box [18] to create a toy multi-timescale environment with a time-dependent optimal policy. The two agents need to coordinate their actions to push a green box to a goal location ("G") within a maximum of 20 steps. In the **easy** version, one agent (red) is a "fast" agent that uses \(k_{1}=1\) and one agent (blue) is a "slow" agent that acts every two steps (\(k_{2}=2\)). The period provided to time-aware agents is therefore \(K:=\texttt{LCM}(1,2,C=1)=2\). For the **hard** version of this task, the fast agent uses \(k_{1}=2\) and the slow agent uses \(k_{2}=3\), thus \(K:=\texttt{LCM}(2,3,C=1)=6\). Each agent receives a 4D partial observation consisting of its own position and the position of the box. To avoid the need for a complex exploration strategy, we restrict the action space to 3 discrete actions: move up, move down, or null (do nothing). To push the green box up or down, the agents have to be on either side of the box and move in the same direction at the same time.
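The Catmull-Rom phase function of Sec. IV can be written compactly; the sketch below is a minimal NumPy illustration under our own naming assumptions. In a full PFNN the same blending would be applied to every layer's weight and bias, with the four control points being the learnable parameters.

```python
# Sketch of the Catmull-Rom phase function: blend four control-point weight matrices
# beta[0..3] (NumPy arrays of identical shape) into the effective weight at phase Delta_t.
import numpy as np

def pfnn_layer_weight(beta, phase_idx, K):
    """Effective layer weight for integer phase Delta_t = phase_idx in {0, ..., K-1}."""
    p = 4.0 * phase_idx / K                       # position along the cyclic spline
    w = p % 1.0
    base = int(np.floor(p))
    x = [(base + n - 1) % 4 for n in range(4)]    # control-point indices x_0..x_3
    b0, b1, b2, b3 = (beta[i] for i in x)
    return (b1
            + w * (0.5 * b2 - 0.5 * b0)
            + w**2 * (b0 - 2.5 * b1 + 2.0 * b2 - 0.5 * b3)
            + w**3 * (1.5 * b1 - 1.5 * b2 + 0.5 * b3 - 0.5 * b0))

# Example: a layer with four control points of shape (in_dim, out_dim) and period K = 15.
beta = [np.random.randn(8, 8) for _ in range(4)]
W_t = pfnn_layer_weight(beta, phase_idx=7, K=15)  # weights used at phase 7
```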
In this work, we adapt the actor-critic policy-gradient method COMA [4] by using PFNNs with period \(K=\texttt{LCM}(\hat{K},C)\) for the actor and critic networks.

## V Experiments

### _The Move Box Problem_

**Setup:** We adapted a gridworld environment called Move Box [18] to create a toy multi-timescale environment with a time-dependent optimal policy. Two agents must coordinate their actions to push a green box to a goal location ("G") within a maximum of 20 steps. In the **easy** version, one agent (red) is a "fast" agent that acts every step (\(k_{1}=1\)) and one agent (blue) is a "slow" agent that acts every two steps (\(k_{2}=2\)). The period provided to time-aware agents is therefore \(K:=\texttt{LCM}(1,2,C=1)=2\). For the **hard** version of this task, the fast agent uses \(k_{1}=2\) and the slow agent uses \(k_{2}=3\), thus \(K:=\texttt{LCM}(2,3,C=1)=6\). Each agent receives a 4D partial observation consisting of its own position and the position of the box. To avoid the need for a complex exploration strategy, we restrict the action space to 3 discrete actions: move up, move down, or null (do nothing). To push the green box up or down, the agents have to be on either side of the box and move in the same direction at the same time. Agents receive a reward of +1 whenever the box moves towards the goal and a reward of +20 once the box reaches the goal. The easy version can be solved with 7 actions, while the hard version requires 19 actions, so the hard version is significantly harder to explore. To implement multiple timescales, the only legal action available to the slow agent between its acting steps is the null action. _Move Box is designed so that a time-unaware fast agent suffers from observation aliasing_ (Sec. II): without time information, the fast agent cannot determine whether to move up or to do nothing in order to synchronize its actions with the slow agent.

**Agents:** We train four COMA-based agents:

* Basic COMA, a time-unaware agent that uses a feed-forward neural network for the actor and critic networks.
* Recurrent COMA, a time-aware agent that uses an RNN for the actor and critic networks to condition on the full history up to the current time step [10].
* One-Hot (O.H.) phase-aware COMA, an agent whose observations are augmented with a one-hot encoding of the current phase \(\triangle_{t}\).
* Phase COMA, an agent whose actor and critic networks are PFNNs with weights indexed by \(2\pi\triangle_{t}/K\).

We also run a variant of Phase COMA with PFNN period 4 instead of \(K\), denoted Phase COMA (4), to explore sensitivity to this hyperparameter; note that \(4>K\) in the easy environment and \(4<K\) in the hard environment. All actor and critic networks share parameters, and we take the standard approach of providing a one-hot agent ID as an auxiliary input to distinguish between agents.

**Results:** Table I shows quantitative results and Fig. 3 compares test return as a function of environment steps. Both O.H. phase-aware COMA and Phase COMA learn to reliably solve the easy Move Box environment across all random seeds, with Phase COMA showing a small advantage in sample efficiency. Recurrent COMA needs more steps to achieve a good test return and ultimately performs less reliably. Basic COMA fails on this environment, as expected, due to observation aliasing (Fig. 2). In the hard version (Fig. 3), only Phase COMA learns to solve the environment, and it does so on only 50% of the training runs. The PFNN-based actor and critic are robust to a period slightly smaller or larger than \(K\), although they appear to require more environment steps.

### _Building Energy Management_

**Setup:** In this environment, agents attempt to coordinate the control of HVAC and energy storage (ES) for a five-zone small office building. See Biagioni et al. [19] for details about the reduced-order model used to simulate the building. There are 7 agents: an HVAC agent per zone that can change its zone's mass flow rate (kg/s) every 5 mins, an HVAC chiller agent that can change the discharge air temperature (\({}^{\circ}\)C) every 5 mins, and an ES agent that can change its charging or discharging power (kW) every 1 min. The goal is for the agents to coordinate their total power consumption to track a reference power signal that changes every 3 mins while minimizing discomfort to building occupants. The control horizon is set to 30 mins (\(dt=1\) min). The global reward function at time step \(t\) is defined as
\[r_{t}=-\sum_{\text{zone}_{i}}\big{(}(T_{t}^{i}-\overline{T})^{+}+(\underline{T}-T_{t}^{i})^{+}\big{)}^{2}-\alpha(p_{t}-p_{t}^{\text{ref}})^{2},\]
where \(\alpha=0.01\), \(\overline{T}:=26\)\({}^{\circ}\)C and \(\underline{T}:=24\)\({}^{\circ}\)C define the thermal comfort band, \(T_{t}^{i}\) is zone \(i\)'s temperature, and \(p_{t}\) is the total power.
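Transcribed into code, this global reward could look as follows; a minimal sketch with hypothetical argument names, not the authors' implementation.

```python
def bem_reward(zone_temps, total_power, power_ref,
               t_upper=26.0, t_lower=24.0, alpha=0.01):
    """Global BEM reward: squared comfort-band violations plus a weighted
    squared deviation of total power from the reference signal."""
    comfort_penalty = sum(
        (max(T - t_upper, 0.0) + max(t_lower - T, 0.0)) ** 2
        for T in zone_temps
    )
    tracking_penalty = alpha * (total_power - power_ref) ** 2
    return -(comfort_penalty + tracking_penalty)
```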
To implement multiple timescales, agents repeat their previous action between their acting steps. The period used for learning periodic non-stationary policies is \(K:=\texttt{LCM}(1,5,C=3)=15\), where \(C\) encodes the cyclic power reference signal.

**Results:** Out of all agents, only Phase COMA is able to reliably learn a near-optimal joint policy (Fig. 4). The variant with the arbitrary PFNN period \(4\ll K\) is the second-best agent. We visualize the control actions selected by the joint policy from one of the Phase COMA runs in Fig. 6. The slow HVAC agents have successfully learned to coordinate with the faster ES agent to track the reference signal (Fig. 5) without violating the thermal comfort band, for example by increasing their power consumption between 10 and 15 mins while the ES agent is already maximally discharging.

Fig. 2: **Move Box qualitative analysis.** In the easy setting, the red agent acts every step (\(k_{1}=1\)) and the blue agent acts every two steps (\(k_{2}=2\)). The joint action is shown in white on each agent. The time-unaware Basic COMA red agent tries to move the box up at \(t=2\), which causes it to drop the box at \(t=3\). The time-aware Phase COMA red agent learns to take a null action at \(t=2\). The hard setting requires more sophisticated coordination between agents. Best viewed in color.

Fig. 3: **Move Box return vs. steps.** Mean return over 8 random seeds of the learned greedy policy at various steps during training (shaded region is the 95% confidence interval (CI)). The best possible return is shown by the black dotted line. The PFNN-based agent Phase COMA with correct periods \(K=2,6\) achieves the highest reward in the fewest environment steps.

Fig. 4: **BEM results.** Mean return over 8 random seeds of the learned greedy policy at various steps during training (shaded region is the 95% CI). The best possible return is shown by the black dotted line. The Phase COMA agent with correct period \(K=15\) outperforms non-phasic variants by a wide margin.

Fig. 5: **BEM qualitative results.** Total power vs. power reference signal from one of the Phase COMA training runs.

## VI Conclusions

In this work, we proposed a multi-timescale MARL framework for learning policies that can represent complex time-dependent behaviors. We introduced the multi-timescale non-stationary joint policy as the policy induced by a \(K\)-periodic non-stationary joint policy, where the period \(K\) is given by knowledge about agent timescales and cyclic environment components, both of which are typically known _a priori_. We use phase-functioned neural networks to introduce an inductive bias for learning a periodic non-stationary joint policy. Our results on gridworld and building energy management environments establish the effectiveness of our framework, suggesting that follow-up work could explore using it to solve more advanced power systems problems.